ABSTRACT
A radio frequency integrated circuit switch includes a semiconductor die with a transistor having a gate on a first-side (e.g., front-side) of the semiconductor die. The semiconductor die may include a bulk semiconductor substrate or wafer (e.g., silicon substrate or wafer). The semiconductor die may also include a first deep trench isolation (DTI) region that extends from the front-side to a backside opposite the front-side of the semiconductor die. The radio frequency integrated circuit switch further includes a body contact layer on the backside of the semiconductor die. The body contact layer is coupled to a backside of a body of the transistor. The body of the transistor may have a first P-type region (e.g., a P+ region).
CLAIMS

What is claimed is:

1. A radio frequency integrated circuit switch, comprising:

a semiconductor die comprising a transistor having a gate on a front-side of the semiconductor die, a first deep trench isolation region extending from the front-side to a backside opposite the front-side of the semiconductor die; and

a body contact layer on the backside of the semiconductor die and coupled to a backside of a body of the transistor, in which the body comprises a first P-type region.

2. The radio frequency integrated circuit switch of claim 1, further comprising a backside dielectric layer on the body contact layer, in which the first deep trench isolation region extends through the body contact layer and into the backside dielectric layer.

3. The radio frequency integrated circuit switch of claim 1, in which the body of the transistor further comprises an N-type region between the first P-type region and the body contact layer to form an embedded diode.

4. The radio frequency integrated circuit switch of claim 1, in which the body of the transistor further comprises a second P-type region between the gate of the transistor and the first P-type region to form an internal body resistor, in which the second P-type region is less doped than the first P-type region.

5. The radio frequency integrated circuit switch of claim 1, in which the transistor comprises a Fin field effect transistor (FinFET) or a tri-gate structure.

6. The radio frequency integrated circuit switch of claim 1, in which the semiconductor die comprises a bulk semiconductor substrate.

7. The radio frequency integrated circuit switch of claim 6, in which the body contact layer comprises a silicide layer on an entire length of the backside of the bulk semiconductor substrate.

8. The radio frequency integrated circuit switch of claim 1, integrated into a radio frequency front end module, the radio frequency front end module incorporated into at least one of a music player, a video player, an entertainment unit, a navigation device, a communications device, a personal digital assistant (PDA), a fixed location data unit, a mobile phone, and a portable computer.

9. A method of constructing a radio frequency integrated circuit switch, comprising:

fabricating a transistor having a gate on a front-side of a semiconductor die;

forming a first deep trench isolation region extending from the front-side to a backside opposite the front-side of the semiconductor die; and

depositing a body contact layer on the backside of the semiconductor die, in which the body contact layer is coupled to a backside of a body of the transistor, the body comprising a first P-type region.

10. The method of claim 9, further comprising depositing a backside dielectric layer on the body contact layer, in which the first deep trench isolation region extends through the body contact layer and into the backside dielectric layer.

11. The method of claim 9, further comprising forming an embedded diode within the body of the transistor, in which the body of the transistor comprises an N-type region between the first P-type region and the body contact layer.

12. The method of claim 9, further comprising forming an internal body resistor within the body of the transistor, in which the body of the transistor comprises a second P-type region between the gate of the transistor and the first P-type region, in which the second P-type region is less doped than the first P-type region.

13. The method of claim 9, in which depositing the body contact layer comprises depositing a silicide layer on the backside of the semiconductor die to form the body contact layer.

14. The method of claim 9, further comprising integrating the radio frequency integrated circuit switch into a radio frequency front end module, the radio frequency front end module incorporated into at least one of a music player, a video player, an entertainment unit, a navigation device, a communications device, a personal digital assistant (PDA), a fixed location data unit, a mobile phone, and a portable computer.

15. A radio frequency front end module, comprising:

a wireless transceiver comprising a semiconductor die comprising a transistor having a gate on a front-side of the semiconductor die, a first deep trench isolation region extending from the front-side to a backside opposite the front-side of the semiconductor die, and a body contact layer on the backside of the semiconductor die and coupled to a backside of a body of the transistor, the body comprising a first P-type region; and

an antenna coupled to an output of the wireless transceiver.

16. The radio frequency front end module of claim 15, in which the wireless transceiver further comprises a backside dielectric layer on the body contact layer, in which the first deep trench isolation region extends through the body contact layer and into the backside dielectric layer.

17. The radio frequency front end module of claim 15, in which the body of the transistor further comprises an N-type region between the first P-type region and the body contact layer to form an embedded diode.

18. The radio frequency front end module of claim 15, in which the body of the transistor further comprises a second P-type region between the gate of the transistor and the first P-type region to form an internal body resistor, in which the second P-type region is less doped than the first P-type region.

19. The radio frequency front end module of claim 15, in which the transistor comprises a Fin field effect transistor (FinFET) or a tri-gate structure.

20. The radio frequency front end module of claim 15, incorporated into at least one of a music player, a video player, an entertainment unit, a navigation device, a communications device, a personal digital assistant (PDA), a fixed location data unit, a mobile phone, and a portable computer.
BULK LAYER TRANSFER BASED SWITCH WITH BACKSIDE SILICIDATION

CLAIM OF PRIORITY UNDER 35 U.S.C. §119

[0001] The present Application for Patent claims priority to Non-provisional Application No. 15/996,320, entitled "BULK LAYER TRANSFER BASED SWITCH WITH BACKSIDE SILICIDATION," filed June 1, 2018, assigned to the assignee hereof and hereby expressly incorporated by reference herein.

TECHNICAL FIELD

[0002] The present disclosure generally relates to integrated circuits (ICs). More specifically, the present disclosure relates to a switch implemented on a bulk layer transfer wafer with backside silicidation.

BACKGROUND

[0003] Designing mobile radio frequency (RF) chips (e.g., mobile RF transceivers) is complicated by added circuit functions for supporting communication enhancements. Designing these mobile RF transceivers may include using semiconductor on insulator technology. Semiconductor on insulator (SOI) technology replaces conventional semiconductor (e.g., silicon) substrates with a layered semiconductor-insulator-semiconductor substrate for reducing parasitic capacitance and improving performance. SOI-based devices differ from conventional, silicon-built devices because a silicon junction is above an electrical isolator, typically a buried oxide (BOX) layer. A reduced thickness BOX layer, however, may not sufficiently reduce artificial harmonics caused by the proximity of an active device on the SOI layer and an SOI substrate supporting the BOX layer.

[0004] For example, high performance complementary metal oxide semiconductor (CMOS) radio frequency (RF) switch technologies are currently manufactured using SOI substrates. While SOI substrates may provide some protection against artificial harmonics in mobile RF transceivers, SOI substrates are very expensive. Furthermore, increasing device isolation and reducing RF loss may involve expensive handle wafers. For example, a CMOS switch device may be physically bonded to a high resistivity (HR) handle wafer, such as HR-silicon or sapphire. While the increased spatial separation of the switch device from the underlying substrate dramatically improves the RF performance of the CMOS switch, using an HR-silicon or sapphire handle wafer dramatically drives up cost. That is, using SOI wafers and handle substrates is quite expensive relative to the cost of a bulk semiconductor wafer.

SUMMARY

[0005] A radio frequency integrated circuit switch includes a semiconductor die including a transistor having a gate on a front-side of the semiconductor die, and a first deep trench isolation region that extends from the front-side to a backside opposite the front-side of the semiconductor die. The radio frequency integrated circuit switch also includes a body contact layer on the backside of the semiconductor die. The body contact layer is coupled to a backside of a body of the transistor. The body includes a first P-type region.

[0006] A method of constructing a radio frequency integrated circuit switch may include fabricating a transistor having a gate on a front-side of a semiconductor die. The method also includes forming a first deep trench isolation region extending from the front-side to a backside opposite the front-side of the semiconductor die. The method further includes depositing a body contact layer on the backside of the semiconductor die. The body contact layer is coupled to a backside of a body of the transistor. The body includes a first P-type region.

[0007] A radio frequency front end module includes a wireless transceiver. The wireless transceiver includes a semiconductor die with a transistor having a gate on a front-side of the semiconductor die, a first deep trench isolation region extending from the front-side to a backside opposite the front-side of the semiconductor die, and a body contact layer on the backside of the semiconductor die. The body contact layer is coupled to a backside of a body of the transistor. The body includes a first P-type region. The radio frequency front end module further includes an antenna coupled to an output of the wireless transceiver.

[0008] The foregoing has outlined, rather broadly, the features and technical advantages of the present disclosure in order that the detailed description that follows may be better understood. Additional features and advantages of the present disclosure will be described below. It should be appreciated by those skilled in the art that the present disclosure may be readily utilized as a basis for modifying or designing other structures for carrying out the same purposes of the present disclosure. It should also be realized by those skilled in the art that such equivalent constructions do not depart from the teachings of the present disclosure as set forth in the appended claims. The novel features, which are believed to be characteristic of the present disclosure, both as to its organization and method of operation, together with further objects and advantages, will be better understood from the following description when considered in connection with the accompanying figures. It is to be expressly understood, however, that each of the figures is provided for the purpose of illustration and description only and is not intended as a definition of the limits of the present disclosure.

BRIEF DESCRIPTION OF THE DRAWINGS

[0009] For a more complete understanding of the present disclosure, reference is now made to the following description taken in conjunction with the accompanying drawings.

[0010] FIGURE 1 is a schematic diagram of a radio frequency (RF) front end module.

[0011] FIGURES 2A to 2D show cross-sectional views of a radio frequency integrated circuit (RFIC) during a layer transfer process.

[0012] FIGURE 3 is a cross-sectional view of a radio frequency integrated circuit (RFIC) fabricated using a bulk semiconductor layer transfer process according to aspects of the present disclosure.

[0013] FIGURE 4 is a cross-sectional view of a radio frequency integrated circuit having a bulk semiconductor wafer including a contact layer on a backside of the bulk semiconductor wafer, according to aspects of the present disclosure.
[0014] FIGURES 5A-5G illustrate a process for fabricating the radio frequency integrated circuit, according to aspects of the present disclosure.

[0015] FIGURE 6 illustrates an exemplary layout of a switch having an H-gate structure.

[0016] FIGURE 7 is a cross-sectional view of a radio frequency integrated circuit switch having a bulk semiconductor wafer including a body tie on a backside of the bulk semiconductor wafer, according to aspects of the present disclosure.

[0017] FIGURE 8 is a cross-sectional view of a radio frequency integrated circuit switch having a bulk semiconductor wafer including a body tie on a backside of the bulk semiconductor wafer, according to aspects of the present disclosure.

[0018] FIGURE 9 illustrates an exemplary schematic of a radio frequency integrated circuit switch.

[0019] FIGURE 10 is a cross-sectional view of a radio frequency integrated circuit switch having a bulk semiconductor wafer including a body tie on a backside of the bulk semiconductor wafer, according to aspects of the present disclosure.

[0020] FIGURE 11 is a cross-sectional view of a radio frequency integrated circuit switch having a bulk semiconductor wafer including a body tie on a backside of the bulk semiconductor wafer, according to aspects of the present disclosure.

[0021] FIGURE 12 illustrates an exemplary schematic of a radio frequency integrated circuit switch.

[0022] FIGURE 13 illustrates an exemplary layout of a switch according to aspects of the present disclosure.

[0023] FIGURE 14 is a process flow diagram illustrating a method of constructing a radio frequency integrated circuit switch using a bulk semiconductor layer transfer process according to aspects of the present disclosure.

[0024] FIGURE 15 is a block diagram showing an exemplary wireless communication system in which a configuration of the present disclosure may be advantageously employed.

[0025] FIGURE 16 is a block diagram illustrating a design workstation used for circuit, layout, and logic design of a semiconductor component according to one configuration of the present disclosure.

DETAILED DESCRIPTION

[0026] The detailed description set forth below, in connection with the appended drawings, is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described herein may be practiced. The detailed description includes specific details for the purpose of providing a thorough understanding of the various concepts. It will be apparent, however, to those skilled in the art that these concepts may be practiced without these specific details. In some instances, well-known structures and components are shown in block diagram form in order to avoid obscuring such concepts.

[0027] As described herein, the use of the term "and/or" is intended to represent an "inclusive OR," and the use of the term "or" is intended to represent an "exclusive OR." As described herein, the term "exemplary" used throughout this description means "serving as an example, instance, or illustration," and should not necessarily be construed as preferred or advantageous over other exemplary configurations. As described herein, the term "coupled" used throughout this description means "connected, whether directly or indirectly through intervening connections (e.g., a switch), electrical, mechanical, or otherwise," and is not necessarily limited to physical connections. Additionally, the connections can be such that the objects are permanently connected or releasably connected. The connections can be through switches. As described herein, the term "proximate" used throughout this description means "adjacent, very near, next to, or close to." As described herein, the term "on" used throughout this description means "directly on" in some configurations, and "indirectly on" in other configurations.

[0028] Designing mobile radio frequency (RF) transceivers may include using semiconductor on insulator technology. Semiconductor on insulator (SOI) technology replaces conventional silicon substrates with a layered semiconductor-insulator-semiconductor substrate for reducing parasitic capacitance and improving performance. While SOI-based devices differ from conventional, silicon-built devices by including a silicon junction above an electrical isolator, typically a buried oxide (BOX) layer, SOI-based devices are more expensive than conventional, silicon-built devices. Furthermore, a reduced thickness BOX layer may not sufficiently reduce artificial harmonics caused by the proximity of an active device on an SOI layer and an SOI substrate supporting the BOX layer.

[0029] The active devices on the SOI layer may include high performance complementary metal oxide semiconductor (CMOS) transistors. For example, high performance CMOS RF switch technologies are currently manufactured using SOI substrates. A radio frequency front end (RFFE) module may rely on these high performance CMOS RF switch technologies for successful operation. A process for fabricating an RFFE module, therefore, involves the costly integration of an SOI wafer for supporting these high performance CMOS RF switch technologies. Furthermore, supporting future RF performance enhancements involves increasing device isolation while reducing RF loss.

[0030] Transistors fabricated using SOI technology may suffer from the floating body effect. The floating body effect is a phenomenon in which the transistor's body collects charge generated at the junction of the transistor device. Unfortunately, charge that accumulates in the body causes adverse effects, such as parasitic transistors in the structure and off-state leakage. In addition, the accumulated charge also causes dependence of the threshold voltage of the transistor on its previous states. This effect (e.g., a floating body effect) may also generate artificial harmonic frequencies, which are detrimental to communication enhancements such as carrier aggregation.

[0031] While SOI wafers may reduce some artificial harmonics, SOI wafers are expensive. Moreover, switch device fabrication using complementary metal oxide semiconductor technology may be complicated by the floating body effect. The floating body effect may be mitigated by tying the body to, for example, the gate in an RF switch device. Unfortunately, the body ties and the gate contacts have to route out and around the source/drain metallization, creating area loss in the radio frequency switch device. Furthermore, extraction of charge within the body of radio frequency switch devices is challenging, often resulting in reducing a width of the radio frequency switch devices. Consequently, achieving sufficient switch performance may involve using several narrow switches.

[0032] Various aspects of the present disclosure provide techniques for bulk layer transfer processing with backside silicidation. The process flow for semiconductor fabrication of the integrated radio frequency circuit may include front-end-of-line (FEOL) processes, middle-of-line (MOL) processes, and back-end-of-line (BEOL) processes; an illustrative summary of this flow appears below.
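For orientation only, the following sketch groups the stages named above with representative steps drawn from the fabrication sequence described later (FIGURES 5A-5G). The grouping and step names are an editorial summary, not part of the disclosure.

```python
# Hypothetical summary of the bulk layer transfer flow described in this
# document (see FIGURES 5A-5G); step names are illustrative, not a recipe.
PROCESS_FLOW = {
    "FEOL": ["form STI and DTI isolation regions",
             "fabricate transistors (gate, source/drain, channel)"],
    "MOL": ["contacts and zero-layer interconnects (M0, V0)"],
    "BEOL (front-side)": ["front-side metallization (M1)",
                          "deposit front-side dielectric",
                          "bond handle wafer"],
    "layer transfer": ["backgrind backside", "CMP to reduce variation",
                       "silicon etch/CMP to expose DTI"],
    "backside processing": ["deposit silicide body contact layer",
                            "form trench interconnect through DTI",
                            "backside M1", "deposit backside dielectric"],
}

for stage, steps in PROCESS_FLOW.items():
    print(f"{stage}: " + "; ".join(steps))
```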
It will be understood that the term "layer" includes film and is not to be construed as indicating a vertical or horizontal thickness unless otherwise stated. As described herein, the term "substrate" may refer to a substrate of a diced wafer or may refer to a substrate of a wafer that is not diced. Similarly, the terms chip and die may be used interchangeably.

[0033] Aspects of the present disclosure include using a bulk semiconductor (e.g., silicon) wafer instead of SOI wafers to fabricate a radio frequency integrated circuit switch. Inexpensive bulk semiconductor wafers may be used to form a semiconductor device layer without using an expensive SOI wafer.

[0034] In one aspect of the present disclosure, the radio frequency integrated circuit switch includes a semiconductor die that includes a transistor having a gate on a first-side (e.g., front-side) of the semiconductor die. The semiconductor die may include a bulk semiconductor substrate or wafer (e.g., silicon substrate or wafer). The transistor may have a Fin field effect transistor (FinFET) structure or a tri-gate structure. A first deep trench isolation (DTI) region extends from the front-side to a second-side (e.g., backside) opposite the front-side of the semiconductor die. The radio frequency integrated circuit switch further includes a body contact layer on the backside of the semiconductor die. The body contact layer is coupled to a backside of a body of the transistor. The body of the transistor may include a first P-type region (e.g., a P+ region). In the P+ region, holes are the majority charge carriers, whereas in N-type (e.g., N+) regions, free electrons are the majority charge carriers. Furthermore, the body contact layer may be used as a backside body tie. The backside body tie enables flexibility in a width of the radio frequency integrated circuit switch, which can be as narrow or as wide as desirable because there is less limitation on the width of this radio frequency integrated circuit switch relative to an SOI wafer switch.

[0035] In some aspects, the body contact layer may be a silicide layer deposited on the backside of the bulk semiconductor substrate or wafer. For example, the body contact layer is on an entire length of the backside of the bulk semiconductor substrate. In one aspect, the P+ region, which is a body of the transistor, may be part of or coupled to the bulk semiconductor substrate. For example, portions of the bulk semiconductor region may be doped to form the P+ region. The radio frequency integrated circuit switch may also include a backside dielectric layer on the body contact layer, in which the first deep trench isolation region extends through the body contact layer and into the backside dielectric layer. In some aspects of the present disclosure, the body of the transistor further includes an N+ region between the P+ region and the body contact layer to form an embedded diode.

[0036] The body of the transistor may further include a P- region between the gate of the transistor and the P+ region to form an internal body resistor. For example, the body of the radio frequency integrated circuit switch may include a first section, which is the first P-type region or P+ region, and a second section as a second P-type region or the P- region. The second P-type region is less doped, or has a lower doping concentration, than the first P-type region. The diode formed may be a P-N junction diode (e.g., a Schottky diode).
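For reference, the built-in potential of such a junction follows from standard device physics and depends on the doping on the two sides. This formula is an editorial aside (not from the disclosure), with $N_A$ and $N_D$ the acceptor and donor concentrations, $n_i$ the intrinsic carrier concentration, and $V_T$ the thermal voltage:

$$V_{bi} = V_T \ln\!\left(\frac{N_A N_D}{n_i^2}\right)$$

A heavily doped P+/N+ junction, as in the embedded diode described here, therefore exhibits a comparatively large built-in potential.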
The P-N junction is created by doping, for example, by ion implantation, diffusion of dopants, or by epitaxy (growing a layer of crystal doped with one type of dopant on top of a layer of crystal doped with another type of dopant).

[0037] In addition, the backside of the bulk semiconductor wafer may be supported by a backside dielectric layer (e.g., a second-side dielectric layer) distal from a front-side dielectric layer (e.g., a first-side dielectric layer) on the semiconductor device layer. The RFIC may also include a handle substrate on the front-side dielectric layer. The front-side and backside may each be referred to as a first-side or a second-side. In some cases, the front-side will be referred to as the first-side. In other cases, the backside will be referred to as the first-side.

[0038] FIGURE 1 is a schematic diagram of a wireless device 100 (e.g., a cellular phone or a smartphone) having a switch implemented on a bulk layer transfer wafer with backside silicidation, according to aspects of the present disclosure. The wireless device 100 may include a wireless local area network (WLAN) (e.g., WiFi) module 150 and an RF front end module 170 for a chipset 110. The WiFi module 150 includes a first diplexer 160 communicably coupling an antenna 162 to a wireless local area network module (e.g., WLAN module 152). The RF front end module 170 includes a second diplexer 190 communicably coupling an antenna 192 to the wireless transceiver 120 (WTR) through a duplexer 180 (DUP).

[0039] The wireless transceiver 120 and the WLAN module 152 of the WiFi module 150 are coupled to a modem (MSM, e.g., a baseband modem) 130 that is powered by a power supply 102 through a power management integrated circuit (PMIC) 140. The chipset 110 also includes capacitors 112 and 114, as well as an inductor(s) 116, to provide signal integrity. The PMIC 140, the modem 130, the wireless transceiver 120, and the WLAN module 152 each include capacitors (e.g., 142, 132, 122, and 154) and operate according to a clock 118. The geometry and arrangement of the various inductor and capacitor components in the chipset 110 may reduce the electromagnetic coupling between the components.

[0040] The wireless transceiver 120 of the wireless device generally includes a mobile radio frequency (RF) transceiver to transmit and receive data for two-way communication. A mobile RF transceiver may include a transmit section for data transmission and a receive section for data reception. For data transmission, the transmit section may modulate an RF carrier signal with data to obtain a modulated RF signal, amplify the modulated RF signal using a power amplifier (PA) to obtain an amplified RF signal having the proper output power level, and transmit the amplified RF signal via the antenna 192 to a base station. For data reception, the receive section may obtain a received RF signal via the antenna 192 and may amplify the received RF signal using a low noise amplifier (LNA) and process the received RF signal to recover data sent by the base station in a communication signal.

[0041] The wireless transceiver 120 may include one or more circuits for amplifying these communication signals. The amplifier circuits (e.g., LNA/PA) may include one or more amplifier stages that may have one or more driver stages and one or more amplifier output stages. Each of the amplifier stages includes one or more transistors configured in various ways to amplify the communication signals. Various options exist for fabricating the transistors that are configured to amplify the communication signals transmitted and received by the wireless transceiver 120.

[0042] The wireless transceiver 120 and the RF front end module 170 may be implemented using a layer transfer process to further separate the active device from a substrate, as shown in FIGURES 2A to 2D.

[0043] FIGURES 2A to 2D show cross-sectional views of a radio frequency (RF) integrated circuit 200 during a layer transfer process according to aspects of the present disclosure. As shown in FIGURE 2A, an RF device includes an active device 210 on an insulator layer 220 supported by a sacrificial substrate 201 (e.g., a bulk wafer). The RF device also includes interconnects 250 coupled to the active device 210 within a first dielectric layer 204. As shown in FIGURE 2B, a handle substrate 202 is bonded to the first dielectric layer 204 of the RF device. In addition, the sacrificial substrate 201 is removed. Removal of the sacrificial substrate 201 using the layer transfer process enables high-performance, low-parasitic RF devices by increasing the dielectric thickness. That is, a parasitic capacitance of the RF device is inversely proportional to the dielectric thickness, which determines the distance between the active device 210 and the handle substrate 202 (a first-order estimate of this trade-off is sketched below).

[0044] As shown in FIGURE 2C, the RF device is flipped once the handle substrate 202 is secured and the sacrificial substrate 201 is removed. As shown in FIGURE 2D, a post layer transfer metallization process is performed using, for example, a regular complementary metal oxide semiconductor (CMOS) process.

[0045] The active device 210 on the insulator (BOX) layer 220 may be a complementary metal oxide semiconductor (CMOS) transistor. The RFFE module 170 (FIGURE 1) may rely on these high performance CMOS RF switch technologies for successful operation.

[0046] FIGURE 3 is a cross-sectional view of a radio frequency integrated circuit (RFIC) fabricated using a bulk semiconductor layer transfer process according to aspects of the present disclosure. Representatively, an RF integrated circuit 300 includes an active device 310 having a gate, source/drain (S/D) regions, and a channel region between the source/drain regions, each formed on a front-side of a bulk semiconductor wafer 320. In contrast to SOI implementations, an active device layer including the source/drain and channel regions is not supported by a buried oxide (BOX) layer. Although shown as an active device, it should be recognized that the active device 310 may be a first active/passive device, as well as a second active/passive device.

[0047] The RF integrated circuit 300 also includes middle-of-line (MOL)/back-end-of-line (BEOL) interconnects coupled to the source/drain regions of the active device 310. As described, the MOL/BEOL layers may be referred to as first-side (e.g., front-side) layers. By contrast, the layers supporting the bulk semiconductor wafer 320 may be referred to as second-side (e.g., backside) layers. In this example, a front-side metallization layer M1 is coupled to the source/drain regions of the active device 310 and arranged in a front-side dielectric layer 304. In addition, a handle substrate 302 is coupled to the front-side dielectric layer 304. A backside dielectric 340 is adjacent to, and possibly supports, the bulk semiconductor wafer 320.
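To make the parasitic trade-off mentioned above concrete, the following sketch applies a simple parallel-plate estimate. The device footprint, dielectric constant, and thickness values are editorial assumptions, not taken from the disclosure.

```python
# Illustrative only: first-order parallel-plate estimate of the parasitic
# capacitance between an active device and the substrate beneath it.
# C = eps0 * epsr * A / d, so capacitance falls as dielectric thickness grows.
EPS_0 = 8.854e-12   # vacuum permittivity, F/m
EPS_R = 3.9         # relative permittivity of SiO2

def parasitic_capacitance(area_m2, thickness_m):
    return EPS_0 * EPS_R * area_m2 / thickness_m

area = (100e-6) ** 2    # hypothetical 100 um x 100 um device footprint
for d_um in (0.1, 1.0, 10.0):
    c_fF = parasitic_capacitance(area, d_um * 1e-6) * 1e15
    print(f"dielectric thickness {d_um:5.1f} um -> C ~ {c_fF:8.1f} fF")
```

Under these assumptions, increasing the dielectric thickness from 0.1 µm to 10 µm reduces the estimated capacitance by two orders of magnitude, which is the motivation for separating the device from the handle substrate.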
In addition, a backside M1 metallization layer (e.g., a second-side metallization layer) is coupled to the front-side metallization layer M1 with a trench interconnect 350 through a deep trench isolation (DTI) region 330 extending from the front-side to the backside of the bulk semiconductor wafer 320, as further illustrated in FIGURE 4.

[0048] FIGURE 4 is a cross-sectional view of a radio frequency integrated circuit (RFIC) having a bulk semiconductor wafer including a contact layer on a backside of the bulk semiconductor wafer, according to aspects of the present disclosure. Representatively, an RF integrated circuit 400 includes a first active device 410, a second active device 412, and a third active device 414, each having a gate (G), source/drain (S/D) regions, and a channel (C) region between the source/drain regions, each formed on a front-side of a bulk semiconductor wafer 420 (e.g., a bulk silicon wafer). In contrast to SOI implementations, an active device layer including the source/drain and channel regions of the active devices (e.g., 410, 412, and 414) is not supported by a buried oxide (BOX) layer.

[0049] Although shown as a first active device, it should be recognized that the first active device 410 may be a first active/passive device, as well as a second active/passive device, such as the second active device 412. In addition, although shown as planar devices, it should be recognized that the active devices (e.g., 410, 412, and 414) are not limited to planar devices. For example, the active devices (e.g., 410, 412, and 414) may include, but are not limited to, planar field effect transistors (FETs), fin-type FETs (FinFETs), nanowire FETs, or other like FETs.

[0050] The RF integrated circuit 400 also includes MOL interconnects (M0) as well as BEOL interconnects (M1) coupled to the gate as well as the source/drain regions of the active devices (e.g., 410, 412, and 414). The MOL interconnects may include trench interconnects (e.g., CA, CB) and vias (e.g., V0) for coupling active devices formed during front-end-of-line processing to metallization layers formed during back-end-of-line processing. In this example, an MOL interconnect M0 is coupled to a gate contact (e.g., a poly contact) of the gate of the first active device 410 and arranged in a front-side dielectric layer 404. In addition, a handle wafer 402 (handle substrate) is coupled to the front-side dielectric layer 404. A backside dielectric layer 440 is adjacent to, and possibly supports, the bulk semiconductor wafer 420.

[0051] In this configuration, a backside M1 metallization layer (e.g., a second-side metallization layer) is coupled to the front-side MOL zero interconnect M0 through a trench interconnect 450. The trench interconnect 450 extends through a first deep trench isolation (DTI) region 430, from the front-side to the backside of the bulk semiconductor wafer 420. The backside metallization M1 may also be coupled to a backside contact layer 460.

[0052] According to aspects of the present disclosure, the first DTI region 430 extends through the backside contact layer 460 and into the backside dielectric layer 440. Similarly, a second deep trench isolation (DTI) region 432 extends through the backside contact layer 460 and into the backside dielectric layer 440. In this example, the backside contact layer 460 is deposited along the backside of the bulk semiconductor wafer 420. The backside contact layer 460 may be composed of a silicide material or other like conductive material. The backside contact layer 460 also contacts a portion of the first DTI region 430 that extends from the backside of the bulk semiconductor wafer 420. In addition, the backside dielectric layer 440 contacts the remaining portion of the first DTI region 430 that extends from the backside of the bulk semiconductor wafer 420.

[0053] The layer transfer process shown in FIGURES 2A-2D may be used with bulk semiconductor wafers to create CMOS products (e.g., a CMOS transistor) without using expensive SOI substrates, as shown in FIGURE 4. Various aspects of the present disclosure provide techniques for bulk layer transfer processing with backside silicidation, as described in FIGURES 5A-5G. One aspect of the present disclosure uses a bulk layer transfer process with backside silicidation to form an RF integrated circuit, for example, as shown in FIGURES 7, 8, 10, and 11.

[0054] FIGURES 5A-5G illustrate a process for fabricating the RF integrated circuit 400 of FIGURE 4, according to aspects of the present disclosure. FIGURE 5A illustrates an initial step for forming the RF integrated circuit 400 of FIGURE 4. This process may begin with a complementary metal oxide semiconductor (CMOS) wafer, such as a bulk silicon wafer. Next, CMOS front-end-of-line integration is performed on the bulk semiconductor wafer 420 to form the first active device 410, the second active device 412, and the third active device 414. In this example, the first active device 410 and the second active device 412 are separated by a shallow trench isolation (STI) region. By contrast, the second active device 412 and the third active device 414 are separated by the second DTI region 432.

[0055] According to aspects of the present disclosure, STI regions are used for active device separation, whereas the DTI regions are used for post layer transfer separation. A depth of the first DTI region 430 and the second DTI region 432 may be in the range of 0.4 to 4 micrometers, although the depth of the first DTI region 430 and the second DTI region 432 may be reduced for future processes. The DTI regions as well as the STI regions may be filled with a similar dielectric material, such as silicon dioxide (SiO2), and formed prior to the active devices.

[0056] Once the active devices are formed, MOL processes connect the active devices to BEOL interconnect layers. In this example, a zero-layer interconnect M0 is coupled to the gate G of the first active device 410. In addition, a first BEOL interconnect M1 is coupled to the zero-layer interconnect M0. The first BEOL interconnect M1 is formed as part of a front-side BEOL process. This process is followed by depositing the front-side dielectric layer 404. Once the front-side dielectric layer 404 is deposited, the handle wafer 402 is bonded to the front-side dielectric layer 404. The handle wafer 402 can be a processed wafer or a bare wafer.

[0057] FIGURE 5B illustrates a backgrind process of the bulk semiconductor wafer 420. This initial backgrind process is applied to the backside of the bulk semiconductor wafer 420, distal from the active device layer. This initial backgrind process may leave a variation of about 5 to 10 micrometers. The backgrind process continues in FIGURE 5C, in which a chemical mechanical polish (CMP) process is applied to the backside of the bulk semiconductor wafer 420. This CMP process may reduce the surface variation of the backside of the bulk semiconductor wafer 420 to a range of 0.1 micrometers to 0.4 micrometers, but preferably to 0.1 micrometers (a rough thickness budget for these steps is sketched below).
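The backgrind and CMP steps amount to a simple thickness and variation budget. The following sketch illustrates it with hypothetical values chosen within the stated ranges; the starting and intermediate silicon thicknesses are editorial assumptions.

```python
# Illustrative only: thickness/variation budget for the backside thinning
# steps of FIGURES 5B-5C. All values are hypothetical examples.
dti_depth_um = 2.0     # DTI depth, within the stated 0.4-4 um range

# After the coarse backgrind (FIGURE 5B): large surface variation remains.
remaining_si_um, variation_um = 20.0, 7.0   # variation ~5-10 um

# After CMP (FIGURE 5C): variation drops to ~0.1-0.4 um, but the remaining
# silicon is still thicker than the DTI depth, so the DTI stays buried.
remaining_si_um, variation_um = 5.0, 0.1

assert remaining_si_um > dti_depth_um, "CMP alone should not expose the DTI"
print(f"Si remaining: {remaining_si_um} um, variation: {variation_um} um")
```

The subsequent silicon etch and/or CMP (FIGURE 5D) then removes the remaining margin so that the DTI regions are exposed from the backside.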
Notably, this CMP process does not expose the first DTI region 430 or the second DTI region 432.

[0058] As shown in FIGURE 5B, the backgrind process may be applied to the backside of the bulk semiconductor wafer 420 with a surface variation of 5-10 micrometers. The surface variation may be reduced by polishing the backside of the bulk semiconductor wafer 420 to a predetermined surface variation (e.g., less than 0.3 micrometers), as shown in FIGURE 5C. In addition, a silicon etch (e.g., potassium hydroxide (KOH) or tetramethylammonium hydroxide (TMAH)), a chemical mechanical polish (CMP), or a combination of CMP and etching may be performed to reduce a thickness of the bulk semiconductor wafer to a thickness equal to or less than a thickness of the DTI regions.

[0059] As shown in FIGURE 5D, the silicon etch/CMP is performed on the backside of the bulk semiconductor wafer 420 for exposing a portion of the first DTI region 430 as well as the second DTI region 432. In a further aspect of the present disclosure, an etch stop layer may be formed in the bulk semiconductor wafer 420 for improving a planarity of the backside of the bulk semiconductor wafer 420. Once the first DTI region 430 and the second DTI region 432 are exposed, a post-layer transfer silicide layer may be deposited on an entire length of the backside of the bulk semiconductor wafer 420 for forming the backside contact layer 460, as shown in FIGURE 5E.

[0060] As shown in FIGURE 5F, a trench interconnect 450 is formed through the first DTI region 430. In this example, the trench interconnect 450 is coupled to the front-side zero interconnect M0 in the front-side dielectric layer 404. As shown in FIGURE 5G, the RF integrated circuit 400 is completed by forming the backside BEOL interconnects M1 and depositing the backside dielectric layer 440. The backside dielectric layer 440 is deposited on the backside of the bulk semiconductor wafer 420 and exposed sidewalls of the first DTI region 430 that extend from the backside of the bulk semiconductor wafer 420. In this example, the backside dielectric layer 440 is distal from the front-side dielectric layer 404. In this example, the backside BEOL interconnect M1 is coupled to the front-side zero interconnect M0 through the trench interconnect 450.

[0061] FIGURE 6 illustrates an exemplary layout of a switch 600 having an H-gate structure 602. The H-gate structure 602 may include a polysilicon gate. Although an H-gate structure is described, other gate structures, such as a T-gate structure, are equally applicable. The H-gate structure 602 may include multiple cross elements 604 connected between a first parallel element 606 and a second parallel element 608. An active region of the switch 600 includes a source region and a drain region. The source region includes source contacts or metallization 610 and the drain region includes drain contacts or metallization 612. The source contacts 610 and drain contacts 612 may be coupled to a channel region adjacent to or underlying the multiple cross elements 604 of the polysilicon gate. An electrical potential may be established at the source regions and the drain regions.

[0062] The active region of the switch 600 can be implanted with a heavy implant of ions to the source and drain regions. As a result, a portion of the active region that lies below the gate serves as a body 626 of the switch 600, and the portions that are not below the gate serve as the source and drain. In some implementations, the body 626 extends below the source and drain regions. For example, if the switch 600 is implemented on a bulk substrate, the body 626 includes a region of the substrate below the source and drain that was not altered by the heavy implant.

[0063] The switch 600 may also include a conductive body tie or metallization 614 and gate contacts 618. These contacts substantially complicate routing within the switch 600. For example, the body tie 614 and the gate contacts 618 are routed out and around the source and the drain metallization, which creates area losses. Additional width (e.g., 624 and/or 628) of the body 626 of the switch 600 may be created to accommodate the body tie 614. The switch 600 in this case is implemented in accordance with a structure (e.g., an SOI wafer structure) where the switch and corresponding body tie 614 are fabricated on a same side (e.g., the front-side) of the SOI wafer. Moreover, the SOI wafers used to implement the switch 600 are very expensive.

[0064] In an off (shunt) condition, charge (e.g., current) is generated in the body 626 of the switch 600 and the body 626 becomes resistive. For example, when a voltage is applied to the drain or the source, charges (e.g., 616) are generated beneath a gate of the gate structure 602. To extract these charges, the charge 616 is moved along a path 620 of the gate to the body tie 614. However, the extraction process may generate more charges at the gate. When the generated charges are not extracted, the charges cause a shift in a breakdown voltage of the transistor or switch. In this case, the generated charge 616 increases the electrical potential in the body 626 and thereby reduces a breakdown voltage of the switch 600.

[0065] For example, in an SOI substrate transistor or switch, the body underlying the gate contact is isolated from a substrate by an insulating layer. Thus, the body is electrically floating. Most often, this floating body is undesirable because it causes problems in the SOI substrate transistor operation. For example, when an electron-hole pair is formed by ionization of a lattice atom by an electron, the hole migrates towards the source of the transistor. Because the body is not tied to the source, the excess holes generated collect in the body, thereby raising the body potential and, thus, modifying the transistor characteristics. The resulting change in voltage lowers an effective threshold voltage relative to the drain-to-source voltage, and increases the drain current.

[0066] One way to mitigate the breakdown voltage issue is to reduce a width of the switch 600 (e.g., to less than twenty micrometers (20 µm)) to reduce the voltage drop of the charge and/or to use an SOI substrate transistor to achieve a desirable performance. However, the reduction in width results in an increase in area and diminished design flexibility because several narrow switches (e.g., 10 µm to 15 µm) or very long switches may be specified to achieve sufficient switch performance. This follows because the voltage drop across narrow switches is small enough that it may be negligible, and thus device performance can be maintained (an illustrative estimate of this width dependence is sketched below).

[0067] To mitigate these issues, aspects of the present disclosure are directed to a radio frequency integrated switch formed in a bulk semiconductor substrate. In contrast to the SOI substrate, hole migration is not a problem with transistors formed in a bulk silicon substrate, because the holes are attracted towards the substrate and away from the body.
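The width dependence described above can be illustrated with a crude lumped model: treating the body as a resistive path from the point of charge generation to the lateral body tie, the body potential rise grows with the switch width. The sheet resistance, body current, and geometry below are editorial assumptions, not values from the disclosure.

```python
# Illustrative only: why lateral body ties favor narrow switches.
R_SHEET = 5e3         # assumed body sheet resistance, ohms per square
I_BODY = 1e-6         # assumed generated body current, A
PATH_WIDTH_UM = 1.0   # assumed transverse width of the extraction path, um

def body_potential_rise(switch_width_um):
    """V = I * R, with R ~ R_SHEET * (lateral path length / path width)."""
    squares = switch_width_um / PATH_WIDTH_UM
    return I_BODY * R_SHEET * squares

for w in (1, 10, 20, 100):
    print(f"switch width {w:4d} um -> body potential rise ~ "
          f"{body_potential_rise(w):6.3f} V")
```

Under these assumptions, the potential rise scales linearly with the switch width, whereas a backside body tie shortens the extraction path to roughly half a micrometer to one micrometer of vertical travel, independent of the width.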
Aspects of the present disclosure are illustrated in the following figures.

[0068] FIGURE 7 is a cross-sectional view of a radio frequency integrated circuit (RFIC) switch 700 having a bulk semiconductor wafer including a body tie on a backside of the bulk semiconductor wafer 420, according to aspects of the present disclosure. For illustrative purposes, some of the labelling and numbering of the devices and features of FIGURE 7 are similar to those of FIGURE 4. A layer transfer process with a bulk wafer (e.g., illustrated in FIGURES 5A-5G) may achieve CMOS products such as CMOS transistors (e.g., the first active device 410), bipolar devices (e.g., vertical bipolar devices), CMOS switches, and passive components. The CMOS transistors may include, but are not limited to, planar field effect transistors (FETs), fin-type FETs (FinFETs), nanowire FETs, or other like FETs. The passive components may include resistors (e.g., vertical resistors) and diodes (e.g., vertical diodes).

[0069] In one aspect, a contact layer is deposited on the backside of the bulk semiconductor wafer. For example, as shown in FIGURE 7, a backside contact layer 460 is deposited on the backside of the bulk semiconductor wafer 420 using a backside silicide process. The backside contact layer 460 may be used for the body tie or body contact for the radio frequency integrated circuit switch 700. For example, the body tie or backside contact layer 460 is at least one to two micrometers (1-2 µm) away from a gate (G) where the charges are generated. As a result, a switch 700 having a wider width (e.g., greater than twenty micrometers (20 µm)) relative to the switch 600 may be achieved. The switch can also be as narrow as desirable.

[0070] The radio frequency integrated circuit switch 700 includes MOL interconnects (M0) as well as BEOL interconnects (M1). For example, the MOL interconnects (M0) and the BEOL interconnects (M1) may be coupled to the gate or the source/drain regions of the active devices (e.g., 410, 412, and 414). The MOL interconnects may include trench interconnects (e.g., trench interconnect 450) and vias for coupling active devices formed during front-end-of-line processing to metallization layers formed during back-end-of-line processing. The metallization configuration may also differ from the configuration shown in FIGURE 7.

[0071] In this example, a zero-layer interconnect M0 is coupled to the drain D of the first active device 410. Another zero-layer interconnect M0 is coupled to the source S of the first active device 410. In addition, a first BEOL interconnect M1 is coupled to the zero-layer interconnect M0. Similarly, another BEOL interconnect M1 is coupled to the other zero-layer interconnect M0. The first BEOL interconnect M1 and the other BEOL interconnect M1 are formed as part of a front-side BEOL process.

[0072] In this configuration, a backside M1 metallization layer (e.g., a second-side metallization layer) is coupled to the first BEOL interconnect M1 through the trench interconnect 450. The trench interconnect 450 extends through a first deep trench isolation (DTI) region 430, from the front-side to the backside of the bulk semiconductor wafer 420. The backside metallization M1 may also be coupled to the backside contact layer 460.

[0073] When the radio frequency integrated circuit switch 700 is in an on-state (or through state), the body of the switch can be biased similar to a gate (G) of the radio frequency integrated circuit switch 700 to increase a drive current. When the radio frequency integrated circuit switch 700 is in an off-state (or shunt state), the body of the switch can be biased negatively, similar to the gate. This enables the body to collect additional carriers. For example, minority charge carriers (e.g., holes) are created in the body when the radio frequency integrated circuit switch 700 is in an off-state. It is desirable to remove the minority charge carriers from the device area so that the body does not bias independently from the gate by collecting the minority charge carriers (e.g., positive charges) that may turn on the radio frequency integrated circuit switch 700, thereby causing a breakdown in the device. To mitigate this issue, the body is biased using the backside contact layer 460 with a same bias as the gate. Thus, the minority charge carriers can be collected at the backside contact layer 460 when the radio frequency integrated circuit switch 700 is in the off-state to prevent the body from being at a free potential. For example, the backside contact layer 460 is negatively charged to attract the positive minority charge carriers. As a result, the body is maintained at or close to the gate voltage even when the radio frequency integrated circuit switch 700 is in an off-state.

[0074] FIGURE 8 is a cross-sectional view of a radio frequency integrated circuit (RFIC) switch 800 having a bulk semiconductor wafer including a body tie on a backside of the bulk semiconductor wafer 420, according to aspects of the present disclosure. For illustrative purposes, some of the labelling and numbering of the devices and features of FIGURE 8 are similar to those of FIGURE 4 and FIGURE 7. For example, FIGURE 8, which is similar to FIGURE 7, further illustrates a P+ body 820 for the radio frequency integrated circuit switch 800. The P+ body 820 can be achieved in multiple ways. For example, the wafer can be an epitaxial wafer with a P+ substrate, which is still less expensive than an SOI wafer. In other aspects, the P+ body 820 of the wafer can be achieved by doping.

[0075] To achieve a desirable channel control, a thickness of the P+ body 820 may be within a defined range. For example, the P+ body 820 may be as close as forty to one hundred nanometers (40-100 nm) from the gate or channel. The P+ body 820 may also be formed by ion implantation early in the bulk layer transfer process. The active region of the switch 800 can be implanted with a heavy implant of ions (N+) to the source and drain regions. In some aspects, the portion of the active region that lies below the gate G serves as a body of the switch 800, and the portions that are not below the gate G serve as the source S and drain D. In this configuration, the source S and the drain D are implanted with N+ ions, as indicated at portions 823. The drain D may be a low doped drain. In certain approaches, the body extends below the source S and drain D regions. For example, if the switch 800 is implemented on a bulk substrate, the body includes the region of the substrate below the source S and drain D that was not altered by the heavy implant. In this case, the P+ region may be below the substrate to form the backside region of the substrate or wafer.

[0076] FIGURE 9 illustrates an exemplary schematic of a radio frequency integrated circuit switch 900. The radio frequency integrated circuit switch 900 may include a transistor 902 and a diode 904. A gate 906 of the transistor 902 may be tied to a body 908 of the switch 900 via the diode 904. This improves the performance of the radio frequency integrated circuit switch 900 because, in an on state, the diode 904 becomes a capacitor that is charged based on a voltage difference between the gate 906 and the body 908 of the transistor 902. In the off state, when charges are collected in the body, the diode 904 contributes to the extraction of those charges. For example, the diode 904 is forward biased to enable extraction of the charges. In SOI technology, the diode 904 is located beside the switch as an additional device. This configuration, however, increases the area used for the switch implementation. Accordingly, it is desirable to implement a diode for the switch while reducing the area occupied for the implementation of the switch. A desirable implementation is illustrated in FIGURE 10.

[0077] FIGURE 10 is a cross-sectional view of a radio frequency integrated circuit (RFIC) switch 1000 having a bulk semiconductor wafer including a body tie on a backside of the bulk semiconductor wafer, according to aspects of the present disclosure. For illustrative purposes, some of the labelling and numbering of the devices and features of FIGURE 10 are similar to those of FIGURE 8 and FIGURE 9. In this aspect, the diode 904 is incorporated into the radio frequency integrated circuit switch 1000 and may be a P-N junction diode.

[0078] For example, the body of the radio frequency integrated circuit switch 1000 includes the P+ region or body 820 adjacent to an N+ body 1021 to form a P-N junction diode. A P-N junction diode is a two-terminal or two-electrode semiconductor device, which allows electric current in only one direction while blocking electric current in the opposite or reverse direction. When the N+ body is joined with the P+ body, a P-N junction is formed. The N+ body 1021 may be coupled between the P+ body 820 and the backside contact layer 460 to achieve the diode within the radio frequency integrated circuit switch 1000. Thus, there are no additional interconnects or wiring for the diode 904. The radio frequency integrated circuit switch 1000 may be configured in accordance with a FinFET structure or tri-gate structure.

[0079] FIGURE 11 is a cross-sectional view of a radio frequency integrated circuit (RFIC) switch 1100 having a bulk semiconductor wafer including a body tie on a backside of the bulk semiconductor wafer, according to aspects of the present disclosure. For illustrative purposes, some of the labelling and numbering of the devices and features of FIGURE 11 are similar to those of FIGURE 8, FIGURE 9, and FIGURE 10. In addition to the P-N junction diode, a resistor is also incorporated into the radio frequency integrated circuit switch 1100. For example, at least a portion of the bulk semiconductor wafer 420 can be formed into a resistor by controlling a distance from the channel C to the P+ body 820 and also controlling a doping concentration of the N+ body 1021 on at least a portion of the bulk semiconductor wafer 420.

[0080] In one aspect of the disclosure, the portion of the bulk semiconductor wafer 420 in which the resistor is formed is a section of the body of the radio frequency integrated circuit switch 1100. For example, the body of the radio frequency integrated circuit switch 1100 may include a first section, which is the first P-type region (P+), and a second section, which may be a second P-type region (P-) (a lumped sketch of the resulting diode-resistor network follows below).
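As an editorial aside, the gate-body tie of FIGURE 9, augmented with the internal body resistor described here (shown as resistor 1205 in FIGURE 12 below), can be captured by a lumped diode-plus-resistor model. All device parameters in this sketch are hypothetical assumptions, not values from the disclosure.

```python
# Illustrative only: lumped model of a gate-body tie through a diode and a
# series body resistor. Parameters are hypothetical.
import math

I_S = 1e-14            # assumed diode saturation current, A
N_VT = 1.5 * 0.02585   # assumed ideality factor times thermal voltage, V
R_BODY = 50e3          # assumed internal body resistance, ohms

def diode_voltage(current_a):
    """Invert the Shockley equation I = I_S * (exp(V/(n*VT)) - 1)."""
    return N_VT * math.log(current_a / I_S + 1.0)

def body_voltage(v_gate, i_generated):
    """Body potential when the generated body current is extracted toward
    the gate through the forward-biased diode and the body resistor."""
    return v_gate + diode_voltage(i_generated) + i_generated * R_BODY

# Off-state example: gate held at -2.5 V; the diode clamps the body about
# one forward drop above the gate, instead of letting it float freely.
print(f"body ~ {body_voltage(-2.5, 1e-7):.2f} V for a gate bias of -2.5 V")
```

Under these assumptions, the body is clamped roughly one diode drop above the gate bias, mirroring the qualitative off-state charge extraction behavior described above.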
The second P-type region may be between the gate G of an active device or transistor (e.g., the third active device 414) and the first P-type region to form an internal body resistor.[0081] FIGURE 12 illustrates an exemplary schematic of a radio frequency integrated circuit switch 1200 including a resistor 1205, according to aspects of the present disclosure. For illustrative purposes, some of the labelling and numbering of the devices and features of FIGURE 12 are similar to those of FIGURE 9. The resistor 1205 may be incorporated into the radio frequency integrated circuit switch 1200. The resistor 1205 may be coupled between the body 908 of the switch (e.g., a body of the transistor) and the diode 904. Optionally, an external gate resistor may be included at portion 1207 and/or a common gate/body resistor may be included at portion 1209 to implement the radio frequency integrated circuit switch 1200.[0082] FIGURE 13 illustrates an exemplary layout of a switch 1300 according to aspects of the present disclosure. For illustrative purposes, some of the labelling and numbering of the devices and features of FIGURE 13 are similar to those of FIGURE 6. The switch 1300 includes the body tie on the backside of the bulk semiconductor wafer. Because the body tie is on the backside of the bulk semiconductor wafer or body 1326 of the switch 1300, the body width may not extend beyond a gate structure 1302 to accommodate the body tie. In FIGURE 6, for example, the width of the body 626 extends (see extended widths 624 and 628) beyond the width of the H-gate structure 602 to accommodate the body tie 614.[0083] Moreover, the gate structure 1302 of FIGURE 13 may not include the second parallel element 608, which is used to separate or isolate the body tie 614 from the active device region (e.g., N+ region). This follows because all of the charges generated in FIGURE 13 are directed toward (e.g., downward) the backside contact layer 460, while the charges generated in the switch 600 traverse in different directions (e.g., sideways) toward the first parallel element 606 and the second parallel element 608. Thus, instead of traversing five micrometers or ten micrometers sideways as in FIGURE 6, the generated charges are extracted by being directed downwardly about half a micrometer or one micrometer. For example, charge generated at the gates after ion implantation where the source S and drain D of the switch 1300 are doped with an N-type (N+) dopant are channeled downwardly through the P+ body 820.[0084] The placement of the body tie at the backside of the bulk semiconductor wafer reduces the number of interconnections and frees up space. For example, the width of the switch 1300 can be increased to a desirable size (e.g., up to one hundred micrometers) with improved switch performance. For example, parasitic capacitance is reduced and the total area for the switch is reduced because a single switch can achieve performance of multiple switches.[0085] FIGURE 14 is a process flow diagram illustrating a method 1400 of a bulk layer transfer process with second-side (e.g., backside) silicidation for constructing a radio frequency integrated circuit (RFIC) switch, according to an aspect of the present disclosure. In block 1402, a transistor having a gate is fabricated on a first-side of a semiconductor die (e.g., a bulk semiconductor substrate or wafer). For example, as shown in FIGURE 4, a first active device 410 is fabricated on a first-side of a bulk semiconductor wafer 420. 
[0085] FIGURE 14 is a process flow diagram illustrating a method 1400 of a bulk layer transfer process with second-side (e.g., backside) silicidation for constructing a radio frequency integrated circuit (RFIC) switch, according to an aspect of the present disclosure. In block 1402, a transistor having a gate is fabricated on a first-side of a semiconductor die (e.g., a bulk semiconductor substrate or wafer). For example, as shown in FIGURE 4, a first active device 410 is fabricated on a first-side of a bulk semiconductor wafer 420. In block 1404, a first deep trench isolation region extending from the front-side to a backside opposite the front-side of the semiconductor die is formed. For example, as shown in FIGURE 4, the first DTI region 430 extends from the first-side to the second-side of the bulk semiconductor wafer 420. [0086] In block 1406, a body contact layer is deposited on the backside of the semiconductor die. The body contact layer is coupled to a backside of a body of the transistor. The body includes a first P-type region. For example, as shown in FIGURE 5E, the backside contact layer 460 is deposited on the backside of the bulk semiconductor wafer 420 using a backside silicide process. Furthermore, FIGURE 8 illustrates the backside contact layer 460 coupled to a backside of the P+ body 820. [0087] According to a further aspect of the present disclosure, a radio frequency integrated circuit switch, including a bulk semiconductor wafer having an active device on a first-side and a deep trench isolation region extending from the first-side to a second-side opposite the first-side of the bulk semiconductor wafer, is described. The radio frequency integrated circuit switch includes means for collecting minority charge carriers channeled from the body of the active device (e.g., transistor) when the radio frequency integrated circuit switch is in an off-state. The minority charge carrier collecting means may be the backside contact layer 460, shown in FIGURES 5E, 5F, 7, 8, 10, and 11. In another aspect, the aforementioned means may be any module or any apparatus configured to perform the functions recited by the aforementioned means. [0088] FIGURE 15 is a block diagram showing an exemplary wireless communication system 1500 in which an aspect of the disclosure may be advantageously employed. For purposes of illustration, FIGURE 15 shows three remote units 1520, 1530, and 1550 and two base stations 1540. It will be recognized that wireless communication systems may have many more remote units and base stations. Remote units 1520, 1530, and 1550 include IC devices 1525A, 1525C, and 1525B that include the disclosed RFIC switch. It will be recognized that other devices may also include the disclosed RFIC switch, such as the base stations, switching devices, and network equipment. FIGURE 15 shows forward link signals 1580 from the base stations 1540 to the remote units 1520, 1530, and 1550 and reverse link signals 1590 from the remote units 1520, 1530, and 1550 to the base stations 1540. [0089] In FIGURE 15, remote unit 1520 is shown as a mobile telephone, remote unit 1530 is shown as a portable computer, and remote unit 1550 is shown as a fixed-location remote unit in a wireless local loop system. For example, a remote unit may be a mobile phone, a hand-held personal communication system (PCS) unit, a portable data unit such as a personal digital assistant (PDA), a GPS-enabled device, a navigation device, a set top box, a music player, a video player, an entertainment unit, a fixed-location data unit such as meter reading equipment, or another communications device that stores or retrieves data or computer instructions, or a combination thereof. Although FIGURE 15 illustrates remote units according to the aspects of the disclosure, the disclosure is not limited to these exemplary illustrated units.
Aspects of the disclosure may be suitably employed in many devices that include the disclosed RFIC switch. [0090] FIGURE 16 is a block diagram illustrating a design workstation used for circuit, layout, and logic design of a semiconductor component, such as the RF devices disclosed above. A design workstation 1600 includes a hard disk 1601 containing operating system software, support files, and design software such as Cadence or OrCAD. The design workstation 1600 also includes a display 1602 to facilitate a circuit design 1610 or an RFIC switch design 1612. A storage medium 1604 is provided for tangibly storing the circuit design 1610 or the RFIC switch design 1612. The circuit design 1610 or the RFIC switch design 1612 may be stored on the storage medium 1604 in a file format such as GDSII or GERBER. The storage medium 1604 may be a CD-ROM, DVD, hard disk, flash memory, or other appropriate device. Furthermore, the design workstation 1600 includes a drive apparatus 1603 for accepting input from or writing output to the storage medium 1604. [0091] Data recorded on the storage medium 1604 may specify logic circuit configurations, pattern data for photolithography masks, or mask pattern data for serial write tools such as electron beam lithography. The data may further include logic verification data such as timing diagrams or net circuits associated with logic simulations. Providing data on the storage medium 1604 facilitates the circuit design 1610 or the RFIC switch design 1612 by decreasing the number of processes for designing semiconductor wafers. [0092] For a firmware and/or software implementation, the methodologies may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. A machine-readable medium tangibly embodying instructions may be used in implementing the methodologies described herein. For example, software codes may be stored in a memory and executed by a processor unit. Memory may be implemented within the processor unit or external to the processor unit. As used herein, the term "memory" refers to types of long term, short term, volatile, nonvolatile, or other memory and is not to be limited to a particular type of memory or number of memories, or type of media upon which memory is stored. [0093] If implemented in firmware and/or software, the functions may be stored as one or more instructions or code on a computer-readable medium. Examples include computer-readable media encoded with a data structure and computer-readable media encoded with a computer program. Computer-readable media includes physical computer storage media. A storage medium may be any available medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers.
Combinations of the above should also be included within the scope of computer-readable media. [0094] In addition to storage on a computer-readable medium, instructions and/or data may be provided as signals on transmission media included in a communication apparatus. For example, a communication apparatus may include a transceiver having signals indicative of instructions and data. The instructions and data are configured to cause one or more processors to implement the functions outlined in the claims. [0095] Although the present disclosure and its advantages have been described in detail, it should be understood that various changes, substitutions, and alterations can be made herein without departing from the technology of the disclosure as defined by the appended claims. For example, relational terms, such as "above" and "below," are used with respect to a substrate or electronic device. Of course, if the substrate or electronic device is inverted, above becomes below, and vice versa. Additionally, if oriented sideways, above and below may refer to sides of a substrate or electronic device. Moreover, the scope of the present application is not intended to be limited to the particular configurations of the process, machine, manufacture, and composition of matter, means, methods, and steps described in the specification. As one of ordinary skill in the art will readily appreciate from the disclosure, processes, machines, manufacture, compositions of matter, means, methods, or steps, presently existing or later to be developed, that perform substantially the same function or achieve substantially the same result as the corresponding configurations described herein may be utilized according to the present disclosure. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps. |
Apparatuses and methods including memory commands for semiconductor memories are described. An example method includes receiving a data clock signal responsive to receiving a timing command, performing an access operation responsive to receiving an access command associated with the timing command, and providing an access data clock signal based on the data clock signal. The access command may be separated in time from the associated timing command by at least one clock cycle of a system clock signal. In some examples, the access command may precede the associated timing command or may follow the associated timing command. In some examples, the access command may immediately follow or precede the associated timing command. |
1. An apparatus, comprising: a data clock path including an input buffer configured to receive a data clock signal when enabled, the data clock path configured to provide an internal clock signal based on the data clock signal; an input/output circuit configured to receive the internal clock signal from the data clock path and to provide an access data clock signal based on the internal clock signal; a command input circuit configured to receive an access command and timing commands associated with the access command, and further configured to provide an internal access command in response to receiving the access command, to provide a first internal timing command in response to receiving a first timing command of the timing commands, and to provide a second internal timing command in response to receiving a second timing command of the timing commands; and a command decoder coupled to the command input circuit and configured to decode the internal access command and provide internal access control signals to perform a corresponding access operation, the command decoder further configured to decode the first internal timing command and the second internal timing command and to provide internal timing control signals to enable the input buffer of the data clock path and to control the input/output circuit to provide the access data clock signal. 2. The apparatus of claim 1, wherein each timing command is associated with a corresponding access command. 3. The apparatus of claim 1, wherein the first timing command and the second timing command each include an operation code. 4. The apparatus of claim 3, wherein the operation code includes a first operation code for a clock synchronization mode and a second operation code for an access data clock mode. 5. The apparatus of claim 1, wherein the access command includes a read command. 6. The apparatus of claim 1, wherein the second timing command is restricted to immediately precede the associated access command. 7. The apparatus of claim 1, wherein the command decoder is configured to provide the internal timing control signals to enable the input buffer of the data clock path in response to the first timing command following the associated access command. 8. The apparatus of claim 1, wherein the data clock path includes a clock divider circuit configured to provide a multi-phase clock signal based on the data clock signal.
9. The apparatus of claim 8, wherein the input/output circuit includes a clock circuit configured to provide an internal access data clock signal based on the multi-phase clock signal. 10. The apparatus of claim 1, further comprising a clock path configured to receive a system clock signal and provide an internal system clock signal. 11. An apparatus, comprising: a command bus; an address bus; a data bus; a clock bus; a controller configured to provide access commands and timing commands to the command bus, addresses to the address bus, and a data clock signal to the clock bus; and a memory system coupled to the controller through the command bus, the address bus, the data bus, and the clock bus, the memory system configured to provide data to the data bus with timing based on the timing of a corresponding access command, and further configured to provide an access data clock signal with timing based on the timing of a timing command, wherein the timing command associated with the corresponding access command is separated in time from the corresponding access command by at least one clock cycle of a system clock signal. 12. The apparatus of claim 11, wherein the memory system includes a plurality of memories, each memory of the plurality of memories coupled to the command bus, the address bus, the data bus, and the clock bus. 13. The apparatus of claim 12, wherein the plurality of memories of the memory system are organized into a memory hierarchy. 14. The apparatus of claim 11, further comprising a plurality of selection signal lines, wherein each selection signal line of the plurality of selection signal lines is coupled to a corresponding one of the plurality of memories of the memory system. 15. The apparatus of claim 11, wherein the controller is configured to: provide a first timing command to a first memory of the plurality of memories to enable an input buffer of the first memory; provide a second timing command to a second memory of the plurality of memories to enable an input buffer of the second memory; provide a first access command associated with the first timing command to the first memory; provide a valid data clock signal to the first memory and the second memory; and provide a second access command associated with the second timing command to the second memory, wherein the second timing command and the second access command are separated in time by at least one clock cycle of the system clock signal. 16. The apparatus of claim 15, wherein: the first memory is configured to: generate a first access data clock signal at the first memory, wherein the first access data clock signal is based on the valid data clock signal; provide the first access data clock signal; and provide first data from the first memory in response to the first access command; and the second memory is configured to: generate a second access data clock signal at the second memory, wherein the second access data clock signal is based on the valid data clock signal; provide the second access data clock signal; and provide second data from the second memory in response to the second access command. 17. A method, comprising: receiving a data clock signal in response to receiving a timing command; performing an access operation in response to receiving an access command associated with the timing command, wherein the access command is separated in time from the associated timing command by at least one clock cycle of a system clock signal; and providing an access data clock signal based on the data clock signal.
18. The method of claim 17, wherein the timing command is received before the associated access command. 19. The method of claim 17, wherein the timing command is received after the associated access command. 20. The method of claim 17, further comprising enabling an input buffer to receive the data clock signal in response to the timing command. 21. The method of claim 17, further comprising providing read data in response to the access command, wherein providing the read data is synchronized with the access data clock signal. 22. The method of claim 21, wherein the access data clock signal is provided before the read data is provided. 23. The method of claim 17, further comprising performing a clock synchronization operation between the system clock signal and the data clock signal in response to the timing command. 24. A method, comprising: receiving a system clock signal; enabling an input buffer of a first memory in response to receiving a first timing command; enabling an input buffer of a second memory in response to receiving a second timing command; receiving, at the first memory, a first access command associated with the first timing command; receiving a valid data clock signal at the first memory and the second memory; receiving, at the second memory, a second access command associated with the second timing command, wherein the second timing command and the second access command are separated in time by at least one clock cycle of the system clock signal; generating a first access data clock signal at the first memory, wherein the first access data clock signal is based on the valid data clock signal; providing the first access data clock signal; providing first data from the first memory in response to the first access command; generating a second access data clock signal at the second memory, wherein the second access data clock signal is based on the valid data clock signal; providing the second access data clock signal; and providing second data from the second memory in response to the second access command. 25. The method of claim 24, wherein the first timing command and the second timing command are a same type of timing command. 26. The method of claim 24, wherein the first timing command and the second timing command are two different types of timing commands. 27. The method of claim 24, further comprising: receiving a first valid selection signal when receiving the first timing command and the first access command; and receiving a second valid selection signal when receiving the second timing command and the second access command. 28. The method of claim 24, wherein the second timing command is received at the second memory before the first timing command is received at the first memory. 29. The method of claim 28, wherein the first timing command immediately precedes the first access command. 30. The method of claim 28, further comprising receiving a third timing command associated with the second access command, wherein the timing of providing the second access data clock signal is based on the third timing command. 31. The method of claim 30, wherein the third timing command is received after the second access command. 32. The method of claim 30, wherein the third timing command is separated in time by at least one clock cycle. 33. A method, comprising: providing a timing command to a memory; providing an access command to the memory, wherein the access command is associated with the timing command;
waiting at least one clock cycle of a system clock signal between providing the timing command to the memory and providing the access command to the memory; providing a data clock signal at a time relative to when the timing command is provided; receiving an access data clock signal based on the data clock signal; and receiving data synchronized with the access data clock signal. 34. The method of claim 33, wherein the timing command is provided before the access command. 35. The method of claim 33, wherein the timing command is provided after the access command. 36. The method of claim 33, wherein the access command is a first access command and the timing command is a first timing command, the method further comprising: before providing the first access command, providing a second timing command to a second memory; and providing a second access command to the second memory, wherein the second access command is associated with the second timing command. 37. The method of claim 36, further comprising providing a third timing command associated with the second timing command, wherein the first timing command and the third timing command include operation codes for enabling a fast synchronization mode and enabling an early access data clock signal mode. |
Apparatuses and Methods Including Memory Commands for Semiconductor Memories

BACKGROUND

Semiconductor memories are used in many electronic systems to store data that can be retrieved at a later time. With increasing demand for electronic systems that are faster, have greater computing power, and consume less power, semiconductor memories that can be accessed faster, store more data, and use less power have been continuously developed to meet changing needs. Part of this development includes establishing new specifications for controlling and accessing semiconductor memories, where the generation-to-generation changes in the specifications aim to improve the performance of memories in electronic systems.

Generally, semiconductor memories are controlled by providing the memories with command signals, address signals, and clock signals. The various signals may be provided by, for example, a memory controller. The command signals may control the semiconductor memories to perform various memory operations, such as read operations to retrieve data from a memory and write operations to store data in the memory. Data may be provided between the controller and the memories with a known timing relative to receipt of an associated command by the memory. The known timing is typically defined by latency information, which may be defined in terms of numbers of clock cycles of system clock signals CK and CKF.

A newly developed memory may have a system clock signal used for timing command signals and address signals, for example, and may further have a data clock signal used for timing read data provided by the memory and write data provided to the memory. The memory may also provide a clock signal to the controller for timing the provision of data to the controller.

The timing of the various memory commands provided by the controller and received by the memory may be used to control the performance of the memory, including the timing of providing clock signals, providing data, and so on. Timing constraints of the various memory commands relative to one another can lead to suboptimal memory performance. Therefore, memory commands with flexible timing are desirable to provide the desired memory performance.

SUMMARY

The present disclosure describes example devices. An example device may include a data clock path including an input buffer. The input buffer may be configured to receive a data clock signal when enabled, and the data clock path may be configured to provide an internal clock signal based on the data clock signal. The example device may further include: an input/output circuit configured to receive the internal clock signal from the data clock path and to provide an access data clock signal based on the internal clock signal; and a command input circuit configured to receive an access command and timing commands associated with the access command, and further configured to provide an internal access command in response to receiving the access command, to provide a first internal timing command in response to receiving a first timing command of the timing commands, and to provide a second internal timing command in response to receiving a second timing command of the timing commands.
The example device may further include a command decoder coupled to the command input circuit and configured to decode the internal access command and provide internal access control signals to perform a corresponding access operation, and further configured to decode the first internal timing command and the second internal timing command and provide internal timing control signals to enable the input buffer of the data clock path and control the input/output circuit to provide the access data clock signal. In some examples, each timing command may be associated with a corresponding access command. In some examples, the first timing command and the second timing command each include an operation code. In some examples, the operation code includes a first operation code for a clock synchronization mode and a second operation code for an access data clock mode. In some examples, the access command includes a read command. In some examples, the second timing command is restricted to immediately precede the associated access command. In some examples, the command decoder is configured to provide the internal timing control signals to enable the input buffer of the data clock path in response to the first timing command following the associated access command. In some examples, the data clock path includes a clock divider circuit configured to provide a multi-phase clock signal based on the data clock signal. In some examples, the input/output circuit includes a clock circuit configured to provide an internal access data clock signal based on the multi-phase clock signal. In some examples, the example device may further include a clock path configured to receive a system clock signal and provide an internal system clock signal.

Another example device may include: a command bus; an address bus; a data bus; a clock bus; a controller configured to provide access commands and timing commands to the command bus, addresses to the address bus, and a data clock signal to the clock bus; and a memory system coupled to the controller through the command bus, the address bus, the data bus, and the clock bus. The memory system may be configured to provide data to the data bus with timing based on the timing of a corresponding access command, and further configured to provide an access data clock signal with timing based on the timing of a timing command. The timing command associated with the corresponding access command is separated in time from the corresponding access command by at least one clock cycle of the system clock signal. In some examples, the memory system includes a plurality of memories, each coupled to the command bus, the address bus, the data bus, and the clock bus. In some examples, the plurality of memories of the memory system are organized into a memory hierarchy. In some examples, the example device may further include a plurality of selection signal lines. Each selection signal line of the plurality of selection signal lines may be coupled to a corresponding one of the plurality of memories of the memory system.
In some examples, the controller is configured to: provide a first timing command to a first memory of the plurality of memories to enable an input buffer of the first memory; provide a second timing command to a second memory of the plurality of memories to enable an input buffer of the second memory; provide a first access command associated with the first timing command to the first memory; provide a valid data clock signal to the first memory and the second memory; and provide a second access command associated with the second timing command to the second memory. The second timing command and the second access command may be separated in time by at least one clock cycle of the system clock signal. In some examples, the first memory is configured to: generate a first access data clock signal at the first memory based on the valid data clock signal; provide the first access data clock signal; and provide first data from the first memory in response to the first access command. In some examples, the second memory is configured to: generate a second access data clock signal at the second memory based on the valid data clock signal; provide the second access data clock signal; and provide second data from the second memory in response to the second access command.

The present disclosure also describes example methods. An example method includes: receiving a data clock signal in response to receiving a timing command; performing an access operation in response to receiving an access command associated with the timing command, where the access command is separated in time from the associated timing command by at least one clock cycle of a system clock signal; and providing an access data clock signal based on the data clock signal. In some examples, the timing command is received before the associated access command. In some examples, the timing command is received after the associated access command. In some examples, the example method may further include enabling an input buffer to receive the data clock signal in response to the timing command. In some examples, the method may further include providing read data in response to the access command. Provision of the read data may be synchronized with the access data clock signal. In some examples, the access data clock signal is provided before the read data is provided. In some examples, the method may further include performing a clock synchronization operation between the system clock signal and the data clock signal in response to the timing command.

Another example method may include: receiving a system clock signal; enabling an input buffer of a first memory in response to receiving a first timing command; enabling an input buffer of a second memory in response to receiving a second timing command; receiving, at the first memory, a first access command associated with the first timing command; receiving a valid data clock signal at the first memory and the second memory; and receiving, at the second memory, a second access command associated with the second timing command. The second timing command and the second access command may be separated in time by at least one clock cycle of the system clock signal. The example method may further include generating a first access data clock signal at the first memory. The first access data clock signal may be based on the valid data clock signal.
The example method may further include: providing the first access data clock signal; providing first data from the first memory in response to the first access command; and generating a second access data clock signal at the second memory. The second access data clock signal may be based on the valid data clock signal. The example method may further include: providing the second access data clock signal; and providing second data from the second memory in response to the second access command. In some examples, the first timing command and the second timing command are a same type of timing command. In some examples, the first timing command and the second timing command are two different types of timing commands. In some examples, the method may further include: receiving a first valid selection signal when receiving the first timing command and the first access command; and receiving a second valid selection signal when receiving the second timing command and the second access command. In some examples, the second timing command is received at the second memory before the first timing command is received at the first memory. In some examples, the first timing command immediately precedes the first access command. In some examples, the method may further include receiving a third timing command associated with the second access command. The timing of providing the second access data clock signal may be based on the third timing command. In some examples, the third timing command is received after the second access command. In some examples, the third timing command is separated in time by at least one clock cycle.

Another example method may include: providing a timing command to a memory; providing an access command associated with the timing command to the memory; waiting at least one clock cycle of a system clock signal between providing the timing command to the memory and providing the access command to the memory; providing a data clock signal at a time relative to when the timing command is provided; receiving an access data clock signal based on the data clock signal; and receiving data synchronized with the access data clock signal. In some examples, the timing command is provided before the access command. In some examples, the timing command is provided after the access command. In some examples, the access command is a first access command and the timing command is a first timing command, and the example method further includes providing a second timing command to a second memory and providing a second access command to the second memory before providing the first access command. The second access command may be associated with the second timing command. In some examples, the example method further includes providing a third timing command associated with the second timing command. The first timing command and the third timing command may include operation codes for enabling a fast synchronization mode and enabling an early access data clock signal mode.

BRIEF DESCRIPTION OF THE DRAWINGS

Fig. 1 is a block diagram of a system according to an embodiment of the present invention. Fig. 2 is a block diagram of a device according to an embodiment of the present invention. Fig. 3 is a block diagram of a clock path and a data clock path according to an embodiment of the present invention. Fig. 4 is a timing diagram showing a first phase relationship and a second phase relationship between clock signals according to an embodiment of the present invention.
Fig. 5 is a block diagram of a portion of an IO circuit according to an embodiment of the present invention. Figs. 6A to 6D are timing diagrams of various signals during an access operation according to an embodiment of the present invention. Figs. 7A to 7D are timing diagrams of various signals during an access operation according to an embodiment of the present invention. Figs. 8 and 9 are timing diagrams showing various signals during access operations of two memory levels according to various embodiments of the present invention. Figs. 10A-1, 10A-2, 10B, and 10C are timing diagrams showing various signals during access operations of two memory levels according to various embodiments of the present invention. Figs. 11A-1, 11A-2, 11B-1, and 11B-2 are timing diagrams showing various signals during access operations of two memory levels according to various embodiments of the present invention.

DETAILED DESCRIPTION

Specific details are set forth below to provide a sufficient understanding of examples of the present invention. However, it will be clear to those skilled in the art that examples of the present invention may be practiced without these specific details. Moreover, the specific examples of the present invention described herein should not be construed as limiting the scope of the present invention to these specific examples. In other instances, well-known circuits, control signals, timing protocols, and software operations are not shown in detail so as not to unnecessarily obscure the present invention. In addition, terms such as "coupled" mean that two components may be directly or indirectly electrically coupled. "Indirect coupling" may imply that two components are coupled through one or more intermediate components.

Fig. 1 is a block diagram of a system 100 according to an embodiment of the present invention. The system 100 includes a controller 10 and a memory system 105. The memory system 105 includes memories 110(0) to 110(p) (e.g., "device 0" to "device p"), where p is a non-zero integer. The memories 110(0) to 110(p) are each coupled to a command bus, an address bus, a data bus, and a clock bus. In some embodiments of the present invention, the memories 110(0) to 110(p) are organized as a memory hierarchy. In such embodiments, the memories may be accessed according to the memory hierarchy. The controller 10 and the memory system 105 communicate over several buses. For example, the memory system 105 receives commands and addresses on the command bus 115 and the address bus 120, respectively, and data is provided between the controller 10 and the memory system 105 over the data bus 125. Various clock signals may be provided between the controller 10 and the memory system 105 over the clock bus 130. The clock bus 130 may include signal lines for the system clock signals CK and CKF received by the memory system 105, the data clock signals WCK and WCKF received by the memory system 105, and the access data clock signal RDQS provided by the memory system 105 to the controller 10. Each bus may include one or more signal lines on which signals are provided.

The CK and CKF signals provided by the controller 10 to the memory system 105 are used for timing the provision and reception of commands and addresses. The WCK and WCKF signals and the RDQS signal are used for timing the provision of data. The CK and CKF signals are complementary, and the WCK and WCKF signals are complementary.
Clock signals are complementary when a rising edge of a first clock signal occurs simultaneously with a falling edge of a second clock signal, and a rising edge of the second clock signal occurs simultaneously with a falling edge of the first clock signal. The WCK and WCKF signals provided by the controller 10 to the memory system 105 may be synchronized with the CK and CKF signals also provided by the controller 10 to the memory system 105. Additionally, the WCK and WCKF clock signals may have a higher clock frequency than the CK and CKF signals. For example, in some embodiments of the present invention, the WCK and WCKF signals have a clock frequency that is four times the clock frequency of the CK and CKF signals. The controller 10 may provide the WCK and WCKF signals to the memory system 105 continuously during access operations (e.g., with a "WCK always on" option enabled) to improve the timing performance of the access operations. However, continuously providing the WCK and WCKF signals increases the power consumption of the system. When power consumption is a concern, the controller 10 does not provide the WCK and WCKF signals continuously (e.g., the "WCK always on" option is disabled).

The controller 10 provides commands to the memory system 105 to perform memory operations. Non-limiting examples of memory commands include timing commands for controlling the timing of various operations, access commands for accessing the memory (such as read commands for performing read operations and write commands for performing write operations), mode register write and read commands for performing mode register write and read operations, as well as other commands and operations. The command signals provided by the controller 10 to the memory system 105 further include select signals (e.g., chip select CS signals CS0, CS1, CSp). While the commands, addresses, data, and clock signals are provided to all of the memories 110, the select signals provided on respective select signal lines are used to select which of the memories 110 will respond to the command and perform the corresponding operation. In some embodiments of the present invention, a respective select signal is provided to each memory 110 of the memory system 105. The controller 10 provides an active select signal to select the corresponding memory 110. While the respective select signal is active, the corresponding memory 110 is selected to receive the commands and addresses provided on the command bus 115 and the address bus 120.

In operation, when a read command and an associated address are provided by the controller 10 to the memory system 105, the memory 110 selected by the select signals receives the read command and associated address and performs a read operation to provide the controller 10 with read data from a memory location corresponding to the associated address. The read data is provided by the selected memory 110 to the controller 10 according to a timing relative to receipt of the read command. For example, the timing may be based on a read latency (RL) value that indicates the number of clock cycles of the CK and CKF signals after the read command (a clock cycle of the CK and CKF signals is referred to as tCK) when the selected memory 110 provides the read data to the controller 10. The RL value is programmed in the memories 110 by the controller 10. For example, the RL value may be programmed in a respective mode register of the memories 110. As is well known, the mode register included in each of the memories 110 may be programmed with information for setting various operating modes and/or selecting features for operation of the memory. One of the settings may be for the RL value.
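The read-latency convention described above can be illustrated with a short sketch. The clock frequency, command cycle, and RL value below are arbitrary illustrative assumptions, not values from this disclosure; only the relationships come from the text (read data follows the read command by RL tCK, and WCK runs at four times the CK rate).

```python
# Illustrative read-latency arithmetic (assumed example values, not values
# from this disclosure): read data follows the read command by RL clock
# cycles of the CK signal (tCK), and WCK runs at four times the CK rate.
CK_FREQ_MHZ = 200                      # assumed CK frequency
WCK_RATIO = 4                          # WCK frequency = 4 x CK frequency
RL = 6                                 # assumed programmed RL value, in tCK

tck_ns = 1_000.0 / CK_FREQ_MHZ         # one CK clock cycle in nanoseconds
read_cmd_cycle = 10                    # CK cycle at which READ is received
data_cycle = read_cmd_cycle + RL       # CK cycle at which read data appears

print(f"tCK = {tck_ns:.1f} ns; WCK period = {tck_ns / WCK_RATIO:.2f} ns")
print(f"READ at CK cycle {read_cmd_cycle} -> data at CK cycle {data_cycle}, "
      f"{RL * tck_ns:.0f} ns after the command")
```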
When the selected memory 110 is ready to provide the read data to the controller 10, the controller provides active WCK and WCKF signals to the memory system 105. The WCK and WCKF signals may be used by the selected memory 110 to generate an access data clock signal RDQS. A clock signal is active when it transitions periodically between a low clock level and a high clock level. Conversely, a clock signal is inactive when it maintains a constant clock level and does not transition periodically. The RDQS signal is provided by the memory 110 performing the read operation to the controller 10 for timing the provision of the read data to the controller 10.

The controller 10 may use the RDQS signal for receiving the read data. In some embodiments of the present invention, the controller 10 has two modes for using the RDQS signal to receive read data. In a first mode, the controller 10 may use the RDQS signal to control the timing of circuitry used to capture the read data from the selected memory 110. In a second mode, the controller 10 may recover clock timing from the RDQS signal and generate internal timing signals based on the recovered timing. The internal timing signals may then be used by the controller 10 to control the timing of the circuitry for capturing the read data from the selected memory 110.

The controller 10 provides information to the memory system 105 (e.g., in the form of a command) to indicate in which mode the controller 10 will use the RDQS signal. The memory system 105 provides the RDQS signal to the controller 10 with different timings depending on the mode indicated by the controller 10. For example, as will be described in more detail below, a first timing for the first mode may be used to provide the RDQS signal to the controller 10, and a second timing for the second mode may be used to provide the RDQS signal to the controller 10. The second timing is relatively earlier (e.g., faster) compared with the first timing. The earlier timing with which the memory system 105 provides the RDQS signal to the controller 10 may allow the controller 10 more time to recover the clock timing from the RDQS signal before the data is provided by the memory system 105, in order to meet the data timing determined by the read latency value RL.

In operation, when a write command and an associated address are provided by the controller 10 to the memory system 105, the memory 110 selected by the select signals receives the write command and associated address and performs a write operation to write data from the controller 10 to a memory location corresponding to the associated address. The write data is provided by the controller 10 to the selected memory 110 according to a timing relative to receipt of the write command. For example, the timing may be based on a write latency (WL) value that indicates the number of clock cycles of the CK and CKF signals after the write command when the controller 10 provides the write data to the selected memory 110. The WL value is programmed in the memories 110 by the controller 10, for example, in a respective mode register of the memories 110.
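The timing-command/access-command sequencing summarized earlier in this disclosure (a timing command that enables the memory's data clock input buffer, with the associated access command separated from it by at least one CK clock cycle) can be sketched as a simple command schedule. The command names, cycle numbers, and RL value below are illustrative assumptions; only the one-clock-cycle minimum separation and the RL relationship come from the text.

```python
# Sketch of the command sequencing described in this disclosure (cycle
# numbers, the RL value, and the event labels are illustrative assumptions).
MIN_SEPARATION_TCK = 1   # timing and access commands >= 1 tCK apart
RL_TCK = 6               # assumed read latency, in CK cycles (tCK)

timing_cmd_cycle = 0     # timing command: memory enables WCK input buffer
read_cmd_cycle = 2       # associated read (access) command

assert read_cmd_cycle - timing_cmd_cycle >= MIN_SEPARATION_TCK

events = [
    (timing_cmd_cycle, "timing command: enable WCK/WCKF input buffer"),
    (read_cmd_cycle, "read command: begin access operation"),
    (read_cmd_cycle + RL_TCK, "read data (timed by RDQS) provided"),
]
for cycle, event in sorted(events):
    print(f"CK cycle {cycle:>2}: {event}")
```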
When the selected memory 110 is ready to receive the write data from the controller 10, the controller provides active WCK and WCKF signals to the memory system 105. The WCK and WCKF signals may be used by the selected memory 110 to generate internal clock signals for timing the operation of circuits that receive the write data. The write data is provided by the controller 10 and received by the selected memory 110, which writes the data to memory corresponding to the memory address.

Fig. 2 is a block diagram of a device according to an embodiment of the present invention. The device may be a semiconductor device 200 and will be referred to as the semiconductor device 200. In some embodiments, the semiconductor device 200 may include, but is not limited to, a DRAM device, such as low power DDR (LPDDR) memory, integrated into a single semiconductor chip. In some embodiments of the present invention, the semiconductor device 200 may be included in the memory system 105 of Fig. 1. For example, each of the memories 110 may include the semiconductor device 200. The semiconductor device 200 includes a memory die. The die may be mounted on an external substrate (such as a memory module substrate, a motherboard, etc.). The semiconductor device 200 may further include a memory array 250. The memory array 250 includes a plurality of memory banks, each memory bank including a plurality of word lines WL, a plurality of bit lines BL, and a plurality of memory cells MC arranged at intersections of the plurality of word lines WL and the plurality of bit lines BL. The selection of a word line WL is performed by a row decoder 240, and the selection of a bit line BL is performed by a column decoder 245. Sense amplifiers (SAMP) are positioned for their corresponding bit lines BL and coupled to at least one respective local I/O line pair (LIOT/B), which is in turn coupled through a transfer gate (TG), acting as a switch, to at least one respective main I/O line pair (MIOT/B).

The semiconductor device 200 may employ a plurality of external terminals, including: command and address terminals coupled to the command bus and the address bus, respectively, to receive command signals COMMAND and address signals ADDRESS; clock terminals to receive the clock signals CK and CKF; data clock terminals to receive the data clock signals WCK and WCKF; data terminals DQ, RDQS, DBI, and DMI; power supply terminals VDD, VSS, VDDQ, and VSSQ; and a ZQ calibration terminal (ZQ).

The address signals and memory bank address signals may be supplied to the command and address terminals from outside. The address signals and the memory bank address signals supplied to the address terminals are transferred, via a command/address input circuit 205, to an address decoder 212. The address decoder 212 receives the address signals and supplies decoded row address signals to the row decoder 240 and decoded column address signals to the column decoder 245. The address decoder 212 also receives the memory bank address signals and supplies the memory bank address signals to the row decoder 240 and the column decoder 245.

The command signals COMMAND may be supplied to the command and address terminals from, for example, a memory controller.
The command signals COMMAND may be provided as internal command signals ICMD to a command decoder 215 via the command/address input circuit 205. The command decoder 215 includes circuits for decoding the internal command signals ICMD to generate various internal signals and commands for performing operations (for example, row command signals for selecting a word line and column command signals for selecting a bit line). Another example is providing internal signals to enable circuits for performing operations, such as control signals for enabling signal input buffers that receive clock signals. The internal commands also include output and input activation commands, such as a synchronization command CMDSYNC.

When a read command is issued and a row address and a column address are supplied with the read command, read data is read from memory cells in the memory array 250 designated by the row address and the column address. The read command is received by the command decoder 215, which provides internal commands to an input/output circuit 260 so that the read data is output to outside from the data terminals DQ, RDQS, DBI, and DMI via a read/write amplifier 255 and the input/output circuit 260 according to the RDQS clock signal. The read data is provided at a time defined by read latency information RL that may be programmed in the semiconductor device 200 (for example, in a mode register, not shown in Fig. 2). The read latency information RL may be defined in terms of clock cycles of the CK clock signal. For example, the read latency information RL may be the number of clock cycles of the CK signal after the read command is received by the semiconductor device 200 when the associated read data is provided.

When a write command is issued and a row address and a column address are supplied with the write command, write data is supplied to the data terminals DQ, DBI, and DMI according to the WCK and WCKF clock signals. The write command is received by the command decoder 215, which provides internal commands to the input/output circuit 260 so that the write data is received by data receivers in the input/output circuit 260 and supplied to the memory array 250 via the input/output circuit 260 and the read/write amplifier 255. The write data is written into the memory cells designated by the row address and the column address. The write data is provided to the data terminals at a time defined by write latency WL information. The write latency WL information may be programmed in the semiconductor device 200 (for example, in a mode register, not shown in Fig. 2). The write latency WL information may be defined in terms of clock cycles of the CK clock signal. For example, the write latency information WL may be the number of clock cycles of the CK signal after the write command is received by the semiconductor device 200 when the associated write data is provided.

Turning to an explanation of the external terminals included in the semiconductor device 200, external clock signals and complementary external clock signals are supplied to the clock terminals and the data clock terminals. The external clock signals CK, CKF, WCK, and WCKF may be supplied to a clock input circuit 220. Input buffers included in the clock input circuit 220 receive the external clock signals when enabled.
For example, an input buffer receives the CK and CKF signals when enabled by a CKE signal from the command decoder 215, and an input buffer receives the WCK and WCKF signals when enabled by a WCKIBEN signal from the command decoder 215. The clock input circuit 220 may receive the external clock signals to generate internal clock signals ICK, and IWCK and IWCKF. The internal clock signals ICK, IWCK, and IWCKF are supplied to an internal clock circuit 230.

The internal clock circuit 230 includes circuits that provide various phase- and frequency-controlled internal clock signals based on the received internal clock signals. For example, the internal clock circuit 230 may include a clock path (not shown in Fig. 2) that receives the ICK clock signal and supplies internal clock signals ICK and ICKD to the command decoder 215. The internal clock circuit 230 may further include a data clock path that receives the IWCK and IWCKF clock signals and provides multi-phase clock signals IWCKn based on the internal clock signals IWCK and IWCKF. As will be described in more detail below, the multi-phase clock signals IWCKn have phases relative to one another and have a phase relationship with the WCK and WCKF clock signals. The multi-phase clock signals IWCKn may also be provided to the input/output circuit 260 to control the output timing of read data and the input timing of write data. The input/output circuit 260 may include clock circuits and driver circuits for generating and providing the RDQS signal. The data clock path may also provide a delayed multi-phase clock signal IWCKD, which is a further delayed version of one of the multi-phase clock signals IWCKn.

The delayed multi-phase clock signal IWCKD and the synchronization command CMDSYNC are provided to a clock synchronization circuit 275. The clock synchronization circuit 275 provides an output signal SYNCINFO having a logic level indicative of the phase relationship between the multi-phase clock signals IWCKn and the WCK and WCKF clock signals.

Power supply potentials VDD and VSS are supplied to the power supply terminals. These power supply potentials VDD and VSS are supplied to an internal voltage generator circuit 270. The internal voltage generator circuit 270 generates various internal potentials VPP, VOD, VARY, VPERI, and the like, and a reference potential ZQVREF based on the power supply potentials VDD and VSS. The internal potential VPP is mainly used in the row decoder 240, the internal potentials VOD and VARY are mainly used in the sense amplifiers included in the memory array 250, and the internal potential VPERI is used in many other circuit blocks. The reference potential ZQVREF is used in a ZQ calibration circuit 265.

The power supply potential VDDQ is also supplied to the power supply terminals. The power supply potential VDDQ is supplied to the input/output circuit 260 together with the power supply potential VSS. In an embodiment of the present invention, the power supply potential VDDQ may be the same potential as the power supply potential VDD. In another embodiment of the present invention, the power supply potential VDDQ may be a different potential from the power supply potential VDD. However, the dedicated power supply potential VDDQ is used for the input/output circuit 260 so that power supply noise generated by the input/output circuit 260 does not propagate to the other circuit blocks.

The calibration terminal ZQ is connected to the ZQ calibration circuit 265.
The ZQ calibration circuit 265 performs a calibration operation with reference to an impedance RZQ and the reference potential ZQVREF when activated by a ZQ calibration command ZQ_com. An impedance code ZQCODE obtained by the calibration operation is supplied to the input/output circuit 260, and thereby the impedance of an output buffer (not shown) included in the input/output circuit 260 is specified.

Fig. 3 is a block diagram of a clock path 310 and a data clock path 330 according to an embodiment of the present invention. In some embodiments of the present invention, the clock path 310 and the data clock path 330 may be included in the semiconductor device 200 of Fig. 2. For example, the data clock path 330 may be included in the clock input circuit 220 and the internal clock circuit 230 of the semiconductor device 200 of Fig. 2. One or both of the clock path 310 and the data clock path 330 may be modified without departing from the scope of the present invention.

The clock path 310 may include an input buffer 312 that receives the complementary clock signals CK and CKF and provides an internal clock signal ICK. The input buffer 312 may be included in the clock input circuit 220 of Fig. 2. The internal clock signal ICK is based on the CK and CKF clock signals. A repeater circuit 314 receives the ICK clock signal and provides an ICK' clock signal to a delay circuit 316. The repeater circuit 314 drives the ICK' clock signal over a clock line from the input buffer 312 to the delay circuit 316. The delay circuit 316 delays the ICK' clock signal to provide a delayed ICK clock signal ICKD. The ICK' and ICKD signals may be used by a command path (not shown) for timing the decoding of commands and providing internal command signals to perform memory operations (e.g., read, write, etc.).

The data clock path 330 includes an input buffer 352. The input buffer 352 receives the complementary clock signals WCK and WCKF when enabled by an active enable signal WCKIBEN (e.g., an active high logic level), and provides complementary internal clock signals IWCK and IWCKF based on the WCK and WCKF clock signals. The input buffer 352 may be enabled, for example, by the command decoder in response to a memory command. In embodiments of the present invention, the IWCK and IWCKF clock signals have the same clock frequency as the WCK and WCKF clock signals, with the IWCK clock signal corresponding to the WCK clock signal and the IWCKF clock signal corresponding to the WCKF clock signal. The input buffer 352 may be included in the clock input circuit 220 of Fig. 2.

The IWCK and IWCKF clock signals are provided to a clock divider circuit 354 configured to provide multi-phase clock signals IWCK0, IWCK90, IWCK180, and IWCK270 (collectively referred to as the multi-phase clock signals IWCKn). The multi-phase clock signals have phases relative to one another and have a clock frequency that is less than the clock frequency of the WCK and WCKF clock signals (and the IWCK and IWCKF signals). In embodiments of the present invention, the IWCK0, IWCK90, IWCK180, and IWCK270 clock signals have a clock frequency that is one-half the clock frequency of the WCK and WCKF clock signals.

In embodiments of the present invention, the IWCK0, IWCK90, IWCK180, and IWCK270 clock signals have a relative phase of 90° to one another.
For example, the IWCK90 clock signal has a 90° phase relative to the IWCK0 clock signal, the IWCK180 clock signal has a 180° phase relative to the IWCK0 clock signal (and a 90° phase relative to the IWCK90 clock signal), and the IWCK270 clock signal has a 270° phase relative to the IWCK0 clock signal (and a 90° phase relative to the IWCK180 clock signal). In this case, the multi-phase clock signals IWCK0, IWCK90, IWCK180, IWCK270 may be referred to as "quadrature" phase clock signals. The multi-phase clock signals are provided to the repeater circuit 356. The repeater circuit 356 includes a repeater circuit for each of the multi-phase clock signals IWCKn. The repeater circuit 356 drives the multi-phase clock signals IWCKn through the clock lines from the clock divider circuit 354 to the clock distribution circuit 358. The clock distribution circuit 358 supplies the multi-phase clock signals IWCKn to various circuit systems that operate according to the multi-phase clock signals. For example, the multi-phase clock signals IWCKn can be provided to a clock input/output circuit (not shown in FIG. 3) to provide and receive data (referred to as "to DQ block" in FIG. 3). As previously described, the IWCK0, IWCK90, IWCK180, and IWCK270 signals provided by the clock divider circuit 354 are based on the IWCK and IWCKF signals. The IWCK0, IWCK90, IWCK180, and IWCK270 signals can have a phase relationship with respect to the IWCK and IWCKF signals, and also have a phase relationship with the WCK and WCKF signals (the IWCK and IWCKF signals are based on the WCK and WCKF signals). For example, the multi-phase clock signals IWCK0, IWCK90, IWCK180, and IWCK270 provided by the clock divider circuit 354 may have one of two phase relationships with respect to the WCK and WCKF clock signals. The first phase relationship and the second phase relationship are illustrated in FIG. 4. In the first phase relationship, the rising edge 420 of the IWCK0 clock signal is associated with the first rising edge 410 of the IWCK clock signal (and the WCK signal, not shown in FIG. 4) and the first rising edge of the CK signal, the rising edge 422 of the IWCK90 clock signal is associated with the first falling edge 412 of the IWCK clock signal, the rising edge 424 of the IWCK180 clock signal is associated with the second rising edge 414 of the IWCK clock signal and the first falling edge of the CK signal, and the rising edge 426 of the IWCK270 clock signal is associated with the second falling edge 416 of the IWCK clock signal. The first phase relationship can be referred to as an "ordered" phase relationship. In the second phase relationship, the falling edge 430 of the IWCK0 clock signal is associated with the first rising edge 410 of the IWCK clock signal (and the WCK signal) and the first rising edge of the CK signal, the falling edge 432 of the IWCK90 clock signal is associated with the first falling edge 412 of the IWCK clock signal, the falling edge 434 of the IWCK180 clock signal is associated with the second rising edge 414 of the IWCK clock signal and the first falling edge of the CK signal, and the falling edge 436 of the IWCK270 clock signal is associated with the second falling edge 416 of the IWCK clock signal. The second phase relationship can be referred to as a "disordered" phase relationship. Even when the clock frequencies of the WCK and WCKF (and IWCK and IWCKF) clock signals change (for example, as shown in FIG.
4, the clock frequency increases following the falling edge 416 of the IWCK clock signal), the first phase relationship and the second phase relationship are maintained. The phase relationship of the multi-phase clock signal IWCKn provided by the clock divider circuit 354 is not known until a determination is made. The phase relationship of the multi-phase clock signal IWCKn can be determined by, for example, evaluating at least one of the multi-phase clock signals. The phase relationship can be determined during the WCK-CK synchronization process (described in more detail below). Since the proper operation of the semiconductor device 100 is based on multi-phase clock signals having a phase relationship, the phase relationship between the multi-phase clock signal IWCKn and the WCK and WCKF signals needs to be determined. For example, the semiconductor device 100 may properly provide read data when the multi-phase clock signal has an "ordered" phase relationship. In this example, when it is determined that the multi-phase clock signal IWCKn has a "disordered" phase relationship, various ones of the multi-phase clock signals can be switched to provide an "ordered" multi-phase clock signal. As an example, the IWCK180 clock signal and the IWCK0 clock signal of the disordered multi-phase clock signal can be switched, and the IWCK270 clock signal and the IWCK90 clock signal of the disordered multi-phase clock signal can be switched. In this manner, the "disordered" multi-phase clock signal is switched to an "ordered" multi-phase clock signal. FIG. 5 is a block diagram of a portion of an input/output circuit according to an embodiment of the present invention. The RDQS clock circuit 510 and the data latch and shift circuit 530 receive multi-phase clock signals IWCK0, IWCK90, IWCK180, and IWCK270 (collectively referred to as IWCKn signals). The IWCKn signals can be quadrature clock signals, each clock signal having a 90° phase relative to another clock signal (for example, a 0° clock signal, a 90° clock signal, a 180° clock signal, and a 270° clock signal). The IWCKn signal may be based on the data clock signals WCK and WCKF and have a clock frequency lower than that of the WCK and WCKF signals. In some embodiments of the present invention, the IWCKn signal has half the clock frequency of the WCK and WCKF signals. The multi-phase clock signal IWCKn can be provided by the data clock path that receives the WCK signal. For example, in some embodiments of the present invention, the IWCKn signal can be provided by the data clock path 330 shown in FIG. 3. The RDQS clock circuit 510 provides an internal strobe signal IRDQS based on the IWCKn signal. The IRDQS signal is provided to the driver circuit 520. The driver circuit 520 provides a data strobe signal RDQS based on the IRDQS signal. The RDQS signal may be provided to the device (e.g., controller 10) to time data received by the device. The clock frequency of the RDQS signal may be greater than the clock frequency of the IWCKn signal. In some embodiments of the present invention, the RDQS signal has a clock frequency that is twice the clock frequency of the IWCKn signal. When the clock frequency of the IWCKn signal is half of the clock frequency of the WCK and WCKF signals, the RDQS signal may have the same clock frequency as the WCK and WCKF signals. In addition to the IWCKn signal, the data latch and shift circuit 530 receives internal data ID0 to IDr, where r is a non-zero integer. The ID0 to IDr data can be provided from the memory array.
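The latching and shifting operation of the data latch and shift circuit 530, described in more detail below, can be sketched as a simple parallel-to-narrower-word conversion. The widths, bit ordering, and names here are illustrative assumptions, not the circuit's specified behavior.

```python
# Sketch: shift a wide internal word (ID0..IDr) out as narrower words
# (IDQ0..IDQs), one narrow word per beat of the output strobe.

def latch_and_shift(id_bits, dq_width):
    """Split a parallel word into successive dq_width-wide output words."""
    assert len(id_bits) % dq_width == 0
    return [id_bits[i:i + dq_width] for i in range(0, len(id_bits), dq_width)]

# 32 internal bits over a 4-bit-wide DQ interface -> 8 beats, one per edge
# of a strobe such as RDQS.
internal = [i % 2 for i in range(32)]
for beat, word in enumerate(latch_and_shift(internal, 4)):
    print(f"beat {beat}: {word}")
```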
For example, in some embodiments of the present invention, the ID data is provided from the memory array 250 to the input/output circuit 260, which includes the data latch and shift circuit 530. The data latch and shift circuit 530 latches and shifts the internal data ID0 to IDr based on the IWCKn signal to provide data IDQ0 to IDQs, where s is a non-zero integer. The IDQ0 to IDQs data are provided to the data driver circuit 540, which drives the IDQ0 to IDQs data as the DQ0 to DQs data. The data driver circuit 540 may include (s+1) data driver circuits, in particular, one data driver circuit for each of the IDQ0 to IDQs data. In operation, the data latch and shift circuit 530 shifts the (r+1)-bit-wide ID0 to IDr data into (s+1)-bit-wide IDQ0 to IDQs data based on the IWCKn signal. Next, the data driver circuit 540 provides the IDQ0 to IDQs data as the (s+1)-bit-wide DQ0 to DQs data. The DQ0 to DQs data may have timing corresponding to the RDQS signal. For example, one bit of each of the DQ0 to DQs data can be provided at the rising and falling clock edges of the RDQS signal. Therefore, at each edge of the RDQS signal, (s+1) bits are output in parallel. In this way, the (s+1) bits of the DQ0 to DQs data received by a device can be clocked according to the RDQS signal. As will be described in more detail below, the controller provides memory commands to the memory system to access the memory (e.g., to read or write the memory). The memory commands provided for accessing the memory include timing commands and access commands. As previously described, timing commands can be used to control the timing of various operations, for example, operations for corresponding access commands. Examples of access commands include read commands and write commands. Examples of timing commands include CAS commands and MPC commands. The timing command may include operation codes to set various operation modes for the access operation of the access command. For example, bits of information associated with various operation codes (opcodes) are included in the timing commands. An opcode may include one or more bits of the timing command and can be identified by the bit position within the timing command. For example, as will be described in more detail below, the opcode OP6 of the timing command may be associated with the RDQS early mode and the opcode OP7 may be associated with the WCK-CK fast synchronization mode. A mode associated with a bit included in the timing command can be enabled by providing a "1" and disabled by providing a "0". FIGS. 6 to 11 illustrate examples of various access operations according to embodiments of the present invention. The embodiments illustrate the use of timing commands (such as CAS commands and MPC commands) and access commands (such as read commands). Although the embodiments of FIGS. 6 to 11 are described in the context of a read operation, it should be understood that timing commands can be used in the context of a write operation without departing from the scope of the present invention. FIGS. 6A to 6D are timing diagrams of various signals during an access operation according to an embodiment of the present invention. FIGS. 6A to 6D will be described with reference to a read operation of a system including a controller and a memory system. In some embodiments of the present invention, the system 100 of FIG. 1 may be used for the operations described with reference to FIGS. 6A to 6D. FIGS. 6A to 6D will be described with reference to the system 100 of FIG.
1, but the scope of the present invention is not limited to the specific system 100. The read delay of the read operation of FIGS. 6A to 6D is 12 tCK (for example, 12 clock cycles of the CK signal). Referring to FIG. 6A, at time Ta0, the selection signal CS0 provided by the controller 10 is valid to select the memory 110 of the memory system 105 (for example, "device 0" of the memory system 105) associated with the CS0 signal. Therefore, device 0 receives the read command READ in response to the rising clock edge of the CK signal at time Ta0. The command/address input circuit of device 0 receives the READ command and provides it to the command decoder to generate internal control signals to perform the read operation. For example, the command decoder can generate internal control signals to enable the WCK/WCKF input buffer of device 0 to prepare to receive the WCK and WCKF signals from the controller 10. The WCKF signal is not shown in FIGS. 6A to 6D. As previously described, the WCKF signal is complementary to the WCK signal. For simplicity, the WCK and WCKF signals may be collectively referred to as the WCK signal as appropriate for the description of FIGS. 6A to 6D. The WCK signal remains static from time Ta7 to Ta9 (for example, the static period tWCKPREstatic). That is, the WCK signal is maintained at a known clock level (for example, at a low clock level) in the period between the times Ta7 and Ta9. At time Ta9, the valid WCK signal provided by the controller 10 is received by the device 0. The WCK signal may have a first clock frequency followed by a second higher clock frequency (at time Ta10), as illustrated in the embodiment of FIG. 6A. Between the time Ta9 when device 0 receives a valid WCK signal and the time Ta12 when device 0 provides a valid access data clock signal RDQS (for example, the period tWCKPREtoggle), device 0 performs WCK-CK synchronization and starts to generate internal clock signals based on the WCK signal. For example, an internal clock circuit (such as a clock divider circuit) can generate a multi-phase clock signal for timing internal operations and determine the phase relationship with the WCK signal. For example, the RDQS clock circuit uses an internal clock signal to provide the RDQS signal, and the RDQS clock circuit uses a multi-phase clock signal based on the WCK signal to generate the RDQS signal. At time Ta12, device 0 provides a valid RDQS signal to controller 10. Also at time Ta12, or within the period tWCKDQO of time Ta12, the data DQ is supplied from the device 0 by the input/output circuit. The data DQ is provided with timing synchronized with the RDQS signal. For example, as shown in the embodiment of FIG. 6A, a bit of data DQ is provided for each clock edge of the RDQS signal until the data burst is completed (for example, a 16-bit data burst is shown in FIG. 6A). FIG. 6A shows data DQ supplied from one data terminal of device 0. Although not shown in FIG. 6A, data can be simultaneously provided from other data terminals of device 0 with the same relative timing. Referring to FIG. 6B, at time Ta-1, the selection signal CS0 provided by the controller 10 is valid to select device 0. Therefore, the command/address input circuit of device 0 receives the CAS command according to the rising clock edge of the CK signal at time Ta-1 and receives the read command READ according to the rising clock edge of the CK signal at time Ta0. The CAS command represents the timing command previously described.
The CAS command immediately precedes the access command (such as the READ command), where the CAS command and the associated access command are provided as a pair of sequential commands. The CAS command includes the operation code OP6=0 to disable the early mode of RDQS and the operation code OP7=0 to disable the WCK-CK fast synchronization mode. The RDQS early mode and the WCK-CK fast synchronization mode will be described in more detail below. The command decoder decodes the CAS and READ commands and generates internal control signals accordingly. The operation of FIG. 6B proceeds similarly to the operation described with reference to FIG. 6A. After the READ command, the WCK/WCKF input buffer of device 0 is enabled to prepare to receive the WCK and WCKF signals from the controller 10. The WCK signal remains static during the static period tWCKPREstatic between the times Ta7 and Ta9. At time Ta9, the device 0 receives the valid WCK signal provided by the controller 10, and the device 0 performs WCK-CK synchronization and generates an internal clock signal for providing the RDQS signal based on the WCK signal. At time Ta12, device 0 provides a valid RDQS signal to the controller and provides data DQ in the period tWCKDQO of time Ta12. As in FIG. 6A, the input/output circuit of device 0 provides data DQ synchronized with the RDQS signal, so that the bits of the data DQ are provided at each clock edge of the RDQS signal until the data burst is completed. Although FIG. 6B shows data DQ supplied from one data terminal of device 0, data can be supplied from other data terminals of device 0 at the same time with the same relative timing. Referring to FIG. 6C, at time Ta-1, the selection signal CS0 provided by the controller 10 is valid to select device 0. Therefore, the device 0 receives the CAS command according to the rising clock edge of the CK signal at time Ta-1 and the read command READ according to the rising clock edge of the CK signal at time Ta0. The CAS command includes the operation code OP6=0 to disable the early mode of RDQS and the operation code OP7=1 to enable the WCK-CK fast synchronization mode. The command decoder decodes the CAS and READ commands and generates internal control signals to enable the WCK-CK fast synchronization mode and perform the read operation. When the WCK-CK fast synchronization mode is enabled, the WCK signal can be provided earlier relative to the timing shown in FIGS. 6A and 6B. When the WCK-CK fast synchronization mode is enabled, the WCK/WCKF input buffer of device 0 is initially enabled at time Ta-1 (i.e., when the CAS command is received by device 0) to prepare to receive the WCK and WCKF signals from the controller 10. As shown in FIG. 6C, enabling the WCK/WCKF input buffer occurs in the period WCKENL between times Ta-1 and Ta2. Starting from time Ta2, the WCK signal remains static (at a low clock level) during the static period tWCKPREstatic between times Ta2 and Ta4. At time Ta4, device 0 receives a valid WCK signal provided by controller 10, and device 0 performs WCK-CK synchronization and generates an internal clock signal that can be used to provide an RDQS signal based on the WCK signal. When the WCK-CK fast synchronization mode is enabled, the device 0 is ready to receive the WCK signal from the controller 10 earlier than the WCK timing shown in FIGS. 6A and 6B in which the WCK-CK fast synchronization mode is not enabled. For example, as shown in the example of FIG. 6C, the WCK signal is provided 5 tCK earlier than in the example of FIGS. 6A and 6B.
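The opcode handling described for FIGS. 6B to 6D can be sketched as simple bit tests on a timing command. The bit positions below merely mirror the OP6/OP7 naming; the actual command encoding is not given here, so the layout is an assumption for illustration only.

```python
# Hypothetical decode of the OP6/OP7 bits of a timing command (CAS or MPC).

RDQS_EARLY_BIT = 6       # OP6: RDQS early mode
WCK_FAST_SYNC_BIT = 7    # OP7: WCK-CK fast synchronization mode

def decode_timing_command(opcode_byte):
    return {
        "rdqs_early": bool((opcode_byte >> RDQS_EARLY_BIT) & 1),
        "wck_fast_sync": bool((opcode_byte >> WCK_FAST_SYNC_BIT) & 1),
    }

print(decode_timing_command(0b00000000))  # FIG. 6B: OP6=0, OP7=0
print(decode_timing_command(0b10000000))  # FIG. 6C: OP6=0, OP7=1
print(decode_timing_command(0b11000000))  # FIG. 6D: OP6=1, OP7=1
```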
The controller 10 can enable the WCK-CK fast synchronization mode to provide the WCK signal earlier to allow the device 0 to start generating internal signals based on the WCK signal. At time Ta12, device 0 provides a valid RDQS signal to the controller and provides data DQ in the period tWCKDQO of time Ta12. Device 0 provides data DQ synchronized with the RDQS signal, so that the bits of the data DQ are provided at each clock edge of the RDQS signal until the data burst is completed. Although FIG. 6C shows data DQ provided from one data terminal of device 0, data can also be provided from other data terminals of device 0 with the same relative timing at the same time. Referring to FIG. 6D, at time Ta-1, the selection signal CS0 provided by the controller 10 is valid to select device 0. Therefore, the device 0 receives the CAS command according to the rising clock edge of the CK signal at time Ta-1 and the read command READ according to the rising clock edge of the CK signal at time Ta0. The CAS command includes the operation code OP6=1 for enabling the early mode of RDQS and the operation code OP7=1 for enabling the WCK-CK fast synchronization mode. The command decoder decodes the CAS and READ commands and generates internal control signals to enable the WCK-CK fast synchronization mode and the RDQS early mode for the read operation. When the RDQS early mode is enabled, the RDQS signal can be provided by device 0 earlier with respect to the timing shown in FIGS. 6A to 6C. In addition, when the WCK-CK fast synchronization mode is enabled, the WCK signal can be provided earlier than the timing shown in FIGS. 6A and 6B. When the WCK-CK fast synchronization mode is enabled, enabling of the WCK/WCKF input buffer of device 0 begins at time Ta-1 (which is the time when the device 0 receives the CAS command) to prepare to receive the WCK and WCKF signals from the controller 10. As shown in FIG. 6D, enabling the WCK/WCKF input buffer occurs in the period WCKENL between times Ta-1 and Ta2. Starting from the time Ta2, the WCK signal remains static during the static period tWCKPREstatic between the times Ta2 and Ta4. At time Ta4, the device 0 receives the valid WCK signal provided by the controller 10, and during the period tWCKPREtoggle, the device 0 performs WCK-CK synchronization and generates an internal clock signal for providing the RDQS signal based on the WCK signal. At time Ta6 or in the period tWCKDQO of time Ta6, device 0 provides a valid RDQS signal to the controller. When the RDQS early mode is enabled, the RDQS signal is provided earlier than the RDQS signal timing shown in FIGS. 6A to 6C in which the RDQS early mode is not enabled. For example, as shown in the example of FIG. 6D, the RDQS signal is provided 5 to 6 tCK earlier than in the example of FIGS. 6A to 6C. The controller 10 may enable the RDQS early mode to receive the RDQS signal from the device 0, recover timing from the RDQS signal, and generate internal timing signals based on the recovered timing. The internal timing signals generated by the controller 10 can be used to time the data DQ received from the device 0. Device 0 provides data DQ in the period tWCKDQO of time Ta12. Device 0 provides data DQ synchronized with the RDQS signal, so that the bits of the data DQ are provided at each clock edge of the RDQS signal until the data burst is completed. Although FIG.
6D shows data DQ provided from one data terminal of device 0, data can also be provided from other data terminals of device 0 with the same relative timing at the same time. In FIGS. 6A to 6D, the period WCKENL is shown as 3 clock cycles (3 tCK) of the CK signal, the period tWCKPREstatic is shown as 2 tCK, and the period tWCKPREtoggle is shown as 3 tCK. In other embodiments of the present invention, each of the time periods WCKENL, tWCKPREstatic, and tWCKPREtoggle may be the same or different. FIGS. 7A to 7D are timing diagrams of various signals during an access operation according to an embodiment of the present invention. FIGS. 7A to 7D will be described with reference to a read operation of a system including a controller and a memory system. In some embodiments of the present invention, the system 100 of FIG. 1 may be used for the operations described with reference to FIGS. 7A to 7D. FIGS. 7A to 7D will be described with reference to the system 100 of FIG. 1, but the scope of the present invention is not limited to the specific system 100. The read delay of the read operation of FIGS. 7A to 7D is 9 tCK (for example, 9 clock cycles of the CK signal). Referring to FIG. 7A, at time Ta0, the selection signal CS0 provided by the controller 10 is valid to select the memory of the memory system 105 (e.g., "device 0" of the memory system 105) associated with the CS0 signal. Therefore, device 0 receives the read command READ in response to the rising clock edge of the CK signal at time Ta0. The command/address input circuit of device 0 receives the READ command and provides it to the command decoder to generate internal control signals to perform the read operation. For example, the command decoder can generate an internal control signal to enable the WCK/WCKF input buffer of device 0 to prepare to receive the WCK and WCKF signals from the controller 10. The WCKF signal is not shown in FIGS. 7A to 7D. As previously described, the WCKF signal is complementary to the WCK signal. For simplicity, the WCK and WCKF signals may be collectively referred to as the WCK signal as appropriate for the description of FIGS. 7A to 7D. The WCK signal remains static between times Ta4 and Ta6 (for example, the static period tWCKPREstatic). That is, the WCK signal is maintained at a known clock level (for example, at a low clock level) in the period between the times Ta4 and Ta6. At time Ta6, the valid WCK signal provided by the controller 10 is received by the device 0. The WCK signal may have a first clock frequency followed by a second higher clock frequency (at time Ta7), as illustrated in the embodiment of FIG. 7A. Between the time Ta6 when device 0 receives a valid WCK signal and the time Ta9 when device 0 provides a valid access data clock signal RDQS (for example, the period tWCKPREtoggle), device 0 performs WCK-CK synchronization and starts to generate an internal clock signal based on the WCK signal. For example, an internal clock circuit (such as a clock divider circuit) can generate a multi-phase clock signal for timing internal operations and determine the phase relationship with the WCK signal. For example, the RDQS clock circuit uses an internal clock signal to provide the RDQS signal, and the RDQS clock circuit uses a multi-phase clock signal based on the WCK signal to generate the RDQS signal. At time Ta9, device 0 provides a valid RDQS signal to controller 10. Also at time Ta9, or within the period tWCKDQO of time Ta9, the data DQ is supplied from the device 0 by the input/output circuit.
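The sequencing described for these figures (input buffer enabling, a static WCK period, a toggling period, then a valid RDQS signal) reduces to simple tCK arithmetic. A sketch follows using the period values quoted below for FIGS. 7A to 7D (WCKENL=2 tCK, tWCKPREstatic=2 tCK, tWCKPREtoggle=3 tCK); the helper function is illustrative, not part of the described device.

```python
# Earliest tCK at which a valid RDQS signal can be provided, counted from the
# command that begins enabling the WCK/WCKF input buffer.

def earliest_rdqs(cmd_time_tck, wckenl=2, twckpre_static=2, twckpre_toggle=3):
    return cmd_time_tck + wckenl + twckpre_static + twckpre_toggle

print(earliest_rdqs(-1))  # CAS immediately before READ at Ta0 -> Ta6
print(earliest_rdqs(-3))  # MPC three tCKs before the READ command -> Ta4
```

These are the Ta6 and Ta4 values compared in the discussion of FIG. 7D below.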
The data DQ is provided with timing synchronized with the RDQS signal. For example, as shown in the embodiment of FIG. 7A, a bit of data DQ is provided for each clock edge of the RDQS signal until the data burst is completed (for example, a 16-bit data burst is shown in FIG. 7A). FIG. 7A shows data DQ provided from one data terminal of device 0. Although not shown in FIG. 7A, data can be provided from other data terminals of device 0 with the same relative timing at the same time. Referring to FIG. 7B, at time Ta-3, the selection signal CS0 provided by the controller 10 is valid to select the device 0. Therefore, the command/address input circuit of device 0 receives the MPC command according to the rising clock edge of the CK signal at time Ta-3. The MPC command represents the timing command previously described. The MPC command includes the operation code OP6=0 for disabling the early mode of RDQS and the operation code OP7=0 for disabling the WCK-CK fast synchronization mode. The command decoder decodes the MPC command and generates internal control signals accordingly. The selection signal CS0 is also valid at time Ta0 to select device 0. The command/address input circuit of the device 0 receives the read command READ provided at the time Ta0 according to the rising clock edge of the CK signal at the time Ta0. When the RDQS early mode and the WCK-CK fast synchronization mode are disabled, the operation of FIG. 7B proceeds similarly to the operation described with reference to FIG. 7A. As shown in FIG. 7B, unlike the CAS command, the MPC command is not limited to immediately preceding the READ command. In FIG. 7B, the MPC command is provided to the device 0 three tCKs before the READ command. As will be described in more detail below, decoupling the MPC command from immediately preceding the READ command may allow sufficient clock cycles of the RDQS signal to be provided to the controller 10 to recover timing from the RDQS signal while also meeting the read delay timing for slower CK clock frequencies. After the READ command, the WCK/WCKF input buffer of device 0 is enabled to prepare to receive the WCK and WCKF signals from the controller 10. The WCK signal remains static during the static period tWCKPREstatic between the times Ta4 and Ta6. At time Ta6, the valid WCK signal provided by the controller 10 is received by the device 0, and the device 0 performs WCK-CK synchronization and generates an internal clock signal for providing the RDQS signal based on the WCK signal. At time Ta9, device 0 provides a valid RDQS signal to the controller and provides data DQ in the period tWCKDQO of time Ta9. As in FIG. 7A, device 0 provides data DQ synchronized with the RDQS signal, so that the bits of the data DQ are provided at each clock edge of the RDQS signal until the data burst is completed. Although FIG. 7B shows the data DQ provided from one data terminal of device 0, data can be provided from other data terminals of device 0 with the same relative timing at the same time. Referring to FIG. 7C, at time Ta-3, the selection signal CS0 provided by the controller 10 is valid to select device 0. Therefore, device 0 receives the MPC command according to the rising clock edge of the CK signal at time Ta-3. The MPC command includes the operation code OP6=0 to disable the early mode of RDQS and the operation code OP7=1 to enable the WCK-CK fast synchronization mode. The command decoder decodes the MPC command and generates internal control signals to enable the WCK-CK fast synchronization mode.
When the WCK-CK fast synchronization mode is enabled, the WCK signal can be provided earlier relative to the timing shown in FIGS. 7A and 7B. The selection signal CS0 is also valid at time Ta0 to select device 0. The device 0 receives the read command READ provided at time Ta0 according to the rising clock edge of the CK signal at time Ta0. When the WCK-CK fast synchronization mode is enabled, the WCK/WCKF input buffer is enabled by the command decoder of device 0 at time Ta-3 (i.e., when the MPC command is received by device 0) to prepare to receive the WCK and WCKF signals from the controller 10. As shown in FIG. 7C, enabling the WCK/WCKF input buffer occurs during the period WCKENL between times Ta-3 and Ta-1. Starting from the time Ta-1, the WCK signal remains static (at a low clock level) during the static period tWCKPREstatic between the times Ta-1 and Ta1. At time Ta1, device 0 receives a valid WCK signal provided by controller 10, and device 0 performs WCK-CK synchronization and generates an internal clock signal that can be used to provide an RDQS signal based on the WCK signal. When the WCK-CK fast synchronization mode is enabled, the device 0 receives the WCK signal from the controller 10 earlier than the WCK timing shown in FIGS. 7A and 7B in which the WCK-CK fast synchronization mode is not enabled. For example, as shown in the example of FIG. 7C, the WCK signal is provided 5 tCK earlier than in the example of FIGS. 7A and 7B. At time Ta9, device 0 provides a valid RDQS signal to the controller and provides data DQ in the period tWCKDQO of time Ta9. Device 0 provides data DQ synchronized with the RDQS signal, so that the bits of the data DQ are provided at each clock edge of the RDQS signal until the data burst is completed. Although FIG. 7C shows data DQ provided from one data terminal of device 0, data can also be provided from other data terminals of device 0 with the same relative timing at the same time. Referring to FIG. 7D, at time Ta-3, the selection signal CS0 provided by the controller 10 is valid to select device 0. Therefore, device 0 receives the MPC command according to the rising clock edge of the CK signal at time Ta-3. The MPC command includes the operation code OP6=1 for enabling the early mode of RDQS and the operation code OP7=1 for enabling the WCK-CK fast synchronization mode. The command decoder decodes the MPC command and generates internal control signals to enable the WCK-CK fast synchronization mode and the RDQS early mode for the access operation. When the RDQS early mode is enabled, the RDQS signal can be provided earlier by device 0 relative to the timing shown in FIGS. 7A to 7C. In addition, when the WCK-CK fast synchronization mode is enabled, the WCK signal can be provided earlier than the timing shown in FIGS. 7A and 7B. The selection signal CS0 is also valid at time Ta0 to select device 0. The device 0 receives the read command READ provided at time Ta0 according to the rising clock edge of the CK signal at time Ta0. When the WCK-CK fast synchronization mode is enabled, enabling of the WCK/WCKF input buffer of device 0 begins at time Ta-3 (which is the time when the MPC command is received by device 0) to prepare to receive the WCK and WCKF signals from the controller 10. As shown in FIG. 7D, enabling the WCK/WCKF input buffer occurs in the period WCKENL between times Ta-3 and Ta-1.
Starting from the time Ta-1, the WCK signal remains static during the static period tWCKPREstatic between the times Ta-1 and Ta1. At time Ta1, the device 0 receives the valid WCK signal provided by the controller 10, and during the period tWCKPREtoggle, the device 0 performs WCK-CK synchronization and generates an internal clock signal for providing the RDQS signal based on the WCK signal. At the time Ta3 or in the period tWCKDQO of the time Ta3, the device 0 provides the valid RDQS signal to the controller 10. When the RDQS early mode is enabled, the RDQS signal is provided earlier than the RDQS signal timing shown in FIGS. 7A to 7C in which the RDQS early mode is not enabled. For example, as shown in the example of FIG. 7D, the RDQS signal is provided 5 to 6 tCK earlier than in the example of FIGS. 7A to 7C. The controller 10 may enable the RDQS early mode to receive the RDQS signal from the device 0, recover timing from the RDQS signal, and generate internal timing signals based on the recovered timing. The internal timing signals generated by the controller 10 can be used to time the data DQ received from the device 0. At time Ta9, device 0 provides data DQ in the period tWCKDQO of time Ta9. Device 0 provides data DQ synchronized with the RDQS signal, so that the bits of the data DQ are provided at each clock edge of the RDQS signal until the data burst is completed (for example, a 16-bit data burst is shown in FIG. 7D). Although FIG. 7D shows data DQ provided from one data terminal of device 0, data can also be provided from other data terminals of device 0 with the same relative timing at the same time. With reference to the timing of the example of FIG. 7D and the applicable periods WCKENL, tWCKPREstatic, and tWCKPREtoggle, using a CAS command instead of an MPC command for the read operation would result in the RDQS signal being provided by device 0 at time Ta6 (instead of at time Ta4 when using the MPC command). In this example, the CAS command would be received by device 0 at time Ta-1 (i.e., immediately before the READ command at time Ta0). When the total of the time periods WCKENL, tWCKPREstatic, and tWCKPREtoggle is 7 tCK, the RDQS signal is provided at the earliest at time Ta6 (for example, time Ta-1+7 tCK=Ta6). In some systems, the controller 10 requires a minimum number of RDQS clock cycles to recover timing from the RDQS signal and to generate internal timing signals based on the recovered timing. Providing the RDQS signal at time Ta6 (which results from the timing of using a CAS command for the read operation in the example of FIG. 7D) provides 16 clock cycles of the RDQS signal before device 0 provides the data DQ at time (Ta9+tWCKDQO). In contrast, as shown in FIG. 7D, the MPC command causes device 0 to provide the RDQS signal at time Ta4, which provides 24 clock cycles of the RDQS signal before device 0 provides data DQ at time (Ta9+tWCKDQO). Providing additional clock cycles of the RDQS signal before the data DQ can be beneficial for some clock frequencies and for controllers that require a minimum number of RDQS clock cycles for data clock recovery. In FIGS. 7A to 7D, the period WCKENL is shown as 2 clock cycles (2 tCK) of the CK signal, the period tWCKPREstatic is shown as 2 tCK, and the period tWCKPREtoggle is shown as 3 tCK. In other embodiments of the present invention, each of the time periods WCKENL, tWCKPREstatic, and tWCKPREtoggle may be the same or different. FIGS. 8 and 9 are timing diagrams showing various signals during access operations of two memory levels according to various embodiments of the present invention.
Each level is represented by a corresponding device. Specifically, level 0 corresponds to device 0 selected by the valid selection signal CS0 and level 1 corresponds to device 1 selected by the valid selection signal CS1. In other embodiments of the present invention, there may be more than two levels. In addition, in some embodiments of the present invention, a level may include multiple devices. FIGS. 8 and 9 will be described with reference to a read operation of a system including a controller and a memory system. In some embodiments of the present invention, the system 100 of FIG. 1 may be used for the operations described with reference to FIGS. 8 and 9. FIGS. 8 and 9 will be described with reference to the system 100 of FIG. 1, but the scope of the present invention is not limited to the specific system 100. The read delay of the read operation of FIGS. 8 and 9 is 17 tCK (for example, 17 clock cycles of the CK signal). The timing diagram of FIG. 8 assumes that the "WCK always on" option is enabled (for example, WCKaon=1 for the corresponding mode register setting of the memory 110). When the "WCK always on" option is enabled, the controller 10 provides a continuously valid WCK signal after preparing both the device 0 and the device 1 to receive the WCK signal, as described in more detail below. Referring to FIG. 8, at time Ta-2, the selection signal CS0 provided by the controller 10 is valid to select device 0 (level 0). Therefore, the command/address input circuit of device 0 receives the MPC command according to the rising clock edge of the CK signal at time Ta-2. At time Ta-1, the selection signal CS1 provided by the controller 10 is valid to select the device 1 (level 1). Therefore, the command/address input circuit of the device 1 receives the MPC command according to the rising clock edge of the CK signal at time Ta-1. The MPC commands at times Ta-2 and Ta-1 include the operation code OP6=0 to disable the early mode of RDQS and the operation code OP7=1 to enable the WCK-CK fast synchronization mode. As previously described, when the WCK-CK fast synchronization mode is enabled, the WCK signal can be provided earlier than when the WCK-CK fast synchronization mode is not enabled. When the WCK-CK fast synchronization mode is enabled, the WCK/WCKF input buffers of device 0 and device 1 are enabled by receiving the MPC commands to prepare to receive the WCK and WCKF signals from the controller 10. The WCKF signal is not shown in FIGS. 8 and 9. As previously described, the WCKF signal is complementary to the WCK signal. For simplicity, the WCK and WCKF signals may be collectively referred to as the WCK signal as appropriate for the description of FIGS. 8 and 9. The WCK/WCKF buffer of device 0 is enabled from time Ta-2 and the WCK/WCKF buffer of device 1 is enabled from time Ta-1. FIG. 8 illustrates the timing of the device 1, but does not illustrate the timing of the device 0 in order to simplify the diagram. It should be understood that the timing for enabling the WCK/WCKF buffer of device 0 is the same as the timing for enabling the WCK/WCKF buffer of device 1, except that it starts and ends 1 tCK earlier than device 1. As shown in FIG. 8, enabling the WCK/WCKF input buffer of the device 1 occurs in the time period WCKENL between the times Ta-1 and Ta3 (represented in FIG.
8 by the WCK IB enable signal for level 1 becoming active at about time Ta3), and enabling the WCK/WCKF input buffer of device 0 occurs in the period WCKENL between the times Ta-2 and Ta2 (represented in FIG. 8 by the WCK IB enable signal for level 0 becoming active at about time Ta2). The controller 10 provides a static WCK signal after the most recently enabled WCK/WCKF input buffer is enabled (in the example of FIG. 8, the WCK/WCKF input buffer of the device 1). Specifically, starting from the time Ta3, the WCK signal remains static (at a low clock level) during the static period tWCKPREstatic between the times Ta3 and Ta6. At time Ta6, device 0 and device 1 receive the valid WCK signal provided by controller 10. Both the device 0 and the device 1 perform WCK-CK synchronization and generate an internal clock signal for providing the RDQS signal based on the WCK signal. Returning to time Ta0, the selection signal CS0 is valid to select the device 0, so that the read command READ provided at the time Ta0 is received by the device 0 according to the rising clock edge of the CK signal. With the read delay of 17 tCK, device 0 will provide the data for the READ command at time Ta0 following time Ta17. The selection signal CS1 is valid at time Ta4 to select the device 1 so that the read command READ is received by the device 1 according to the rising clock edge of the CK signal at time Ta4. With the read delay of 17 tCK, the device 1 will provide the data for the READ command at time Ta4 following time Ta21. After the time Ta17, and for the READ command at the time Ta0 (for level 0), the device 0 provides the valid RDQS signal to the controller 10, and provides the data DQ in the period tWCKDQO of the time Ta17. Device 0 provides data DQ synchronized with the RDQS signal, so that the bits of the data DQ are provided at each clock edge of the RDQS signal until the data burst is completed (for example, a 16-bit data burst is shown in FIG. 8). Although FIG. 8 shows data DQ provided from one data terminal of device 0, data can also be provided from other data terminals of device 0 with the same relative timing at the same time. After the time Ta21 and for the READ command at the time Ta4 (for level 1), the device 1 provides the valid RDQS signal to the controller 10 and provides the data DQ in the period tWCKDQO of the time Ta21. The device 1 provides the data DQ synchronized with the RDQS signal, so that the bits of the data DQ are provided at each clock edge of the RDQS signal until the data burst is completed (for example, a 16-bit data burst is shown in FIG. 8). Although FIG. 8 shows the data DQ provided from one data terminal of the device 1, data can also be provided from other data terminals of the device 1 with the same relative timing at the same time. The WCK/WCKF input buffers of device 0 and device 1 remain enabled even though no further read command is provided to device 0 and device 1 after the corresponding read commands. That is, when the "WCK always on" option is enabled in the example of FIG. 8, as described previously, the WCK/WCKF input buffers of device 0 and device 1 remain enabled. However, although not shown in FIG. 8, CAS commands or MPC commands can be used to disable the WCK/WCKF input buffers of device 0 and device 1, with the operation code OP7=0, that is, with the WCK-CK fast synchronization mode disabled. Referring to FIG. 9, the timing diagram of FIG. 9 assumes that the "WCK always on" option is disabled (for example, WCKaon=0 for the corresponding mode register setting of the memory 110).
When the "WCK is always on" option is disabled, the WCK/WCKF input buffers of device 0 and device 1 are disabled after completing the read command. When another read command is received by the device before the read command before completion, the WCK/WCKF input buffer can remain enabled. In contrast, as previously described with reference to Figure 8, when the "WCK always on" option is enabled, the WCK/WCKF input buffers of device 0 and device 1 remain enabled and can be used when the corresponding device receives a CAS command or MPC command Is disabled, where the opcode OP7=0 to disable the WCK-CK fast synchronization mode.Except for the WCK IB enable signals of device 0 and device 1 (level 0 and level 1), the timing of the signals shown in FIGS. 8 and 9 are similar. For example, after time Ta19, the WCK IB enable signal of level 0 becomes inactive (inactive low logic level) to instruct the WCK/WCKF input buffer of device 0 to be disabled. Similarly, after the time Ta23, the WCKIB enable signal of level 1 becomes inactive (inactive low logic level) to instruct the WCK/WCKF input buffer of device 1 to be disabled. After completing the corresponding read command, the WCK/WCKF input buffers of device 0 and device 1 are disabled, as previously described for disabling the "WCK always on" option (WCKaon=0). However, although not shown in FIG. 9, when a read command is received by the device before the read command is completed, the WCK/WCKF input buffers of device 0 and device 1 remain enabled.Although FIGS. 8 and 9 show separate MPC commands provided to device 0 and device 1, in some embodiments of the present invention, one MPC command may be provided to be received by device 0 and device 1 at the same time. Specifically, the device 0 and the device 1 can receive one MPC command at the same time by enabling both the selection signals CS0 and CS1 to be effective when the MPC command is provided. Therefore, both device 0 and device 1 receive the MPC command at the same time.10A-1 and 10A-2, 10B and 10C are timing diagrams showing various signals during access operations of two memory levels according to various embodiments of the present invention. Each level is represented by a corresponding device. Specifically, level 0 corresponds to device 0 selected by the valid selection signal CS0 and level 1 corresponds to device 1 selected by the valid selection signal CS1. In other embodiments of the present invention, there may be more than two levels. In addition, in some embodiments of the present invention, the hierarchy may include multiple devices.10A-1 and 10A-2, 10B and 10C will be described with reference to the read operation of the system including the controller and the memory system. In some embodiments of the present invention, the system 100 of FIG. 1 may be used for the operations described with reference to FIGS. 10A-1 and 10A-2, 10B and 10C. 10A-1 and 10A-2, 10B and 10C will be described with reference to the system 100 of FIG. 1, but the scope of the present invention is not limited to the specific system 100. The timing diagrams of FIGS. 10A-1 and 10A-2, 10B and 10C assume that the "WCK always on" option is enabled (for example, WCKaon=1 for the corresponding mode register setting). As previously described, when the "WCK always on" option is enabled, the controller 10 provides a continuously valid WCK signal after preparing both the device 0 and the device 1 to receive the WCK signal, as will be described in more detail below. 
In addition, as previously described, when the "WCK always on" option is enabled, the input buffers of device 0 and device 1 remain enabled after the access command is completed. Also as previously described, the WCK/WCKF input buffers of device 0 and device 1 can be disabled using CAS commands or MPC commands with the operation code OP7=0, that is, with the WCK-CK fast synchronization mode disabled. The read delays of the read operations of FIGS. 10A-1 and 10A-2, 10B, and 10C are different, as will be described in more detail below. The different read delays of the three read operations are caused by the different clock frequencies of the CK signal (and the CKF signal). The clock frequency of the CK signal in FIGS. 10A-1 and 10A-2 is the fastest of the three read operations (and the tCK count of the read delay is the highest), and the clock frequency of the CK signal in FIG. 10C is the slowest of the three read operations (and the tCK count of the read delay is the lowest). Referring to FIGS. 10A-1 and 10A-2, the read delay of the read operation is 17 tCK (for example, 17 clock cycles of the CK signal). FIG. 10A-1 continues to FIG. 10A-2 (collectively referred to as FIG. 10A). At time Ta-2, the selection signal CS1 provided by the controller 10 is valid to select the device 1 (level 1). Therefore, the device 1 receives the MPC command according to the rising clock edge of the CK signal at time Ta-2. At time Ta-1, the selection signal CS0 provided by the controller 10 is valid to select device 0 (level 0). Therefore, device 0 receives the CAS command according to the rising clock edge of the CK signal at time Ta-1. The MPC command at time Ta-2 includes the operation code OP6=0 for disabling the early mode of RDQS and the operation code OP7=1 for enabling the WCK-CK fast synchronization mode. The CAS command at time Ta-1 includes the operation code OP6=1 for enabling the early mode of RDQS and the operation code OP7=1 for enabling the WCK-CK fast synchronization mode. As previously described, when the RDQS early mode is enabled, the device 0 may provide an earlier RDQS signal than when the RDQS early mode is not enabled. In addition, when the WCK-CK fast synchronization mode is enabled, the WCK signal can be provided earlier than when the WCK-CK fast synchronization mode is not enabled. The selection signal CS0 is valid at time Ta0 to select device 0 so that the read command READ is received by device 0 according to the rising clock edge of the CK signal at time Ta0. With the read delay of 17 tCK, device 0 will provide the data for the READ command at time Ta0 following time Ta17. FIG. 10A illustrates the use of MPC commands and CAS commands for access operations. The MPC command at time Ta-2 is provided to set the RDQS early mode and the WCK-CK fast synchronization mode of the device 1. The CAS command at time Ta-1 is used to set the RDQS early mode and the WCK-CK fast synchronization mode of device 0, and immediately precedes the READ command at time Ta0 for device 0. When the WCK-CK fast synchronization mode is enabled for both device 0 and device 1, the WCK/WCKF input buffers of device 0 and device 1 are enabled by receiving the CAS and MPC commands, respectively, to prepare to receive the WCK and WCKF signals from the controller 10. The WCKF signal is not shown in FIGS. 10A-1 and 10A-2, 10B, and 10C. As previously described, the WCKF signal is complementary to the WCK signal. For simplicity, the WCK and WCKF signals may be collectively referred to as the WCK signal as appropriate for the description of FIGS.
10A-1 and 10A-2, 10B, and 10C. The WCK/WCKF buffer of device 1 is enabled from time Ta-2 and the WCK/WCKF buffer of device 0 is enabled from time Ta-1. FIG. 10A illustrates the timing of device 0, but does not illustrate the timing of device 1 in order to simplify the diagram. It should be understood that the timing for enabling the WCK/WCKF buffer of device 1 is the same as the timing for enabling the WCK/WCKF buffer of device 0, except that it starts and ends 1 tCK earlier than device 0. As shown in FIG. 10A, enabling the WCK/WCKF input buffer of device 0 occurs in the time period WCKENL between the times Ta-1 and Ta3 (represented in FIG. 10A by the WCK IB enable signal for level 0 becoming active at about time Ta3), and enabling the WCK/WCKF input buffer of device 1 occurs in the time period WCKENL between the times Ta-2 and Ta2 (represented in FIG. 10A by the WCK IB enable signal for level 1 becoming active at about time Ta2). The controller 10 provides a static WCK signal after the most recently enabled WCK/WCKF input buffer is enabled (in the example of FIG. 10A, the WCK/WCKF input buffer of device 0). Specifically, starting from the time Ta3, the WCK signal remains static (at a low clock level) during the static period tWCKPREstatic between the times Ta3 and Ta6. At time Ta6, device 0 and device 1 receive the valid WCK signal provided by controller 10. Both the device 0 and the device 1 perform WCK-CK synchronization simultaneously and generate an internal clock signal for providing the RDQS signal based on the WCK signal. Simultaneous WCK-CK synchronization by both device 0 and device 1 may take less time than sequential execution of WCK-CK synchronization by device 0 and device 1. At time Ta8 or in the period tWCKDQO of time Ta8, the device 0 provides the valid RDQS signal to the controller 10. As previously described, when the RDQS early mode is enabled, the device 0 may provide an earlier RDQS signal than when the RDQS early mode is not enabled. As previously described, the controller 10 may enable the RDQS early mode to receive the RDQS signal from the device 0, recover timing from the RDQS signal, and generate internal timing signals based on the recovered timing. The internal timing signals generated by the controller 10 can be used to time the data DQ received from the device 0. The selection signal CS1 is valid at time Ta9 to select the device 1 so that the read command READ is received by the device 1 according to the rising clock edge of the CK signal at time Ta9. With the read delay of 17 tCK, the device 1 will provide the data for the READ command at time Ta9 following time Ta26. The selection signal CS1 is also valid at time Ta11 to select the device 1. The device 1 receives the MPC command provided at time Ta11 according to the rising clock edge of the CK signal at time Ta11. The MPC command at time Ta11 includes the operation code OP6=1 for enabling the early mode of RDQS and the operation code OP7=1 for enabling the WCK-CK fast synchronization mode. In the period tWCKDQO of the time Ta8, the device 0 provides the valid RDQS signal to the controller 10. Device 0 also provides data DQ in the period tWCKDQO of time Ta17. Device 0 provides data DQ synchronized with the RDQS signal, so that the bits of the data DQ are provided at each clock edge of the RDQS signal until the data burst is completed (for example, a 16-bit data burst is shown in FIG. 10A). Although FIG.
10A shows the data DQ provided from one data terminal of device 0, data can also be provided from other data terminals of device 0 with the same relative timing at the same time. In the period tWCKDQO of the time Ta20, the device 1 provides the valid RDQS signal to the controller 10. The device 1 also provides data DQ in the period tWCKDQO of time Ta26. The device 1 provides the data DQ synchronized with the RDQS signal, so that the bits of the data DQ are provided at each clock edge of the RDQS signal until the data burst is completed. Although FIG. 10A shows the data DQ provided from one data terminal of the device 1, it is also possible to provide data from other data terminals of the device 1 with the same relative timing at the same time. FIG. 10A illustrates the use of the MPC command at time Ta11 after the associated READ command at time Ta9. The MPC command may have a timing relative to the associated access command to reduce unnecessary toggling of the RDQS signal provided by the device 1. For example, if a CAS command immediately before the READ command at time Ta9 were used instead of the MPC command at time Ta11, then the device 1 would begin to provide the RDQS signal at time Ta17 (for example, the time Ta8 at which the CAS command would be provided plus 9 tCKs of the CK signal (WCKENL+tWCKPREstatic+tWCKPREtoggle)). However, the RDQS signal of the device 1 is not needed until a later time. Therefore, in this example, using a pair of sequential CAS and READ commands instead of the MPC command would result in 3 tCKs of unnecessary toggling of the RDQS signal. Referring to FIG. 10B, the read delay of the read operation is 12 tCK (for example, 12 clock cycles of the CK signal). At time Ta-2, the selection signal CS1 provided by the controller 10 is valid to select the device 1 (level 1). Therefore, the device 1 receives the MPC command according to the rising clock edge of the CK signal at time Ta-2. At time Ta-1, the selection signal CS0 provided by the controller 10 is valid to select device 0 (level 0). Therefore, device 0 receives the CAS command according to the rising clock edge of the CK signal at time Ta-1. The MPC command at time Ta-2 includes the operation code OP6=0 for disabling the early mode of RDQS and the operation code OP7=1 for enabling the WCK-CK fast synchronization mode. The CAS command at time Ta-1 includes the operation code OP6=1 for enabling the early mode of RDQS and the operation code OP7=1 for enabling the WCK-CK fast synchronization mode. The selection signal CS0 is valid at time Ta0 to select device 0 so that the read command READ is received by device 0 according to the rising clock edge of the CK signal at time Ta0. With the read delay of 12 tCK, the device 0 will provide the data for the READ command at time Ta0 following time Ta12. As with the read operation of FIG. 10A, FIG. 10B illustrates the use of MPC commands and CAS commands for access operations. At time Ta-2, the MPC command is provided to set the RDQS early mode and the WCK-CK fast synchronization mode of the device 1. The CAS command at time Ta-1 is used to set the RDQS early mode and the WCK-CK fast synchronization mode of device 0 and immediately precedes the READ command at time Ta0 for device 0. When the WCK-CK fast synchronization mode is enabled for both device 0 and device 1, the WCK/WCKF input buffers of device 0 and device 1 are enabled by receiving the CAS and MPC commands, respectively, to prepare to receive the WCK and WCKF signals from the controller 10.
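The trade-off described above for FIG. 10A can be made concrete using the period values implied by its time points (WCKENL=4 tCK, tWCKPREstatic=3 tCK, tWCKPREtoggle=2 tCK). The helper below is illustrative only.

```python
# Compare when the RDQS signal would start toggling for the two command
# placements discussed for FIG. 10A.

def rdqs_start(cmd_time_tck, wckenl=4, twckpre_static=3, twckpre_toggle=2):
    return cmd_time_tck + wckenl + twckpre_static + twckpre_toggle

mpc = rdqs_start(11)   # MPC at Ta11 -> RDQS from Ta20, when it is needed
cas = rdqs_start(8)    # hypothetical CAS at Ta8 -> RDQS from Ta17
print(mpc - cas)       # -> 3 tCKs of unnecessary RDQS toggling avoided
```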
The WCK/WCKF buffer of device 1 is enabled from time Ta-2 and the WCK/WCKF buffer of device 0 is enabled from time Ta-1. Like FIG. 10A, FIG. 10B illustrates the timing of device 0 but does not illustrate the timing of device 1 in order to simplify the diagram. As shown in FIG. 10B, enabling the WCK/WCKF input buffer of device 0 occurs in the period WCKENL between the times Ta-1 and Ta2 (represented in FIG. 10B by the WCK IB enable signal for level 0 becoming active at about time Ta2), and enabling the WCK/WCKF input buffer of the device 1 occurs in the period WCKENL between the times Ta-2 and Ta1 (represented in FIG. 10B by the WCK IB enable signal for level 1 becoming active at about time Ta1). The controller 10 provides a static WCK signal after the most recently enabled WCK/WCKF input buffer is enabled (in the example of FIG. 10B, the WCK/WCKF input buffer of device 0). Specifically, starting from the time Ta2, the WCK signal remains static (at a low clock level) during the static period tWCKPREstatic between the times Ta2 and Ta4. At time Ta4, the valid WCK signal provided by the controller 10 is received by the device 0 and the device 1. Both the device 0 and the device 1 perform WCK-CK synchronization simultaneously and generate an internal clock signal for providing the RDQS signal based on the WCK signal. At time Ta6 or in the period tWCKDQO of time Ta6, the device 0 provides the valid RDQS signal to the controller 10. The selection signal CS1 is valid at time Ta8 to select the device 1. The device 1 receives the CAS command provided at time Ta8 according to the rising clock edge of the CK signal at time Ta8. The CAS command at time Ta8 includes the operation code OP6=1 for enabling the early mode of RDQS and the operation code OP7=1 for enabling the WCK-CK fast synchronization mode. The selection signal CS1 is also valid at time Ta9 to select the device 1 so that the read command READ is received by the device 1 according to the rising clock edge of the CK signal at time Ta9. With the read delay of 12 tCK, the device 1 will provide the data for the READ command at time Ta9 following time Ta21. Compared with the read operation of FIG. 10A, FIG. 10B shows the use of a sequential pair of CAS and READ commands for the access operation of the device 1 (instead of using an MPC command). In the example of FIG. 10B, unnecessary toggling of the RDQS signal is avoided because, with this read delay, the CAS and READ commands result in the RDQS signal having the required timing. The device 0 provides the data DQ in the period tWCKDQO of the time Ta12. Device 0 provides data DQ synchronized with the RDQS signal, so that the bits of the data DQ are provided at each clock edge of the RDQS signal until the data burst is completed (for example, a 16-bit data burst is shown in FIG. 10B). Although FIG. 10B shows data DQ provided from one data terminal of device 0, data can also be provided from other data terminals of device 0 with the same relative timing at the same time. In the period tWCKDQO of the time Ta15, the device 1 provides the valid RDQS signal to the controller 10. The device 1 also provides the data DQ in the period tWCKDQO of the time Ta21. The device 1 provides the data DQ synchronized with the RDQS signal, so that the bits of the data DQ are provided at each clock edge of the RDQS signal until the data burst is completed. Although FIG. 10B shows the data DQ supplied from one data terminal of the device 1, data can also be supplied from other data terminals of the device 1 with the same relative timing at the same time. Referring to FIG.
10C, the read delay of the read operation is 9 tCK (for example, 9 clock cycles of the CK signal). At time Ta-4, the selection signal CS1 provided by the controller 10 is valid to select device 1 (level 1) so that device 1 receives the MPC command on the rising clock edge of the CK signal at time Ta-4. At time Ta-3, the selection signal CS0 provided by the controller 10 is valid to select device 0 (level 0) so that device 0 receives the CAS command on the rising clock edge of the CK signal at time Ta-3. The MPC command at time Ta-4 includes the operation code OP6=0 for disabling the RDQS early mode and the operation code OP7=1 for enabling the WCK-CK fast synchronization mode. The CAS command at time Ta-3 includes the operation code OP6=1 for enabling the RDQS early mode and the operation code OP7=1 for enabling the WCK-CK fast synchronization mode. The selection signal CS0 is also valid at time Ta0 to select device 0 so that the read command READ is received by device 0 on the rising clock edge of the CK signal at time Ta0. With a read delay of 9 tCK, device 0 will provide the data for the READ command received at time Ta0 after time Ta9.

In contrast with the access operations of FIGS. 10A and 10B, the MPC command is used to begin initialization of the WCK/WCKF input buffers of device 0 and device 1 earlier, so that WCK-CK synchronization is performed and the corresponding RDQS signals are generated sooner.

With the WCK-CK fast synchronization mode enabled for both device 0 and device 1, the WCK/WCKF input buffers of device 0 and device 1 are enabled by receipt of the CAS and MPC commands, respectively, in preparation for receiving the WCK and WCKF signals from the controller 10. The WCK/WCKF input buffer of device 1 is enabled from time Ta-2, and the WCK/WCKF input buffer of device 0 is enabled from time Ta-1. Like FIGS. 10A and 10B, FIG. 10C illustrates the timing of device 0 but does not illustrate all of the timing of device 1 in order to simplify the diagram. As shown in FIG. 10C, enabling the WCK/WCKF input buffer of device 0 occurs over the period WCKENL between times Ta-3 and Ta-1 (represented in FIG. 10C by the WCK IB enable for level 0 becoming valid at about time Ta-1), and enabling the WCK/WCKF input buffer of device 1 occurs over the period WCKENL between times Ta-4 and Ta-2 (represented in FIG. 10C by the WCK IB enable for level 1 becoming valid at about time Ta-2).

The controller 10 provides a static WCK signal after the most recently enabled WCK/WCKF input buffer (in the example of FIG. 10C, the WCK/WCKF input buffer of device 0) is enabled. Specifically, starting from time Ta-1, the WCK signal remains static (at a low clock level) during the static period tWCKPREstatic between times Ta-1 and Ta1. At time Ta1, the valid WCK signal provided by the controller 10 is received by device 0 and device 1. Device 0 and device 1 both perform WCK-CK synchronization simultaneously and generate internal clock signals for providing the RDQS signal based on the WCK signal. At time Ta3, or in the period tWCKDQO of time Ta3, device 0 provides a valid RDQS signal to the controller 10.
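The per-device buffer-enable windows in this example follow directly from the command times. Below is a small illustrative sketch, not from the patent, that reproduces the FIG. 10C windows, assuming the two-cycle WCKENL shown in the figure.

```python
# Map each device to the (start, end) window over which its WCK/WCKF input
# buffer is enabled, beginning at the cycle of its sync command.
def buffer_windows(sync_cmds: dict, wckenl: int = 2) -> dict:
    return {dev: (t, t + wckenl) for dev, t in sync_cmds.items()}

# FIG. 10C: device 1 receives its MPC at Ta-4, device 0 its CAS at Ta-3.
print(buffer_windows({"device1": -4, "device0": -3}))
# -> {'device1': (-4, -2), 'device0': (-3, -1)}
# The static WCK preamble then spans Ta-1 to Ta1, and both devices perform
# WCK-CK synchronization together once WCK begins toggling at Ta1.
```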
The selection signal CS1 is valid at time Ta6 to select device 1 so that the MPC command provided at time Ta6 is received by device 1 on the rising clock edge of the CK signal at time Ta6. The MPC command at time Ta6 includes the operation code OP6=1 for enabling the RDQS early mode and the operation code OP7=1 for enabling the WCK-CK fast synchronization mode. The selection signal CS1 is also valid at time Ta9 to select device 1 so that the read command READ is received by device 1 on the rising clock edge of the CK signal at time Ta9. With a read delay of 9 tCK, device 1 will provide the data for the READ command received at time Ta9 after time Ta18.

Device 0 provides data DQ in the period tWCKDQO at time Ta9. Device 0 provides the data DQ synchronized with the RDQS signal, so that the bits of the data DQ are provided on each clock edge of the RDQS signal until the data burst is completed (for example, a 16-bit data burst is shown in FIG. 10C). Although FIG. 10C shows data DQ provided from one data terminal of device 0, data can also be provided from other data terminals of device 0 with the same relative timing at the same time.

In the period tWCKDQO of time Ta12, device 1 provides a valid RDQS signal to the controller 10. Device 1 also provides the data DQ in the period tWCKDQO at time Ta18. Device 1 provides the data DQ synchronized with the RDQS signal, so that the bits of the data DQ are provided on each clock edge of the RDQS signal until the data burst is completed. Although FIG. 10C shows the data DQ provided from one data terminal of device 1, data can also be provided from other data terminals of device 1 with the same relative timing at the same time.

FIGS. 11A-1, 11A-2, 11B-1, and 11B-2 are timing diagrams showing various signals during access operations of two memory levels according to various embodiments of the present invention. Each level is represented by a corresponding device. Specifically, level 0 corresponds to device 0, selected by the valid selection signal CS0, and level 1 corresponds to device 1, selected by the valid selection signal CS1. In other embodiments of the present invention, there may be more than two levels. In addition, in some embodiments of the present invention, a level may include multiple devices.

FIGS. 11A-1, 11A-2, 11B-1, and 11B-2 will be described with reference to read operations of a system including a controller and a memory system. In some embodiments of the present invention, the system 100 of FIG. 1 may be used for the operations described with reference to FIGS. 11A-1, 11A-2, 11B-1, and 11B-2. These figures will be described with reference to the system 100 of FIG. 1, but the scope of the present invention is not limited to the specific system 100. The timing diagrams of FIGS. 11A-1, 11A-2, 11B-1, and 11B-2 assume that the "WCK always on" option is disabled (for example, WCKaon=0 for the corresponding mode register setting). As described previously, when the "WCK always on" option is disabled, the input buffers for the WCK signals of device 0 and device 1 are disabled after the access command is completed.

FIGS. 11A-1 and 11A-2 and FIGS. 11B-1 and 11B-2 illustrate sequential performance of WCK-CK synchronization for level 0 and level 1, in contrast with simultaneous performance of WCK-CK synchronization for level 0 and level 1.

Referring to FIGS. 11A-1 and 11A-2, the read delay of the read operation is 17 tCK (for example, 17 clock cycles of the CK signal). FIG. 11A-1 continues to FIG. 11A-2 (collectively referred to herein as FIG. 11A).
At time Ta-1, the selection signal CS0 provided by the controller 10 is valid to select device 0 (level 0). Therefore, device 0 receives the CAS command on the rising clock edge of the CK signal at time Ta-1. The CAS command at time Ta-1 includes the operation code OP6=1 for enabling the RDQS early mode and the operation code OP7=1 for enabling the WCK-CK fast synchronization mode. The selection signal CS0 is valid at time Ta0 to select device 0 so that the read command READ is received by device 0 on the rising clock edge of the CK signal at time Ta0. With a read delay of 17 tCK, device 0 will provide the data for the READ command received at time Ta0 after time Ta17.

The WCK/WCKF input buffer of device 0 is enabled from time Ta-1 in response to the CAS command. As shown in FIG. 11A, enabling the WCK/WCKF input buffer of device 0 occurs over the period WCKENL between times Ta-1 and Ta3 (represented in FIG. 11A by the WCK IB enable for level 0 becoming valid at about time Ta3). The controller 10 provides a static WCK signal after the WCK/WCKF input buffer of device 0 is enabled. Specifically, starting from time Ta3, the WCK signal remains static (at a low clock level) during the static period tWCKPREstatic between times Ta3 and Ta6. At time Ta6, the valid WCK signal provided by the controller 10 is received by device 0. Device 0 performs WCK-CK synchronization and generates an internal clock signal for providing the RDQS signal based on the WCK signal.

At time Ta8, or in the period tWCKDQO of time Ta8, device 0 provides a valid RDQS signal to the controller 10. As described previously, when the RDQS early mode is enabled (operation code OP6=1), device 0 can provide the RDQS signal earlier than when the RDQS early mode is not enabled. Device 0 also provides data DQ in the period tWCKDQO at time Ta17. Device 0 provides the data DQ synchronized with the RDQS signal, so that the bits of the data DQ are provided on each clock edge of the RDQS signal until the data burst is completed (for example, a 16-bit data burst is shown in FIG. 11A). The input buffer of device 0 is disabled at about time Ta20, as represented in FIG. 11A by the WCK IB enable for level 0 becoming invalid at about time Ta20.

The selection signal CS1 is valid at time Ta15 so that the CAS command is received by device 1 (level 1). The CAS command at time Ta15 includes the operation code OP6=1 for enabling the RDQS early mode and the operation code OP7=1 for enabling the WCK-CK fast synchronization mode. The selection signal CS1 is valid at time Ta16 to select device 1 so that the read command READ is received by device 1 on the rising clock edge of the CK signal at time Ta16. With a read delay of 17 tCK, device 1 will provide the data for the READ command received at time Ta16 after time Ta33.

The WCK/WCKF input buffer of device 1 is enabled from time Ta15 in response to the CAS command. Enabling the WCK/WCKF input buffer of device 1 occurs over the period WCKENL between times Ta15 and Ta19 (represented in FIG. 11A by the WCK IB enable for level 1 becoming valid at about time Ta19). The controller 10 provides a static WCK signal after the WCK/WCKF input buffer of device 1 is enabled. Specifically, starting from time Ta19, the WCK signal remains static (at a low clock level) during the static period tWCKPREstatic between times Ta19 and Ta22. At time Ta22, the valid WCK signal provided by the controller 10 is received by device 1.
Device 1 performs WCK-CK synchronization and generates an internal clock signal for providing the RDQS signal based on the WCK signal.

At time Ta24, or in the period tWCKDQO of time Ta24, device 1 provides a valid RDQS signal to the controller 10. Device 1 provides the data DQ in the period tWCKDQO of time Ta33. Device 1 provides the data DQ synchronized with the RDQS signal, so that the bits of the data DQ are provided on each clock edge of the RDQS signal until the data burst is completed (for example, a 16-bit data burst is shown in FIG. 11A). The input buffer of device 1 is disabled at about time Ta36, as represented in FIG. 11A by the WCK IB enable for level 1 becoming invalid at about time Ta36.

Compared with the timing of FIG. 11A, the timing of FIGS. 11B-1 and 11B-2 allows device 1 (level 1) to provide data sooner and avoids unnecessary clock cycles of the RDQS signal. FIG. 11B-1 continues to FIG. 11B-2 (collectively referred to herein as FIG. 11B). As will be described in more detail below, the timing of FIG. 11B uses the MPC command, whereas the timing of FIG. 11A uses the CAS command.

Referring to FIG. 11B, the selection signal CS0 is valid at time Ta0 to select device 0 (level 0) so that the read command READ is received by device 0 on the rising clock edge of the CK signal at time Ta0. With a read delay of 17 tCK, device 0 will provide the data for the READ command received at time Ta0 after time Ta17. At time Ta2, the selection signal CS0 provided by the controller 10 is valid to select device 0. Therefore, device 0 receives the MPC command on the rising clock edge of the CK signal at time Ta2. The MPC command at time Ta2 includes the operation code OP6=1 for enabling the RDQS early mode and the operation code OP7=1 for enabling the WCK-CK fast synchronization mode.

The WCK/WCKF input buffer of device 0 is enabled from time Ta2 in response to the MPC command. As shown in FIG. 11B, enabling the WCK/WCKF input buffer of device 0 occurs over the period WCKENL between times Ta2 and Ta6 (represented in FIG. 11B by the WCK IB enable for level 0 becoming valid at about time Ta6). The controller 10 provides a static WCK signal after the WCK/WCKF input buffer of device 0 is enabled. Specifically, starting from time Ta6, the WCK signal remains static (at a low clock level) during the static period tWCKPREstatic between times Ta6 and Ta9. At time Ta9, the valid WCK signal provided by the controller 10 is received by device 0. Device 0 performs WCK-CK synchronization and generates an internal clock signal for providing the RDQS signal based on the WCK signal.

At time Ta11, or within the period tWCKDQO of time Ta11, device 0 provides a valid RDQS signal to the controller 10. Device 0 also provides data DQ in the period tWCKDQO at time Ta17. Device 0 provides the data DQ synchronized with the RDQS signal, so that the bits of the data DQ are provided on each clock edge of the RDQS signal until the data burst is completed (for example, a 16-bit data burst is shown in FIG. 11B). The input buffer of device 0 is disabled at about time Ta20, as represented in FIG. 11B by the WCK IB enable for level 0 becoming invalid at about time Ta20.

The number of RDQS clock cycles that occur before device 0 provides data is smaller in the timing of FIG. 11B than in the timing of FIG. 11A. The timing of FIG. 11B has 12 fewer RDQS clock cycles than the timing of FIG. 11A (for example, 24 clock cycles versus 36 clock cycles).
Fewer clock cycles of the RDQS signal can reduce power consumption; clock cycles beyond those provided between times Ta11 and Ta17 are unnecessary for proper operation of the controller 10.

The selection signal CS1 is valid at time Ta13 to select device 1 so that the read command READ is received by device 1 on the rising clock edge of the CK signal at time Ta13. With a read delay of 17 tCK, device 1 will provide the data for the READ command received at time Ta13 after time Ta30. The selection signal CS1 is also valid at time Ta15 so that the MPC command is received by device 1 (level 1). The MPC command at time Ta15 includes the operation code OP6=1 for enabling the RDQS early mode and the operation code OP7=1 for enabling the WCK-CK fast synchronization mode.

The WCK/WCKF input buffer of device 1 is enabled from time Ta15 in response to the MPC command. Enabling the WCK/WCKF input buffer of device 1 occurs over the period WCKENL between times Ta15 and Ta19 (represented in FIG. 11B by the WCK IB enable for level 1 becoming valid at about time Ta19). The controller 10 provides a static WCK signal after the WCK/WCKF input buffer of device 1 is enabled. Specifically, starting from time Ta19, the WCK signal remains static (at a low clock level) during the static period tWCKPREstatic between times Ta19 and Ta22. At time Ta22, the valid WCK signal provided by the controller 10 is received by device 1. Device 1 performs WCK-CK synchronization and generates an internal clock signal for providing the RDQS signal based on the WCK signal.

At time Ta24, or in the period tWCKDQO of time Ta24, device 1 provides a valid RDQS signal to the controller 10. Device 1 provides the data DQ in the period tWCKDQO at time Ta30. Device 1 provides the data DQ synchronized with the RDQS signal, so that the bits of the data DQ are provided on each clock edge of the RDQS signal until the data burst is completed (for example, a 16-bit data burst is shown in FIG. 11B). The input buffer of device 1 is disabled at about time Ta33, as represented in FIG. 11B by the WCK IB enable for level 1 becoming invalid at about time Ta33.

The timing of FIG. 11B allows device 1 to receive the READ command significantly earlier than the timing of FIG. 11A (for example, time Ta13 versus time Ta16). Therefore, the timing of FIG. 11B allows device 1 to provide data earlier than the timing of FIG. 11A (for example, time Ta30 versus time Ta33). In addition, the MPC command at time Ta15 reduces the number of clock cycles of the RDQS signal before device 1 provides data. Because the MPC command, unlike the CAS command, is not restricted to the position immediately before the associated access command, the READ command can be received by device 1 earlier. The MPC command can be received at a time before or after the associated READ command to enable the input buffer of device 1, so that unnecessary clock cycles of the RDQS signal can be avoided.

Therefore, as illustrated by FIGS. 11A and 11B, through use of the MPC command, device 1 can provide data sooner than when the CAS command is used (for example, FIG. 11A), and unnecessary clock cycles of the RDQS signal can be avoided by appropriately timing the MPC command.
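The savings claimed for FIG. 11B can be checked with quick bookkeeping on the event times given above; this is illustrative arithmetic, not device logic.

```python
# RDQS toggle cycles before device 1 provides data, per the text.
rdqs_fig11a, rdqs_fig11b = 36, 24
print(rdqs_fig11a - rdqs_fig11b)        # -> 12 fewer RDQS clock cycles

read_fig11a, read_fig11b = 16, 13       # device 1 READ at Ta16 vs. Ta13
data_fig11a, data_fig11b = 33, 30       # device 1 data at Ta33 vs. Ta30
print(read_fig11a - read_fig11b)        # READ received 3 tCK earlier
print(data_fig11a - data_fig11b)        # data provided 3 tCK earlier
```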
Although the above-described embodiments of FIGS. 6 to 11 have been described in the context of read operations, embodiments of the present invention may also be applied in the context of other memory access operations. For example, the MPC and CAS commands may be used for write operations. Rather than receiving a read command from the controller and providing data to the controller, devices 0 and 1 receive a write command from the controller and receive data from the controller for storage in memory.

FIGS. 6 to 11 illustrate the flexibility provided by the use of MPC commands to perform access operations (including, for example, single-level access operations and inter-level access operations) while accommodating different clock frequencies of the CK signal. Unlike the CAS command, which immediately precedes the associated access command (for example, a READ command, a WRITE command, etc.), the MPC command can be provided and received at a time spaced apart from the associated access command (for example, not immediately before the associated access command, or after the associated access command). As previously explained and described, the MPC command can be before or after the associated access command, and can be separated in time from the associated access command by at least one clock cycle of a system clock signal (such as the CK signal). However, the MPC command may also immediately precede, or immediately follow, the associated access command. Therefore, MPC commands can be used to provide flexible timing.

It should be understood from the foregoing that, although specific embodiments of the present invention have been described herein for illustrative purposes, various modifications may be made without departing from the spirit and scope of the present invention. Therefore, the scope of the present invention should not be limited to any of the specific embodiments described herein. |
A method for coating free-standing micromechanical devices (302) using spin-coating. A solution with high solids loading but low viscosity can penetrate the free areas (304) of a micromachined structure. Spinning this solution off the wafer or die results in film formation over the devices without the expected damage from capillary action. If an organic polymer is used as the solid component, the structures may be re-released by a traditional ash process. This method may be used as a process in the manufacture of micromechanical devices to protect released and tested structures, and to overcome stiction-related deformation of micromechanical devices associated with wet release processes. |
1. A method of coating free-standing micromechanical devices, the method comprising: depositing an organic resin coating material on said micromechanical device, said coating material comprised of at least 25% solids in a solvent, said coating material having a viscosity no greater than 120 centistokes; and curing said coating material.
2. The method of Claim 1, wherein said depositing step comprises: depositing a coating material having a viscosity of 118 centistokes.
3. The method of Claim 1 or Claim 2, wherein said depositing step comprises: depositing a coating material having a surfactant.
4. The method of any of Claims 1 to 3, wherein said depositing step comprises: depositing said coating material in a layer thick enough to cover structures on said micromechanical device after the removal of said solvent.
5. The method of any of Claims 1 to 4, further comprising: rotating said micromechanical device to distribute said organic coating material.
6. The method of Claim 5, wherein said rotating step comprises: rotating said micromechanical device at 3000 rpm.
7. The method of any of Claims 1 to 6, wherein said curing step comprises: heating said micromechanical device.
8. The method of Claim 7, wherein said curing step comprises: heating said micromechanical device to 100° C.
9. The method of Claim 7 or Claim 8, wherein said curing step comprises: heating said micromechanical device to a first elevated temperature to remove a majority of said solvent, and then lowering said temperature to remove additional solvent.
10. The method of any of Claims 1 to 9, wherein said depositing step comprises: depositing an organic resin coating material comprised of at least 40% solids in a solvent and having a viscosity no greater than 120 centistokes.
11. The method of any of Claims 1 to 10, wherein said depositing step comprises: depositing a coating material comprised of between 40% and 50% solids. |
CROSS REFERENCES TO RELATED APPLICATIONS

The following patents and/or commonly assigned patent applications are hereby incorporated herein by reference:

Patent No. | Filing Date | Issue Date | Title
5,061,049 | Sept. 13, 1990 | Oct. 29, 1991 | Spatial Light Modulator and Method
5,583,688 | Dec. 21, 1993 | Dec. 10, 1996 | Multi-Level Digital Micromirror Device

TECHNICAL FIELD OF THE INVENTION

This invention relates to the field of micro-electromechanical systems (MEMS), more particularly to methods used to coat such devices, and still more particularly to methods used to coat such devices with dissolved resins without structural damage.

BACKGROUND OF THE INVENTION

Micro-electromechanical systems (MEMS), or micromechanical devices, are micron-scale devices, often with moving parts, fabricated using traditional semiconductor processes such as optical lithography, doping, metal sputtering, oxide deposition, and plasma etching, which have been developed for the fabrication of integrated circuits. Micromirrors, such as the DMD™ micromirror array from Texas Instruments, are one type of micromechanical device. Other types of micromechanical devices include accelerometers, pressure and flow sensors, and gears and motors. While some micromechanical devices, such as pressure sensors, flow sensors, and micromirrors, have found commercial success, other types have not yet become commercially viable.

MEMS devices are extremely robust on their own scale, but are easily destroyed by macroscopic forces such as capillary attraction. A MEMS device caught in the surface tension of a liquid will move with that liquid, bending or even breaking in the process. A droplet of water or organic solvent on a MEMS device will pull the device down as it evaporates. Even if the device is not irreversibly deformed, it is likely to be trapped in a bent state by surrounding devices.

The fragile nature of MEMS devices can make them difficult to manufacture in a cost-effective manner. In the case of micromirror arrays, once the sacrificial layers beneath the micromirrors have been removed, the mirrors are very fragile and very susceptible to damage from particles. Particles become trapped in the mechanical structure of the micromirror array and can prevent the micromirrors from operating. Because the particles cannot be washed out of the array without destroying it, it is necessary to separate the wafers on which the devices are formed, and wash the debris off the devices, prior to removing the sacrificial layers under the mirrors (also called undercutting the mirrors). Furthermore, because the chip bond-out process also creates particles, it is desirable to mount the device in a package substrate and perform the chip bond-out process prior to undercutting the mirrors. Unfortunately, it is only after the mirrors have been undercut that the micromirror array can be tested. Following a standard micromirror production flow, all of the devices manufactured are mounted on package substrates, bonded out to the substrates, and undercut prior to testing. Additionally, micromirrors typically require some form of lubrication to prevent a micromirror from sticking to its landing surfaces when deflected. Therefore, the devices must also be lubricated and the package lid or window applied prior to testing.
Because a typical micromirror package is very expensive, the packaging costs associated with devices that do not function greatly increase the cost of production and must be recovered by the devices that do function. What is needed is a method of testing the micromechanical structure of a micromirror array prior to packaging the array. Such a method would enable a production flow that packages only the known-good devices, eliminating the significant cost associated with packaging failed die.

SUMMARY OF THE INVENTION

The present invention provides a method and system for recoating MEMS devices using dissolved resins. One embodiment of the invention provides a method of coating free-standing micromechanical devices, the method comprising: depositing an organic resin coating material on a micromechanical device, the coating material comprised of at least 25% solids in a solvent and having a viscosity no greater than 120 centistokes; and curing the coating material.

Another embodiment of the invention provides a method of coating free-standing micromechanical devices, the method comprising: depositing an organic resin coating material on a micromechanical device, the coating material comprised of at least 25% solids in a solvent and having a viscosity no greater than 120 centistokes; rotating the micromechanical device to distribute the organic coating material; and curing the coating material.

Another embodiment of the invention provides a method of coating free-standing micromechanical devices, the method comprising: depositing an organic resin coating material on a micromechanical device, the coating material comprised of at least 40% solids in a solvent and having a viscosity no greater than 120 centistokes; and curing the coating material.

BRIEF DESCRIPTION OF THE DRAWINGS

For a more complete understanding of the present invention, and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:
FIGURE 1 is a perspective view of a small portion of a micromirror array of the prior art.
FIGURE 2 is an exploded perspective view of a single micromirror element from the micromirror array of FIGURE 1.
FIGURE 3 is a cross-section side view of a micromirror device.
FIGURE 4 is a cross-section side view of a micromirror device showing damage to the structure due to an improper recoat.
FIGURE 5 is a cross-section side view of a micromirror device showing inadequate fill of the recoat resin after evaporation of the solvent.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

A new process has been developed that allows a fully fabricated and tested MEMS device to be covered with a protective resin. The protected MEMS device is durable enough to withstand device separation and cleanup steps as well as shipping stresses, allowing the devices to be completed and tested in wafer form, then coated with the protective layer, diced, and shipped for packaging. After the functional devices are mounted in a package substrate, the protective covering is removed and the package is sealed. For the purpose of illustration, but not limitation, the following description describes the recoating of a micromirror array. FIGURE 1 is a perspective view of a portion of a micromirror array 100. In FIGURE 1, some of the mirrors have been removed to illustrate the underlying structure of the device.
FIGURE 2 is an exploded view of a single micromirror element. Micromirrors are especially challenging to recoat without damage because of the relatively large, thin mirror 102. The mirror is roughly 13 to 17 µm on each side, and only about 200 nm thick. Each array contains between 500,000 and 1.3 million micromirrors, each separated by a gap slightly less than 1 µm. The micromirrors are supported approximately 3.3 µm above the substrate 104 by a mirror support spacervia 126 attached to a hinge yoke 114, which is in turn supported by a torsion hinge 120 less than 100 nm thick. The mirror and associated structures are formed of aluminum and aluminum alloys. The hinges are attached to a hinge cap 122 supported by hinge spacervias 116. The hinge spacervias are formed on a mirror bias metal layer 112 over an insulating oxide layer 106. Other structures include the address electrodes 110, the upper address electrodes 124, and the electrode support spacervias 118. The address electrodes are connected to circuitry on the underlying semiconductor substrate through vias 108 through the oxide layer. When an attractive electrostatic force is created between an address electrode and the hinge yoke, and between the upper address electrode and the mirror, the micromirror twists about the torsion hinge axis until stopped by contact between the springtips 128 and the mirror bias metalization layer 112.

Because of the very small dimensions of the micromirror and other MEMS structures, very small forces are sufficient to destroy the devices. Conventional wisdom has been that the structures are too weak to withstand contact with any solid or liquid, or a strong flow of gas, once the mirrors have been undercut. This view was reinforced by experiments in which the application of various liquids, including photoresist and water, destroyed the mirrors of a micromirror array. The fluids were assumed to have torn the mirrors away from the underlying structure due to the mass of the fluids and the centrifugal force created by spinning the wafer during the application and spreading of the liquid.

Previously, the only ways to apply a solution to a MEMS device and remove that solution without damage resorted to esoteric solvents, such as supercritical carbon dioxide. MEMS devices that used wet release processes could chase away the original solvent by gradually replacing it with a liquid such as cyclohexane, freezing the liquid, and subliming the frozen liquid away. This method can be effective, but suffers from slow cycle times and non-traditional handling techniques. Overcoming these limitations would not only allow structures to be released without regard to stiction, but would also allow solution-phase chemistry, such as electroless deposition, to be applied to an entire MEMS device after release. Such a technique might be used to deposit an anti-stiction coating, for example, but is difficult given current technology.

Recoating of MEMS devices after release is also prevented by problems associated with capillary attraction. A recoat technique would allow devices to be tested and inspected after release, then re-encapsulated and shipped or subjected to further processing. A recoated device could even be further patterned and processed. One technique for recoating has been to use PARYLENE™, which can be deposited from the gas phase and thus does not incur capillary action. This technique is slow, expensive, cannot be directly patterned, and relies on specialized equipment not found in most semiconductor fabrication facilities.
What follows is a simple, universal method of overcoming problems associated with device deformation by capillary forces. This method is fast, uses standard semiconductor equipment, and can be adapted to a manufacturing environment. It can be applied both to general release problems and, more specifically, to intentionally coating a free-standing MEMS device with a solid film. A free-standing MEMS device is considered to be any micromechanical device with sufficient sacrificial material removed to leave parts supported above, or extending above, a device substrate.

The present invention is based on the insight that damage to the device occurs not only during the application and smoothing of the protective layer, but also from the capillary forces created by the surface tension of the coating solvents as the solvents evaporate from the coating. With a proper understanding of the damage mechanisms, coatings can be applied without damage to very sensitive micromechanical devices. Previous recoating attempts using spin-coating methods have failed because the properties of the coating solution must be specifically tuned for the geometry of the device. In particular:
1. The solids loading of the solution should be as high as possible.
2. The viscosity of the solution must be as low as possible.
3. The surface tension of the solution should be minimized and the wetting of the device surface maximized.
4. The solution should rapidly dissolve trapped gasses, and not allow gasses to bubble out either during the coating process or on baking.
5. The coating should be uniform.
6. The solid should ash cleanly, leaving little to no residue on the substrate.
It is impossible to optimize all of these variables simultaneously, and compromises can be made based on the specifics of the device.

Devices are damaged through several mechanical mechanisms. First, if an extremely viscous coating is applied to the device, the coating may not be able to enter very small crevices in the device. In the case of the micromirror arrays shown in FIGURE 3, the coating material may not be able to seep through the gaps 300 between the mirrors 302 in a time that is practical for production process flows. If the coating hardens without entering the region 304 beneath the mirrors, an air pocket is formed. While the coating above the mirrors may protect the mirrors from debris generated during wafer separation, the air pocket can damage the device when the air trapped in region 304 is heated and expands. Damage from too viscous a coating also occurs when the wafer is spun after the coating has seeped through the mirror gaps: as the coating material under the mirrors surges toward the edge of the wafer, it is forced back through the mirror gaps and deforms or breaks the mirrors. Thus, a low-viscosity coating fluid helps to avoid damage to the micromechanical structures.

The capillary forces created by the coating fluid also have the potential to damage the MEMS device. Since good solvents typically have a higher surface tension than the dissolved filler material, and because the solvent is the primary component of most coating fluids of interest, the solvent in the coating fluid is primarily responsible for the damage that occurs while the coating cures. For example, common AZ-P3D-SF photoresist contains approximately 85% PGMEA as a solvent. As this solvent evaporates, the capillary forces pull on the mirror and the underlying mechanical structures, and can easily bend or break them.
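The scale of these capillary forces can be estimated with the standard Young-Laplace relation. The sketch below is an order-of-magnitude illustration only; the surface tension value is an assumption representative of a PGMEA-class solvent, and the gap width comes from the mirror geometry described above.

```python
# Approximate meniscus pressure across a narrow slot, assuming complete
# wetting (contact angle ~ 0): delta_p = 2 * gamma / gap.
def capillary_pressure(surface_tension_n_per_m: float, gap_m: float) -> float:
    return 2.0 * surface_tension_n_per_m / gap_m

gamma = 0.027   # ~27 mN/m, assumed for a PGMEA-like solvent
gap = 1e-6      # ~1 micron mirror-to-mirror gap
print(f"{capillary_pressure(gamma, gap):.0f} Pa")  # ~54000 Pa on a ~200 nm thick mirror
```

Tens of kilopascals acting on a 200 nm aluminum membrane is more than enough to explain the bending described next.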
This damage mechanism, which accounts for much of the damage to micromechanical devices, has not previously been understood. FIGURE 4 is a cross-section side view of the mirrors shown in FIGURE 3 after the mirrors have been damaged by the capillary forces of a recoating material. In FIGURE 4, the mirrors 302 are bent toward the substrate.

Several solutions exist to the problem of damage to the MEMS device caused by the capillary action of the solvent. The first is to add a surfactant to the coating material to lower the surface tension and the capillary forces generated by the coating material. A second option is to select a solvent that provides a low surface tension. If the surface tension of the coating material is low enough, the capillary forces will not be able to deform the mirrors.

High quantities of solvent also contribute to mirror damage. Without sufficient resin filler under the mirrors, the capillary forces of the evaporating solvent are able to pull the mirrors into the voids left by the evaporating solvent. FIGURE 5 is a cross-sectional side view of two micromirror elements showing voids 502 where the solvent has evaporated and the mirror has been damaged.

In addition to changing the surface tension of the coating material and controlling the viscosity of the coating material, the viscosity of the coating solution can also be changed by changing the molecular weight of the resin. The viscosity can likewise be changed by changing the quantity of the solvent used to dissolve the resin. Another method of varying the viscosity is to adjust the temperature of the coating material. Even very viscous coating materials can be used with some MEMS devices by nebulizing the coating material and depositing the coating as droplets instead of as a fluid.

The proper selection and mix of resin and solvent depend on the type of device being coated, since the geometry of the device determines the areas on which the capillary forces operate and the strength of the device. For a typical micromirror device using a coating of AZ-P3D-SF filler resin and PGMEA solvent, a resin content of at least 25% avoids damage to the micromirror superstructure. Higher levels of resin are preferable provided the viscosity of the coating material is controlled. Resin contents of 30%, 45.5%, and 50% show excellent results and increasingly prevent damage from the evaporation of the solvent. The higher the resin filler content of the coating material, the less the capillary forces generated by the solvent damage the device. With a high resin loading, the space 304 under the mirrors is essentially reinforced by the resin filler, and the evaporating solvent is unable to pull the mirrors into the space 304. Since the resin filler content is related to the viscosity of the coating material, however, the resin filler content cannot be raised arbitrarily without leaving voids under the mirrors caused by insufficient seepage of the resin filler through gaps in the MEMS structure.
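A minimal illustrative helper, not the patent's procedure, that encodes this selection guidance for the AZ-P3D-SF/PGMEA system: at least roughly 25% resin to brace the space under the mirrors, with higher loadings better, as long as the solution stays fluid enough to seep through the mirror gaps. The 120-centistoke ceiling used here is the penetration limit given later in the text.

```python
def rate_coating(resin_fraction: float, viscosity_cst: float) -> str:
    """Rough screen of a candidate coating solution for a micromirror array."""
    if resin_fraction < 0.25:
        return "reject: solvent-rich film, capillary damage to mirrors likely"
    if viscosity_cst > 120:
        return "reject: too viscous to penetrate the mirror gaps in time"
    if resin_fraction >= 0.455:
        return "good: high loading strongly reinforces the space under the mirrors"
    return "acceptable"

for resin, visc in [(0.20, 80.0), (0.30, 100.0), (0.455, 110.0), (0.50, 150.0)]:
    print(f"{resin:.1%} solids at {visc} cSt -> {rate_coating(resin, visc)}")
```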
Once the proper coating is selected, the coating material is typically deposited on the wafer by expelling the coating material from a syringe and then spinning the wafer. For purposes of illustration and not limitation, the wafer is typically spun at 1500 RPM for 120 seconds. After the coating material is deposited, the material is cured. Various curing methods can be used to evaporate the solvent from the coating material. One method involves gradual heating of the wafer to evaporate the solvent. The rate of solvent evaporation is controlled by controlling the temperature of the wafer. For example, the solvent could be evaporated gradually until the risk of deformation has passed, and then the rate of evaporation increased. Alternatively, a large excess quantity of solvent is rapidly evaporated and the wafer is then cooled to limit the rate of evaporation during a critical phase of the cure, when the solvent is most likely to deform the mirrors.

One embodiment of the methods described herein applies an overcoat to a micromirror device. The overcoat is spun onto the micromirror to achieve a uniform coating. The force that is most damaging to the structures during the spin process is capillary attraction. Contrary to popular belief in the field, no damage is caused to the devices by rapidly moving liquids, even at the interface between the mirrors and the scribe streets. Some of this misapprehension arises from the knowledge that rapidly moving gasses do damage MEMS structures; compressed nitrogen, aimed at the mirrors, will rip them off the substrate. Moving droplets or jets of liquid will similarly destroy the device. In the spinning process, however, the pressures on either side of a mirror are much more even, and little damage occurs. This conclusion could not previously be established because damage due to capillary attraction occurs simultaneously.

While spinning, material is flung off the side of the wafer by centrifugal force, and the solids and solvent are dispersed evenly across the wafer. Solvent begins to evaporate from the solution, and the height of the wet area begins to drop. If the height of the wet area drops near that of the device structure, capillary forces begin to tug on the device. However, if the resulting coating is thick enough to completely encapsulate the structure, the capillary forces tug nearly equally from all sides of the structure. These opposing forces cancel, and the net force on the device is nearly zero, allowing it to remain unharmed as the solvent evaporates. By contrast, if the resulting coating is less than or nearly equal to the height of the device, the capillary forces will tend to pull downwards. These forces will bend the device, reversibly or irreversibly, and can interlock adjacent structures of the device in a landed or collapsed position even if the bending is reversible. As such, as long as there is no undissolved material or bubbles in the solution, and the solution coats the device evenly, a wide variety of solids are acceptable. In one embodiment, the solution is a photoresist, and the solid consists of resin, PAC, surfactants, and adhesion promoters, as is common.

Most solutions that are thick enough to fully coat the device when spun are too viscous to penetrate the pores of the structure. The strategy of increasing the solids loading of the solution to increase the coating thickness results in a more viscous liquid which, if it is able to penetrate the pores of the structure at all, will do so too slowly to be of value in a manufacturing environment. In one embodiment, a photoresist with a solids loading of 40% or better is necessary. At less than 40% loading, significant bending of the mirror or hinge occurs, and this bending is irreversible for these structures. It may be possible to use thinner resins for devices that bend reversibly and only need to be protected from directly landing and stiction.

The photoresist is applied with a syringe, pipette, or automatic dispenser.
The wafer can be spinning slowly or stationary during dispense. After dispense, the solution slowly flows over the device, and a wait period specific to the resist used is prescribed. This procedure tends to trap bubbles under the devices, and the bubbles must be dissipated before the spin process starts. Generally, resists with viscosities less than 120 centistokes were able to penetrate the pores of a micromirror array in two minutes or less. Two minutes is deemed the maximum acceptable delay for a standard semiconductor manufacturing environment.

Three other factors that can influence the rate at which the solution evenly covers the structure are its surface tension, the wetting of the structures, and the ability of the solution to dissolve gasses. Some resists, for instance, flowed down in between the mirrors, while others wicked underneath them. Gasses are often trapped underneath the micromirrors. Any bubbles left in the device can cause damage during the spin process, or lead to cracking as the gasses expand and escape from the spun wafer during the bake process. Great care should be taken to ensure that no bubbles or other particles are present before spinning. Some recoat solutions are more efficient than others at dissipating these trapped gasses.

There are no commercial resists that have the requisite solids loading and can still penetrate the device pores in a short enough period for the process to be acceptable in a manufacturing environment. A custom resist with a solids loading of 49% and a viscosity of 118 centistokes is an ideal coating solution. This resist met the needs of a high-throughput manufacturing environment for coating thickness and coating uniformity. The spinning process is tolerant of adjustment as long as the above coating parameters are met. The spin speed and ramp rate affect coating thickness and uniformity. A spin speed of 3000 rpm was used, followed by a 100° C hotplate bake to harden the resist and allow the wafer to survive further processing. After processing and mounting into a package, the devices can be ashed cleanly. The recoat process has no effect on such parameters as the hinge period, mirror planarity, or any device dynamics after packaging.

The process described above in relation to micromirror fabrication is intended as an example and in no way limits the utility of the invention for other applications. Prior to application of the coating solution, a device may be immersed in another solution as part of an etch procedure or other chemical modification to the superstructure.

An alternative recoat method applies a coating of pure solvent (PGMEA) or a thin resist, followed by application of a thicker coating resist. If the wafer is slowly spun, the thinner solution is displaced by the thicker one and flows to the edge of the wafer due to centrifugal force. This process can occur quickly, with minimal mixing of the two solutions. Proper control of the spinning parameters (appropriate to the coating solution) provides a high-quality, uniform coating similar to that discussed above. Although intended as a method of enabling wafer-level testing of the micromechanical structures prior to device separation, the same methods may be used to apply an overcoat to a micromachined part mounted in a package substrate.
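Assembled end to end, the parameters given above suggest a recipe of the following shape. This is an illustrative sketch only; the step structure and field names are assumptions, while the numeric values (49% solids, 118 cSt, a penetration wait of up to two minutes, a 3000 rpm spin, and a 100° C bake) are taken from the text.

```python
from dataclasses import dataclass

@dataclass
class RecoatRecipe:
    solids_fraction: float    # resin loading of the coating solution
    viscosity_cst: float      # kinematic viscosity, centistokes
    penetration_wait_s: int   # wait for the solution to wick under the devices
    spin_rpm: int             # spin speed setting thickness and uniformity
    bake_c: int               # hotplate bake to harden the resist

    def manufacturable(self) -> bool:
        """Meets the penetration/throughput limits discussed in the text."""
        return self.viscosity_cst < 120 and self.penetration_wait_s <= 120

recipe = RecoatRecipe(solids_fraction=0.49, viscosity_cst=118,
                      penetration_wait_s=120, spin_rpm=3000, bake_c=100)
print(recipe.manufacturable())  # -> True
```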
Thus, although there has been disclosed to this point a particular embodiment of a method and system for recoating MEMS devices using dissolved resins, it is not intended that such specific references be considered limitations upon the scope of this invention. Furthermore, having described the invention in connection with certain specific embodiments thereof, it is to be understood that further modifications may now suggest themselves to those skilled in the art, and it is intended to cover all such modifications. |
PROBLEM TO BE SOLVED: To provide a flexible on-die fabric interface.
SOLUTION: An interface for coupling an agent to a fabric supports a set of coherent interconnect protocols. The interface includes: a global channel to communicate control signals to support the interface; a request channel to communicate messages associated with requests to other agents on the fabric; a response channel to communicate responses to other agents on the fabric; and a data channel to communicate messages associated with data transfers to other agents on the fabric, where the data transfers include payload data.
SELECTED DRAWING: Figure 1 |
1. An apparatus comprising: an agent circuit to support a set of coherent interconnect protocols; and an interface to couple to an interconnect fabric, the interface configured to support the set of coherent interconnect protocols, the interface comprising: a global channel using a first plurality of physical lanes, the global channel to communicate control signals to support the interface; a request channel using a second plurality of physical lanes, the request channel to communicate messages associated with requests to other agents on the fabric; a response channel using a third plurality of physical lanes, the response channel to communicate messages associated with responses to other agents on the fabric, wherein the responses include responses without payload data; and a data channel using a fourth plurality of physical lanes, the data channel to communicate messages associated with data transfers to other agents on the fabric, wherein the data transfers include payload data.
2. The apparatus of claim 1, wherein the request channel, the response channel, and the data channel each include a respective plurality of signals, and each signal of the plurality of signals is assigned to a respective subset of the physical lanes of the channel.
3. The apparatus of claim 2, wherein a first portion of the plurality of signals is transmitted to the fabric and a second portion of the plurality of signals is received from the fabric.
4. The apparatus of claim 2 or 3, wherein the plurality of signals of each of the request channel, the response channel, and the data channel includes a respective valid signal, protocol identifier signal, virtual channel identifier signal, and header signal, wherein the valid signal is asserted for a valid instance of the header signal, the header signal includes a header of a particular message, the protocol identifier signal identifies a protocol associated with the header, and the virtual channel identifier signal identifies a virtual channel used for the particular message.
5. The apparatus of claim 4, wherein the set of coherent interconnect protocols comprises a plurality of protocols, and the protocol identifier signal identifies one of the plurality of protocols as associated with the header.
6. The apparatus of claim 5, wherein the plurality of protocols includes a Compute Express Link (CXL) protocol, the CXL protocol comprising a CXL.cache protocol and a CXL.mem protocol.
7. The apparatus of claim 5 or 6, wherein the header signal has a width that supports the largest header format of the plurality of protocols.
8. The apparatus of any one of claims 4 to 7, wherein the plurality of signals of the data channel further includes a payload data signal to carry the payload data, and the payload data signal is assigned to a portion of the physical lanes of the data channel.
9. The apparatus of claim 8, wherein the payload data signal corresponds to the header signal, the payload data signal is transmitted a number of clock cycles after transmission of the header signal, and the number of clock cycles comprises a configurable parameter of the interface.
10. The apparatus of any one of claims 4 to 9, wherein the plurality of signals of each of the request channel, the response channel, and the data channel further includes a credit return signal to support receipt of credit returns associated with the respective channel, and credits are returned on the credit return signal in parallel with the transmission of messages on the same channel.
11. The apparatus of claim 10, wherein the credit returns include virtual channel dedicated credit returns and shared credit returns.
12. The apparatus of any one of claims 4 to 11, wherein the plurality of signals of each of the request channel, the response channel, and the data channel further includes a blocking signal to receive blocking requests, and a blocking request causes deassertion of the valid signal of the corresponding channel.
13. The apparatus of claim 12, wherein the valid signal is deasserted a particular number of clock cycles after the blocking signal is asserted, the particular number of clock cycles comprising a configurable parameter of the interface.
14. The apparatus of any one of claims 1 to 13, wherein the global channel includes a set of signals to initialize the interface.
15. The apparatus of claim 14, wherein initialization of the interface is according to a state machine comprising a plurality of initialization states of the interface, and values of the set of signals cause transitions between the plurality of initialization states.
16. A method comprising: receiving, in a first clock cycle, a valid signal asserted on a set of valid lanes of a particular channel of an interface, a first header signal on a set of header lanes of the particular channel, a VC ID signal on a set of virtual channel identifier (VC ID) lanes of the particular channel, and a protocol identifier signal on a set of protocol identifier lanes of the particular channel, wherein the interface couples an agent to a fabric, the first header signal corresponds to the valid signal, the first header signal includes at least a portion of a header of a packet, the protocol identifier signal identifies a particular one of a plurality of coherent protocols supported on the interface as applying to the packet, and the particular channel comprises one of a plurality of channels of the interface, the plurality of channels including a request channel, a data channel, and a response channel; receiving, in a subsequent clock cycle, the asserted valid signal, an EOP signal asserted on a set of end of packet (EOP) lanes of the particular channel, and a second header signal on the set of header lanes, wherein the second header signal includes at least a portion of the header of the packet; and determining an end of the packet based on the asserted valid signal and the asserted EOP signal in the subsequent clock cycle.
17. The method of claim 16, further comprising receiving, in the first clock cycle, a shared credit signal on a set of shared credit lanes of the particular channel, wherein the shared credit signal identifies whether a shared credit or a dedicated credit is to be used with the header, and the VC ID signal identifies a particular virtual channel associated with the dedicated credit when the shared credit signal identifies that a dedicated credit is used.
18. The method of claim 16 or 17, further comprising: determining backpressure in a queue; and transmitting a blocking signal on a blocking signal lane of the particular channel, wherein the blocking signal is transmitted based on the backpressure, and the valid signal is deasserted on the set of valid lanes based on the blocking signal.
19. A system comprising means for performing the method of any one of claims 16 to 18.
20. A system comprising: a fabric; and a plurality of compute blocks communicatively coupled through the fabric, wherein a particular compute block of the plurality of compute blocks includes: an agent circuit to support a set of coherent interconnect protocols; and an interface coupled to the fabric, the interface configured to support the set of coherent interconnect protocols, the interface comprising: a global channel coupled to a first plurality of physical lanes, the global channel to communicate control signals to support the interface; a plurality of request channels coupled to a second plurality of physical lanes, wherein each request channel is to communicate messages associated with requests to other agents on the fabric; a plurality of response channels coupled to a third plurality of physical lanes, wherein each response channel is to communicate messages associated with responses to other agents on the fabric, and the responses include responses without payload data; and a plurality of data channels coupled to a fourth plurality of physical lanes, wherein each data channel is to communicate messages associated with data transfers to other agents on the fabric, and the data transfers include payload data.
21. The system of claim 20, wherein the system comprises a system on chip (SoC), the SoC comprising the fabric and the plurality of compute blocks.
22. The system of claim 20 or 21, wherein the fabric comprises a network-on-chip device.
23. The system of any one of claims 20 to 22, further comprising computer memory, wherein the requests include requests for the computer memory.
24. The system of any one of claims 20 to 23, wherein the interface includes unequal numbers of request channels, response channels, and data channels.
25. The system of any one of claims 20 to 24, wherein the interface includes at least one each of the request channels, the response channels, and the data channels. |
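The framing rule in method claims 16 to 18 (a header spanning multiple cycles on the header lanes, terminated by a cycle in which both the valid and EOP signals are asserted) can be modeled with a short sketch. This is an illustrative model only, not the specification's logic; the function name and tuple layout are assumptions.

```python
def assemble_packets(cycles):
    """`cycles` yields (valid, eop, header_chunk) tuples, one per clock cycle."""
    chunks = []
    for valid, eop, chunk in cycles:
        if not valid:
            continue                  # idle cycle; header lanes not sampled
        chunks.append(chunk)
        if eop:                       # EOP asserted together with valid
            yield b"".join(chunks)    # end of the packet determined
            chunks = []

packets = list(assemble_packets([
    (True, False, b"\x01"),   # first beat (VC ID / protocol ID also arrive here)
    (True, True,  b"\x02"),   # second beat with EOP -> packet complete
]))
print(packets)  # -> [b'\x01\x02']
```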
Flexible On-Die Fabric Interface

RELATED APPLICATION

This application claims the benefit of U.S. Provisional Patent Application No. 62/944,773, filed on December 6, 2019, the disclosure of which is considered part of the disclosure of this application and is incorporated herein by reference in its entirety.

The present disclosure relates to computer systems, and in particular (but not exclusively) to point-to-point interconnects.

Advances in semiconductor processing and logic design have made it possible to increase the amount of logic that can be present on integrated circuit devices. As a corollary, computer system configurations have evolved from a single or multiple integrated circuits in a system to multiple cores, multiple hardware threads, and multiple logical processors present on individual integrated circuits, as well as other interfaces integrated within such processors. A processor or integrated circuit typically includes a single physical processor die, which may include any number of cores, hardware threads, logical processors, interfaces, memory, controller hubs, and the like.

Smaller computing devices have become more popular as a result of the greater ability to fit more processing power into smaller packages. Smartphones, tablets, ultra-thin notebooks, and other user equipment have grown exponentially. However, these smaller devices rely on servers both for data storage exceeding the form factor and for complex processing. Consequently, demand in the high-performance computing market (that is, server space) has also increased. For example, modern servers typically include not only a single processor with multiple cores, but also multiple physical processors (also referred to as multiple sockets) to increase computing power. However, as processing power grows along with the number of devices in a computing system, communication between sockets and other devices becomes more important.

In fact, interconnects have grown from the more traditional multi-drop buses that primarily handled electrical communications to full-fledged interconnect architectures that facilitate high-speed communication. Unfortunately, as future processors are expected to consume data at even higher rates, corresponding demands are placed on the capabilities of existing interconnect architectures.

BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying figures include:
a simplified block diagram illustrating an exemplary embodiment of a system on chip (SoC) device;
a simplified block diagram of an exemplary computing system;
a simplified block diagram showing an exemplary flexible on-die fabric interface;
a simplified block diagram showing an exemplary Compute Express Link (CXL) topology;
two simplified block diagrams illustrating embodiments of a flexible on-die fabric interface;
a timing diagram showing signaling over a channel of an exemplary flexible on-die fabric interface;
a timing diagram showing signaling over a channel of an exemplary flexible on-die fabric interface, including the use of blocking signals;
a timing diagram showing credit return signaling over a channel of an exemplary flexible on-die fabric interface;
a simplified block diagram showing a portion of the global channel of an exemplary flexible on-die fabric interface;
a diagram showing an exemplary initialization state machine of an exemplary flexible on-die fabric interface;
FIG. 11 is a timing diagram illustrating initialization of an example flexible on-die fabric interface. FIG. 12 is a timing diagram illustrating a first example of a disconnect flow on an example flexible on-die fabric interface. FIG. 13 is a timing diagram illustrating a second example of a disconnect flow on an example flexible on-die fabric interface. FIG. 14 is a flow diagram illustrating an example technique for signaling using an example flexible on-die fabric interface. FIG. 15 is a flow diagram illustrating another example technique for signaling using an example flexible on-die fabric interface. FIG. 16 is a block diagram of an embodiment of a computing system including a multicore processor. FIG. 17 is a block diagram of another embodiment of a computing system including a multicore processor. FIG. 18 is a block diagram of an embodiment of a processor. FIG. 19 is a block diagram of another embodiment of a computing system including a processor. FIG. 20 is a block diagram of an embodiment of a computing system including multiple processors. FIG. 21 is a diagram illustrating an example system implemented as a system on chip (SoC).

In the following description, numerous specific details are set forth, such as examples of specific types of processor and system configurations, specific hardware structures, specific architectural and microarchitectural details, specific register configurations, specific instruction types, specific system components, specific measurements/heights, specific processor pipeline stages and operation, and the like, in order to provide a thorough understanding of the present disclosure. It will be apparent, however, to one skilled in the art that these specific details need not be employed to practice the embodiments of the present disclosure. In other instances, well-known components or methods, such as specific and alternative processor architectures, specific logic circuits/code for described algorithms, specific firmware code, specific interconnect operation, specific logic configurations, specific manufacturing techniques and materials, specific compiler implementations, specific expression of algorithms in code, specific power-down and gating techniques/logic, and other specific operational details of computer systems, have not been described in detail in order to avoid unnecessarily obscuring the present disclosure.

While the following embodiments may be described with reference to efficient high-speed data transmission and configurability in specific integrated circuits, such as computing platforms or microprocessors, other embodiments are applicable to other types of integrated circuits and logic devices. Similar techniques and teachings of the embodiments described herein may be applied to other types of circuits or semiconductor devices that can also benefit from better energy efficiency and energy conservation. For example, the disclosed embodiments may be applied to computing systems embodied as servers, blades, desktop computer systems, system-on-chip (SoC) devices, handheld devices, tablets, set-top boxes, in-vehicle computing systems, computer vision systems, gaming systems, machine learning systems, and embedded applications. As will become readily apparent in the description below, the embodiments of methods, apparatus, and systems described herein (whether in reference to hardware, firmware, software, or a combination thereof) are useful for the development of high-performance computer interconnects and their respective systems.

As computing systems are advancing, the components therein are becoming more complex.
As a result, the interconnect architectures that couple and enable communication between the components are also increasing in complexity to ensure that bandwidth requirements are met for optimal component operation. Furthermore, different market segments demand different aspects of interconnect architectures to suit the market's needs. For example, servers require higher performance, while the mobile ecosystem is sometimes able to sacrifice overall performance for power savings. Yet, the singular purpose of most fabrics is to provide the highest possible performance with maximum power savings. Below, a number of interconnects are discussed, which would potentially benefit from aspects of the solutions described herein.

One interconnect fabric architecture includes the Peripheral Component Interconnect (PCI) Express (PCIe) architecture. A primary goal of PCIe is to enable components and devices from different vendors to inter-operate in an open architecture spanning multiple market segments: clients (desktops and mobile), servers (standard and enterprise), and embedded and communication devices. PCI Express is a high-performance, general-purpose I/O interconnect defined for a wide variety of future computing and communication platforms. Some PCI attributes, such as its usage model, load/store architecture, and software interfaces, have been maintained through its revisions, whereas previous parallel bus implementations have been replaced by a highly scalable, fully serial interface. The more recent versions of PCI Express take advantage of advances in point-to-point interconnects, switch-based technology, and packetized protocols to deliver new levels of performance and features. Power management, quality of service (QoS), hot-plug/hot-swap support, data integrity, and error handling are among the advanced features supported by PCI Express.

Traditionally, a dedicated wired interface is provided separately for each protocol supported by a system (e.g., IDI, CMI, CXL, etc.). For instance, in an SoC, each IP block agent may be equipped with its own distinct wired interface enabling the agent to couple to and communicate with one or more SoC components (e.g., fabric blocks, network-on-chip (NoC) devices, buses, switches, etc.) implementing the interconnect fabric and application layer of the SoC. Each dedicated wired interface may have different ways of implementing link layer functions such as connect, reset, disconnect, and flow control. Such dedicated interfaces also utilize a large number of wires for agents employing multiple protocols. A large number of wires increases the design area and power consumption of the system. Fabric-specific interfaces have also been used, which may allow multiple protocols and higher wire efficiency, but offer little scalability because they are complex interfaces custom-designed for a particular system, and the fabric must be redesigned for subsequent generations or system modifications, among other example drawbacks.

A flexible wired interface, such as that described herein, can be configured to address potential fabric interconnect needs, including, for example, interconnects in server applications and client CPU SoC development, among other examples, thereby addressing these and other issues present in traditional systems. In some implementations, such a flexible on-die wired interface (or link layer) may be defined to support a number of different protocols, such as IDI, UPI, and memory protocols, among other examples.
In one example, an interface specification can be applied to implement an interface supporting CXL (Compute Express Link) sub-protocols (such as the CXL.mem and CXL.cache protocols) for external IP development. The interface definition may support both the upstream (e.g., device) and downstream (e.g., host) directions. Some implementations may further support switches and non-host fabric extensions, among other examples.

Turning to the simplified block diagram 100 of FIG. 1, a simplified example of a system-on-chip (SoC) device 105 is shown. An SoC can be implemented as an integrated circuit that incorporates multiple components of a computer, or computing blocks (or intellectual property (IP) blocks). Such blocks (e.g., 110, 115, 120, 125, 130, 135, 140, 145) can include components such as one or more CPU components 110, 115, 120, 125 (e.g., a microprocessor or microcontroller), special-purpose processors 130, 135 (e.g., a graphics processing unit (GPU), image signal processor (ISP), tensor processing unit, accelerator device, etc.), memory components, input/output (I/O) ports, secondary storage blocks, and other computing blocks on a single die or substrate of silicon.

The computing blocks (e.g., 110, 115, 120, 125, 130, 135, 140, 145) of the example SoC 105 can be interconnected by an SoC fabric (e.g., 150). The fabric 150 can itself be implemented with a set of one or more IP blocks that facilitate communication between the computing blocks (e.g., 110, 115, 120, 125, 130, 135, 140, 145). In some implementations, the fabric 150 may be implemented as a network-on-chip (NoC), such as with one or more circuit blocks of a network-on-chip (NoC) implementation.

Communication by the various blocks (e.g., 110, 115, 120, 125, 130, 135, 140, 145) can be facilitated through protocol agents (e.g., 160a-h) provided on the blocks (e.g., 110, 115, 120, 125, 130, 135, 140, 145). Each agent (e.g., 160a-h) can include logic (e.g., implemented in hardware circuitry, firmware, software) to implement all or a subset of the layers of one or more interconnect protocols (e.g., PCIe, CXL (Compute Express Link), Gen-Z, OpenCAPI, In-Die Interface (IDI), CCIX (Cache Coherent Interconnect for Accelerators), UPI (UltraPath Interconnect), etc.), through which the corresponding computing block communicates with other computing blocks in the system. As described herein, the agents can couple to the fabric 150 through respective interfaces. While such agents have traditionally been coupled to the fabric via their own dedicated wired interfaces, one or more of the agents (e.g., 160a-h) can utilize respective instances of a configurable, flexible on-die wired interface, which can be deployed to support multiple different protocols for multiple different agents of the SoC 105, among other example implementations.

As introduced above, a flexible wired interface, or unified fabric interface (UFI), allows many protocols to run over a single wired interface coupling an agent to the fabric, enabling wire efficiency similar to that of a custom fabric-specific interface. In some implementations, UFI can be developed by omitting fabric-specific details and decoupling intellectual property (IP) blocks, or other computing blocks (e.g., 160a-h), from the fabric 150. The result, among other example benefits, is
a clean compute block interface that allows reuse and simple link-layer flows (e.g., reset, connect, disconnect, flow control), while allowing the system interconnect fabric to evolve consistently over time without changing the component computing blocks interfacing with the fabric 150 of the system (e.g., a system on chip (SoC)). Thus, UFI can provide a simple, clean, and verifiable interface for the development models of both agent IP blocks (e.g., PCIe, CXL, cores) and fabric IP blocks (e.g., server coherent fabric (SCF), client coherent fabric (CCF), NetSpeed®, etc.), among other examples.

FIG. 2 is a simplified block diagram 200 showing an example logical flow diagram of an example computing system (e.g., an SoC). The SoC may include SoC IP blocks (e.g., 205), one or more gaskets (e.g., 210), and an interconnect fabric (e.g., 150). The SoC IP blocks (e.g., 205) may include a processor core block (e.g., 225), a memory block (e.g., 230), an input/output (I/O) interconnect protocol block (e.g., 235), a direct memory access (DMA) block (e.g., 240), an inter-processor communication protocol (UPI) block (e.g., 245), and a cache coherency protocol (e.g., CXL.mem/CXL.cache) block (e.g., 250), among other example IP blocks. In some cases, the protocol-specific logic of some agents (e.g., blocks 245, 250) can be at least partially aware of the fabric topology (e.g., the sockets in the system, the caching agents in the system, etc.) and can interface directly with the fabric. Gaskets (e.g., 210) can be utilized to facilitate communication by certain other blocks (e.g., 225, 230, 235, 240) that are unaware of the fabric topology. The gasket 210 may include logic providing topology- and protocol-aware conversion between the IP blocks and the fabric 150, which provides protocol layer conversion to the network layer of the fabric. Examples of gaskets may include a core gasket 255, memory encryption logic 260, two-level memory (2LM) logic 265, a host I/O processor (HIOP) 270 (e.g., converting load/store workflows such as PCIe into out-of-order protocols such as IDI/UPI, while maintaining the system requirement that producer/consumer workflows function), and uBox logic 275. A gasket may also function as a bridge to another interface 215 (e.g., to an IOSF interface, etc.), among other examples.

A UFI interface 220, as discussed herein, can be implemented in the system to provide a clean protocol boundary around the fabric 150 or the gaskets 210 of the system, allowing the computing blocks 205 (e.g., IP blocks) within the system (e.g., SoC) to operate without knowledge of the details of the fabric. For example, the fabric can implement standard, simple bridges and provide basic functions such as address decoding and flow control. The gaskets may implement optional SoC-specific stand-alone features between the fabric and the agents (which implement UFI on the fabric and agents). Additionally, the UFI interface may define physical channels that can be configured to meet the bandwidth requirements of individual computing blocks, among other example features and advantages.

UFI can define a standard interface between an agent and the interconnect fabric of a system. An agent can be any of a variety of IP blocks or other computing elements (e.g., hardware elements) connected to the fabric, and may have different profiles (upstream and/or downstream ports) as well as different protocol or bandwidth requirements. The fabric is expected to support the requirements of the agents within UFI and the associated protocols tunneled by UFI. Turning to
FIG. 3, a simplified block diagram 300 is shown illustrating the channels of an example UFI interface 220 coupling an agent 305 to the fabric 150. In some implementations, UFI defines three physical channels in each direction, in a set of agent-to-fabric (A2F) channels 310 and a set of fabric-to-agent (F2A) channels 315: the interface 220 can be implemented to include a request (RQ) physical channel (e.g., 330, 350), a response (RSP) physical channel (e.g., 335, 355), and a data (DATA) physical channel (e.g., 325, 345). In some embodiments, UFI further includes global control channels 320, 340 to support common global control signals across the three main physical channels.

The UFI definition can flexibly map a number of different protocols (e.g., In-Die Interface (IDI), UPI, CXL.mem, etc.) over these physical channels. UFI provides substantial configurability within a deployed system. For example, not only the supported protocols, but also the number of channels needed to meet performance, and potentially the different fabrics used in the system, among other factors, can be configured. For example, Tables 1 and 2 below show example usages with differing protocol profiles and numbers of physical channels. The combination of protocols and channel counts can be considered an "agent profile."

Table 1: Examples of protocol profiles
Table 2: Examples of agent profiles

Table 1 shows a list of example agents (e.g., cores, UPI agents, etc.) that may be included in the IP blocks of a system and identifies the collection of protocols supported by the corresponding agent logic (e.g., an IOSF agent supports IDI and non-coherent UPI (UPI_NC), while a uBox agent supports IDI, IDI system agent (IDI_SA), non-coherent UPI, etc.). In a traditional system, each of an agent's supported protocols might be provided with its own separate wired interface (coupling the agent to the fabric) (e.g., in the example of the uBox agent supporting three protocols, three separate wired interfaces might be provided). With a UFI interface applied, an IP block can replace these multiple interfaces with a single UFI interface and connect to a fabric supporting communication using any one of the supported protocols. For example, Table 2 shows examples of the UFI channels implemented in a single UFI interface to support each of the example agents listed in the leftmost column of Table 2. For instance, the UFI interface of the example CXL agent can support both IDI and CXL.mem and includes one request channel, one data channel, and two response channels in the A2F direction, and two request channels, one data channel, and one response channel in the F2A direction. In some cases, a supported protocol may not use one of the UFI channels; accordingly, such channels may be omitted in some UFI instances. As an example, the uBox agent identified in Table 2 supports IDI_SA, IDI, and UPI_NC and, based on these protocols, does not need a request channel in the F2A direction (which is omitted for that UFI instance), among other examples.

Returning to the discussion of FIG. 3, each channel (e.g., 320, 325, 330, 335, 340, 345, 350, 355) may be composed of a set of physical wires, or lanes, with each wire assigned to carry a particular type of signal. In a UFI interface, a set of physical lanes (e.g., wires or other conductors) is provided and assigned to the various channels to embody the logical set of signals defined for the interface and assigned to the respective physical lanes of the interface.
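Referring back to the agent profiles of Tables 1 and 2, one loose way to picture such a profile in configuration tooling is sketched below in Python. This is an illustration only: the names AgentProfile and ChannelCounts, and the idea of representing a profile as a dataclass, are assumptions of the sketch rather than anything defined by UFI.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class ChannelCounts:
    """Number of RQ/DATA/RSP physical channel instances in one direction."""
    rq: int
    data: int
    rsp: int

@dataclass
class AgentProfile:
    """Hypothetical model of a UFI 'agent profile': the protocols an agent
    supports plus the A2F/F2A channel counts chosen to meet its bandwidth."""
    name: str
    protocols: set = field(default_factory=set)
    a2f: ChannelCounts = ChannelCounts(rq=1, data=1, rsp=1)
    f2a: ChannelCounts = ChannelCounts(rq=1, data=1, rsp=1)

# Loosely modeled on the CXL agent row of Table 2: IDI and CXL.mem mapped
# onto one UFI instance, with channel counts chosen per direction.
cxl_agent = AgentProfile(
    name="CXL agent",
    protocols={"IDI", "CXL.mem"},
    a2f=ChannelCounts(rq=1, data=1, rsp=2),
    f2a=ChannelCounts(rq=2, data=1, rsp=1),
)

# A supported protocol may leave a channel unused; a profile with zero F2A
# RQ channels simply omits that channel for the UFI instance.
ubox_agent = AgentProfile(
    name="uBox",
    protocols={"IDI_SA", "IDI", "UPI_NC"},
    f2a=ChannelCounts(rq=0, data=1, rsp=1),
)
```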
Each device possesses pins and corresponding UFI logic (implemented in hardware circuitry and/or software) implementing an interface termination (transmitter or receiver), or instance, that couples to the physical lanes embodying the connection between the transmitter and receiver ends of the interface. Accordingly, as discussed below, a set of signals can be defined under UFI for each channel. Some of the defined signals may have widths and formats that are fixed for every channel instance, regardless of the protocols supported by the corresponding UFI interface. Other signals may be based on attributes of the supported protocols (e.g., the protocol header lengths) and on the operating speed of the agent (e.g., agents running slower than the fabric may be compensated with wider data channels), among other example features. In this manner, data can be transmitted in parallel with the requests and responses used to manage the coherency of the system, together with the link training and control signals transmitted over dedicated global control channels (e.g., 320, 340), among other example advantages.

To illustrate certain general principles of UFI, non-limiting examples of potential UFI implementations are described herein. For example, a UFI interface may be configured to support a number of different sub-protocols of CXL (e.g., CXL.io, CXL.mem, CXL.cache, etc.) mapped onto UFI's physical channels. Because such an implementation maps these coherent protocols between agents and fabrics, it may be referred to as the CXL.cache/mem protocol interface (CPI). As the above suggests, a UFI implementation (e.g., CPI) may allow multiple different protocols (e.g., CXL.mem and CXL.cache) to be mapped onto the same physical wires (which implement the channels of a UFI interface).

CXL (Compute Express Link) is a low-latency, high-bandwidth discrete or on-package link that supports dynamic protocol multiplexing (or muxing) of a coherency protocol (CXL.cache), a memory access protocol (CXL.mem), and an I/O protocol (CXL.io). CXL.cache is an agent coherency protocol that supports device caching of host memory, CXL.mem is a memory access protocol that supports device-attached memory, and CXL.io is a PCIe-based non-coherent I/O protocol with enhanced support for accelerators. CXL is intended to provide a rich set of protocols to support a wide spectrum of devices, such as accelerator devices. Depending on the particular accelerator usage model, all of the CXL protocols (CXL.io, CXL.mem, CXL.cache), or only a subset, may be enabled, providing a low-latency, high-bandwidth path for the corresponding computing block or device (e.g., an accelerator) to access the system.

In UFI, the specific choices of channel mapping and physical wire sharing among different protocols can be protocol- and implementation-specific, and all of these various mappings can be permitted by the UFI definition. For example, in some implementations, depending on whether the component is a downstream port or an upstream port, different channels of CXL.cache and CXL.mem are associated with the agent-to-fabric (A2F) direction or the fabric-to-agent (F2A) direction. For instance, in the example of FIG. 3, the CXL.cache and CXL.mem protocols can be mapped to the physical channels (e.g., 320, 325, 330, 335, 340, 345, 350, 355) coupling the corresponding CXL agent to the fabric, among other examples. Table 3 shows examples of the channels that can be used in an example CPI UFI implementation.
For example, Table 3 captures the profiles, in terms of the CXL.cache and CXL.mem physical channels, in the context of the agent-to-fabric connections of upstream and downstream ports in a UFI implementation. In the case of CPI, the number of each physical channel used by a CXL.cache and CXL.mem agent may be an implementation option based on, for example, the bandwidth requirements of the agent.

Table 3: Examples of CPI agent profiles

Turning to FIG. 4, a simplified block diagram 400 is shown illustrating example agents and the coupling of such agents to a fabric. FIG. 4 shows an example system topology for ports supporting a CXL link 415. For example, the CXL link 415 can couple a CPU host device 405 to another device 410 (e.g., a memory device or an accelerator device). Each agent (on devices 405, 410) can include link layer logic (e.g., 420a-b, 425a-b) supporting each of the sub-protocols of CXL (e.g., CXL.io, CXL.mem, CXL.cache). In the case of CXL.mem and CXL.cache, a common controller (e.g., 425a-b) can be used. Protocol multiplexing can be facilitated through CXL arbitration/multiplexing logic (e.g., 425a-b, implemented in hardware circuitry) that interfaces with a Flex Bus™ physical layer (e.g., 430a-b). Flex Bus can be implemented as a flexible high-speed port statically configured to support either PCIe or CXL. Flex Bus allows either the PCIe protocol or the CXL protocol to be sent over a high-bandwidth, off-package link. Protocol selection at the Flex Bus PHY 430a-b may occur at boot time via auto-negotiation, based on the application.

Continuing with the example of FIG. 4, while UFI implementations (e.g., CPI) 220a, 220b are used for coherent protocols such as CXL.cache and CXL.mem, a different UFI implementation or a different wired interface definition (430a, 430b) (e.g., a streaming fabric interface (SFI)) may be used for load/store protocols such as PCIe and CXL.io. In one example, the streaming fabric interfaces (SFIs) 430a-b act as an intermediate interface that assumes no protocol- or application-specific responsibilities between the transmitter and receiver, providing a scalable streaming interface capable of sustaining the high bandwidth requirements of load/store protocols (e.g., PCIe, CXL.io, etc.). SFI does not include a stand-alone protocol definition; rather, SFI semantics are provided to support the various protocols that can be mapped onto the flow control and virtual channel semantics provided by the SFI definition, among other examples and interface implementations.

As shown in FIG. 4, the system can use example UFI instantiations 220a-b (e.g., CPI instantiations). Such UFI instantiations 220a-b allow wires to be shared by the fabric, and wire efficiency can be achieved around the fabric and agents by allowing different protocols to share common wires. For example, in a UFI implementation, the channels of the various protocols originating from an agent are carefully mapped onto a minimal set of physical and virtual channels, such that the bandwidth and channel-separation requirements of the agents and protocols are satisfied with the smallest total number of wires. UFI may not include new protocol definitions; instead, UFI maps existing protocols onto a common set of channels. To maximize wire sharing across the various protocols, UFI provides that the protocols use common flow control and virtualization features (e.g., as defined by UFI) on their channels.
In some UFI implementations, certain protocols can be mapped to use a common data width and control signal width, depending on the instantiation. In some cases, the UFI virtual channel definitions are included for all mapped protocols. Ordering considerations can be set within or between channels, but, among many example features, messages may be considered unordered where ordering considerations remain unspecified.

Turning to FIGS. 5A-5B, in some implementations of a UFI interface, the same interface can be used to support communication between the agent and the fabric using any of the agent's supported protocols. For example, the block diagram 500a of FIG. 5A shows an example in which a single instance of the UFI interface is used to support both CXL.cache and CXL.mem in a CPI implementation. On the other hand, as shown in the block diagram 500b of FIG. 5B, an alternative implementation of the UFI interface may provide two separate instances of the UFI interface, namely a first instance for CXL.cache (310a, 315a) and a second instance for CXL.mem (310b, 315b), among other example agents and protocols. Indeed, some agents may expose only one protocol per UFI interface, choosing to duplicate the UFI interface rather than map multiple protocols onto a single UFI interface. Choosing such an implementation can simplify the design at the cost of additional wires and logic.

As introduced above, the UFI interface defines three physical channels in each direction: request (RQ), response (RSP), and data (DATA). The RQ channels carry requests from the agent to the fabric and from the fabric to the agent, respectively. Transaction address and protocol-level command information are encapsulated in the header fields of the data transmitted on the channel. A physical RQ channel can transfer one transaction per cycle, and the width of the channel (e.g., the number of physical lanes provided to implement the channel) can be determined by the maximum width required to transfer one request among all of the protocols sharing the physical channel. The DATA channels carry all messages that transfer data between agents. This may include write data, read response data, snoop response data, and the like. A transaction can be sent as multiple FLITs (FLow control unITs) on the data channel. For example, a 64B transfer on a data channel carrying 32B of data can be transmitted via two FLITs, among other examples. The RSP channel carries responses without data. For requests generated by an agent, responses received from the fabric are sent back using this physical channel. These responses can include completions, snoop responses, and the like. Such responses may not carry address bits, so in some implementations this channel may utilize header fields that are relatively narrower than those of RQ. Indeed, the RSP physical channel can transfer a single flow control unit (FLIT) message. Common signals that apply to all of the physical channels are referred to as global signals, and additional lanes of the interface can be defined to carry them. For example, the global signals can be used to support initialization, disconnect, and other error reporting, among other features. The UFI physical channels are used by the various protocols mapped onto the UFI interface. Multiple instantiations of the same channel among the RQ, DATA, and RSP channels may be permitted, to match the link bandwidth to the fabric bandwidth.
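Tying back to the FLIT accounting mentioned above, the number of DATA-channel FLITs needed for a transfer follows directly from the payload size and the channel's per-cycle data width. The helper below is a minimal sketch of that arithmetic (the function name is invented for illustration):

```python
import math

def num_flits(transfer_bytes: int, channel_bytes_per_cycle: int) -> int:
    """Number of FLITs (one per cycle here) needed to move a payload
    across a DATA channel of the given per-cycle width."""
    return math.ceil(transfer_bytes / channel_bytes_per_cycle)

# A 64B transfer on a channel carrying 32B per cycle takes two FLITs,
# matching the example in the text; a 64B-wide channel needs only one.
assert num_flits(64, 32) == 2
assert num_flits(64, 64) == 1
assert num_flits(64, 16) == 4
```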
Moreover, not all protocols need to use all of the channels of UFI, among the many example features and implementations.

A UFI instance can have a global channel and a variable number of RQ, DATA, and RSP channels in each direction between the agent and the fabric. At the first level, the signals are grouped by direction of data flow into the agent-to-fabric (A2F) and fabric-to-agent (F2A) directions. The global layer carries signals that apply across all of the physical channels. For example, Table 4 shows example A2F global signals and Table 5 shows example F2A global signals. The width specifies the number of physical connections (e.g., wires or lanes) used in the channel to implement the signal.

Table 4: A2F global channel wires
Table 5: F2A global channel wires

The UFI request, or RQ, layer carries requests from agent to fabric and from fabric to agent. Address and protocol-level command information are encapsulated in header fields, or signals, on the RQ layer. Since the header, and the information contained within the header, can be protocol-specific, mappings can be defined to map the protocol-specific information to bits (and to the particular wires used to transmit those bits). Moreover, given the protocol-specific nature of the header, the width of the header signal is also configurable and can be adjusted to support the protocols implemented on the UFI. Other fields, or signals, may be protocol-agnostic, with fixed signal widths. The UFI RQ layer signals can be defined to be symmetric in the A2F and F2A directions, even though some protocols may not utilize or provide such symmetry. As an example, CXL.cache and CXL.mem are not symmetric protocols; thus, the upstream and downstream versions of CXL.cache and CXL.mem are mapped as different protocols. Indeed, implementations may support only the relevant subset of protocols used for a feature. Table 6 shows examples of the signals and signal widths in the UFI RQ layer. The direction specifies the direction of the signal from the perspective of the packet transmitter (Tx) and the packet receiver (Rx).

Table 6: RQ layer fields

As noted above, the size of the header signal (HDR) is variable and based on the protocols transferred over the UFI interface. When multiple protocols are transmitted over the UFI interface, the width of HDR is sized to the maximum size of HDR transferred through the interface, or the maximum header size among the multiple supported protocols. The reserved field width is used chiefly to cover the unused portion of HDR. The transmitter drives 0s on the reserved field, and the corresponding receiver ignores this field.

To illustrate an example mapping of protocol headers onto the UFI HDR signal, Tables 7 and 8 show example mappings of the CXL.cache protocol to the request channel HDR signal in the upstream and downstream directions. For example, the widths of the various fields (excluding address parity) are defined by the CXL.cache specification. In one example, the address parity is calculated as the XOR of all bits of the Address field. For an upstream port, A2F corresponds to the host-to-device (H2D) channels of CXL (Compute Express Link) and F2A corresponds to the device-to-host (D2H) channels of CXL. For a downstream port, A2F corresponds to the D2H channels of CXL and F2A corresponds to the H2D channels of CXL. For a downstream port, the Device Trust Level field defined in the CXL security policy registers is also part of D2H requests. In this example, only one virtual channel is supported on these channels of CXL.cache.

Table 7: Upstream port mapping of the CXL.cache protocol to HDR
Table 8: Downstream port mapping of the CXL.cache protocol to HDR
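The address-parity rule noted above (parity computed as the XOR of all bits of the Address field) reduces to a few lines of code. The following is a minimal sketch; the function name and the default width are assumptions for illustration:

```python
def address_parity(address: int, width: int = 52) -> int:
    """Fold an address down to a single parity bit by XOR-ing all of its
    bits together, as described for the HDR address parity field."""
    parity = 0
    for bit in range(width):
        parity ^= (address >> bit) & 1
    return parity

# Equivalent closed form: parity is 1 exactly when the address has an
# odd number of set bits (odd popcount).
assert address_parity(0b1011, width=4) == 1
assert address_parity(0b1001, width=4) == 0
```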
Similarly, in an example where both CXL.cache and CXL.mem are supported in the same UFI implementation (e.g., in CPI), the CXL.mem headers can also be mapped onto the HDR signal, as shown in the examples of Tables 9 and 10. In this example, the widths of the various fields (excluding address parity) are defined according to the CXL specification, and the address parity is calculated as the XOR of all bits of the address field. For an upstream port, A2F maps to the master-to-subordinate (M2S) RQ channel of CXL.mem. For a downstream port, A2F maps to the subordinate-to-master (S2M) direction (e.g., in which there is no RQ channel) and F2A maps to the M2S RQ channel, among other examples. Currently, only one virtual channel is supported on these channels of CXL.mem.

Table 9: Upstream port mapping of the CXL.mem protocol to HDR
Table 10: Downstream port mapping of the CXL.mem protocol to HDR

In some UFI implementations, among the many implementation-specific factors and configurations enabled by UFI, ordering rules can be defined and applied based on the protocol used. As an example, ordering may be required when multiple instantiations of the RQ channel are implemented (e.g., to match the link bandwidth to the fabric bandwidth). For instance, in the CPI example, the following ordering rules can be applied to CXL.cache traffic to maintain the ordering semantics outlined in the CXL specification when multiple RQ channels are implemented: simultaneous messages in the same clock cycle are unordered with respect to each other, and a response received on the RSP channel is to be considered as preceding a request received on the RQ channel in the same clock cycle. Similarly, for CXL.mem traffic, an ordering rule can be defined and applied whereby a CXL.mem request is mapped to a particular instance using an address-based hash, ensuring that the ordering of the CXL.mem M2S channels outlined in the CXL specification is maintained. The specific hash is implementation-specific, but any given address maps to only one instance.
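To make the address-based hash rule concrete, the sketch below maps each request address to exactly one RQ channel instance, so that all requests to a given address stay ordered on a single channel. The modulo-on-line-address hash shown is purely illustrative; as the text notes, the actual hash is implementation-specific:

```python
def select_rq_instance(address: int, num_instances: int,
                       line_bytes: int = 64) -> int:
    """Pick the RQ channel instance for a request. Hashing on the
    cache-line address guarantees every access to a given address
    always uses the same instance, preserving per-address ordering."""
    line_address = address // line_bytes   # drop offset bits within the line
    return line_address % num_instances    # implementation-specific hash

# All requests to the same line land on the same RQ instance,
# while different lines may spread across instances for bandwidth.
assert select_rq_instance(0x1000, 2) == select_rq_instance(0x1020, 2)
```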
In UFI, the DATA physical channels carry all messages that involve a data transfer between agents. This may include write data, read response data, snoop response data, and the like. A DATA physical channel message containing data can be transmitted as multiple flow control units (i.e., FLITs). Even though some protocols (e.g., CXL.cache and CXL.mem) are not symmetric, the DATA layer signals can be defined to be symmetric in the A2F and F2A directions. In examples where the data signals are asymmetric, the upstream and downstream versions of the DATA channel (e.g., for CXL.cache and CXL.mem) are mapped as different protocols. Table 11 shows the various signals (and corresponding wires) used to implement the UFI DATA layer, where the direction column specifies the direction of the signal from the perspective of the packet transmitter (Tx) and the packet receiver (Rx).

Table 11: DATA layer fields

As with the RQ channel, the size of the header signal (HDR) on the DATA layer is variable and based on the protocols transferred through the interface. When multiple protocols are transmitted over the interface, the HDR width is sized to the maximum size of HDR transferred over the UFI interface. The reserved field width is used to cover the unused portion of HDR. For example, the transmitter drives 0s on the reserved field and the receiver ignores this field. In some implementations, the messages of the supported protocols carry 64B of data. Messages with a 32B payload may also be supported on the DATA channel. In either case, a credit equivalent to 64B can be used.

As an example of protocol header mapping on the DATA channel, the mapping of CXL.cache and CXL.mem (e.g., in a CPI implementation) is provided for illustration. For an interface carrying 64B of data, a 64B transfer is sent over one cycle, and the entire header is also sent over one cycle. For an interface carrying 32B of data, the data_body is 256 bits wide and a 64B transfer is sent over two cycles; the data_eop signal is to be asserted in the second cycle, and data_header is valid in the first cycle, with the second cycle reserved. For an interface carrying 16B of data, the data_body is 128 bits wide and a 64B transfer is sent over four cycles; the data_eop signal is to be asserted in cycle 4, and data_header is valid in the first cycle, with the second, third, and fourth cycles reserved, among many examples. For each of the upstream and downstream ports, examples of the mapping of the CXL.cache data header fields to data_header are shown in Tables 12 and 13. For an upstream port, A2F corresponds to H2D of CXL and F2A corresponds to D2H of CXL. For a downstream port, A2F corresponds to D2H and F2A corresponds to H2D. In some implementations, only one virtual channel is supported on these channels of CXL.cache and CXL.mem.

Table 12: Upstream port mapping of the CXL.cache protocol to data_header
Table 13: Downstream port mapping of the CXL.cache protocol to data_header

Similarly, Tables 14 and 15 show example mappings of CXL.mem to the DATA header signal. For an interface carrying 64B of data, a 64B transfer is sent over one cycle, and the entire header is also sent over one cycle. For an interface carrying 32B of data, the data_body is 256 bits wide and a 64B transfer is sent over two cycles; the data_eop signal is to be asserted in cycle 2, and data_header is divided evenly between the two cycles. Where data_header is H bits wide, H is made even by padding with reserved bits as needed; the lower H/2 bits ([H/2-1:0]) are sent in the first cycle and the remaining bits are sent in the second cycle. For an interface carrying 16B of data, the data_body is 128 bits wide and a 64B transfer is sent over four cycles; the data_eop signal is to be asserted in cycle 4, and data_header is divided evenly among the four cycles. Where data_header is H bits wide, H is made a multiple of 4 by padding with reserved bits as needed; bits [H/4-1:0] are sent in the first cycle, bits [H/2-1:H/4] in the second cycle, bits [3H/4-1:H/2] in the third cycle, and the remaining bits in the fourth cycle, among many examples.
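The even division of data_header described above lends itself to a small sketch. The following Python model (with invented names; the pad-then-slice policy mirrors the text) splits an H-bit header across a given number of cycles, padding the top with reserved bits:

```python
def split_data_header(header_bits: list, cycles: int) -> list:
    """Divide a data_header, given as a list of bits (LSB first), evenly
    across the given number of cycles, padding the top with reserved
    0 bits so the width is a multiple of the cycle count."""
    h = len(header_bits)
    padded = h + (-h % cycles)               # round width up to a multiple
    bits = header_bits + [0] * (padded - h)  # reserved-bit padding
    chunk = padded // cycles
    # Cycle 1 carries bits [chunk-1:0], cycle 2 carries [2*chunk-1:chunk], ...
    return [bits[i * chunk:(i + 1) * chunk] for i in range(cycles)]

# A 10-bit header over 4 cycles is padded to 12 bits and sent 3 bits per
# cycle, with the two pad (reserved) bits landing in the last pump.
pumps = split_data_header([1] * 10, cycles=4)
assert [len(p) for p in pumps] == [3, 3, 3, 3]
assert pumps[3] == [1, 0, 0]
```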
For each of the upstream and downstream ports, examples of the mapping of the CXL.mem data header fields to data_header are shown in Tables 14 and 15. For an upstream port, A2F corresponds to M2S RwD of CXL and F2A corresponds to S2M DRS of CXL. For a downstream port, A2F corresponds to S2M DRS and F2A corresponds to M2S RwD.

Table 14: Upstream port mapping of the CXL.mem protocol to data_header
Table 15: Downstream port mapping of the CXL.mem protocol to data_header

Various parameters can be provided for UFI that may allow further configuration of the DATA layer of the interface. For example, a DataHdrSep parameter can be defined, with its value specifying how the payload follows the corresponding header on the DATA channel. For instance, the DataHdrSep parameter may indicate that the payload follows the transmission of its corresponding header at a fixed interval of 0 to 3 cycles, as specified by the parameter value. The parameter can be specified in each direction (A2F and F2A) to allow independent control. The value set for the DataHdrSep parameter applies to all protocols on a given UFI. The fixed interval allows the payload to be sent without a separate valid indicator.
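As a loose behavioral sketch of the DataHdrSep separation (the helper is hypothetical, and the scheduling model is merely an illustration of the 0-3 cycle rule), the cycle in which each payload pump launches is fully determined by the header cycle plus the configured separation, which is what allows the payload to omit its own valid indicator:

```python
def payload_cycle(header_cycle: int, data_hdr_sep: int) -> int:
    """Cycle in which a payload pump is driven, given the cycle in which
    its header pump was sent and the configured DataHdrSep (0-3)."""
    assert 0 <= data_hdr_sep <= 3, "DataHdrSep is a fixed 0-3 cycle offset"
    return header_cycle + data_hdr_sep

# With DataHdrSep = 2, headers sent in cycles 2 and 3 imply payload pumps
# in cycles 4 and 5; no separate payload-valid wire is needed because the
# offset is fixed for all protocols on the interface.
assert [payload_cycle(c, 2) for c in (2, 3)] == [4, 5]
```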
In some implementations, UFI may disallow interleaving at the packet level, within or between protocols. For example, after a packet begins transmission over the interface, UFI may continue sending that packet over the interface until its end of packet (EOP) arrives and is asserted, before a different packet from the same or another protocol begins transmission. In implementations adopting this feature, the interface may benefit from the resulting simplification, which can simplify fabric and agent design, among other example advantages and alternative implementations.

For requests generated by an agent or the fabric, the receiving fabric or agent sends back corresponding responses using the RSP physical channel. Such responses may include completions, snoop responses, and the like. UFI provides that the RSP layer signals are symmetric in the A2F and F2A directions, although some protocols may not (and need not) use the signals symmetrically. Here too, because protocols such as CXL.cache and CXL.mem are not symmetric, the upstream and downstream versions of CXL.cache and CXL.mem can be implemented as different mapped protocols. Table 16 shows example signals in a UFI implementation, where the direction column specifies the direction of the signal from the perspective of the packet transmitter (Tx) and the packet receiver (Rx).

Table 16: RSP layer fields

As with the RQ and DATA channels, the size of the RSP HDR is variable and based on the protocols transferred through the interface. When multiple protocols are carried over the interface, the HDR width is sized to the maximum RSP HDR size transferred through the interface. The reserved field width is used to cover the unused portion of the HDR; the transmitter drives 0s on the reserved wires (fields) and the receiver ignores them. Additionally, as with the RQ and DATA channels, the individual protocols can be mapped onto the RSP HDR signal, with each protocol defining its own protocol-specific header field widths. Tables 17 and 18 show an example mapping of CXL.cache for the upstream and downstream ports. In the CXL.cache example, for an upstream port, A2F maps to H2D responses and F2A maps to D2H responses; for a downstream port, A2F maps to D2H responses and F2A maps to H2D responses. In some cases (e.g., on a CPI interface), the CXL.cache and CXL.mem implementation supports a single virtual channel on these channels.

Table 17: Upstream port mapping of CXL.cache to HDR
Table 18: Downstream port mapping of CXL.cache to HDR

Similarly, in an example mapping of the CXL.mem headers to the UFI RSP HDR signal, the widths of the various fields can be defined by the protocol, as shown in Tables 19 and 20. For CXL.mem, in the case of an upstream port, F2A maps to S2M NDR (No Data Response); in the case of a downstream port, A2F maps to S2M NDR.

Table 19: Upstream port mapping of CXL.mem to HDR
Table 20: Downstream port mapping of CXL.mem to HDR

FIG. 6 shows an example timing diagram 600 of signaling on an example A2F DATA channel of a UFI interface. While the example of FIG. 6 is specific to an A2F DATA channel, it should be appreciated that the features and principles discussed in this example apply equally to F2A DATA channels. Likewise, while the example of FIG. 6 concerns DATA channels, it should be appreciated that similar principles and behaviors can govern the other UFI channel types (RSP and RQ). The signals defined for inclusion in the DATA channel include, among other examples, a clock signal 605, a data valid signal 610 (e.g., A2F_data_is_valid), a data protocol ID signal 615 (e.g., A2F_data_protocol_id), a data virtual channel ID signal 620 (e.g., A2F_data_vc_id), a shared credit signal 625 (e.g., A2F_data_shared_credit), a data header signal 630 (e.g., A2F_data_header), an end-of-packet signal 635 (e.g., A2F_data_eop), and a data payload signal 640 (e.g., A2F_data_payload). Each signal of the channel (e.g., 605, 610, 615, 620, 625, 630, 635, 640) can be composed of a set of one or more physical lanes, consistent with the examples listed above (e.g., in Tables 6, 11, and 16).
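Before walking through the waveform of FIG. 6, it can help to picture the per-cycle contents of this signal bundle as a record sampled on each clock edge. The sketch below is only a software analogy (the record type and the example field values are assumptions, not part of the interface definition):

```python
from typing import NamedTuple, Optional

class DataChannelCycle(NamedTuple):
    """Snapshot of the A2F DATA channel signals in one clock cycle."""
    valid: bool             # A2F_data_is_valid
    protocol_id: int        # A2F_data_protocol_id (e.g., CXL.mem vs. CXL.cache)
    vc_id: int              # A2F_data_vc_id
    shared_credit: bool     # A2F_data_shared_credit (False => dedicated credit)
    eop: bool               # A2F_data_eop, asserted on the packet's last pump
    header: Optional[bytes]   # A2F_data_header pump, if any this cycle
    payload: Optional[bytes]  # A2F_data_payload pump, if any this cycle

# Roughly cycle x2 of FIG. 6: first pump of a CXL.mem packet on VC0 using
# a shared credit; eop stays low because more pumps follow.
cycle_x2 = DataChannelCycle(valid=True, protocol_id=1, vc_id=0,
                            shared_credit=True, eop=False,
                            header=b"\x00" * 8, payload=b"\x00" * 32)
```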
In the particular example of FIG. 6, each signal line can toggle between low and high values once per cycle of the clock 605. The data valid signal 610, when high, may indicate that valid data is to be transmitted. Accordingly, the lanes of the data header signal 630 can be encoded with data embodying the header of the corresponding payload data, such that the start of the header's transmission is aligned with the valid signal (e.g., in cycle x2). The values transmitted on the data protocol ID signal 615, VC ID signal 620, and shared credit signal 625 are likewise aligned with the valid signal 610 and/or the header, and can identify the particular protocol (of potentially multiple protocols) applying to the header (e.g., CXL.mem) and its payload data, as well as the virtual channel used for the transmission (e.g., VC0) and the credit type used by the header (e.g., shared or dedicated (per-VC)). When the shared credit signal 625 indicates that a dedicated credit is being used (e.g., when the signal 625 is low, or '0'), the aligned VC ID signal also identifies the VC ID of the dedicated credit. Depending on the size of the header and the width of the header signal, multiple clock cycles (e.g., two cycles) may be needed to transmit the header. A portion of the data, or "pump" (e.g., embodied as a single FLIT), can be transmitted across multiple lanes within a single clock cycle. Similarly, the payload data lanes 640 may be encoded with the payload data, with the timing of the payload data's transmission based on the transmission of the corresponding header.

In the example of FIG. 6, the channel may be configured such that there is no delay, or separation, between the start of the payload data (e.g., 648, 649) and the start of the corresponding header data (e.g., 644, 646). Thus, in such an example, the start of the payload data (e.g., payload pump 648) may be transmitted in time with the start of the header data (e.g., header pump 0 644). From the signals transmitted on the channel, the receiver can identify that the payload data is associated with the header, that the data follows the CXL.mem protocol (based on the aligned protocol ID signal 615), and that it is associated with virtual channel VC0 (based on the aligned virtual channel signal 620). The receiver can further identify, from the aligned shared credit signal 625 (and the VC ID signal), the nature of the credit used by the header.

The end-of-packet signal 635 can be used to indicate when (e.g., in which FLIT or clock cycle) the last pump, or FLIT, of data for a given packet is being transmitted. For example, in one implementation, a low value on the EOP signal 635 can indicate that the payload data (and/or header data) being transmitted on the channel is not the last pump of the packet's data. When the EOP signal 635 is high, however, this indicates that the payload data pump (e.g., 650) is the last data of the packet, thereby indicating the end of one packet, and thereby that subsequent data received on these signals (e.g., the payload and header signals) belongs to a different, subsequent packet. For example, the EOP signal 635 is low at clock cycle x2, when the first pumps of the header 644 and payload 648 are transmitted, but transitions high at clock cycle x3, when the last pumps (e.g., 646, 650) are transmitted, to indicate the end of the corresponding packet.

As further shown in the example of FIG. 6, the valid signal 610 can be used to interrupt the transmission of packets (and their corresponding data and header FLITs) on the channel. Indeed, valid can be deasserted in the middle of a message, which suspends the transfer until valid is reasserted. For example, at clock cycle x5, with the EOP signal 635 low and the valid signal 610 high, the start of a new packet can be presented, with corresponding header data (e.g., 656) on the header signal 630 and payload data (e.g., 660) on the payload signal 640. To simplify the illustration of these principles, this next packet may also require two clock cycles, or FLITs, to transmit. However, rather than the two pumps of header and payload data (e.g., 656, 658 and 660, 662) being transmitted in consecutive clock cycles, the valid signal 610 can go low at clock cycle x6 to interrupt transmission of the packet. In the following cycle (clock cycle x7), the valid signal 610 is returned high, allowing transmission of the last header pump 658 on the header signal 630 (along with the aligned data of the protocol ID 615, VC ID 620, and shared credit 625 signals) and the last payload pump 662. Additionally, the EOP signal 635 can be returned high in time with the transmission of the packet's last header pump 658 to indicate the end of the packet.
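A rough receiver-side model of the valid/EOP framing just described is sketched below. Real receivers are implemented in RTL; the generator-based framing here is merely a convenient software analogy for how pumps are accumulated while valid is high and a packet is closed when EOP accompanies a valid pump:

```python
def assemble_packets(cycles):
    """Group (valid, eop, pump) cycle samples into packets: pumps are
    accumulated only in valid cycles, transfers simply pause while valid
    is deasserted, and a valid cycle with eop high closes the packet."""
    packet = []
    for valid, eop, pump in cycles:
        if not valid:
            continue  # transfer suspended; resumes when valid reasserts
        packet.append(pump)
        if eop:
            yield packet
            packet = []

# FIG. 6-style trace: packet A in x2-x3, then packet B interrupted by a
# deasserted valid in x6 and completed in x7.
trace = [(True, False, "A0"), (True, True, "A1"),   # x2, x3
         (False, False, None),                      # x4: idle
         (True, False, "B0"),                       # x5
         (False, False, None),                      # x6: valid deasserted
         (True, True, "B1")]                        # x7
assert list(assemble_packets(trace)) == [["A0", "A1"], ["B0", "B1"]]
```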
In addition to the channel data transmitted on the channel (e.g., as payload data 648, 650, 660, 662, etc.), credit return flows (for both shared and dedicated credits), discussed in more detail below, can be received in the F2A direction of the corresponding channel. These credit returns may be unrelated to the transactions associated with the packets being sent simultaneously in the A2F direction of the channel.

In some implementations, the agent and the fabric (and other agents and components connected through the fabric) can share a clock. In other implementations, one or more agents may utilize a clock different from the clock utilized by the fabric. Further, in some implementations, the agent and the fabric can be reset independently. The initialization flow guarantees a synchronized handshake and ensures that both the transmitter and the receiver are ready before packet transfer begins. While the UFI interface is synchronous, a clock-crossing queue (e.g., a first-in first-out (FIFO) queue) can be placed at the receiver. To address the issue of FIFO backpressure due to a clock crossing (and the corresponding clock-crossing FIFO), UFI may define blocking signals (e.g., *_block and *_txblock_crd_flow), which can be opportunistically asserted by the receiver to potentially stop, or block, the injection of additional messages. UFI can also enable configuration of the blocking signals, to configurably adjust the delay between assertion of a blocking signal and the actual blocking of message injection. For example, a blocking signal may be configured to allow message injection to be blocked within a configured number of clock cycles (e.g., 1-3 clocks), so as to meet timing requirements at the transmitter. In implementations where no clock crossing is present, the blocking signals are unused and are permitted to be tied off (e.g., to 0). In some implementations, the initialization signals support clock differences via simple synchronizers and make no assumptions about clock ratios, among other example features and implementations.

Turning to the timing diagram 700 of FIG. 7, an alternative implementation of the example of FIG. 6 is presented for purposes of illustration, with the header-to-payload separation parameter configured to a non-zero value, and also showing the effect of a blocking signal (e.g., 705) on the channel. In some implementations of UFI, transmission of a blocking signal (from the receiver to the transmitter on the channel) can cause the transmitter to deassert the valid signal and stop, or interrupt, the sending of packets or messages (e.g., as shown in the example of FIG. 6). Transmission of the blocking signal 705 by the receiver may not cause immediate deassertion of the valid signal; rather, a parameter can be configured to specify a rule by which the deassertion takes effect a specified (non-zero) number of clock cycles after the blocking signal is transmitted. For example, in the example of FIG. 7, receipt of the blocking signal 705 (e.g., at clock cycle x3) forces deassertion of the valid signal 610 three clock cycles later (based on the configured value), such that the valid signal 610 is low at clock cycle x6. As also shown in FIG. 7, the blocking signal can be asserted over multiple consecutive cycles (e.g., clock cycles x5-x10), generating a corresponding stall, based on the associated deassertion of the valid signal three cycles later (e.g., from clock cycle x8), until three clock cycles after the block signal 705 is deasserted (not shown). The result is transmission of the header pumps 644, 646, 656, 658 at timings coinciding with the timings in the example of FIG. 6.
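The configured block-to-stall delay lends itself to a small behavioral sketch. The three-cycle value below matches the FIG. 7 walkthrough but is just one allowed configuration; the helper name and the set-based bookkeeping are assumptions of the sketch:

```python
def valid_allowed(blocked_cycles, cycle, block_delay=3):
    """Whether the transmitter may assert valid in a given cycle: it must
    hold valid low exactly block_delay cycles after any cycle in which
    the receiver asserted the blocking signal."""
    return (cycle - block_delay) not in blocked_cycles

# FIG. 7-style trace: the receiver blocks in cycle x3, so the transmitter
# is forced to keep valid low in cycle x6 (three cycles later).
blocked = {3}
assert valid_allowed(blocked, 5) is True
assert valid_allowed(blocked, 6) is False
assert valid_allowed(blocked, 7) is True
```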
In contrast with the example of FIG. 6, FIG. 7 also shows the use of a non-zero payload offset parameter. For example, a data header separation parameter (e.g., A2F_DataHdrSep) can be configured to provide a desired offset from the start of a header to the start of the corresponding payload. In the example of FIG. 7, the data header separation parameter is set to a two-clock-cycle separation (interval), such that each payload pump is transmitted two clock cycles after the high valid signal 610 to which the corresponding header pump (e.g., 644, 646, 656, 658) is aligned. While the examples of FIGS. 6 and 7 show a correlation between the number of cycles used to send a header and the number of cycles used to send the corresponding payload, in some implementations sending the payload may require more clock cycles than the header. In such cases, payload pumps (or FLITs) can be transmitted without a corresponding header pump. In other implementations, a copy of the header can be sent to accompany each associated payload pump, among other example implementations.

For example, in FIG. 7, valid can be asserted in clock cycles x2 and x3, with the header pumps 644, 646 (and the corresponding values of the protocol ID 615, VC ID 620, and shared credit 625 signals) sent in those same clock cycles. Further, based on the two-cycle data header separation parameter, transmission of the payload data (e.g., pumps 648, 650) associated with the headers (e.g., pumps 644, 646) may be delayed by two cycles from the valid of cycles x2 and x3, causing the associated payload data pumps (e.g., 648, 650) to be transmitted in cycles x4 and x5. In some cases, this delay can cause the payload data of a preceding packet (e.g., 650) to be transmitted in the same cycle as the header data of a subsequent packet (e.g., 656), as in the example of FIG. 7. The same delay can be applied to the payload data of all subsequent packets on the channel (e.g., to the payload pumps 660, 662 corresponding to the header pumps 656, 658 sent on the valid assertions of clock cycles x5 and x7), among other examples.

FIGS. 6 and 7 should be understood as non-limiting, simplified, illustrative examples presented and described herein to show corresponding general principles of the UFI interface. Indeed, the principles and features illustrated in FIGS. 6 and 7 as applied to UFI DATA channels can apply equally to the other UFI channels, in particular to at least some of the signals of the UFI RQ and RSP channels. For example, the RQ and RSP channels can assert not only their corresponding header signals (e.g., RQ HDR or RSP HDR) but also respective valid signals, along with the corresponding flow control and protocol identifier signals provided on the respective channels. The RQ and RSP channels may also include respective blocking signals to allow the receiver to stall messages on the channel (e.g., to attempt to relieve FIFO backpressure). Similarly, a credit return channel directed from the receiver to the transmitter can be provided in parallel with the packets and messages sent on the channel to effect such credit returns (for both the dedicated and shared credits of the channel), among other example features.

FIG. 8 shows a timing diagram 800 illustrating a simplified example of credit returns on a channel of an example UFI interface. For example, each physical channel may include a credit return interface from the receiver. In this discussion, CHAN refers to an abstraction of any one of the specific physical channels (RQ, DATA, RSP).
For example, a channel may include, among other example signals, a blocking signal (e.g., 805) for credit returns (with a function similar to the blocking function discussed in the example of FIG. 7), a shared credit return signal 810, a credit return valid signal 815 (which can function similarly to the valid signals discussed in the examples of FIGS. 6 and 7), a VC ID signal 820 (to identify the virtual channel to which a dedicated credit return applies), and a protocol ID signal 825 for the credit return. Such credit return signals can follow the examples discussed above in Tables 6, 11, and 16.

In the example of FIG. 8, in a UFI interface implementation, assertion of the *CHAN_rxcrd_shared signal 810 indicates that a shared credit is being returned, while assertion of the *CHAN_rxcrd_valid signal 815 indicates that a dedicated credit is being returned. Shared and dedicated credits can be returned simultaneously, in parallel, over the channel's credit return interface. *CHAN_rxcrd_vc_id 820 indicates the VC ID of the returned dedicated credit, and the *CHAN_rxcrd_protocol_id signal 825 identifies the protocol of the returned dedicated credit (where multiple protocols are supported on the channel). In the example of FIG. 8, shared credits are returned during clock cycles x1 through x3, as indicated by the assertion of *CHAN_rxcrd_shared 810. During clock cycle x4, only dedicated credits are returned (in this case, for VC1 of protocol 2), as indicated by the assertion of *CHAN_rxcrd_valid 815. From clock cycles x5 through x8, both *CHAN_rxcrd_shared 810 and *CHAN_rxcrd_valid 815 are asserted, and both shared and dedicated credits are returned.

As further shown in FIG. 8, a blocking signal can be applied in some implementations of the credit return signal set of a UFI interface, to allow the receiver to suspend, or stop, the credit return flow. For example, in clock cycle x7, the blocking signal *CHAN_txblock_crd_flow 805 is asserted, whereby credit returns are stopped a number of clock cycles after the blocking signal 805 is asserted. The number of cycles can be specified by a configurable parameter. For example, in the example of FIG. 8, the parameter is set to two cycles, and the credit return is stopped at cycle x9, as indicated by the deassertion of both *CHAN_rxcrd_shared 810 and *CHAN_rxcrd_valid 815. When the blocking signal 805 is deasserted (e.g., at clock cycle x8), continuation of the credit returns is allowed (e.g., two cycles later, per the configured parameter). Continuing with the example of FIG. 8, the blocking signal 805 is reasserted at clock cycle x9, resulting in credit return suspension being reinitiated two cycles later, among a number of example embodiments.

In some implementations, to facilitate credit returns and the maintenance of credit accounting, the transmitter on a link can include a credit counter (e.g., an 8-bit credit counter) for each supported credit type (both shared and dedicated). Accordingly, the receiver on the link returning the credits needs to return credits according to the granularity of the credit counter (e.g., an amount that fits within the corresponding 8-bit counter (e.g., 255 credits)), among other features.
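A loose model of this sender-side credit accounting follows (a hypothetical class; the 8-bit wrap guard reflects the counter-granularity point above):

```python
class CreditCounter:
    """Sender-side counter for one credit type (shared, or dedicated to a
    given protocol/VC), sized like a hypothetical 8-bit hardware counter."""
    def __init__(self, width_bits: int = 8):
        self.max_value = (1 << width_bits) - 1  # e.g., 255 credits
        self.available = 0

    def ret(self, n: int = 1) -> None:
        """Receiver returned n credits; returns must fit the counter."""
        if self.available + n > self.max_value:
            raise OverflowError("return exceeds credit counter granularity")
        self.available += n

    def consume(self) -> None:
        """Spend one credit to inject a message on the channel."""
        if self.available == 0:
            raise RuntimeError("no credits: message injection must wait")
        self.available -= 1

# One counter per supported credit type, e.g., a shared pool plus a
# dedicated pool for ("CXL.mem", VC0):
counters = {"shared": CreditCounter(), ("CXL.mem", 0): CreditCounter()}
counters["shared"].ret(4)
counters["shared"].consume()
```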
As described herein, in some UFI implementations, messages can be separated into additional flow control classes beyond the baseline channel specification, with both corresponding virtual channels (VCs) and virtual networks (VNs). Baseline channel flow control provides non-blocking flow control for each class of traffic within each protocol. Some instances may provide multiple virtual channels and traffic classes per protocol, while in other implementations and applications (and corresponding protocols (e.g., CXL.cache and CXL.mem)) only a single virtual channel may be provided per physical channel in each direction. Some implementations of UFI may provide additional fields to assist in quality of service metrics and/or their application, among other exemplary features.

In some implementations, an agent may only advertise shared credits (e.g., across VC_IDs) that are guaranteed to sink independently (including network layer dependencies). This can be done to avoid the need for dedicated credits for each VC_ID. The RSP channel of a protocol agent is one example of where this is possible, as a pre-allocated tracking structure exists that can accept the responses.

Error handling in the case of illegal flow control may result in unspecified behavior. Accordingly, agents and fabric components may include logic in their register transfer logic (RTL) to check for illegal cases that trigger assertions, and may also log error events or signal fatal errors to enable debugging or recovery. Such detectable error conditions may include, among other examples, assertion of the end-of-packet (EOP) signal before the packet has completed (e.g., a message encoded as two flits, but with EOP set on the first flit), receive queue overflows, and clock-crossing FIFO overflow conditions.
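As a concrete illustration of the first error condition above, the following sketch checks for a premature or missing EOP. It is an assumed software model of the kind of check such RTL might perform, not the specification's actual checker.

```python
def check_eop(flits_expected, flit_index, eop):
    """Return an error string if EOP is asserted on a non-final flit,
    or if the final flit lacks EOP; otherwise return None."""
    is_last = (flit_index == flits_expected - 1)
    if eop and not is_last:
        return "EOP asserted before packet completed"
    if is_last and not eop:
        return "packet ended without EOP"
    return None

# A 2-flit message with EOP set on the first flit triggers the error case.
assert check_eop(flits_expected=2, flit_index=0, eop=True) is not None
assert check_eop(flits_expected=2, flit_index=1, eop=True) is None
```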
In some implementations, state machines or other logic may be provided on agents and fabric devices to participate in defined connect and disconnect flows of the UFI. For example, such flows may be invoked during boot/reset and on entry into low power modes, among other exemplary states or events. In some implementations, the UFI defines an initialization phase in which the transmitter (TX) is informed about the availability of credits at the receiver (RX) after a connection is established. In some cases, reset can be deasserted independently between the agent side and the fabric side of the UFI. In the case of independent resets, the initialization signals may be driven to a disconnected state (e.g., on the global channel) during reset, and traffic may not be transmitted until initialization reaches the connected state. A disconnect flow may further be supported by agents, for example, to reconfigure credits in order to achieve power savings. Without this flow, all CPI credits would be configured to their final values before the initial connection proceeds.

Connections in the UFI can be separated in the A2F and F2A directions. Connection signaling occurs on the initialization global physical channel of the UFI interface that couples an agent 305 of the system to the fabric 150. For example, FIG. 9 shows an example of the global channels of a UFI interface, including signal sets 905, 910 for use in initializing the UFI interface. For example, an A2F initialization signal set 905 and an F2A initialization signal set 910 may be provided. Reset signals (e.g., 915, 920) can further be specified at the agent and fabric level, allowing a software or hardware controller to initiate a reset of the agent 305 and/or the fabric 140. Each of the A2F and F2A global signal sets may include a transmitter connection request (txcon_req) signal, a receiver connection acknowledge (rxcon_ack) signal, and a receiver disconnect NACK (rxdiscon_nack) signal. These three signals (e.g., the txcon_req, rxcon_ack, and rxdiscon_nack signals) define the initialization states and cause transitions between these states. In some cases, the global initialization signal sets 905, 910 can also include an rx_empty signal to identify that the receiver queues are empty for all channels and that all credits have been returned, among other exemplary signals.

At initialization, the agent side and fabric side of the UFI interface may come out of reset at or near the same time. One side of the interface (e.g., after coming out of reset) may have no implicit requirement as to when the other side comes out of reset. In some implementations, the UFI specifies an explicit handshake during initialization between the agent and the fabric to ensure that both endpoints (and all pipeline stages between them) are out of reset before any credits or transactions are sent on the UFI interface. Accordingly, after reset, the receiving side can begin transmitting credits for the dedicated VC buffers and the shared buffers. In some implementations, the UFI may support a blocking signal transmitted by the sender at runtime for credit returns.

FIG. 10 shows an exemplary state machine for the initialization states in an exemplary implementation of UFI. The states may include a disconnected state 1010 (which can be entered based on a reset 1005), a connecting state 1015, connected states (1020, 1035), a disconnecting state 1025, and a denied state 1030. The combination of the values of the txcon_req signal, the rxcon_ack signal, and the rxdiscon_nack signal may indicate each initialization state. As an example, in the disconnecting state 1025, the txcon_req signal can be low, the rxcon_ack signal can be high, and rxdiscon_nack can be low. Changing a particular one of these signal values can transition the state machine from one initialization state to another. For example, among the many transitions shown in the example state machine of FIG. 10, in the disconnecting state 1025, changing the rxcon_ack signal from high to low can transition to the disconnected state 1010, while changing the rxdiscon_nack signal from low to high can transition to the denied state 1030. The UFI interface uses each initialization state to determine the actions performed by the receiver and transmitter, such as the exemplary actions described in Table 21 below.

Table 21: Actions in the initialization states
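The following is a rough sketch of how the initialization state could be decoded from the (txcon_req, rxcon_ack, rxdiscon_nack) signal values. The exact decoding is an assumption inferred from the states and example transitions described above, not a transition table taken from the specification.

```python
def init_state(txcon_req, rxcon_ack, rxdiscon_nack):
    """Map the three global initialization signals to a named state
    (assumed decoding; 'connected' stands in for both 1020 and 1035)."""
    if txcon_req and rxcon_ack:
        return "connected"
    if txcon_req and not rxcon_ack:
        return "connecting"
    if not txcon_req and rxcon_ack and rxdiscon_nack:
        return "denied"
    if not txcon_req and rxcon_ack:
        return "disconnecting"
    return "disconnected"

# From disconnecting (0,1,0): rxcon_ack falling yields the disconnected
# state, while rxdiscon_nack rising yields the denied state.
assert init_state(0, 1, 0) == "disconnecting"
assert init_state(0, 0, 0) == "disconnected"
assert init_state(0, 1, 1) == "denied"
```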
Signaling rules can be specified for the global initialization signal set. In one example, the txcon_req signal may be defined such that a transition from 0 to 1 reflects a connection request and a transition from 1 to 0 reflects a disconnect request. Credit return signals can be provided, for example, with a credit valid (crd_valid) signal and a credit shared (crd_shared) signal. In one example, crd_valid = 1 can be defined to mean the release of dedicated message credits for a protocol ID and virtual channel ID, and crd_shared = 1 to mean the release of a shared credit (which can occur in parallel with dedicated message credit returns). In some implementations, credit returns during initialization behave the same as runtime credit returns. The rx_empty signal indicates that all credits have been returned by the receiver and that all receiver queues are empty for all channels (although this does not necessarily account for in-flight messages or messages in intermediate buffers, such as clock crossing queues, among other exemplary issues).

In some implementations, the sender checks rx_empty before initiating a disconnect. Checking increases the likelihood that the disconnect will be accepted quickly (for example, when there are no in-flight requests that may not yet be registered at the receiving side). In some implementations, to further increase the likelihood of the disconnect being accepted, the sender may implement a timer delay after the last valid message has been sent, so that the receiver pipeline has time to drain into the receiver queues, among other exemplary features. In some implementations, the sender sends messages during initialization as soon as credits become available, independent of the rx_empty assertion. Alternatively, the sender may refrain from transmitting packets after initialization until rx_empty is asserted, and the sender can use the received credits as an indication of the total credits advertised by the receiver.

In an exemplary implementation of the UFI interface, the sender can send a packet upon receiving a sufficient number of credits for a message on any given physical channel. The transmission also depends on having credits of the correct type: shared credits can be used for any message, while dedicated credits can only be used for messages of a single VC and protocol combination. In some implementations, the receiver is to suspend credit release N cycles after *CHAN_txblock_crd_flow is asserted. A configurable Agent Blocking parameter specifies the value of N. There is an N-cycle delay between a change in the txblock_crd_flow state and the crd_valid and crd_shared signals reflecting the corresponding block or unblock. Such a blocking signal can be used, for example, in credit return clock crossing instances, where txblock_crd_flow is asserted, for example, when the number of free entries in the clock crossing FIFO is N. In implementations where clock crossings are not an issue, the txblock_crd_flow signal can be tied to 0, among other exemplary implementations.
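The N-cycle lag between the blocking signal and its effect can be modeled as a simple delay line, as in the sketch below. This is an illustrative assumption about the behavior, not RTL from the specification; names such as `BlockDelay` are invented for the example.

```python
from collections import deque

class BlockDelay:
    """Model the N-cycle delay between *CHAN_txblock_crd_flow changing and
    the crd_valid/crd_shared outputs reflecting the block or unblock."""

    def __init__(self, n_cycles):
        self.pipe = deque([False] * n_cycles)  # unblocked history

    def tick(self, txblock_crd_flow):
        """Shift in this cycle's block request; the value shifted out is
        the blocking state applied to credit returns this cycle."""
        self.pipe.append(txblock_crd_flow)
        return self.pipe.popleft()

delay = BlockDelay(n_cycles=2)
effective = [delay.tick(b) for b in [False, True, False, True]]
# A block asserted in the second cycle takes effect two cycles later.
assert effective == [False, False, False, True]
```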
As a further example of signaling rules that can be specified in UFI implementations, a connection ACK can be specified to always follow a connection request. As described above, a connection request can be signaled by the transition of txcon_req from 0 to 1. This transition serves as an indication that the sender is ready to receive credits and is operating normally. The ACK can be signaled by the transition of rxcon_ack from 0 to 1. The ACK can be stalled for an arbitrary time until the receiver is ready to complete it. Similarly, a disconnect ACK or NACK can be specified to follow a disconnect request. A disconnect request can be signaled by the transition of txcon_req from 1 to 0. The disconnect ACK can be signaled by the transition of rxcon_ack from 1 to 0. A disconnect NACK can be signaled by the transition of rxdiscon_nack from 0 to 1. Among a number of other exemplary policies and implementations, rules may be specified to require the receiver to respond with an ACK or NACK to each disconnect request it receives.

Turning to FIG. 11, an exemplary timing diagram 1100 is shown for initializing the UFI interface from reset to the connected state. In the particular example shown in FIG. 11, an exemplary A2F initialization flow utilizing the initialization signals on the global channel of the UFI interface is shown. It should be understood that a corresponding (e.g., mirrored) flow may be implemented in the F2A direction with the opposite drivers. As shown in FIG. 11, the initialization signal set may include a receiver disconnect NACK signal 1110, a receiver connection ACK signal 1115, and a transmitter connection request signal 1120. Additional signals, including an agent reset signal 915 (to put the agent into a reset state) and a fabric reset signal 920 (to put the fabric into a reset state), are shown to illustrate particular features. Also shown is at least one representation of the credit return signal set 1125 of a UFI channel (e.g., one or more of the credit signal sets of the RQ, DATA, and RSP channels). In the illustrations of FIGS. 11 and 12, an "F" after a signal name denotes the fabric as the driver of the signal, and an "A" denotes the agent as the driver of the signal.

To enter the connected state, when the transmitting side comes out of reset (e.g., as indicated by the corresponding reset signal (e.g., 915, 920)), it can assert the txcon_req signal 1120 to signal a connection request to the receiving side. Likewise, when the receiving side comes out of reset, it waits for a connection request on the txcon_req signal 1120. The connection request can be asserted any number of cycles after reset (e.g., 915) deasserts. Until the connection is complete, the txcon_req signal 1120 remains asserted, and it is deasserted only as part of a disconnect flow. Upon receiving the connection request on the txcon_req signal 1120, the receiver asserts the rxcon_ack signal 1115 to acknowledge the request. The rxcon_ack signal 1115 can be asserted after both reset (e.g., fabric reset 920) has deasserted and the txcon_req signal 1120 has been asserted. The rxcon_ack signal 1115 remains asserted and is deasserted only in a disconnect flow.

This sequence may allow the initialization link state 1105 to move from the disconnected state, through the connecting state, to the connected state. Upon entering the connected state (and sending the rxcon_ack signal), the receiver can immediately begin returning credits (e.g., on the credit return wires 1125). Indeed, the receiver can begin returning credits at the same time as the assertion of the rxcon_ack signal 1115. Accordingly, the transmitting side (e.g., the agent) is to be ready to accept credit returns upon asserting the txcon_req signal 1120 (e.g., at clock cycle x4), because, for example, credit returns may be observed before A2F_rxcon_ack is observed due to intermediate buffering or clock crossings. Once the minimum credits to send a packet have been received, the sender can begin sending packets or messages over the channel. A reconnect flow can be implemented in the same way as the connect-from-reset flow described herein, except that, to initiate initialization of new credits, the receiver first resets its credit counter values and the sender resets its available credit counters to zero, among other exemplary implementations.
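The ordering of the connect handshake can be summarized with the small sketch below, which scans per-cycle samples of the two request/acknowledge signals. The sampled-tuple representation is an assumption made for illustration.

```python
def connect_handshake(events):
    """events: iterable of (txcon_req, rxcon_ack) samples, one per cycle.
    Return the cycle index at which the link is considered connected
    (i.e., the receiver may begin returning credits), or None."""
    for cycle, (txcon_req, rxcon_ack) in enumerate(events):
        if txcon_req and rxcon_ack:
            return cycle
    return None

# txcon_req asserts in cycle 1 (out of reset); rxcon_ack follows in
# cycle 3, at which point credit returns may begin.
assert connect_handshake([(0, 0), (1, 0), (1, 0), (1, 1)]) == 3
```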
Turning to FIG. 12, an exemplary timing diagram 1200 is shown illustrating an exemplary disconnect and reconnect flow of an exemplary UFI interface. In this example, the transmitter can deassert the txcon_req signal 1120 to initiate a disconnect at time x3. In some implementations, the rxdiscon_nack signal 1110 is to be deasserted before the txcon_req signal 1120 is deasserted in order to allow the disconnect to proceed. When a disconnect is requested, the sender no longer sends messages on any channel (e.g., as indicated by the *CHAN_is_valid bits). Based on the initiation of the disconnect flow by the sender, the receiver decides whether to acknowledge the disconnect (ACK) or to negatively acknowledge it (NACK, or reject). To acknowledge the disconnect, the receiver deasserts the rxcon_ack signal 1115 after confirming that all pipelines are empty (e.g., at clock cycle x4), which marks entry into the disconnected state, as reflected by the link state indicator 1105. In some cases, the receiver can also confirm that all credits have been returned.

Diagram 1200 of FIG. 12 shows an example in which the disconnect request is acknowledged by the receiver. FIG. 13 shows the opposite example, in which the receiver responds with a negative acknowledgment (i.e., NACK). For example, to transmit a negative acknowledgment, the receiver may instead assert the rxdiscon_nack signal 1110 (e.g., at clock cycle x4). A negative acknowledgment may be selected, for example, if the receiver determines that its pipelines cannot be drained without the risk of a deadlock, among other exemplary reasons. After the NACK, the transmitter can reassert the txcon_req signal 1120 (e.g., at clock cycle x5). Following this effective acknowledgment of the NACK by the transmitting side, the receiver can deassert the rxdiscon_nack signal 1110 (e.g., as shown at clock cycle x6 in the example of FIG. 13).

In some implementations, the connect and disconnect flows are expected to complete within a few microseconds after being initiated. In some implementations, timeouts can be specified explicitly or implicitly. For example, the receiver may be expected to respond with an ACK or NACK within a specified or recommended time frame. For example, an agent, fabric, or system (e.g., SoC) can specify a timeout or time frame to enforce this expectation.

In some examples, an agent or fabric element may be reset while the UFI interface is in a connected state, resulting in a sudden reset. For example, the specified or recommended flow may be to enter a disconnect before reset. As one example, the rxcon_ack signal can transition from 1 to 0 because a sudden reset occurs on the receiving side of the link while the value of the txcon_req signal on the transmitting side is 1. In such a case, the sender forces itself to disconnect and restarts initialization. If this (sudden reset) occurs while the sender is idle, the sender can recover without losing messages. As another example of a sudden reset, the standard disconnect flow can be followed if the txcon_req signal transitions from 1 to 0 due to a sudden reset on the sending side of the link while rxcon_ack is 1. If this (sudden reset) occurs while the Rx is idle and the Tx remains in the reset state, the disconnect should receive an ACK and fully reach the disconnected state. However, if the disconnect is rejected (NACK) by the receiving side, a fatal or illegal link state (e.g., an unrecoverable error) may result. In the event of a sudden reset, protocol messages can be lost if traffic is active (e.g., not idle), which can be fatal to continued normal operation.
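The receiver's ACK-or-NACK decision described above can be sketched as follows. The pipeline-empty and credits-returned conditions are stand-in assumptions for the implementation-specific checks a real receiver would perform.

```python
def handle_disconnect(pipelines_empty, credits_all_returned=True):
    """Return ('ack', signal updates) to deassert rxcon_ack, or
    ('nack', signal updates) to assert rxdiscon_nack when draining
    would risk deadlock (assumed decision criteria)."""
    if pipelines_empty and credits_all_returned:
        return ("ack", {"rxcon_ack": 0})      # enter disconnected state
    return ("nack", {"rxdiscon_nack": 1})     # enter denied state

assert handle_disconnect(pipelines_empty=True)[0] == "ack"
assert handle_disconnect(pipelines_empty=False)[0] == "nack"
```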
As noted above, the UFI interfaces in a system can be configured according to various parameters. For example, a set of parameters can be defined specifically according to the use cases, features, protocols, and topology of a given system, such as a particular SoC design. Such parameters can specify, for example, the protocols transmitted and supported over the interface, the header size (and thus the width of the corresponding channel), the separation between header and payload data, the delays between blocking signals and the blocking of message and/or credit flows, time frames, and other exemplary parameters. In some implementations, parameters can be specified on a per-physical-channel basis. In other examples, parameters can be specified for an entire UFI interface instance (e.g., where the parameters apply to all channels of the interface), among many other examples. Parameter values may be specified and stored, for example, in configuration registers or other data structures for use and reference by the agents and fabric components connected through the interface. Table 22 shows examples of the parameters that can be set in an example CPI implementation of a UFI interface.

Table 22: Supported parameters

While many of the examples above describe a UFI supporting CXL-based protocols, it should be emphasized that UFI is not so limited and can be configured to potentially support any coherent interconnect protocol, with the corresponding headers of these protocols mapped onto the header signals of the UFI request, data, and response channels, among numerous alternative use cases and implementations.
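As a hypothetical illustration of how such per-channel parameters might be recorded for reference by agent and fabric, consider the record below. The field names loosely follow parameters discussed above (e.g., A2F_DataHdrSep, the Agent Blocking N-cycle delay), but the structure and values are invented for the example and are not Table 22.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ChannelConfig:
    supported_protocols: tuple  # protocols carried on the channel
    header_width_bits: int      # header size, hence channel width
    data_hdr_sep: int           # header-to-payload separation (cycles)
    block_delay_cycles: int     # N-cycle delay for blocking signals

# Hypothetical settings for two channels of one interface instance.
rq_cfg = ChannelConfig(("CXL.cache", "CXL.mem"), 128, 0, 2)
data_cfg = ChannelConfig(("CXL.cache", "CXL.mem"), 128, 2, 2)
```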
Turning to FIGS. 14A-14B, simplified flowcharts 1400a-b are shown illustrating exemplary techniques for using a UFI interface, as discussed in the exemplary implementations herein. For example, in the example of FIG. 14A, the sender on a UFI interface sends a message to the receiver on a particular one of the plurality of channels of the interface (e.g., RQ, RSP, or DATA), where the channel may be composed of lanes assigned to each signal in the channel's signal set, as well as lanes assigned to signals received from the receiver of the message. A global channel may include a plurality of lanes to send and receive respective signals to control aspects of the interface, including initialization of the interface. Indeed, initialization signals may be communicated on the interface (1405) to initialize the interface (1410) before messages are sent on any one of the channels (e.g., RQ, RSP, or DATA). To send a message on a channel, a valid signal can be sent (1415) on a dedicated set of one or more lanes of the channel, and the corresponding header signal, VC ID signal, and credit type signal (e.g., shared or dedicated) can be transmitted together with the asserted valid signal (e.g., to indicate that these signals carry valid information). On the same channel, the sender of the message (e.g., a request, a response without data, or a data transfer) can receive credit returns on separate credit return lanes of the channel (assigned to the set of credit return signals) at the same time as it sends messages. When the message is complete, an end-of-packet signal is sent (on another lane) to identify the final pump, flit, or other quantum of data corresponding to the end of the message (allowing the next message to be sent on the channel).

The example of FIG. 14B shows techniques relating to the receiver on a channel of a UFI interface (e.g., RQ, DATA, or RSP) (e.g., the receiver of the same channel as the sender discussed in the example of FIG. 14A). For example, a global channel can be provided for the receiver to communicate initialization signals (1435) and initialize the interface (1440). After initialization, a valid signal can be received on the valid signal lanes of the channel (1445), and the corresponding header signal, VC ID signal, and credit type signal can be received on corresponding separate lanes of the channel (1450). These signals may be received in alignment with the valid signal to identify that the valid signal applies to them (1450). The message is received on the channel through these signals and can be processed based on the information in this collection of signals (and other signals, such as a protocol ID signal identifying which of multiple protocols applies to the header and the remainder of the message) (1455). For example, the credits used for the message can be identified by the credit type signal and the VC ID signal (which can also identify the virtual channel that applies to the message), among other examples. Credit returns can be sent to the sender on dedicated lanes of the channel while messages are received on the channel (1460). Other signals, such as a blocking signal to stop the flow of message data on the channel, may also be transmitted, among other examples. The end of the message can be determined based on the assertion of an end-of-packet signal on another dedicated lane of the interface (e.g., where the EOP signal is transmitted in the same clock cycle as the valid signal) (1465). Upon determining the end of the message, subsequent messages can be received and identified on the channel. The flows in the examples of FIGS. 14A-14B may be common across the interface channels (e.g., RQ, DATA, and RSP) in both the A2F and F2A (or transmit/receive, upstream/downstream) directions. Some channels (e.g., the DATA channel) may possess additional or different signals beyond this common or similar signal set, based on the capabilities of the channel, among other exemplary implementations.
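The send-side flow of FIG. 14A can be condensed into the sketch below: a message is sent only when enough credits of a usable type are held (shared consumed before dedicated, per the rules above), and EOP accompanies the final flit. The function and the `transmit` stub are illustrative assumptions, not the interface's defined API.

```python
def transmit(flit, valid, eop):
    """Stand-in for driving the channel's valid, header/payload, and EOP lanes."""
    pass

def send_message(flits, shared_credits, dedicated_credits):
    """Send the flits of one message if enough credits are held, consuming
    shared credits before dedicated ones; EOP accompanies the final flit.
    Returns (sent, shared_remaining, dedicated_remaining)."""
    needed = len(flits)
    if shared_credits + dedicated_credits < needed:
        return False, shared_credits, dedicated_credits  # wait for credit returns
    use_shared = min(shared_credits, needed)
    for i, flit in enumerate(flits):
        transmit(flit, valid=True, eop=(i == needed - 1))
    return True, shared_credits - use_shared, dedicated_credits - (needed - use_shared)

sent, sh, ded = send_message(["hdr+data0", "data1"], shared_credits=1, dedicated_credits=1)
assert sent and sh == 0 and ded == 0
```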
Note that the apparatuses, methods, and systems described above may be implemented in any electronic device or system as aforementioned. As specific illustrations, the figures below provide exemplary systems (e.g., SoCs, computing blocks, fabric blocks, etc.) for utilizing the solutions described herein. As the systems below are described in more detail, a number of different interconnects, use cases, topologies, and applications are disclosed, described, and revisited from the discussion above. And as is readily apparent, the advances described above may be applied to any of those interconnects, fabrics, or architectures, as well as their composite components.

With reference to FIG. 15, an embodiment of a block diagram for a computing system including a multicore processor is depicted. Processor 1500 includes any processor or processing device, such as a microprocessor, an embedded processor, a digital signal processor (DSP), a network processor, a handheld processor, an application processor, a co-processor, a system on a chip (SoC), or other device to execute code. Processor 1500, in one embodiment, includes at least two cores—cores 1501 and 1502, which may include asymmetric cores or symmetric cores (the illustrated embodiment). However, processor 1500 may include any number of processing elements that may be symmetric or asymmetric.

In one embodiment, a processing element refers to hardware or logic to support a software thread. Examples of hardware processing elements include: a thread unit, a thread slot, a thread, a process unit, a context, a context unit, a logical processor, a hardware thread, a core, and/or any other element capable of holding a state for a processor, such as an execution state or architectural state. In other words, a processing element, in one embodiment, refers to any hardware capable of being independently associated with code, such as a software thread, operating system, application, or other code. A physical processor (or processor socket) typically refers to an integrated circuit, which potentially includes any number of other processing elements, such as cores or hardware threads.

A core often refers to logic located on an integrated circuit capable of maintaining an independent architectural state, wherein each independently maintained architectural state is associated with at least some dedicated execution resources. In contrast to cores, a hardware thread typically refers to any logic located on an integrated circuit capable of maintaining an independent architectural state, wherein the independently maintained architectural states share access to execution resources. As can be seen, when certain resources are shared and others are dedicated to an architectural state, the line between the nomenclature of a hardware thread and a core overlaps. Yet often, a core and a hardware thread are viewed by an operating system as individual logical processors, where the operating system is able to individually schedule operations on each logical processor.

As illustrated in FIG. 15, physical processor 1500 includes two cores—cores 1501 and 1502. Here, cores 1501 and 1502 are considered symmetric cores, i.e., cores with the same configurations, functional units, and/or logic. In another embodiment, core 1501 includes an out-of-order processor core, while core 1502 includes an in-order processor core. However, cores 1501 and 1502 may be individually selected from any type of core, such as a native core, a software managed core, a core adapted to execute a native instruction set architecture (ISA), a core adapted to execute a translated ISA, a co-designed core, or other known core. In a heterogeneous core environment (i.e., asymmetric cores), some form of translation, such as binary translation, may be utilized to schedule or execute code on one or both cores. To further the discussion, the functional units illustrated in core 1501 are described in further detail below, as the units in core 1502 operate in a similar manner in the depicted embodiment.

As depicted, core 1501 includes two hardware threads 1501a and 1501b, which may also be referred to as hardware thread slots 1501a and 1501b. Therefore, software entities, such as an operating system, in one embodiment potentially view processor 1500 as four separate processors, i.e., four logical processors or processing elements capable of executing four software threads concurrently. As alluded to above, a first thread can be associated with architecture state registers 1501a, a second thread can be associated with architecture state registers 1501b, a third thread can be associated with architecture state registers 1502a, and a fourth thread can be associated with architecture state registers 1502b. Here, each of the architecture state registers (1501a, 1501b, 1502a, and 1502b) may be referred to as a processing element, thread slot, or thread unit, as described above. As illustrated, architecture state registers 1501a are replicated in architecture state registers 1501b, so individual architecture states/contexts are capable of being stored for logical processor 1501a and logical processor 1501b.
In core 1501, other smaller resources, such as instruction pointers and renaming logic in allocator and renamer block 1530, may also be replicated for threads 1501a and 1501b. Some resources, such as reorder buffers in reorder/retirement unit 1535, the I-TLB 1520, load/store buffers, and queues, may be shared through partitioning. Other resources, such as general purpose internal registers, page-table base register(s), low-level data cache and data TLB 1515, execution unit(s) 1540, and portions of out-of-order unit 1535, are potentially fully shared.

Processor 1500 often includes other resources, which may be fully shared, shared through partitioning, or dedicated by/to processing elements. In FIG. 15, an embodiment of a purely exemplary processor with illustrative logical units/resources of a processor is illustrated. Note that a processor may include, or omit, any of these functional units, as well as include any other known functional units, logic, or firmware not depicted. As illustrated, core 1501 includes a simplified, representative out-of-order (OOO) processor core. But an in-order processor may be utilized in different embodiments. The OOO core includes a branch target buffer 1520 to predict branches to be executed/taken and an instruction-translation buffer (I-TLB) 1520 to store address translation entries for instructions.

Core 1501 further includes decode module 1525 coupled to fetch unit 1520 to decode fetched elements. Fetch logic, in one embodiment, includes individual sequencers associated with thread slots 1501a, 1501b, respectively. Usually core 1501 is associated with a first ISA, which defines/specifies instructions executable on processor 1500. Often machine code instructions that are part of the first ISA include a portion of the instruction (referred to as an opcode), which references/specifies an instruction or operation to be performed. Decode logic 1525 includes circuitry that recognizes these instructions from their opcodes and passes the decoded instructions on in the pipeline for processing as defined by the first ISA. For example, as discussed in more detail below, decoders 1525, in one embodiment, include logic designed or adapted to recognize specific instructions, such as transactional instructions. As a result of the recognition by decoders 1525, the architecture or core 1501 takes specific, predefined actions to perform tasks associated with the appropriate instruction. It is important to note that any of the tasks, blocks, operations, and methods described herein may be performed in response to a single or multiple instructions, some of which may be new or old instructions. Note that decoders 1526, in one embodiment, recognize the same ISA (or a subset thereof). Alternatively, in a heterogeneous core environment, decoders 1526 recognize a second ISA (either a subset of the first ISA or a distinct ISA).

In one example, allocator and renamer block 1530 includes an allocator to reserve resources, such as register files to store instruction processing results. However, threads 1501a and 1501b are potentially capable of out-of-order execution, where allocator and renamer block 1530 also reserves other resources, such as reorder buffers to track instruction results. Unit 1530 may also include a register renamer to rename program/instruction reference registers to other registers internal to processor 1500.
Reorder/retirement unit 1535 includes components, such as the reorder buffers mentioned above, load buffers, and store buffers, to support out-of-order execution and later in-order retirement of instructions executed out-of-order.

Scheduler and execution unit(s) block 1540, in one embodiment, includes a scheduler unit to schedule instructions/operations on execution units. For example, a floating point instruction is scheduled on a port of an execution unit that has an available floating point execution unit. Register files associated with the execution units are also included to store instruction processing results. Exemplary execution units include a floating point execution unit, an integer execution unit, a jump execution unit, a load execution unit, a store execution unit, and other known execution units.

Lower level data cache and data translation buffer (D-TLB) 1550 are coupled to execution unit(s) 1540. The data cache stores recently used/operated-on elements, such as data operands, which are potentially held in memory coherency states. The D-TLB stores recent virtual/linear to physical address translations. As a specific example, a processor may include a page table structure to break physical memory into a plurality of virtual pages.

Here, cores 1501 and 1502 share access to a higher-level or further-out cache, such as a second level cache associated with on-chip interface 1510. Note that higher-level or further-out refers to cache levels increasing or getting further away from the execution unit(s). In one embodiment, the higher-level cache is a last-level data cache—the last cache in the memory hierarchy on processor 1500—such as a second or third level data cache. However, the higher-level cache is not so limited, as it may be associated with, or include, an instruction cache. A trace cache—a type of instruction cache—may instead be coupled after decoder 1525 to store recently decoded traces. Here, an instruction potentially refers to a macro-instruction (i.e., a general instruction recognized by the decoders), which may decode into a number of micro-instructions (micro-operations).

In the depicted configuration, processor 1500 also includes on-chip interface module 1510. Historically, a memory controller, which is described in more detail below, has been included in a computing system external to processor 1500. In this scenario, on-chip interface 1510 is to communicate with devices external to processor 1500, such as system memory 1575, a chipset (often including a memory controller hub to connect to memory 1575 and an I/O controller hub to connect peripheral devices), a memory controller hub, a northbridge, or other integrated circuit. And in this scenario, bus 1505 may include any known interconnect, such as a multi-drop bus, a point-to-point interconnect, a serial interconnect, a parallel bus, a coherent (e.g., cache coherent) bus, a layered protocol architecture, a differential bus, and a GTL bus.

Memory 1575 may be dedicated to processor 1500 or shared with other devices in a system. Common examples of types of memory 1575 include DRAM, SRAM, non-volatile memory (NV memory), and other known storage devices. Device 1580 may include a graphics accelerator, processor, or card coupled to a memory controller hub, data storage coupled to an I/O controller hub, a wireless transceiver, a flash device, an audio controller, a network controller, or other known device.
Recently, however, as more logic and devices are being integrated on a single die, such as an SoC, each of these devices may be incorporated on processor 1500. For example, in one embodiment, a memory controller hub is on the same package and/or die as processor 1500. Here, a portion of the core (an on-core portion) 1510 includes one or more controllers for interfacing with other devices, such as memory 1575 or a graphics device 1580. The configuration including an interconnect and controllers for interfacing with such devices is often referred to as an on-core (or un-core) configuration. As an example, on-chip interface 1510 includes a ring interconnect for on-chip communication and a high-speed serial point-to-point link 1505 for off-chip communication. Yet, in the SoC environment, even more devices, such as a network interface, co-processors, memory 1575, graphics processor 1580, and any other known computer devices/interfaces may be integrated on a single die or integrated circuit to provide a small form factor with high functionality and low power consumption.

In one embodiment, processor 1500 is capable of executing compiler, optimization, and/or translator code 1577 to compile, translate, and/or optimize application code 1576 to support the apparatus and methods described herein or to interface therewith. A compiler often includes a program or set of programs to translate source text/code into target text/code. Usually, compilation of program/application code with a compiler is done in multiple phases and passes to transform high-level programming language code into low-level machine or assembly language code. Yet, single-pass compilers may still be utilized for simple compilation. A compiler may utilize any known compilation techniques and perform any known compiler operations, such as lexical analysis, preprocessing, parsing, semantic analysis, code generation, code transformation, and code optimization.

Larger compilers often include multiple phases, but most often these phases are included within two general phases: (1) a front end, i.e., generally where syntactic processing, semantic processing, and some transformation/optimization may take place, and (2) a back end, i.e., generally where analysis, transformations, optimizations, and code generation take place. Some compilers refer to a middle, which illustrates the blurring of delineation between a compiler's front end and back end. As a result, reference to insertion, association, generation, or other operation of a compiler may take place in any of the aforementioned phases or passes, as well as any other known phases or passes of a compiler. As an illustrative example, a compiler potentially inserts operations, calls, functions, etc. in one or more phases of compilation, such as insertion of calls/operations in a front-end phase of compilation and then transformation of the calls/operations into lower-level code during a transformation phase. Note that during dynamic compilation, compiler code or dynamic optimization code may insert such operations/calls, as well as optimize the code for execution during runtime. As a specific illustrative example, binary code (already compiled code) may be dynamically optimized during runtime. Here, the program code may include the dynamic optimization code, the binary code, or a combination thereof.

Similar to a compiler, a translator, such as a binary translator, translates code either statically or dynamically to optimize and/or translate code.
Therefore, reference to execution of code, application code, program code, or other software environment may refer to: (1) execution of a compiler program(s), optimization code optimizer, or translator, either dynamically or statically, to compile program code, to maintain software structures, to perform other operations, to optimize code, or to translate code; (2) execution of main program code including operations/calls, such as application code that has been optimized/compiled; (3) execution of other program code, such as libraries, associated with the main program code to maintain software structures, to perform other software-related operations, or to optimize code; or (4) a combination thereof.

Referring now to FIG. 16, shown is a block diagram of an embodiment of a multicore processor. As shown in the embodiment of FIG. 16, processor 1600 includes multiple domains. Specifically, a core domain 1630 includes a plurality of cores 1630A-1630N, a graphics domain 1660 includes one or more graphics engines having a media engine 1665, and a system agent domain 1610.

In various embodiments, system agent domain 1610 handles power control events and power management, such that individual units of domains 1630 and 1660 (e.g., cores and/or graphics engines) are independently controllable to dynamically operate at an appropriate power mode/level (e.g., active, turbo, sleep, hibernate, deep sleep, or other Advanced Configuration Power Interface-like state) in light of the activity (or inactivity) occurring in the given unit. Each of domains 1630 and 1660 may operate at different voltage and/or power, and furthermore the individual units within the domains each potentially operate at an independent frequency and voltage. Note that while only shown with three domains, the scope of the present disclosure is not limited in this regard, and additional domains may be present in other embodiments.

As shown, each core 1630 further includes low-level caches in addition to various execution units and additional processing elements. Here, the various cores are coupled to each other and to a shared cache memory that is formed of a plurality of units or slices of a last-level cache (LLC) 1640A-1640N. These LLCs often include storage and cache controller functionality and are shared amongst the cores, as well as potentially the graphics engine too.

As seen, a ring interconnect 1650 couples the cores together and provides interconnection between the core domain 1630, graphics domain 1660, and system agent circuitry 1610, via a plurality of ring stops 1652A-1652N, each at a coupling between a core and an LLC slice. As seen in FIG. 16, interconnect 1650 is used to carry various information, including address information, data information, acknowledgment information, and snoop/invalid information. Although a ring interconnect is illustrated, any known on-die interconnect or fabric may be utilized. As illustrative examples, some of the fabrics discussed above (e.g., another on-die interconnect, On-chip System Fabric (OSF), an Advanced Microcontroller Bus Architecture (AMBA) interconnect, a multi-dimensional mesh fabric, or other known interconnect architecture) may be utilized in a similar fashion.

As further depicted, system agent domain 1610 includes display engine 1612, which is to provide control of and an interface to an associated display.
System agent domain 1610 may include other units, such as an integrated memory controller 1620 that provides an interface to a system memory (e.g., a DRAM implemented with multiple DIMMs), and coherence logic 1622 to perform memory coherence operations. Multiple interfaces may be present to enable interconnection between the processor and other circuitry. For example, in one embodiment, at least one Direct Media Interface (DMI) 1616 interface is provided, as well as one or more PCIe™ interfaces 1614. The display engine and these interfaces typically couple to memory via a PCIe™ bridge. Still further, to provide for communications between other agents, such as additional processors or other circuitry, one or more other interfaces may be provided.

Referring now to FIG. 17, shown is a block diagram of a representative core; specifically, logical blocks of a back end of a core, such as core 1630 from FIG. 16. In general, the structure shown in FIG. 17 includes an out-of-order processor that has a front-end unit 1770 used to fetch incoming instructions, perform various processing (e.g., caching, decoding, branch predicting, etc.), and pass instructions/operations along to an out-of-order (OOO) engine 1780. OOO engine 1780 performs further processing on decoded instructions.

Specifically, in the embodiment of FIG. 17, out-of-order engine 1780 includes an allocate unit 1782 to receive decoded instructions, which may be in the form of one or more micro-instructions or uops, from front-end unit 1770, and allocate them to appropriate resources, such as registers and so forth. Next, the instructions are provided to a reservation station 1784, which reserves resources and schedules them for execution on one of a plurality of execution units 1786A-1786N. Various types of execution units may be present, including, for example, arithmetic logic units (ALUs), load and store units, vector processing units (VPUs), and floating point execution units, among others. Results from these different execution units are provided to a reorder buffer (ROB) 1788, which takes unordered results and returns them to correct program order.

Still referring to FIG. 17, note that both front-end unit 1770 and out-of-order engine 1780 are coupled to different levels of a memory hierarchy. Specifically shown is an instruction-level cache 1772, which in turn couples to a mid-level cache 1776, which in turn couples to a last-level cache 1795. In one embodiment, last-level cache 1795 is implemented in an on-chip (sometimes referred to as uncore) unit 1790. As an example, unit 1790 is similar to system agent 1610 of FIG. 16. As discussed above, uncore 1790 communicates with system memory 1799, which, in the illustrated embodiment, is implemented via eDRAM. Note also that the various execution units 1786 within out-of-order engine 1780 are in communication with a first-level cache 1774 that is also in communication with mid-level cache 1776. Note also that additional cores 1730N-2 to 1730N can couple to LLC 1795. Although shown at this high level in the embodiment of FIG. 17, understand that various alterations and additional components may be present.

Turning to FIG. 18, a block diagram of an exemplary computer system formed with a processor that includes execution units to execute an instruction, where one or more of the interconnects implement one or more features in accordance with one embodiment of the present disclosure, is illustrated.
System 1800 includes a component, such as a processor 1802, to employ execution units including logic to perform algorithms for processing data, in accordance with the present disclosure, such as in the embodiments described herein. In one embodiment, sample system 1800 executes a version of an operating system and included software, and provides corresponding graphical user interfaces. However, embodiments of the present disclosure are not limited to any specific combination of hardware circuitry and software.

Embodiments are not limited to computer systems. Alternative embodiments of the present disclosure can be used in other devices, such as handheld devices and embedded applications. Some examples of handheld devices include cellular phones, Internet Protocol devices, digital cameras, personal digital assistants (PDAs), and handheld PCs. Embedded applications may include a micro controller, a digital signal processor (DSP), a system on a chip, network computers (NetPC), set-top boxes, network hubs, wide area network (WAN) switches, or any other system that can perform one or more instructions in accordance with at least one embodiment.

In this illustrated embodiment, processor 1802 includes one or more execution units 1808 to implement an algorithm that is to perform at least one instruction. One embodiment may be described in the context of a single processor desktop or server system, but alternative embodiments may be included in a multiprocessor system. System 1800 is an example of a "hub" system architecture. The computer system 1800 includes a processor 1802 to process data signals. The processor 1802, as one illustrative example, includes a complex instruction set computer (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing a combination of instruction sets, or any other processor device, such as a digital signal processor, for example. The processor 1802 is coupled to a processor bus 1810 that transmits data signals between the processor 1802 and other components in the system 1800. The elements of system 1800 (e.g., graphics accelerator 1812, memory controller hub 1816, memory 1820, I/O controller hub 1825, wireless transceiver 1826, flash BIOS 1828, network controller 1834, audio controller 1836, serial expansion port 1838, I/O controller 1840, etc.) perform their conventional functions that are well known to those familiar with the art.

In one embodiment, the processor 1802 includes a Level 1 (L1) internal cache memory 1804. Depending on the architecture, the processor 1802 may have a single internal cache or multiple levels of internal caches. Other embodiments include a combination of both internal and external caches, depending on the particular implementation and needs. Register file 1806 is to store different types of data in various registers, including integer registers, floating point registers, vector registers, banked registers, shadow registers, checkpoint registers, status registers, and instruction pointer registers.

Execution unit 1808, including logic to perform integer and floating point operations, also resides in the processor 1802. The processor 1802, in one embodiment, includes a microcode (ucode) ROM to store microcode, which when executed is to perform algorithms for certain macroinstructions or handle complex scenarios. Here, microcode is potentially updateable to handle logic bugs/fixes for processor 1802.
In one embodiment, execution unit 1808 includes logic to handle a packed instruction set 1809. By including the packed instruction set 1809 in the instruction set of a general-purpose processor 1802, along with associated circuitry to execute the instructions, the operations used by many multimedia applications may be performed using packed data in a general-purpose processor 1802. Thus, many multimedia applications are accelerated and executed more efficiently by using the full width of a processor's data bus for performing operations on packed data. This potentially eliminates the need to transfer smaller units of data across the processor's data bus to perform one or more operations, one data element at a time.

Alternative embodiments of an execution unit 1808 may also be used in micro controllers, embedded processors, graphics devices, DSPs, and other types of logic circuits. System 1800 includes a memory 1820. Memory 1820 includes a dynamic random access memory (DRAM) device, a static random access memory (SRAM) device, a flash memory device, or another memory device. Memory 1820 stores instructions and/or data represented by data signals that are to be executed by the processor 1802.

Note that any of the aforementioned features or aspects of the disclosure and solutions described above may be utilized on one or more interconnects illustrated in FIG. 18. For example, an on-die interconnect (ODI), which is not shown, for coupling internal units of processor 1802 implements one or more aspects of the embodiments described above. Or the embodiments are associated with a processor bus 1810 (e.g., other known high performance computing interconnect), a high bandwidth memory path 1818 to memory 1820, a point-to-point link to graphics accelerator 1812 (e.g., a Peripheral Component Interconnect Express (PCIe) compliant fabric), a controller hub interconnect 1822, and an I/O or other interconnect (e.g., USB, PCI, PCIe) for coupling the other illustrated components. Some examples of such components include the audio controller 1836, firmware hub (flash BIOS) 1828, wireless transceiver 1826, data storage 1824, legacy I/O controller 1810 containing user input and keyboard interfaces 1842, a serial expansion port 1838 such as Universal Serial Bus (USB), and a network controller 1834. The data storage device 1824 can comprise a hard disk drive, a floppy disk drive, a CD-ROM device, a flash memory device, or another mass storage device.

Referring now to FIG. 19, shown is a block diagram of a second system 1900 in accordance with an embodiment of the present disclosure. As shown in FIG. 19, multiprocessor system 1900 is a point-to-point interconnect system and includes a first processor 1970 and a second processor 1980 coupled via a point-to-point interconnect 1950. Each of processors 1970 and 1980 may be some version of a processor. In one embodiment, 1952 and 1954 are part of a serial, point-to-point coherent interconnect fabric, such as a high-performance architecture.

While shown with only two processors 1970, 1980, it is to be understood that the scope of the present disclosure is not so limited. In other embodiments, one or more additional processors may be present in a given processor.

Processors 1970 and 1980 are shown including integrated memory controller units 1972 and 1982, respectively. Processor 1970 also includes, as part of its bus controller units, point-to-point (P-P) interfaces 1976 and 1978; similarly, second processor 1980 includes P-P interfaces 1986 and 1988.
Processors 1970, 1980 may exchange information via a point-to-point (P-P) interface 1950 using P-P interface circuits 1978, 1988. As shown in FIG. 19, IMCs 1972 and 1982 couple the processors to respective memories, namely a memory 1932 and a memory 1934, which may be portions of main memory locally attached to the respective processors.

Processors 1970, 1980 each exchange information with a chipset 1990 via individual P-P interfaces 1952, 1954 using point-to-point interface circuits 1976, 1994, 1986, 1998. Chipset 1990 also exchanges information with a high-performance graphics circuit 1938 via an interface circuit 1992 along a high-performance graphics interconnect 1939.

A shared cache (not shown) may be included in either processor or outside of both processors, yet connected with the processors via a P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode.

Chipset 1990 may be coupled to a first bus 1916 via an interface 1996. In one embodiment, first bus 1916 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the present disclosure is not so limited.

As shown in FIG. 19, various I/O devices 1914 are coupled to first bus 1916, along with a bus bridge 1918 that couples first bus 1916 to a second bus 1920. In one embodiment, second bus 1920 includes a low pin count (LPC) bus. Various devices are coupled to second bus 1920 including, for example, a keyboard and/or mouse 1922, communication devices 1927, and a storage unit 1928, such as a disk drive or other mass storage device, which often includes instructions/code and data 1930, in one embodiment. Further, an audio I/O 1924 is shown coupled to second bus 1920. Note that other architectures are possible, where the included components and interconnect architectures vary. For example, instead of the point-to-point architecture of FIG. 19, a system may implement a multi-drop bus or another such architecture.

Turning next to FIG. 20, an embodiment of a system on-chip (SOC) design in accordance with the disclosures above is depicted. As a specific illustrative example, SOC 2000 is included in user equipment (UE). In one embodiment, UE refers to any device to be used by an end-user to communicate, such as a hand-held phone, smartphone, tablet, ultra-thin notebook, notebook with broadband adapter, or any other similar communication device. Often a UE connects to a base station or node, which potentially corresponds in nature to a mobile station (MS) in a GSM network.

Here, SOC 2000 includes two cores—2006 and 2007. Similar to the discussion above, cores 2006 and 2007 may conform to an instruction set architecture, such as an Intel® Architecture Core™-based processor, an Advanced Micro Devices, Inc. (AMD) processor, a MIPS-based processor, an ARM-based processor design, or a customer thereof, as well as their licensees or adopters. Cores 2006 and 2007 are coupled to cache control 2008 that is associated with bus interface unit 2009 and L2 cache 2011 to communicate with other parts of system 2000. Interconnect 2010 includes an on-chip interconnect, such as an IOSF, AMBA, or other interconnect discussed above, which potentially implements one or more aspects of the embodiments described herein.
Implement.Interface 2010 includes a subscriber identification module (SIM) 2030 to interface with the SIM card, a boot ROM 2035 to hold the boot code for execution by cores 2006 and 2007, and to initialize and boot the SOC2000, an external memory ( For example, an SDRAM controller 2040 to interface with a DRAM 2060), a flash controller 2045 to interface with a non-volatile memory (eg, flash 2065), a peripheral control 2050 to interface with a peripheral (eg, a serial peripheral interface), an input (eg, a serial peripheral interface). It provides communication channels to other components such as video code 2020 and video interface 2025 for displaying and receiving (touchable inputs), GPU 2015 for performing graphic related calculations. Any of these interfaces can incorporate aspects of the embodiments described herein.In addition, the system shows peripherals for communication such as the Bluetooth® module 2070, 3G modem 2075, GPS2085, and WiFi® 2085. Note that as mentioned above, the UE includes a radio for communication. As a result, not all of these peripheral communication modules are needed. However, the UE includes several forms of radios for external communication.Although this disclosure has been described with reference to a limited number of embodiments, those skilled in the art will appreciate numerous modifications and variations from it. The appended claims are intended to cover all such modifications and modifications contained within the true spirit and scope of the present disclosure.Design can go through various stages from creation to simulation to manufacturing. The data representing the design can represent the design in multiple ways. First, the hardware can be represented using a hardware description language or another functional description language to help in simulation. In addition, circuit-level models, including logic and / or transistor gates, can be created at several stages in the design process. In addition, most designs, at some stage, reach a data level that represents the physical placement of various devices within the hardware model. When conventional semiconductor manufacturing techniques are used, the data representing the hardware model can be data that specifies the presence or absence of various features on different mask layers of the mask used to manufacture the integrated circuit. In any representation of the design, the data can be stored on any form of machine-readable medium. A memory, or magnetic or optical storage such as a disk, can be a machine-readable medium for storing information transmitted via light or radio waves modulated or otherwise generated to transmit such information. When an electrical carrier wave indicating or transmitting a code or design is transmitted, a new copy is made to the extent that copying, buffering, or retransmission of the electrical signal is performed. In this way, the communication provider or network provider can at least temporarily store bullet points such as carrier-encoded information on a tangible machine-readable medium that embodies the techniques of the embodiments of the present disclosure.Modules used herein refer to any combination of hardware, software, and / or firmware. As an example, a module includes hardware such as a microcontroller associated with a non-temporary medium for storing code adapted to be executed by a microcontroller. 
Therefore, in one embodiment, reference to a module refers to hardware that is specifically configured to recognize and/or execute the code to be held on a non-transitory medium. Furthermore, in another embodiment, use of a module refers to the non-transitory medium including the code, which is specifically adapted to be executed by the microcontroller to perform predetermined operations. And, as can be inferred, in yet another embodiment, the term module (in this example) may refer to the combination of the microcontroller and the non-transitory medium. Often, module boundaries that are illustrated as separate commonly vary and potentially overlap. For example, a first and a second module may share hardware, software, firmware, or a combination thereof, while potentially retaining some independent hardware, software, or firmware. In one embodiment, use of the term logic includes hardware such as transistors or registers, or other hardware such as programmable logic devices.
Use of the phrase "configured to," in one embodiment, refers to arranging, putting together, manufacturing, offering to sell, importing, and/or designing an apparatus, hardware, logic, or element to perform a designated or determined task. In this example, an apparatus or element thereof that is not operating is still "configured to" perform a designated task if it is designed, coupled, and/or interconnected to perform said designated task. As a purely illustrative example, a logic gate may provide a 0 or a 1 during operation. But a logic gate "configured to" provide an enable signal to a clock does not include every potential logic gate that may provide a 1 or 0. Instead, the logic gate is one coupled in some manner such that during operation the 1 or 0 output is to enable the clock. Note once again that use of the term "configured to" does not require operation, but instead focuses on the latent state of an apparatus, hardware, and/or element, where in the latent state the apparatus, hardware, and/or element is designed to perform a particular task when the apparatus, hardware, and/or element is operating.
Furthermore, use of the phrases "capable of/to" and/or "operable to," in one embodiment, refers to some apparatus, logic, hardware, and/or element designed in such a way as to enable use of the apparatus, logic, hardware, and/or element in a specified manner. Note, as above, that such use, in one embodiment, refers to the latent state of an apparatus, logic, hardware, and/or element, where the apparatus, logic, hardware, and/or element is not operating but is designed in such a manner as to enable use of the apparatus in a specified manner.
A value, as used herein, includes any known representation of a number, a state, a logical state, or a binary logical state. Often, the use of logic levels, logic values, or logical values is also referred to as the use of 1's and 0's, which simply represents binary logic states. For example, a 1 refers to a high logic level and a 0 refers to a low logic level. In one embodiment, a storage cell, such as a transistor or flash cell, may be capable of holding a single logical value or multiple logical values. However, other representations of values in computer systems have been used. For example, the decimal number ten may also be represented as the binary value 1010 and the hexadecimal letter A. Therefore, a value includes any representation of information capable of being held in a computer system.
Moreover, states may be represented by values or portions of values.
As an example, a first value, such as a logical one, may represent a default or initial state, while a second value, such as a logical zero, may represent a non-default state. In addition, the terms reset and set, in one embodiment, refer to a default and an updated value or state, respectively. For example, a default value potentially includes a high logical value, i.e., reset, while an updated value potentially includes a low logical value, i.e., set. Note that any combination of values may be utilized to represent any number of states.
The embodiments of methods, hardware, software, firmware, or code set forth above may be implemented via instructions or code stored on a machine-accessible, machine-readable, computer-accessible, or computer-readable medium which are executable by a processing element. A non-transitory machine-accessible/readable medium includes any mechanism that provides (i.e., stores and/or transmits) information in a form readable by a machine, such as a computer or electronic system. For example, a non-transitory machine-accessible medium includes random-access memory (RAM), such as static RAM (SRAM) or dynamic RAM (DRAM); ROM; magnetic or optical storage media; flash memory devices; electrical storage devices; optical storage devices; acoustic storage devices; and other forms of storage devices for holding information received from transitory (propagated) signals (e.g., carrier waves, infrared signals, digital signals), which are to be distinguished from the non-transitory media from which information may be received.
Instructions used to program logic to perform embodiments of the disclosure may be stored within a memory in the system, such as DRAM, cache, flash memory, or other storage. Furthermore, the instructions can be distributed via a network or by way of other computer-readable media. Thus, a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), including, but not limited to, floppy diskettes, optical disks, compact disc read-only memory (CD-ROM), magneto-optical disks, read-only memory (ROM), random-access memory (RAM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), magnetic or optical cards, flash memory, or tangible machine-readable storage used in the transmission of information over the Internet via electrical, optical, acoustical, or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.). Accordingly, a computer-readable medium includes any type of tangible machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer).
The following examples pertain to embodiments in accordance with this specification.
Example 1 is an apparatus including: an agent circuit to support a set of coherent interconnect protocols; and an interface configured to couple to an interconnect fabric and to support the set of coherent interconnect protocols, where the interface includes: a global channel coupled to a first plurality of physical lanes, where the global channel is to communicate control signals to support the interface; a request channel coupled to a second plurality of physical lanes, where the request channel is to communicate messages associated with requests to other agents on the fabric; a response channel coupled to a third plurality of physical lanes, where the response channel is to communicate messages associated with responses to other agents on the fabric, and the responses include responses without payload data; and a data channel coupled to a fourth plurality of physical lanes, where the data channel is to communicate messages associated with data transfers to other agents on the fabric, and the data transfers include payload data.
Example 2 includes the subject matter of Example 1, where the requests include requests to target memory of a system.
Example 3 includes the subject matter of any one of Examples 1-2, where each of the request channel, the response channel, and the data channel includes a respective plurality of signals, and each signal of the plurality of signals is assigned a respective subset of the physical lanes of the channel.
Example 4 includes the subject matter of Example 3, where a first portion of the plurality of signals is to be sent to the fabric and a second portion of the plurality of signals is to be received from the fabric.
Example 5 includes the subject matter of any one of Examples 3-4, where the plurality of signals of each of the request channel, the response channel, and the data channel includes a respective valid signal, protocol identifier signal, virtual channel identifier signal, and header signal, where the valid signal is to be asserted for a valid instance of the header signal, the header signal includes a header of a particular message, the protocol identifier signal identifies a protocol associated with the header, and the virtual channel identifier signal identifies a virtual channel used for the particular message.
Example 6 includes the subject matter of Example 5, where the set of coherent interconnect protocols includes a plurality of protocols, and the protocol identifier signal identifies one of the plurality of protocols as associated with the header.
Example 7 includes the subject matter of Example 6, where the plurality of protocols includes a Compute Express Link (CXL) protocol, and the CXL protocol includes the CXL.cache protocol and the CXL.mem protocol.
Example 8 includes the subject matter of any one of Examples 6-7, where the header signal has a width to support the largest header format of the plurality of protocols.
Example 9 includes the subject matter of any one of Examples 5-8, where the plurality of signals of the data channel further includes a payload data signal to carry payload data, and the payload data signal includes a plurality of lanes.
Example 10 includes the subject matter of Example 9, where the payload data signal corresponds to the header signal, and the payload data signal is to be sent a number of clock cycles after the header signal is sent.
Example 11 includes the subject matter of Example 10, where the number of clock cycles includes a configurable parameter of the interface.
Example 12 includes the subject matter of any one of Examples 5-11, where the plurality of signals of each of the request channel, the response channel, and the data channel further includes signals to support receipt of credit returns associated with the respective channel.
Example 13 includes the subject matter of Example 12, where credits are returned in a credit return signal at least partially in parallel with transmission of a message using the header signal.
Example 14 includes the subject matter of any one of Examples 12-13, where the credit returns include virtual channel dedicated credit returns and shared credit returns.
Example 15 includes the subject matter of any one of Examples 5-14, where the plurality of signals of each of the request channel, the response channel, and the data channel further includes a blocking signal to receive a blocking request, where the blocking request causes deassertion of the valid signal of the corresponding channel.
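The channel organization recited in Examples 1-15 can be summarized in a small data model. The following C sketch is illustrative only: the field names, widths, and the grouping into a single struct are assumptions made for exposition, not signal definitions drawn from this disclosure or from the CXL specifications.

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative per-channel signal bundle (cf. Examples 5-15): a valid
     * signal qualifies the header lanes, a protocol identifier selects one
     * of the supported coherent protocols, and a VC identifier selects the
     * virtual channel used by the message. Widths are assumed. */
    enum protocol_id { PROTO_CXL_CACHE, PROTO_CXL_MEM };

    struct channel_signals {
        bool             valid;         /* asserted for a valid header instance */
        enum protocol_id pid;           /* protocol associated with the header  */
        uint8_t          vc_id;         /* virtual channel for this message     */
        uint8_t          header[16];    /* sized for the largest header format  */
        bool             shared_credit; /* header consumes shared vs. VC credit */
        bool             block;         /* blocking request toward the sender   */
        uint8_t          credit_return; /* credits returned to the sender       */
    };

    /* Example 1's interface: global control plus request, response, and
     * data channels, each mapped to its own set of physical lanes. The
     * data channel additionally carries a payload data signal. */
    struct agent_interface {
        struct channel_signals request;   /* memory-targeted requests          */
        struct channel_signals response;  /* responses without payload data    */
        struct channel_signals data;      /* transfers that carry payload      */
        uint8_t payload[64];              /* payload lanes of the data channel */
    };

A second, mirror-image set of these channels in the opposite direction would model the fabric-to-agent instances added by Example 18 below.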
Example 16 includes the subject matter of Example 15, where the valid signal is to be deasserted within a defined number of clock cycles after the blocking signal is asserted.
Example 17 includes the subject matter of Example 16, where the defined number of clock cycles includes a configurable parameter of the interface.
Example 18 includes the subject matter of any one of Examples 3-17, where the global channel includes an agent-to-fabric instance of the global channel, the request channel includes an agent-to-fabric instance of the request channel, the response channel includes an agent-to-fabric instance of the response channel, and the data channel includes an agent-to-fabric instance of the data channel, and the interface further includes: a fabric-to-agent instance of the global channel assigned to a fifth plurality of physical lanes; a fabric-to-agent instance of the request channel assigned to a sixth plurality of physical lanes; a fabric-to-agent instance of the response channel assigned to a seventh plurality of physical lanes; and a fabric-to-agent instance of the data channel assigned to an eighth plurality of physical lanes.
Example 19 includes the subject matter of any one of Examples 1-18, where the set of protocols includes a plurality of protocols, and the request channel, the response channel, and the data channel support messages of each of the plurality of protocols.
Example 20 includes the subject matter of any one of Examples 1-19, where the interface includes a second instance of one of the request channel, the response channel, or the data channel.
Example 21 includes the subject matter of any one of Examples 1-20, where the global channel includes a set of signals to initialize the interface.
Example 22 includes the subject matter of Example 21, where initialization of the interface is according to a state machine, the state machine includes a plurality of initialization states for the interface, and values of the set of signals cause transitions between the plurality of initialization states.
Example 23 includes the subject matter of any one of Examples 1-22, further including a compute block circuit, where the compute block circuit implements a compute block within a system on chip (SoC), and the interconnect fabric includes an interconnect fabric of the SoC.
Example 24 includes the subject matter of Example 23, where the compute block circuit includes a data processor.
Example 25 includes the subject matter of any one of Examples 23-24, where the compute block circuit includes computer memory.
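Examples 15-17 tie the blocking signal to a deassertion of the valid signal a configurable number of clock cycles later. A minimal cycle-by-cycle sketch of that rule follows; deassert_delay stands in for the interface's configurable parameter, and the function name and state layout are assumptions for this sketch, not part of the disclosure.

    #include <stdbool.h>

    struct blocking_state {
        int  deassert_delay;  /* configured cycles from block to !valid */
        int  countdown;       /* cycles remaining once block was seen   */
        bool blocked;
    };

    /* Advance one clock cycle; returns whether valid may stay asserted.
     * Un-blocking is omitted to keep the sketch short. */
    bool valid_allowed(struct blocking_state *s, bool block_seen)
    {
        if (block_seen && !s->blocked) {
            s->blocked   = true;
            s->countdown = s->deassert_delay;
        } else if (s->blocked && s->countdown > 0) {
            s->countdown--;
        }
        return !(s->blocked && s->countdown == 0);
    }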
Example 26 is an apparatus including: a fabric circuit to implement at least a portion of an interconnect fabric of a system; and an interface configured to couple to an agent on a compute block and to support a set of coherent interconnect protocols, where the interface includes: a global channel coupled to a first plurality of physical lanes, where the global channel is to communicate control signals to support the interface; a request channel coupled to a second plurality of physical lanes, where the request channel is to communicate messages associated with requests to the agent; a response channel coupled to a third plurality of physical lanes, where the response channel is to communicate messages associated with responses to the agent, and the responses include responses without payload data; and a data channel coupled to a fourth plurality of physical lanes, where the data channel is to communicate messages associated with data transfers to the agent, and the data transfers include payload data.
Example 27 includes the subject matter of Example 26, where the requests include requests to target memory of the compute block.
Example 28 includes the subject matter of any one of Examples 26-27, where each of the request channel, the response channel, and the data channel includes a respective plurality of signals, and each signal of the plurality of signals is assigned a respective subset of the physical lanes of the channel.
Example 29 includes the subject matter of Example 28, where a first portion of the plurality of signals is to be sent to the fabric and a second portion of the plurality of signals is to be received from the fabric.
Example 30 includes the subject matter of any one of Examples 28-29, where the plurality of signals of each of the request channel, the response channel, and the data channel includes a respective valid signal, protocol identifier signal, virtual channel identifier signal, and header signal, where the valid signal is to be asserted for a valid instance of the header signal, the header signal includes a header of a particular message, the protocol identifier signal identifies a protocol associated with the header, and the virtual channel identifier signal identifies a virtual channel used for the particular message.
Example 31 includes the subject matter of Example 30, where the set of coherent interconnect protocols includes a plurality of protocols, and the protocol identifier signal identifies one of the plurality of protocols as associated with the header.
Example 32 includes the subject matter of Example 31, where the plurality of protocols includes a Compute Express Link (CXL) protocol, and the CXL protocol includes the CXL.cache protocol and the CXL.mem protocol.
Example 33 includes the subject matter of any one of Examples 31-32, where the header signal has a width to support the largest header format of the plurality of protocols.
Example 34 includes the subject matter of any one of Examples 30-33, where the plurality of signals of the data channel further includes a payload data signal to carry payload data, and the payload data signal includes a plurality of lanes.
Example 35 includes the subject matter of Example 34, where the payload data signal corresponds to the header signal, and the payload data signal is to be sent a number of clock cycles after the header signal is sent.
Example 36 includes the subject matter of Example 35, where the number of clock cycles includes a configurable parameter of the interface.
Example 37 includes the subject matter of any one of Examples 30-36, where the plurality of signals of each of the request channel, the response channel, and the data channel further includes signals to support receipt of credit returns associated with the respective channel.
Example 38 includes the subject matter of Example 37, where credits are returned in a credit return signal at least partially in parallel with transmission of a message using the header signal.
Example 39 includes the subject matter of any one of Examples 37-38, where the credit returns include virtual channel dedicated credit returns and shared credit returns.
Example 40 includes the subject matter of any one of Examples 30-39, where the plurality of signals of each of the request channel, the response channel, and the data channel further includes a blocking signal to receive a blocking request, where the blocking request causes deassertion of the valid signal of the corresponding channel.
Example 41 includes the subject matter of Example 40, where the valid signal is to be deasserted within a defined number of clock cycles after the blocking signal is asserted.
Example 42 includes the subject matter of Example 41, where the defined number of clock cycles includes a configurable parameter of the interface.
Example 43 includes the subject matter of any one of Examples 28-42, where the global channel includes a fabric-to-agent instance of the global channel, the request channel includes a fabric-to-agent instance of the request channel, the response channel includes a fabric-to-agent instance of the response channel, and the data channel includes a fabric-to-agent instance of the data channel, and the interface further includes: an agent-to-fabric instance of the global channel assigned to a fifth plurality of physical lanes; an agent-to-fabric instance of the request channel assigned to a sixth plurality of physical lanes; an agent-to-fabric instance of the response channel assigned to a seventh plurality of physical lanes; and an agent-to-fabric instance of the data channel assigned to an eighth plurality of physical lanes.
Example 44 includes the subject matter of any one of Examples 25-43, where the set of protocols includes a plurality of protocols, and the request channel, the response channel, and the data channel support messages of each of the plurality of protocols.
Example 45 includes the subject matter of any one of Examples 25-44, where the interface includes a second instance of one of the request channel, the response channel, or the data channel.
Example 46 includes the subject matter of any one of Examples 25-45, where the global channel includes a set of signals to initialize the interface.
Example 47 includes the subject matter of Example 46, where initialization of the interface is according to a state machine, the state machine includes a plurality of initialization states for the interface, and values of the set of signals cause transitions between the plurality of initialization states.
Example 48 includes the subject matter of any one of Examples 25-47, where the fabric circuit includes a network-on-chip device, and the network-on-chip device includes the interface.
Example 49 is a method including: receiving, in a first clock cycle, a valid signal asserted on a set of valid lanes of a particular channel of an interface, a first header signal on a set of header lanes of the particular channel, a virtual channel identifier (VC ID) signal on a set of VC ID lanes of the particular channel, and a protocol identifier signal on a set of protocol identifier lanes of the particular channel, where the interface couples an agent to a fabric, the first header signal is aligned with the valid signal, the first header signal includes at least a portion of a header of a packet, the protocol identifier signal identifies a particular one of a plurality of coherent protocols supported by the interface as applied to the packet, and the particular channel includes one of a plurality of channels of the interface, where the plurality of channels includes a request channel, a data channel, and a response channel; receiving, in a subsequent clock cycle, an asserted valid signal, an end of packet (EOP) signal asserted on a set of EOP lanes of the particular channel, and a second header signal on the set of header lanes, where the second header signal includes at least a portion of the header of the packet; and determining, based on the asserted EOP signal, an end of the packet in the subsequent clock cycle containing the asserted valid signal.
Example 50 includes the subject matter of Example 49, further including identifying a deassertion of the valid signal, where the deassertion of the valid signal interrupts the header signal.
Example 51 includes the subject matter of any one of Examples 49-50, further including receiving, in the first clock cycle, a shared credit signal on a set of shared credit lanes of the particular channel, where the shared credit signal identifies whether a shared credit or a dedicated credit is to be used with the header.
Example 52 includes the subject matter of Example 51, where the VC ID signal identifies a particular virtual channel associated with the dedicated credit when the shared credit signal identifies that a dedicated credit is to be used.
Example 53 includes the subject matter of any one of Examples 49-52, where the particular channel includes the data channel, and the method further includes: receiving payload data on a set of lanes of a payload data signal of the data channel; and determining, based on the header, that the payload data is associated with the packet.
Example 54 includes the subject matter of Example 53, where the payload data is defined to arrive a particular number of clock cycles following receipt of the header signal.
Example 55 includes the subject matter of Example 54, where the particular number of clock cycles is configured in a header-payload separation parameter of the interface.
Example 56 includes the subject matter of any one of Examples 49-55, further including sending a blocking signal on a blocking signal lane of the particular channel, where the blocking signal causes deassertion of the valid signal on the set of valid lanes.
Example 57 includes the subject matter of Example 56, further including determining backpressure in a queue, where the blocking signal is sent based on the determined backpressure.
Example 58 includes the subject matter of any one of Examples 49-57, where a width of the header signal is based on a largest header format of the plurality of coherent protocols.
Example 59 includes the subject matter of any one of Examples 49-58, where the request channel is to communicate messages associated with requests to an agent, the response channel is to communicate messages associated with responses to the agent, where the responses include responses without payload data, and the data channel is to communicate messages associated with data transfers, where the data transfers include payload data.
Example 60 includes the subject matter of any one of Examples 49-59, further including initializing the interface using a set of initialization signals within a global channel of the interconnect, where the global channel is associated with a plurality of global channel lanes, and each signal in the set of initialization signals is mapped to a respective one of the plurality of global channel lanes.
Example 61 includes the subject matter of Example 60, where initialization of the interface is according to a state machine, the state machine defines a plurality of initialization states, and transitions between the plurality of initialization states are based on values of the set of initialization signals.
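The receive flow of Examples 49-50 is essentially a per-cycle decode: a deasserted valid pauses the header stream, an asserted valid contributes one header beat, and an asserted EOP on a valid beat closes the packet. The C sketch below models only that rule; the beat and header sizes and the accumulation scheme are assumptions for exposition, not values from the disclosure.

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    struct beat {              /* what the lanes carry in one clock cycle */
        bool    valid;         /* qualifies the header lanes this cycle   */
        bool    eop;           /* asserted on the final beat of a packet  */
        uint8_t header[16];    /* header portion on the header lanes      */
    };

    struct rx_packet {
        uint8_t hdr[64];       /* reassembled header (up to 4 beats here) */
        int     beats;
        bool    complete;
    };

    /* Process one clock cycle per Examples 49-50. */
    void rx_clock(struct rx_packet *p, const struct beat *b)
    {
        if (!b->valid)
            return;                    /* deasserted valid: stream pauses */
        if (p->beats < 4) {
            memcpy(&p->hdr[p->beats * 16], b->header, sizeof b->header);
            p->beats++;
        }
        if (b->eop)
            p->complete = true;        /* asserted EOP: last beat arrived */
    }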
Example 62 includes the subject matter of Example 61, where messages are received on the channel after initialization of the interface completes.
Example 63 includes the subject matter of Example 62, further including sending flow control credits on respective flow control signal lanes of the request channel, the response channel, and the data channel upon completion of the initialization.
Example 64 includes the subject matter of any one of Examples 49-63, further including sending credit returns on respective credit return lanes included in each of the request channel, the response channel, and the data channel.
Example 65 includes the subject matter of Example 64, where the credit returns include returns of dedicated credits and shared credits.
Example 66 includes the subject matter of any one of Examples 49-65, where the plurality of coherent protocols includes the CXL.mem protocol and the CXL.cache protocol.
Example 67 is a system including means to perform any one of the methods of Examples 49-66.
Example 68 is a method including: sending, in a first clock cycle, a valid signal asserted on a set of valid lanes of a particular channel of an interface, a first header signal on a set of header lanes of the particular channel, a virtual channel identifier (VC ID) signal on a set of VC ID lanes of the particular channel, and a protocol identifier signal on a set of protocol identifier lanes of the particular channel, where the interface couples an agent to a fabric, the first header signal is aligned with the valid signal, the first header signal includes at least a portion of a header of a packet, the protocol identifier signal identifies a particular one of a plurality of coherent protocols supported by the interface as applied to the packet, and the particular channel includes one of a plurality of channels of the interface, where the plurality of channels includes a request channel, a data channel, and a response channel; determining an end of the packet; and sending, in a subsequent clock cycle, an asserted valid signal, an end of packet (EOP) signal asserted on a set of EOP lanes of the particular channel, and a second header signal on the set of header lanes, where the second header signal includes at least a portion of the header of the packet, and the asserted EOP signal identifies the end of the packet.
Example 69 includes the subject matter of Example 68, further including identifying a deassertion of the valid signal, where the deassertion of the valid signal interrupts the header signal.
Example 70 includes the subject matter of any one of Examples 68-69, further including sending, in the first clock cycle, a shared credit signal on a set of shared credit lanes of the particular channel, where the shared credit signal identifies whether a shared credit or a dedicated credit is to be used with the header.
Example 71 includes the subject matter of Example 70, where the VC ID signal identifies a particular virtual channel associated with the dedicated credit when the shared credit signal identifies that a dedicated credit is to be used.
Example 72 includes the subject matter of any one of Examples 68-71, where the particular channel includes the data channel, and the method further includes sending payload data on a set of lanes of a payload data signal of the data channel.
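The credit scheme of Examples 51-52 and 64-65 distinguishes a shared pool from per-virtual-channel dedicated pools: the shared credit signal selects the pool a header consumes, and credit returns replenish the matching pool. A sender-side accounting sketch follows, with the pool sizes and virtual-channel count assumed purely for illustration.

    #include <stdbool.h>

    #define NUM_VCS 4          /* assumed virtual-channel count */

    struct credit_pools {
        int shared;
        int dedicated[NUM_VCS];
    };

    /* A header may be sent only if the selected pool holds a credit;
     * vc is assumed to be less than NUM_VCS. */
    bool consume_credit(struct credit_pools *c, bool use_shared, int vc)
    {
        int *pool = use_shared ? &c->shared : &c->dedicated[vc];
        if (*pool == 0)
            return false;      /* no credit: hold the header this cycle */
        (*pool)--;
        return true;
    }

    /* Credit returns (cf. Examples 64-65) replenish the matching pool. */
    void return_credit(struct credit_pools *c, bool to_shared, int vc)
    {
        if (to_shared)
            c->shared++;
        else
            c->dedicated[vc]++;
    }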
Example 73 includes the subject matter of Example 72, where the payload data is defined to be sent a particular number of clock cycles following the header signal.
Example 74 includes the subject matter of Example 73, where the particular number of clock cycles is configured in a header-payload separation parameter of the interface.
Example 75 includes the subject matter of any one of Examples 68-74, further including: receiving a blocking signal on a blocking signal lane of the particular channel; and deasserting, based on receipt of the blocking signal, the valid signal on the set of valid lanes.
Example 76 includes the subject matter of Example 75, further including determining a specified number of clock cycles from configuration parameters of the interface, where the valid signal is deasserted the specified number of clock cycles after receipt of the blocking signal.
Example 77 includes the subject matter of any one of Examples 68-76, where a width of the header signal is based on a largest header format of the plurality of coherent protocols.
Example 78 includes the subject matter of any one of Examples 68-77, where the request channel is to communicate messages associated with requests to an agent, the response channel is to communicate messages associated with responses to the agent, where the responses include responses without payload data, and the data channel is to communicate messages associated with data transfers, where the data transfers include payload data.
Example 79 includes the subject matter of any one of Examples 68-78, further including initializing the interface using a set of initialization signals within a global channel of the interconnect, where the global channel is associated with a plurality of global channel lanes, and each signal in the set of initialization signals is mapped to a respective one of the plurality of global channel lanes.
Example 80 includes the subject matter of Example 79, where initialization of the interface is according to a state machine, the state machine defines a plurality of initialization states, and transitions between the plurality of initialization states are based on values of the set of initialization signals.
Example 81 includes the subject matter of Example 80, where messages are sent on the channel after initialization of the interface completes.
Example 82 includes the subject matter of Example 81, further including receiving flow control credits on respective flow control signal lanes of the request channel, the response channel, and the data channel upon completion of the initialization.
Example 83 includes the subject matter of any one of Examples 68-82, further including receiving credit returns on respective credit return lanes included in each of the request channel, the response channel, and the data channel.
Example 84 includes the subject matter of Example 83, where the credit returns include returns of dedicated credits and shared credits.
Example 85 includes the subject matter of any one of Examples 68-84, where the plurality of coherent protocols includes the CXL.mem protocol and the CXL.cache protocol.
Example 86 is a system including means to perform any one of the methods of Examples 68-85.
Example 87 is a system including: a fabric; and a plurality of compute blocks communicatively coupled via the fabric, where a particular compute block of the plurality of compute blocks includes: an agent circuit to support a set of coherent interconnect protocols; and an interface configured to couple to the interconnect fabric and to support the set of coherent interconnect protocols, where the interface includes: a global channel coupled to a first plurality of physical lanes, where the global channel is to communicate control signals to support the interface; a request channel coupled to a second plurality of physical lanes, where the request channel is to communicate messages associated with requests to other agents on the fabric; a response channel coupled to a third plurality of physical lanes, where the response channel is to communicate messages associated with responses to other agents on the fabric, and the responses include responses without payload data; and a data channel coupled to a fourth plurality of physical lanes, where the data channel is to communicate messages associated with data transfers to other agents on the fabric, and the data transfers include payload data.
Example 88 includes the subject matter of Example 87, where the system includes a system-on-chip (SoC), and the SoC includes the fabric and the plurality of compute blocks.
Example 89 includes the subject matter of any one of Examples 87-88, where the fabric includes a network-on-chip device.
Example 90 includes the subject matter of any one of Examples 87-89, further including computer memory, where the requests include requests to target the computer memory.
Example 91 includes the subject matter of any one of Examples 87-90, where each of the request channel, the response channel, and the data channel includes a respective plurality of signals, and each signal of the plurality of signals is assigned a respective subset of the physical lanes of the channel.
Example 92 includes the subject matter of Example 91, where a first portion of the plurality of signals is to be sent to the fabric and a second portion of the plurality of signals is to be received from the fabric.
Example 93 includes the subject matter of any one of Examples 91-92, where the plurality of signals of each of the request channel, the response channel, and the data channel includes a respective valid signal, protocol identifier signal, virtual channel identifier signal, and header signal, where the valid signal is to be asserted for a valid instance of the header signal, the header signal includes a header of a particular message, the protocol identifier signal identifies a protocol associated with the header, and the virtual channel identifier signal identifies a virtual channel used for the particular message.
Example 94 includes the subject matter of Example 93, where the set of coherent interconnect protocols includes a plurality of protocols, and the protocol identifier signal identifies one of the plurality of protocols as associated with the header.
Example 95 includes the subject matter of Example 94, where the plurality of protocols includes a Compute Express Link (CXL) protocol, and the CXL protocol includes the CXL.cache protocol and the CXL.mem protocol.
Example 96 includes the subject matter of any one of Examples 94-95, where the header signal has a width to support the largest header format of the plurality of protocols.
Example 97 includes the subject matter of any one of Examples 93-96, where the plurality of signals of the data channel further includes a payload data signal to carry payload data, and the payload data signal includes a plurality of lanes.
Example 98 includes the subject matter of Example 97, where the payload data signal corresponds to the header signal, and the payload data signal is to be sent a number of clock cycles after the header signal is sent.
Example 99 includes the subject matter of Example 98, where the number of clock cycles includes a configurable parameter of the interface.
Example 100 includes the subject matter of any one of Examples 93-99, where the plurality of signals of each of the request channel, the response channel, and the data channel further includes signals to support receipt of credit returns associated with the respective channel.
Example 101 includes the subject matter of Example 100, where credits are returned in a credit return signal at least partially in parallel with transmission of a message using the header signal.
Example 102 includes the subject matter of any one of Examples 100-101, where the credit returns include virtual channel dedicated credit returns and shared credit returns.
Example 103 includes the subject matter of any one of Examples 93-102, where the plurality of signals of each of the request channel, the response channel, and the data channel further includes a blocking signal to receive a blocking request, where the blocking request causes deassertion of the valid signal of the corresponding channel.
Example 104 includes the subject matter of Example 103, where the valid signal is to be deasserted within a defined number of clock cycles after the blocking signal is asserted.
Example 105 includes the subject matter of Example 104, where the defined number of clock cycles includes a configurable parameter of the interface.
Example 106 includes the subject matter of any one of Examples 91-105, where the global channel includes an agent-to-fabric instance of the global channel, the request channel includes an agent-to-fabric instance of the request channel, the response channel includes an agent-to-fabric instance of the response channel, and the data channel includes an agent-to-fabric instance of the data channel, and the interface further includes: a fabric-to-agent instance of the global channel assigned to a fifth plurality of physical lanes; a fabric-to-agent instance of the request channel assigned to a sixth plurality of physical lanes; a fabric-to-agent instance of the response channel assigned to a seventh plurality of physical lanes; and a fabric-to-agent instance of the data channel assigned to an eighth plurality of physical lanes.
Example 107 includes the subject matter of any one of Examples 87-106, where the set of protocols includes a plurality of protocols, and the request channel, the response channel, and the data channel support messages of each of the plurality of protocols.
Example 108 includes the subject matter of any one of Examples 87-107, where the interface includes a second instance of one of the request channel, the response channel, or the data channel.
Example 109 includes the subject matter of any one of Examples 87-108, where the global channel includes a set of signals to initialize the interface.
Example 110 includes the subject matter of Example 109, where initialization of the interface is according to a state machine, the state machine includes a plurality of initialization states for the interface, and values of the set of signals cause transitions between the plurality of initialization states.
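Examples 109-110 (like Examples 21-22 and 60-63 before them) describe initialization as a state machine driven by a set of global-channel signals, with traffic and the initial credit exchange following completion. The sketch below is only a guess at the shape of such a machine: the state names and the two driving signals are invented for illustration and are not taken from this disclosure.

    #include <stdbool.h>

    enum init_state { INIT_RESET, INIT_CONNECTING, INIT_READY };

    /* Hypothetical global-channel initialization signals. */
    struct init_signals { bool conn_req; bool conn_ack; };

    enum init_state init_step(enum init_state s, struct init_signals g)
    {
        switch (s) {
        case INIT_RESET:       /* wait for a connection request */
            return g.conn_req ? INIT_CONNECTING : INIT_RESET;
        case INIT_CONNECTING:  /* on ack, init completes; per Example 63,
                                  initial flow-control credits would then
                                  be sent on each channel's credit lanes */
            return g.conn_ack ? INIT_READY : INIT_CONNECTING;
        default:
            return INIT_READY; /* messages may now be exchanged */
        }
    }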
Example 111 includes the subject matter of any one of Examples 87-110, further including a compute block circuit, where the compute block circuit implements a compute block within a system on chip (SoC), and the interconnect fabric includes an interconnect fabric of the SoC.
Example 112 includes the subject matter of Example 111, where the compute block circuit includes a data processor.
Example 113 includes the subject matter of Example 111, where the compute block circuit includes computer memory.
Example 114 includes the subject matter of any one of Examples 1-113, where the interface includes an unequal number of request channels, response channels, and data channels.
Example 115 includes the subject matter of any one of Examples 1-114, where the interface includes at least one instance of each of the request channel, the response channel, and the data channel.
Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
In the foregoing specification, a detailed description has been given with reference to specific exemplary embodiments. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense. Furthermore, the foregoing use of "embodiment" and other exemplary language does not necessarily refer to the same embodiment or the same example, but may refer to different and distinct embodiments, as well as potentially the same embodiment. |
An interlevel dielectric structure includes first and second dielectric layers between which are located lines of a conductive material, with a dielectric material in the spaces between the lines of conductive material, the lower surface of the dielectric material extending lower than the lower surface of the lines of conductive material adjacent thereto, and the upper surface of the dielectric material extending higher than the upper surface of the lines of conductive material adjacent thereto, thus reducing fringe and total capacitance between the lines of conductive material. The dielectric material, which has a dielectric constant of less than about 3.6, does not extend directly above the upper surface of the lines of conductive material, allowing formation of subsequent contacts down to the lines of conductive material without exposing the dielectric material to further processing. Various methods for forming the interlevel dielectric structure are disclosed. |
1. A method of forming an interlevel dielectric comprising the steps of: providing a first dielectric layer over a surface of a substrate situated on a semiconductor wafer; depositing a conductive layer on said first dielectric layer, the conductive layer having an upper surface and a lower surface; depositing an additional layer on said conductive layer; patterning said conductive layer and said additional layer by: forming a patterned mask layer on said additional layer; and etching through said additional layer and said conductive layer and into said first dielectric layer, leaving a space between adjacent remaining portions of said conductive layer, said adjacent remaining portions of said conductive layer forming lines of conductive material; depositing a layer of dielectric material having a dielectric constant of less than about 3.6 to fill said space, the layer of dielectric material extending above the upper surface of the adjacent lines of conductive material and below the lower surface of the adjacent lines of conductive material sufficient to reduce the fringe capacitance therebetween, but not extending directly over or under the upper and lower surfaces of the adjacent lines of conductive material; removing said layer of dielectric material from the top thereof downward at least to the level of the top of said additional layer; and depositing a second dielectric layer over all layers on said surface of said substrate.
2. The method as defined in claim 1, further comprising the step, to be performed after said step of removing said layer of dielectric material and before said step of depositing a second dielectric layer, of removing said additional layer on said lines of conductive material.
3. The method as defined in claim 2, wherein said additional layer comprises titanium.
4. The method as defined in claim 2, wherein said additional layer comprises TiN.
5. The method as defined in claim 1, wherein at least one of said first and second dielectric layers comprises silicon dioxide.
6. The method as defined in claim 1, wherein said dielectric material comprises PTFE.
7. The method as defined in claim 1, wherein said additional layer comprises silicon dioxide.
8. The method as defined in claim 1, wherein said step of removing said layer of dielectric material comprises an etch back step.
9. The method as defined in claim 1, wherein said step of removing said layer of dielectric material comprises a chemical mechanical polishing step.
10. The method as defined in claim 1, wherein said conductive material is selected from the group consisting of polysilicon, aluminum, copper, tungsten, and multiple layers of TiN/Al/TiN, TiN/Al/Ti, W/TiN/Ti, or any combinations thereof.
11. A method of forming an interlevel dielectric comprising the steps of: providing a first dielectric layer over a surface of a substrate situated on a semiconductor wafer; depositing a conductive layer on said first dielectric layer, the conductive layer having a lower surface and an upper surface; patterning said conductive layer by: forming a mask layer on said conductive layer; and etching through said conductive layer and into said first dielectric layer, leaving a space between adjacent remaining portions of said conductive layer that extends below the lower surface of said conductive layer, said adjacent remaining portions of said conductive layer forming lines of conductive material each having an upper surface; depositing an additional layer on the upper surfaces of the lines of conductive material and on said first dielectric layer; depositing a layer of dielectric material having a dielectric constant of less than about 3.6 to fill said space, the layer of dielectric material extending above the upper surface of the lines of conductive material and below the lower surface of the lines of conductive material but not directly over or under the upper and lower surfaces of the lines of conductive material; removing said layer of dielectric material from the top thereof downward at least to the level of the top of said additional layer; and depositing a second dielectric layer over all layers on said surface of said substrate.
12. The method as defined in claim 11, wherein depositing an additional layer comprises depositing a layer of silicon dioxide by silane and oxygen based plasma enhanced chemical vapor deposition.
13. The method as defined in claim 11, further comprising the step, to be performed after depositing an additional layer and before depositing a layer of dielectric material, of etching said additional layer.
14. The method as defined in claim 13, wherein said additional layer has a top surface extending to a lateral surface at a corner, and wherein said step of performing an etch on said additional layer etches the corner of the additional layer faster than the top surface of the additional layer.
15. The method as defined in claim 13, wherein said additional layer comprises silicon dioxide and wherein said step of performing an etch on said additional layer etches in an argon or an argon-plus-fluorine based plasma.
16. The method as defined in claim 11, wherein at least one of said first and second dielectric layers comprises silicon dioxide.
17. The method as defined in claim 11, wherein said dielectric material comprises PTFE.
18. The method as defined in claim 11, wherein said additional layer comprises silicon dioxide.
19. The method as defined in claim 11, wherein said step of removing said layer of dielectric material comprises an etch back step.
20. The method as defined in claim 11, wherein said step of removing said layer of dielectric material comprises a chemical mechanical polishing step.
21. The method as defined in claim 11, wherein said conductive material is selected from the group consisting of polysilicon, aluminum, and copper.
22. The method as defined in claim 11, wherein the layer of dielectric material having a dielectric constant of less than about 3.6 extends both above and below the adjacent lines of conductive material sufficient to reduce the fringe capacitance therebetween.
23. A method of forming an interlevel dielectric comprising the steps of: providing a first dielectric layer over a surface of a substrate situated on a semiconductor wafer; depositing a metal layer on said first dielectric layer, the metal layer having a lower surface and an upper surface; patterning said metal layer by: forming a mask layer on said metal layer; and etching through said metal layer and into said first dielectric layer, leaving a space between adjacent remaining portions of said metal layer that extends below the lower surface of said metal layer, said adjacent remaining portions of said metal layer forming metal lines each having an upper surface; depositing a thin layer of silicon dioxide conformably over said metal lines and selectively on said upper surfaces of said metal lines; depositing a layer of dielectric material having a dielectric constant of less than about 3.6 to fill said space, the layer of dielectric material extending above the upper surface of the lines of conductive material and below the lower surface of the lines of conductive material but not directly over or under the upper and lower surfaces of the lines of conductive material; removing said layer of dielectric material from the top thereof downward at least to the level of the top of said additional layer; and depositing a second dielectric layer over all layers on said surface of said substrate.
24. The method as defined in claim 23, wherein said step of depositing a layer of silicon dioxide conformably over said metal lines and selectively on said upper surfaces of said metal lines comprises an ozone-based TEOS deposition.
25. The method as defined in claim 23, wherein said metal lines comprise aluminum with a titanium nitride film on said upper surface of said metal lines.
26. The method as defined in claim 23, further comprising the step, to be performed after depositing a layer of silicon dioxide conformably over said metal lines and before depositing a layer of dielectric material, of etching said additional layer.
27. The method as defined in claim 26, wherein said additional layer has a top surface extending to a lateral surface at a corner, and wherein said step of performing an etch on said additional layer etches the corner of the additional layer faster than the top surface of the additional layer.
28. The method as defined in claim 27, wherein said additional layer comprises silicon dioxide and wherein said step of performing an etch on said additional layer etches in an argon or an argon-plus-fluorine based plasma.
29. The method as defined in claim 23, wherein at least one of said first and second dielectric layers comprises silicon dioxide.
30. The method as defined in claim 23, wherein said dielectric material comprises PTFE.
31. The method as defined in claim 23, wherein said additional layer comprises silicon dioxide.
32. The method as defined in claim 23, wherein said step of removing said layer of dielectric material comprises an etch back step.
33. The method as defined in claim 23, wherein said step of removing said layer of dielectric material comprises a chemical mechanical polishing step.
34. The method as defined in claim 23, wherein said metal layer comprises at least one of aluminum or copper.
35. The method as defined in claim 23, wherein the layer of dielectric material having a dielectric constant of less than about 3.6 extends both above and below the adjacent lines of conductive material sufficient to reduce the fringe capacitance therebetween.
36. A method of forming an interlevel dielectric comprising: providing a first dielectric layer over a surface of a substrate; forming a conductive layer on said first dielectric layer, the conductive layer having a lower surface and an upper surface; forming an additional layer on said conductive layer; forming, from the conductive layer, lines of conductive material having spaces therebetween that extend below the lower surface of said conductive layer; filling the spaces between the lines of conductive material with dielectric material having a dielectric constant of less than about 3.6; and forming a second dielectric layer on the additional layer, wherein said second dielectric layer and said additional layer are formed of the same material; wherein portions of the dielectric material having a dielectric constant of less than about 3.6 extend both above and below the adjacent lines of conductive material but do not extend directly over or under the upper and lower surfaces of the lines of conductive material.
37. A method of forming an interlevel dielectric that reduces fringe capacitance between adjacent lines of conductive material, the method comprising: providing a first dielectric layer over a surface of a substrate; forming a conductive layer on said first dielectric layer, the conductive layer having a lower surface; forming an additional layer on said conductive layer; etching through said additional layer and said conductive layer in a single etch step and into said first dielectric layer, leaving a space between adjacent remaining portions of said conductive layer that extends below the lower surface of said conductive layer, said adjacent remaining portions of said conductive layer forming lines of conductive material; filling the spaces between adjacent remaining portions of said conductive layer with dielectric material having a dielectric constant of less than about 3.6; and forming a second dielectric layer on the additional layer, wherein said second dielectric layer and said additional layer are formed of the same material; wherein the dielectric material having a dielectric constant of less than about 3.6 extends both above and below, but not directly over, the respective adjacent lines of conductive material sufficient to reduce the fringe capacitance therebetween. |
This application is a divisional of application Ser. No. 09/249,659, filed on Feb. 12, 1999, now U.S. Pat. No. 6,107,686, which is a divisional application of Ser. No. 08/677,514, filed on Jul. 10, 1996, now U.S. Pat. No. 6,107,183, both of which are incorporated herein by reference.
BACKGROUND OF THE INVENTION
1. The Field of the Invention
The present invention relates to the design and manufacture of interlevel dielectrics in the manufacture of semiconductor devices. More particularly, the present invention relates to the design and manufacture of interlevel dielectrics in the manufacture of semiconductor devices in which the dielectric constant of the interlevel dielectric is less than about 3.6.
2. The Relevant Technology
The continuing trend in the semiconductor industry of squeezing more and more circuit devices into a given area has resulted in significant improvements in the performance of individual integrated circuits and of electronic devices that employ integrated circuits. In a typical integrated circuit, individual circuit elements or groups of elements are generally electrically connected together by a metallization process, in which layers of metal are deposited and patterned to form metal lines which complete the circuit as designed. Multiple metal layers are often employed. Metal lines within patterned metal layers are insulated by layers known as interlevel dielectrics. The interlevel dielectrics insulate the metal lines from any undesired electrical contact both with other metal lines, whether in the same or another metal layer, and with other circuit elements.
The capacitance between two conductive materials is affected by the material between them as well as by the distance between them. The ratio of the capacitance between two conductors with a given material between them to the capacitance of the same two conductors with nothing (a vacuum) between them is known as the dielectric constant of the given material. Thus a material with a high dielectric constant placed between two conductors increases the capacitance between the two conductors.
The increasing density of integrated circuits has resulted in unneeded coupling capacitance between metal lines in an integrated circuit. The unneeded capacitance slows circuit performance by causing too much buildup of charge where none is needed, thus slowing the buildup of charge at circuit elements where it is needed.
One way to decrease unneeded capacitance between metal lines in an integrated circuit is to decrease the dielectric constant of the material between them. Silicon dioxide, the material of choice for interlevel dielectrics, has a relatively high dielectric constant. Replacing silicon dioxide with a material having a lower dielectric constant would thus provide reduced capacitance. Usable materials having a low dielectric constant (e.g., less than about 3.6) are generally much less stable than silicon dioxide and are thus unable to reliably protect the metal lines and unable to withstand further processing.
One way to gain some of the benefits of low dielectric constant materials is shown in FIG. 1. FIG. 1 is a partial cross section of a partially formed integrated circuit device. A substrate or lower layer 12 has a first dielectric layer 14 comprised of a traditional dielectric material such as silicon dioxide. Lines of conductive material 16, typically metal, overlie first dielectric layer 14.
A dielectric material 18 with a dielectric constant lower than that of silicon dioxide is located in the spaces between lines of conductive material 16. Lines of conductive material 16 together with low dielectric constant dielectric material 18 are covered by a second dielectric layer 21 comprised of a traditional dielectric material such as silicon dioxide. Second dielectric layer 21 together with first dielectric layer 14 isolate low dielectric constant dielectric material 18 from other portions of the integrated circuit. Second dielectric layer 21 allows further processing, including formation of contact holes for contacting lines of conductive material 16, such as contact hole 46, without exposing dielectric material 18 to processing agents.
While the structure shown in FIG. 1 results in decreased capacitance between adjacent pairs of metal lines, further decrease is needed to allow increasing miniaturization and high speed operation of ever denser integrated circuits.
SUMMARY OF THE INVENTION
In accordance with the present invention, an interlevel dielectric structure includes first and second dielectric layers between which are located lines of a conductive material with a dielectric material in spaces between the lines of conductive material, with the lower surface of the dielectric material extending lower than the lower surface of lines of conductive material adjacent thereto, and the upper surface of the dielectric material extending higher than the upper surface of lines of conductive material adjacent thereto, thus reducing fringe and total capacitance between the lines of conductive material. The dielectric material, which has a dielectric constant of less than about 3.6, does not extend directly above the upper surface of the lines of conductive material, allowing formation of subsequent contacts down to the lines of conductive material without exposing the dielectric material to further processing.
A preferred method for forming the interlevel dielectric structure includes providing an additional layer on a conductive layer on a first dielectric layer, then patterning both the additional layer and the conductive layer with an over etch into but not through the first dielectric layer, to form conductive lines with spaces therebetween. A dielectric material is then deposited to fill the spaces and is then etched or chemically mechanically polished back to the additional layer on the conductive layer. The additional layer on the conductive layer is then optionally removed before a second dielectric layer is deposited over all.
Another preferred method for forming the interlevel dielectric structure includes providing a conductive layer on a first dielectric layer, then patterning the conductive layer with an over etch into but not through the first dielectric layer to form conductive lines with spaces therebetween. An additional layer is then deposited by a method providing poor step coverage. The additional layer is then optionally etched, and a dielectric material is then deposited in the spaces. The dielectric material is then etched or chemically mechanically polished back to the additional layer. The additional layer is then optionally removed before a second dielectric layer is deposited over all.
Yet another preferred method for forming the interlevel dielectric structure includes providing a metal layer on a first dielectric layer, then patterning the metal layer with an over etch into but not through the first dielectric layer to form metal lines with spaces therebetween.
A thin layer of silicon dioxide is then deposited by a method providing preferential deposition on the upper surfaces of the metal lines. The thin layer of silicon dioxide is then optionally etched, and a dielectric material is then deposited to fill the spaces and is then etched or chemically mechanically polished back. A second dielectric layer is then deposited over all.

The methods briefly described above allow reliable formation of a desired interlevel dielectric structure, which structure provides the reduced total capacitance between adjacent conductive lines needed for further miniaturization of integrated circuits.

BRIEF DESCRIPTION OF THE DRAWINGS

In order that the manner in which the above-recited and other advantages and objects of the invention are obtained may be more fully explained, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments and applications thereof which are illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments and applications of the invention and are not therefore to be considered to be limiting of its scope, the invention will be described and explained with additional specificity and detail through the use of the accompanying drawings in which:

FIG. 1 is a partial cross section of a partially formed integrated circuit device.

FIG. 2 is a partial cross section of a partially formed integrated circuit device having a structure formed during the practice of a method of the present invention.

FIG. 3 is a partial cross section of a partially formed integrated circuit device for use with a method of the present invention.

FIG. 4 is a cross section of the structure shown in FIG. 3 after further processing, and having a structure formed by a method of the present invention.

FIG. 5 is a partial cross section of a partially formed integrated circuit device showing features formed during the practice of a method of the present invention.

FIG. 6 is a partial cross section of a partially formed integrated circuit device depicting facet etching of a bread-loafed dielectric on metallization lines.

FIG. 7 is a cross section of the structure shown in FIG. 5 after further processing, and having a structure formed by a method of the present invention.

FIG. 8 is a partial cross section of a partially formed integrated circuit device showing features formed during the practice of a method of the present invention.

FIG. 9 is a cross section of the structure shown in FIG. 8 after further processing, and having a structure formed by a method of the present invention.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

The present invention introduces an interlevel dielectric structure having a dielectric material between conductive lines, with a lower surface of the dielectric material below a lower surface of the conductive lines and an upper surface of the dielectric material above an upper surface of the conductive lines. The present invention also provides various methods for constructing the inventive structure. Because silica glass is used extensively in this art as a dielectric, and its dielectric constant is about 3.8, we define the interlevel dielectric material as one having a dielectric constant below about 3.6, preferably below about 2.9, and most preferably below about 2.2.

A preferred embodiment of the structure of the present invention is shown in FIG. 2.
A substrate or underlying layer(s) 12 of a semiconductor device is overlaid with a first dielectric layer 14, typically comprised of silicon dioxide, and having an upper surface 22. Lines of conductive material 16 with spaces therebetween extend (perpendicular to the plane of FIG. 2) along upper surface 22 of first dielectric layer 14. Each of the lines of conductive material 16 has a lower surface 24 and an upper surface 26, with lower surfaces 24 being in contact with upper surface 22 of first dielectric layer 14. Lines of conductive material 16 are typically a metal such as aluminum or copper, but may be comprised of other conductive materials such as polysilicon or tungsten, or of multiple layers of TiN/Al/TiN, TiN/Al/Ti, or W/TiN/Ti, or any combinations thereof.

A second dielectric layer 20 overlies lines of conductive material 16, with a lower surface 28 of second dielectric layer 20 being in contact with upper surfaces 26 of lines of conductive material 16.

Dielectric material 17, comprised of polytetrafluoroethylene (PTFE) or another suitable material, is situated in the spaces between lines of conductive material 16. Dielectric material 17 has an upper surface 32 higher than the upper surfaces 26 of lines of conductive material 16 adjacent thereto, and a lower surface 30 lower than the lower surfaces 24 of lines of conductive material 16 adjacent thereto.

The extension of dielectric material 17 below and above lines of conductive material 16 significantly reduces capacitance between adjacent pairs of lines of conductive material 16. The electric field formed by a potential difference applied across an adjacent pair of lines of conductive material 16 is strongest in a direct line and centrally between the adjacent pair, such as along dashed line N in FIG. 2. But the electric field so formed also extends to a fringe area not in a direct line between the adjacent pair, such as along dashed line F in FIG. 2. The field in this area, called the fringe, is associated with a portion of the total capacitance between the adjacent pair, the portion called herein "fringe capacitance."

The portion of the total capacitance included in fringe capacitance increases as the aspect ratio (height/width) of lines of conductive material 16 decreases, and can be a significant fraction of total capacitance at low aspect ratios. The extension of dielectric material 17 below and above lines of conductive material 16 provides a low dielectric constant material in the fringe areas of the electric field, thus reducing fringe capacitance and total capacitance accordingly.

While dielectric material 17 extends below and above lines of conductive material 16, it does not extend directly over surface 26 or under surface 24. This allows formation of contact holes, such as contact hole 48, without exposing dielectric material 17 to processing agents that could degrade dielectric material 17 or upper surface 26 at contact hole 48.

The above structure and variations thereon may be formed in a variety of ways, presently preferred examples of which will be described below. One preferred method of forming a structure of the present invention includes providing a first dielectric layer 14 over the surface of a substrate or an underlying layer 12, then forming a conductive layer 34 and an additional layer 36 thereover, as shown in FIG. 3.
Conductive layer 34 and additional layer 36 are then patterned by forming and patterning a mask layer over additional layer 36, and then etching additional layer 36, conductive layer 34, and a portion of first dielectric layer 14 at areas that are left exposed through the mask layer. This results in spaces between adjacent remaining portions of conductive layer 34.

Dielectric material 17 is then deposited to fill these spaces, and is then removed from the top downward to at least the top of the remaining portions of additional layer 36 by an etch back or by chemical mechanical polishing. A second dielectric layer 21 is then deposited over the substrate, resulting in the structure shown in FIG. 4.

In FIG. 4, lines of conductive material 16 are formed of the remaining portions of conductive layer 34. Dielectric material 17 is deposited between lines of conductive material 16. If additional layer 36 is comprised of a suitable dielectric such as silicon dioxide, the remaining portions of additional layer 36 may be incorporated into the inventive structure as shown. Thus the remaining portion of additional layer 36 in FIG. 4, together with second dielectric layer 21, corresponds to the depiction seen in FIG. 2 as second dielectric layer 20.

If additional layer 36 is not a dielectric, such as if titanium is used, for example, then the remaining portions of layer 36 shown in FIG. 4 are removed by an appropriate process immediately before the deposition of second dielectric layer 20. This alternative additional process step results in a structure like that which is shown in FIG. 2.

Another preferred method of forming a structure of the present invention includes providing a first dielectric layer over a substrate or an underlying layer, then depositing and patterning a conductive layer over the first dielectric layer. During patterning of the conductive layer, the conductive layer is over etched such that the first dielectric layer is partially etched with the same pattern. Next, an additional layer is deposited over the patterned conductive layer by a deposition method having poor step coverage.

The results of the above steps are shown in FIG. 5. First dielectric layer 14 has been formed on substrate or underlying layer 12, and a conductive layer has been deposited and patterned, leaving lines of conductive material 16. Additional layer 38 has been deposited by a deposition method having poor step coverage. This results in additional layer 38 being formed substantially only on the upper surfaces of lines of conductive material 16 as shown.

If additional layer 38 is comprised of a suitable dielectric material, the further process steps may proceed as before, with deposition and partial top-down removal of dielectric material 17 and deposition of second dielectric layer 21, resulting in the structure shown in FIG. 7. The remaining portions of additional layer 38 are incorporated into the inventive structure as shown, so that the remaining portion of additional layer 38 in FIG. 5, together with second dielectric layer 21 in FIG. 7, corresponds to the depiction seen in FIG. 2 as second dielectric layer 20.

Silicon dioxide is the currently preferred material for additional layer 38, with deposition by a silane and oxygen plasma enhanced chemical vapor deposition (PECVD) process being the preferred poor step coverage deposition method.

FIG. 6 illustrates an optional etch step that may be included immediately after deposition of additional layer 38 to remove lateral buildup of additional layer 38.
The preferred etch is a facet etch, and is preferably performed in an argon or an argon-plus-fluorine based plasma. In a facet etch, additional layer 38 is etched more slowly at a top surface thereof than it is etched at a corner thereof which connects the top surface to a lateral surface thereof. The facet etch has the effect of removing substantially all of the lateral buildup portions of additional layer 38, and the removed portions redeposit in semi-triangular form at the base of the interface between lines of conductive material 16 and first dielectric layer 14. A continuous but thin lateral layer of additional layer 38 also deposits down the sides of lines of conductive material 16. Further processing as above then results in a structure like that which is shown in FIG. 4, with the remaining portions of additional layer 36 corresponding to the remaining portions of additional layer 38. The redeposited fraction of additional layer 38, however, remains thinly on the sides of lines of conductive material 16 and first dielectric layer 14.

If additional layer 38 is not a dielectric, or is otherwise not suitable to remain in place in the inventive structure, then additional layer 38 is removed by an appropriate process immediately before the deposition of second dielectric layer 21. This alternative additional process step results in a structure that is like that shown in FIG. 2.

In yet another presently preferred method for forming a structure of the present invention, a first dielectric layer is provided over a substrate or an underlying layer, then a metal layer is deposited and patterned to form metal lines over the first dielectric layer. During patterning of the metal layer, the metal layer is over etched such that the first dielectric layer is partially etched with the same pattern. A thin silicon dioxide layer is then deposited over the metal lines by a deposition process that deposits preferentially on the upper surface of the metal lines.

The above process results generally in the structure shown in FIG. 8. First dielectric layer 14 is formed on substrate 12. Metal lines, in the preferred form of aluminum lines 40, have been formed on first dielectric layer 14, and first dielectric layer 14 has been over etched in the same pattern as aluminum lines 40. A titanium nitride film 42 from a photolithography process used to pattern aluminum lines 40 remains on the upper surface of aluminum lines 40. While not required, inclusion of titanium nitride film 42 is presently preferred.

The preferred deposition process for selectively depositing a thin silicon dioxide layer 44 is an ozone based TEOS process, which deposits preferentially on TiN over silicon dioxide. Preferably, silicon dioxide layer 44 will be deposited only on titanium nitride film 42 and not on the sidewalls of aluminum lines 40, as shown in FIG. 8.

After deposition of silicon dioxide layer 44, the process may continue as with the other above processes by deposition and partial removal of a dielectric material 17, followed by deposition of second dielectric layer 21, resulting in the structure shown in FIG. 9. Silicon dioxide layer 44 is incorporated into the inventive structure as shown, so that silicon dioxide layer 44 together with second dielectric layer 21 correspond to the depiction seen in FIG. 2 as second dielectric layer 20.
As an alternative process step, an etch such as a facet etch in an argon or an argon-plus-fluorine based plasma may be performed on silicon dioxide layer 44 after the deposition thereof.

The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims and their combination in whole or in part rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope. |
A method, system, and computer program product are provided for adjusting write timing in a memory device based on results of an error detection function. For instance, the method can include determining a write timing window between a signal on a data bus and a write clock signal based on the results of the error detection function. The method can also include adjusting a phase difference between the signal on the data bus and the write clock signal based on the write timing window. The memory device can recover data on the data bus based on the adjusted phase difference. |
WHAT IS CLAIMED IS:

1. A method for adjusting write timing in a memory device, comprising: determining a write timing window between a signal on a data bus and a write clock signal based on results of an error detection function; and adjusting a phase difference between the signal on the data bus and the write clock signal based on the write timing window, wherein the memory device recovers the signal on the data bus based on the adjusted phase difference.

2. The method of claim 1, further comprising: configuring the memory device in an error detection mode of operation.

3. The method of claim 2, wherein configuring the memory device in the error detection mode of operation comprises configuring the memory device in a write mode of operation during the error detection mode of operation.

4. The method of claim 1, wherein determining the write timing window comprises: performing an error detection function on a first data pattern to generate a first result from the error detection function; transmitting the first data pattern on a data bus to the memory device; receiving a second result from the memory device, wherein the second result is based on the error detection function performed on a second data pattern, the second data pattern comprising the first data pattern received at an interface of the memory device based on the write clock signal; comparing the first result to the second result to determine whether the first and second results match each other; and determining a first timing boundary and a second timing boundary of the write timing window based on the comparison of the first and second results.

5. The method of claim 4, wherein performing the error detection function comprises performing at least one of a parity function and a checksum function.

6. The method of claim 4, wherein if the first and second results match each other, determining the first timing boundary and the second timing boundary of the timing window comprises: iteratively repeating a sequence of the transmitting, receiving, and comparing steps for positive incremental phase shifts in the first data pattern to determine the first timing boundary, wherein the first timing boundary is defined by the last positive incremental phase shift in the first data pattern in which the first result from the error detection function performed on the positive phase-shifted first data pattern matches the corresponding second result; and iteratively repeating the sequence of the transmitting, receiving, and comparing steps for negative incremental phase shifts in the first data pattern to determine the second timing boundary, wherein the second timing boundary is defined by the last negative incremental phase shift in the first data pattern in which the first result from the error detection function performed on the negative phase-shifted first data pattern matches the corresponding second result.
7. The method of claim 4, wherein if the first and second results match each other, determining the first timing boundary and the second timing boundary of the timing window comprises: iteratively repeating a sequence of the transmitting, receiving, and comparing steps for positive incremental phase shifts in the write clock signal to determine the first timing boundary, wherein the first timing boundary is defined by the last positive incremental phase shift in the write clock signal in which the first result from the error detection function performed on the first data pattern matches the corresponding second result; and iteratively repeating the sequence of the transmitting, receiving, and comparing steps for negative incremental phase shifts in the write clock signal to determine the second timing boundary, wherein the second timing boundary is defined by the last negative incremental phase shift in the write clock signal in which the first result from the error detection function performed on the first data pattern matches the corresponding second result.

8. The method of claim 4, wherein if the first and second results do not match each other, determining the first timing boundary and the second timing boundary of the timing window comprises: iteratively repeating a sequence of the transmitting, receiving, and comparing steps for positive incremental phase shifts in the first data pattern to determine the first timing boundary, wherein the first timing boundary is defined by the first positive incremental phase shift in the first data pattern in which the first result from the error detection function performed on the positive phase-shifted first data pattern matches the corresponding second result; and starting at the first timing boundary, iteratively repeating the sequence of the transmitting, receiving, and comparing steps for positive incremental phase shifts in the first data pattern to determine the second timing boundary, wherein the second timing boundary is defined by the last positive incremental phase shift in the first data pattern in which the first result from the error detection function performed on the positive phase-shifted first data pattern matches the corresponding second result.

9. The method of claim 4, wherein if the first and second results do not match each other, determining the first timing boundary and the second timing boundary of the timing window comprises: iteratively repeating a sequence of the transmitting, receiving, and comparing steps for positive incremental phase shifts in the write clock signal to determine the first timing boundary, wherein the first timing boundary is defined by the first positive incremental phase shift in the write clock signal in which the first result from the error detection function performed on the first data pattern matches the corresponding second result; and starting at the first timing boundary, iteratively repeating the sequence of the transmitting, receiving, and comparing steps for positive incremental phase shifts in the write clock signal to determine the second timing boundary, wherein the second timing boundary is defined by the last positive incremental phase shift in the write clock signal in which the first result from the error detection function performed on the first data pattern matches the corresponding second result.
10. The method of claim 4, wherein comparing the first result to the second result comprises comparing each bit in the first result to each corresponding bit in the second result to determine whether the first and second results match each other.

11. The method of claim 1, wherein adjusting the phase difference comprises introducing a phase delay in at least one of the signal on the data bus, the write clock signal, and both the signal on the data bus and the write clock signal.

12. A method for adjusting write timing in a memory device, comprising: receiving a first data pattern transmitted from a processing unit; performing an error detection function on a second data pattern to generate a first result, wherein the second data pattern comprises the first data pattern received at an interface of the memory device based on a write clock signal; transmitting the first result to the processing unit; and receiving a signal on a data bus, wherein a phase difference between the signal on the data bus and the write clock signal is within a write timing window, the write timing window based on a comparison between the first result and a second result from the error detection function performed on the first data pattern.

13. The method of claim 12, further comprising operating in an error detection mode of operation.

14. The method of claim 12, wherein receiving the signal on the data bus comprises receiving the signal on the data bus that is between a first boundary and a second boundary of the write timing window.

15. The method of claim 12, wherein performing the error detection function comprises performing at least one of a parity function and a checksum function.

16. A system comprising: a memory device; and a processing unit coupled to the memory device and configured to: determine a write timing window between a signal on a data bus and a write clock signal based on results of an error detection function; and adjust a phase difference between the signal on the data bus and the write clock signal based on the write timing window, wherein the memory device recovers the signal on the data bus based on the adjusted phase difference.

17. The system of claim 16, wherein the processing unit is configured to place the memory device in an error detection mode of operation during a write mode of operation.

18. The system of claim 16, wherein the processing unit is configured to: perform an error detection function on a first data pattern to generate a first result from the error detection function; transmit the first data pattern on a data bus to the memory device; receive a second result from the memory device, wherein the second result is based on the error detection function performed on a second data pattern, the second data pattern comprising the first data pattern received at an interface of the memory device based on the write clock signal; compare the first result to the second result to determine whether the first and second results match each other; and determine a first timing boundary and a second timing boundary of the write timing window based on the comparison of the first and second results.

19. The system of claim 18, wherein the processing unit is configured to perform at least one of a parity function and a checksum function when performing the error detection function.
20. The system of claim 18, wherein if the first and second results match each other, the processing unit is configured to determine the first timing boundary and the second timing boundary of the timing window based on: iteratively repeating a sequence of the transmit, receive, and compare functions for positive incremental phase shifts in the first data pattern to determine the first timing boundary, wherein the first timing boundary is defined by the last positive incremental phase shift in the first data pattern in which the first result from the error detection function performed on the positive phase-shifted first data pattern matches the corresponding second result; and iteratively repeating the sequence of the transmit, receive, and compare functions for negative incremental phase shifts in the first data pattern to determine the second timing boundary, wherein the second timing boundary is defined by the last negative incremental phase shift in the first data pattern in which the first result from the error detection function performed on the negative phase-shifted first data pattern matches the corresponding second result.

21. The system of claim 18, wherein if the first and second results match each other, the processing unit is configured to determine the first timing boundary and the second timing boundary of the timing window based on: iteratively repeating a sequence of the transmit, receive, and compare functions for positive incremental phase shifts in the write clock signal to determine the first timing boundary, wherein the first timing boundary is defined by the last positive incremental phase shift in the write clock signal in which the first result from the error detection function performed on the first data pattern matches the corresponding second result; and iteratively repeating the sequence of the transmit, receive, and compare functions for negative incremental phase shifts in the write clock signal to determine the second timing boundary, wherein the second timing boundary is defined by the last negative incremental phase shift in the write clock signal in which the first result from the error detection function performed on the first data pattern matches the corresponding second result.

22. The system of claim 18, wherein if the first and second results do not match each other, the processing unit is configured to determine the first timing boundary and the second timing boundary of the timing window based on: iteratively repeating a sequence of the transmit, receive, and compare functions for positive incremental phase shifts in the first data pattern to determine the first timing boundary, wherein the first timing boundary is defined by the first positive incremental phase shift in the first data pattern in which the first result from the error detection function performed on the positive phase-shifted first data pattern matches the corresponding second result; and starting at the first timing boundary, iteratively repeating the sequence of the transmit, receive, and compare functions for positive incremental phase shifts in the first data pattern to determine the second timing boundary, wherein the second timing boundary is defined by the last positive incremental phase shift in the first data pattern in which the first result from the error detection function performed on the positive phase-shifted first data pattern matches the corresponding second result.
23. The system of claim 18, wherein if the first and second results do not match each other, the processing unit is configured to determine the first timing boundary and the second timing boundary of the timing window based on: iteratively repeating a sequence of the transmit, receive, and compare functions for positive incremental phase shifts in the write clock signal to determine the first timing boundary, wherein the first timing boundary is defined by the first positive incremental phase shift in the write clock signal in which the first result from the error detection function performed on the first data pattern matches the corresponding second result; and starting at the first timing boundary, iteratively repeating the sequence of the transmit, receive, and compare functions for positive incremental phase shifts in the write clock signal to determine the second timing boundary, wherein the second timing boundary is defined by the last positive incremental phase shift in the write clock signal in which the first result from the error detection function performed on the first data pattern matches the corresponding second result.

24. The system of claim 16, wherein the processing unit is configured to introduce a phase delay in at least one of the signal on the data bus, the write clock signal, and both the signal on the data bus and the write clock signal when adjusting the phase difference between the signal on the data bus and the write clock signal.

25. A system comprising: a processing unit; and a memory device coupled to the processing unit and configured to: receive a first data pattern transmitted from the processing unit; perform an error detection function on a second data pattern to generate a first result, wherein the second data pattern comprises the first data pattern received at an interface of the memory device based on a write clock signal; transmit the first result to the processing unit; and receive a signal on a data bus, wherein a phase difference between the signal on the data bus and the write clock signal is within a write timing window, the write timing window based on a comparison between the first result and a second result from the error detection function performed on the first data pattern.

26. The system of claim 25, wherein the memory device is configured to operate in an error detection mode of operation.

27. The system of claim 25, wherein the memory device is configured to receive the signal on the data bus that is between a first boundary and a second boundary of the write timing window.

28. The system of claim 25, wherein the memory device is configured to perform at least one of a parity function and a checksum function when performing the error detection function.

29. A computer program product comprising a computer-usable medium having computer program logic recorded thereon enabling a processor to analyze software code, the computer program logic comprising: first computer readable program code that enables a processor to determine a write timing window between a signal on a data bus and a write clock signal based on results of an error detection function; and second computer readable program code that enables a processor to adjust a phase difference between the signal on the data bus and the write clock signal based on the write timing window, wherein the memory device recovers the signal on the data bus based on the adjusted phase difference.
30. The computer program product of claim 29, wherein the first computer readable program code comprises: third computer readable program code that enables a processor to perform an error detection function on a first data pattern to generate a first result from the error detection function; fourth computer readable program code that enables a processor to transmit the first data pattern on a data bus to the memory device; fifth computer readable program code that enables a processor to receive a second result from the memory device, wherein the second result is generated from the error detection function performed on a second data pattern, the second data pattern comprising the first data pattern received at an interface of the memory device; sixth computer readable program code that enables a processor to compare the first result to the second result to determine whether the first and second results match each other; and seventh computer readable program code to enable a processor to determine a first timing boundary and a second timing boundary of the write timing window based on the comparison of the first and second results.

31. The computer program product of claim 30, wherein if the first and second results match each other, the seventh computer readable program code comprises: eighth computer readable program code to enable a processor to iteratively repeat a sequence of the fourth, fifth, and sixth computer readable program codes for positive incremental phase shifts in the first data pattern to determine the first timing boundary, wherein the first timing boundary is defined by the last positive incremental phase shift in the first data pattern in which the first result from the error detection function performed on the positive phase-shifted first data pattern matches the corresponding second result; and ninth computer readable program code to enable a processor to iteratively repeat the sequence of the fourth, fifth, and sixth computer readable program codes for negative incremental phase shifts in the first data pattern to determine the second timing boundary, wherein the second timing boundary is defined by the last negative incremental phase shift in the first data pattern in which the first result from the error detection function performed on the negative phase-shifted first data pattern matches the corresponding second result.
32. The computer program product of claim 30, wherein if the first and second results match each other, the seventh computer readable program code comprises: eighth computer readable program code to enable a processor to iteratively repeat a sequence of the fourth, fifth, and sixth computer readable program codes for positive incremental phase shifts in the write clock signal to determine the first timing boundary, wherein the first timing boundary is defined by the last positive incremental phase shift in the write clock signal in which the first result from the error detection function performed on the first data pattern matches the corresponding second result; and ninth computer readable program code to enable a processor to iteratively repeat the sequence of the fourth, fifth, and sixth computer readable program codes for negative incremental phase shifts in the write clock signal to determine the second timing boundary, wherein the second timing boundary is defined by the last negative incremental phase shift in the write clock signal in which the first result from the error detection function performed on the first data pattern matches the corresponding second result.

33. The computer program product of claim 30, wherein if the first and second results do not match each other, the seventh computer readable program code comprises: eighth computer readable program code to enable a processor to iteratively repeat a sequence of the fourth, fifth, and sixth computer readable program codes for positive incremental phase shifts in the first data pattern to determine the first timing boundary, wherein the first timing boundary is defined by the first positive incremental phase shift in the first data pattern in which the first result from the error detection function performed on the positive phase-shifted first data pattern matches the corresponding second result; and ninth computer readable program code to enable a processor to start at the first timing boundary and to iteratively repeat the sequence of the fourth, fifth, and sixth computer readable program codes for positive incremental phase shifts in the first data pattern to determine the second timing boundary, wherein the second timing boundary is defined by the last positive incremental phase shift in the first data pattern in which the first result from the error detection function performed on the positive phase-shifted first data pattern matches the corresponding second result.
34. The computer program product of claim 30, wherein if the first and second results do not match each other, the seventh computer readable program code comprises: eighth computer readable program code to enable a processor to iteratively repeat a sequence of the fourth, fifth, and sixth computer readable program codes for positive incremental phase shifts in the write clock signal to determine the first timing boundary, wherein the first timing boundary is defined by the first positive incremental phase shift in the write clock signal in which the first result from the error detection function performed on the first data pattern matches the corresponding second result; and ninth computer readable program code to enable a processor to start at the first timing boundary and to iteratively repeat the sequence of the fourth, fifth, and sixth computer readable program codes for positive incremental phase shifts in the write clock signal to determine the second timing boundary, wherein the second timing boundary is defined by the last positive incremental phase shift in the write clock signal in which the first result from the error detection function performed on the first data pattern matches the corresponding second result. |
ADJUSTMENT OF MEMORY WRITE TIMING BASED ON ERROR DETECTION TECHNIQUES

BACKGROUND

Field

[0001] Embodiments of the present invention generally relate to an adjustment of write timing in a memory device. More specifically, embodiments of the present invention refer to adjusting the write timing of the memory device based on results of an error detection function.

Background

[0002] Data communication between a processing unit and a memory device typically involves sending data along signal paths such as, for example, wires and traces. In a memory device with a synchronous interface, the processing unit may transmit a clock signal along with the data signal to the memory device. The clock signal is used to determine when the data signal should be latched by the memory device, thus synchronizing the memory device to the processing unit. For proper data recovery, the memory device must receive the clock signal within a time period that allows the clock signal to sample the data signal (e.g., the clock signal must sample the data signal within a period of time corresponding to a data eye of the data signal). Otherwise, the memory device may not recover the correct data value.

[0003] Real-world variations, such as temperature and jitter, can cause attenuation in the transmitted data signal and clock signal from the processing unit to the memory device, thus causing a loss in data signal integrity. This can result in poor or inaccurate data recovery by the memory device. As operating frequencies in computer systems increase, a need arises to transmit data more rapidly from the processing unit to the memory device. Accordingly, the memory device not only needs to sample data at a faster rate, but also needs to sample the data at the proper time.

SUMMARY

[0004] Embodiments of the present invention include a method for adjusting write timing in a memory device. The method can include determining a write timing window between a signal on a data bus and a write clock signal based on results of an error detection function. The method can also include adjusting a phase difference between the signal on the data bus and the write clock signal based on the write timing window, where the memory device recovers the signal on the data bus based on the adjusted phase difference.

[0005] Embodiments of the present invention also include another method for adjusting write timing in a memory device. The method can include the following: receiving a first data pattern transmitted from a processing unit; performing an error detection function on a second data pattern to generate a first result, where the second data pattern can be the first data pattern received at an interface of the memory device based on a write clock signal; transmitting the first result to the processing unit; and receiving a signal on a data bus, where a phase difference between the signal on the data bus and the write clock signal is within a write timing window, the write timing window based on a comparison between the first result and a second result from the error detection function performed on the first data pattern.

[0006] Embodiments of the present invention further include a system to adjust write timing in a memory device. The system can include a memory device and a processing unit coupled to the memory device.
The processing unit can be configured to perform the following functions: determine a write timing window between a signal on a data bus and a write clock signal based on results of an error detection function; and adjust a phase difference between the signal on the data bus and the write clock signal based on the write timing window, where the memory device recovers the signal on the data bus based on the adjusted phase difference.

[0007] Embodiments of the present invention also include another system to adjust write timing in a memory device. The system can include a processing unit and a memory device coupled to the processing unit. The memory device can be configured to perform the following functions: receive a first data pattern transmitted from a processing unit; perform an error detection function on a second data pattern to generate a first result, wherein the second data pattern comprises the first data pattern received at an interface of the memory device based on a write clock signal; transmit the first result to the processing unit; and receive a signal on a data bus, where a phase difference between the signal on the data bus and the write clock signal is within a write timing window, the write timing window based on a comparison between the first result and a second result from the error detection function performed on the first data pattern.

[0008] Embodiments of the present invention further include a computer program product to adjust write timing in a memory device. The computer program product includes a computer-usable medium having computer program logic recorded thereon enabling a processor to analyze software code. The computer program logic includes the following: first computer readable program code that enables a processor to determine a write timing window between a signal on a data bus and a write clock signal based on results of an error detection function; and second computer readable program code that enables a processor to adjust a phase difference between the signal on the data bus and the write clock signal based on the write timing window, where the memory device recovers the signal on the data bus based on the adjusted phase difference.

[0009] Further features and advantages of the invention, as well as the structure and operation of various embodiments of the present invention, are described in detail below with reference to the accompanying drawings. It is noted that the invention is not limited to the specific embodiments described herein. Such embodiments are presented herein for illustrative purposes only. Additional embodiments will be apparent to persons skilled in the relevant art based on the teachings contained herein.

BRIEF DESCRIPTION OF THE DRAWINGS

[0010] The accompanying drawings, which are incorporated herein and form a part of the specification, illustrate embodiments of the present invention and, together with the description, further serve to explain the principles of the invention and to enable a person skilled in the relevant art to make and use the invention.

[0011] Figure 1 is an illustration of an example computer system with a processing unit and a memory device.

[0012] Figure 2 is an illustration of an exemplary write timing diagram that is representative of proper data recovery by a memory device.

[0013] Figure 3 is an illustration of an exemplary write timing diagram that is not representative of proper data recovery by a memory device.
[0014] Figure 4 is an illustration of an embodiment of a computer system to adjust write timing in a memory device.

[0015] Figure 5 is an illustration of an embodiment of a method for adjusting write timing in a memory device.

[0016] Figure 6 is an illustration of an embodiment of a flowchart to determine a first write timing boundary of a write timing period when first and second error detection function results match each other.

[0017] Figure 7 is an illustration of an exemplary timing diagram to facilitate in an explanation of a flowchart to determine a first write timing boundary of a write timing period when first and second error detection function results match each other.

[0018] Figure 8 is an illustration of a flowchart to determine a second write timing boundary of a write timing period when first and second error detection function results match each other.

[0019] Figure 9 is an illustration of an exemplary timing diagram to facilitate in an explanation of a flowchart to determine a second write timing boundary of a write timing period when first and second error detection function results match each other.

[0020] Figure 10 is an illustration of an exemplary timing diagram to facilitate in an explanation of a determination of a first write timing boundary of a write timing period, based on a write clock signal, when first and second error detection function results match each other.

[0021] Figure 11 is an illustration of an exemplary timing diagram to facilitate in an explanation of a determination of a second write timing boundary of a write timing period, based on a write clock signal, when first and second error detection function results match each other.

[0022] Figure 12 is an illustration of a flowchart to determine a first write timing boundary of a write timing period when first and second error detection function results do not match each other.

[0023] Figure 13 is an illustration of an exemplary timing diagram to facilitate in an explanation of a flowchart to determine a first write timing boundary of a write timing period when first and second error detection function results do not match each other.

[0024] Figure 14 is an illustration of a flowchart to determine a second write timing boundary of a write timing period when first and second error detection function results do not match each other.

[0025] Figure 15 is an illustration of an exemplary timing diagram to facilitate in an explanation of a flowchart to determine a second write timing boundary of a write timing period when first and second error detection function results do not match each other.

[0026] Figure 16 is an illustration of an exemplary timing diagram to facilitate in an explanation of a determination of a first write timing boundary of a write timing period, based on a write clock signal, when first and second error detection function results do not match each other.

[0027] Figure 17 is an illustration of an exemplary timing diagram to facilitate in an explanation of a determination of a second write timing boundary of a write timing period, based on a write clock signal, when first and second error detection function results do not match each other.

[0028] Figure 18 is an illustration of an example computer system in which embodiments of the present invention can be implemented.

DETAILED DESCRIPTION

[0029] The following detailed description refers to the accompanying drawings that illustrate exemplary embodiments consistent with this invention.
Other embodiments are possible, and modifications can be made to the embodiments within the spirit and scope of the invention. Therefore, the detailed description is not meant to limit the invention. Rather, the scope of the invention is defined by the appended claims.

[0030] It would be apparent to one of skill in the relevant art that the present invention, as described below, can be implemented in many different embodiments of software, hardware, firmware, and/or the entities illustrated in the figures. Thus, the operational behavior of embodiments of the present invention will be described with the understanding that modifications and variations of the embodiments are possible, given the level of detail presented herein.

[0031] Figure 1 is an illustration of an example computer system 100 with a processing unit and a memory device. Computer system 100 includes a processing unit 110, a memory device 120, a data bus 130_7-130_0, an address/control (A/C) bus 140_15-140_0, a clock signal 150 (e.g., a write clock signal), and an error detection and correction (EDC) signal 160.

[0032] Processing unit 110 transmits address/control signals, via A/C bus 140_15-140_0, to memory device 120. Address/control signals can include, for example, clock enable (/CKE), chip select (/CS), row address strobe (/RAS), column address strobe (/CAS), write enable (/WE), and an address bus (e.g., A[8:0]). A command decoder (not shown) in memory device 120 receives the address/control signals and, based on bit settings of the address/control signals, indicates a mode of operation for memory device 120. Modes of operation for memory device 120 can include, for example, a read operation, a write operation, an idle operation, and a refresh operation.

[0033] In a synchronous memory system, the address/control signals on A/C bus 140_15-140_0 of Figure 1 are timed relative to an edge of clock signal 150 (e.g., a rising edge of clock signal 150), in which the address/control signals are sampled on the edge of clock signal 150. For example purposes, A/C bus 140_15-140_0 is illustrated as a 16-bit bus. Based on the description herein, a person skilled in the relevant art will recognize that the bus width of A/C bus 140_15-140_0 can vary (e.g., 8-bits, 32-bits, etc.). Address/control buses and associated signals traveling on these buses are known to those persons skilled in the relevant art.

[0034] Error detection and correction (EDC) refers to techniques to ensure that data is transmitted without errors from processing unit 110 to memory device 120. In an example, EDC signal 160 can be used to carry either parity information or error correction code between processing unit 110 and memory device 120. An error detection function generates the parity and error correction code based on data signals on data bus 130_7-130_0, as known by persons skilled in the relevant art. Based on the parity information or the error correction code, processing unit 110 can determine whether data transmission to memory device 120 is without errors. In computer system 100, EDC signal 160 is a unidirectional signal that carries EDC data from memory device 120 to processing unit 110. A person of ordinary skill in the art will understand that EDC signal 160 can also be a bi-directional signal, in which EDC data is transported between processing unit 110 and memory device 120. EDC techniques and algorithms are known to those persons skilled in the relevant art.
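As a concrete, deliberately simplified illustration of the kind of error detection function described in paragraph [0034], the following Python sketch computes a per-byte parity bit and a simple modular checksum over a burst of data-bus samples. The burst layout and the function names are illustrative assumptions, not a required encoding:

```python
# Minimal sketch of error detection functions of the kind whose results are
# carried on EDC signal 160: a parity bit per data-bus sample and a simple
# checksum over a burst. Burst layout and names are illustrative assumptions.

def parity_bit(byte: int) -> int:
    """Even-parity bit over the 8 bits of one data-bus sample."""
    bit = 0
    for i in range(8):
        bit ^= (byte >> i) & 1
    return bit

def checksum(burst: list) -> int:
    """8-bit modular checksum: a fixed-size digest of the burst."""
    return sum(burst) & 0xFF

burst = [0xA5, 0x3C, 0xFF, 0x00]         # data as driven on data bus 130_7-130_0
parity = [parity_bit(b) for b in burst]  # per-sample parity for EDC signal 160
print("parity bits:", parity, "checksum:", hex(checksum(burst)))
```

A mismatch between the digest computed by the transmitter and the digest computed by the receiver indicates that at least one bit was not recovered correctly.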
[0035] Processing unit 110 transmits and receives data, via data bus 130_7-130_0, to and from memory device 120. During a write operation, data is transferred from processing unit 110 to memory device 120 via data bus 130_7-130_0. During a read operation, data is transferred from memory device 120 to processing unit 110 via data bus 130_7-130_0. In a synchronous memory system, the rate at which the data is transmitted and received by processing unit 110 is based on a clock signal such as, for example, clock signal 150. For example purposes, data bus 130_7-130_0 is illustrated as an 8-bit bi-directional data bus. Based on the description herein, a person skilled in the relevant art will recognize that the bus width of data bus 130_7-130_0 can vary (e.g., 16-bits, 32-bits, etc.). Data buses and associated signals traveling on these buses are known to those persons skilled in the relevant art.

[0036] Memory device 120 stores data transmitted from processing unit 110. The receipt and storage of data (transmitted from processing unit 110) is known as "writing" to memory device 120. Conversely, data can be retrieved from memory device 120, which is known as "reading" from memory device 120. Memory device 120 can be configured with a synchronous interface, in which memory device 120 waits for clock signal 150 before processing the data on data bus 130_7-130_0. For instance, memory device 120 can generate an internal clock signal, aligned with clock signal 150, to receive the data from data bus 130_7-130_0 or to transmit the data from memory device 120 to processing unit 110 via data bus 130_7-130_0. The internal clock signal of memory device 120 can be, for example, a multiple of the frequency of clock signal 150 (e.g., 2x, 4x, etc.), as understood by a person of ordinary skill in the relevant art.

[0037] Figure 2 is an illustration of an exemplary write timing diagram 200 for computer system 100 that is representative of proper data recovery by memory device 120. Write timing diagram 200 includes timings for a data eye of data signal 130_0 and clock signal 150, where the data eye can define a period of time 210 in which clock signal 150 can be used to sample data signal 130_0 (e.g., proper data recovery by memory device 120 can occur within period of time 210). A data eye refers to, for example, a portion of data signal 130_0 with a valid binary value. Here, clock signal 150 is center aligned to data signal 130_0 and samples data signal 130_0 within the data eye when clock signal 150 is HIGH (or has a logic value of '1'). As understood by a person of ordinary skill in the relevant art, the center alignment of clock signal 150 to data signal 130_0 provides an ideal write timing for computer system 100 since memory device 120 is allowed a sufficient period of time to receive and sample data signal 130_0. A person of ordinary skill in the art will understand that the alignment of clock signal 150 relative to data signal 130_0 can occur in other alignment positions.

[0038] Figure 3 is an illustration of an exemplary write timing diagram 300 for computer system 100 that is not representative of proper data recovery by memory device 120. Similar to write timing diagram 200, write timing diagram 300 includes timings for the data eye of data signal 130_0 and clock signal 150. However, clock signal 150 has a relative phase difference 310 (or timing skew) with respect to data signal 130_0, where phase difference 310 may not provide memory device 120 a sufficient amount of time to sample data signal 130_0 (e.g., a sufficient amount of time for memory device 120 to latch data signal 130_0). Variations in relative phase difference 310 between data signal 130_0 and clock signal 150 can be caused by various factors such as, for example, temperature and jitter in computer system 100. In exemplary write timing diagram 300, relative phase difference 310 can be defined by a difference between a center of data eye 210 and a center of clock signal 150 when clock signal 150 samples data signal 130_0 (e.g., when clock signal 150 is HIGH or has a logic value of '1').
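Paragraphs [0037] and [0038] define relative phase difference 310 as the offset between the center of data eye 210 and the center of the sampling (HIGH) phase of clock signal 150. A minimal Python sketch of that bookkeeping, using hypothetical eye and skew values, might look like this:

```python
# Sketch of the data-eye timing check of Figures 2 and 3. All times are in
# picoseconds; the eye width and sample points are hypothetical values.

def phase_difference(eye_center_ps: float, clock_center_ps: float) -> float:
    """Relative phase difference 310: clock sample point minus data-eye center."""
    return clock_center_ps - eye_center_ps

def samples_within_eye(eye_center_ps: float, eye_width_ps: float,
                       clock_center_ps: float) -> bool:
    """True if the clock's sample point falls inside the data eye."""
    skew = abs(phase_difference(eye_center_ps, clock_center_ps))
    return skew <= eye_width_ps / 2

# Center-aligned clock (as in Figure 2): zero skew, reliable sampling.
print(samples_within_eye(eye_center_ps=500, eye_width_ps=400, clock_center_ps=500))
# Skewed clock (as in Figure 3): 250 ps of skew exceeds the 200 ps half-eye.
print(samples_within_eye(eye_center_ps=500, eye_width_ps=400, clock_center_ps=750))
```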
[0039] As the operating frequency of computer system 100 increases, memory device 120 not only needs to sample signals on data bus 130_7-130_0 at a faster frequency, but also needs to sample the data signals at the proper time. Clock signal 150 should be optimally aligned with signals on data bus 130_7-130_0 to ensure proper sampling of the data. To align clock signal 150 with signals on data bus 130_7-130_0, the relative phase difference (or timing skew) between signals on data bus 130_7-130_0 and clock signal 150 can be monitored and adjusted based on an error detection function. As a result, computer system 100 can be configured such that the write timing between processing unit 110 and memory device 120 can be optimized.

[0040] Figure 4 is an illustration of an embodiment of a computer system 400 to adjust write timing in a memory device. Computer system 400 includes a processing unit 410, a memory device 420, data bus 130_7-130_0, A/C bus 140_15-140_0, clock signal 150 (also referred to herein as a write clock signal), and EDC signal 160. Data bus 130_7-130_0, A/C bus 140_15-140_0, write clock signal 150, and EDC signal 160 function in a similar manner as that described above with respect to Figure 1.

[0041] In an embodiment, processing unit 410 and memory device 420 are integrated circuit (IC) devices on a circuit board with data bus 130_7-130_0, A/C bus 140_15-140_0, and write clock signal 150 communicatively coupling the two IC devices, where data bus 130_7-130_0, A/C bus 140_15-140_0, write clock signal 150, and EDC signal 160 can be wires, interconnects, or circuit board traces. In another embodiment, processing unit 410 and memory device 420 are integrated on a single IC device with data bus 130_7-130_0, A/C bus 140_15-140_0, write clock signal 150, and EDC signal 160 communicatively coupling processing unit 410 to memory device 420.
[0044] In an embodiment, processing unit 410 includes phase delay circuits 430 and 440 and controller 450. In an embodiment, phase delay circuit 430 is configured to delay a transmission of signals traveling on data bus 130_7-130_0. Similarly, in an embodiment, phase delay circuit 440 is configured to delay write clock signal 150. Controller 450 is configured to control an amount of phase delay for each of phase delay circuits 430 and 440 according to an embodiment of the present invention. The amount of phase delay issued by controller 450 to phase delay circuits 430 and 440 is described in detail below with respect to method 500 of Figure 5. Phase delay circuits and associated controllers used to control the phase delay circuits are known to persons of ordinary skill in the relevant art.

[0045] In reference to Figure 4, in an embodiment, memory device 420 is a dynamic random access memory (DRAM) device. Based on the description herein, a person skilled in the relevant art will recognize that embodiments of the present invention can be implemented with other types of memory devices. These other types of memory devices are within the scope and spirit of the present invention.

[0046] Figure 5 is an illustration of an embodiment of a method 500 for adjusting write timing in a memory device. Method 500 can occur using, for example, computer system 400 of Figure 4. For explanation purposes, computer system 400 will be used to facilitate the description of method 500. However, based on the description herein, a person of ordinary skill in the relevant art will recognize that method 500 can be implemented in other computer systems.

[0047] In an embodiment, method 500 can be used by computer system 400 to adjust write timing between processing unit 410 and memory device 420. In particular, through one or more sequences of writing one or more data patterns to and reading corresponding error detection function results from memory device 420, controller 450 of processing unit 410 can adjust a phase difference between data signals on data bus 130_7-130_0 and write clock signal 150 (via phase delay circuits 430 and 440) such that memory device 420 properly recovers data from data bus 130_7-130_0.

[0048] In reference to method 500 of Figure 5, in step 510, processing unit 410 issues one or more commands to configure memory device 420 in an error detection mode of operation. In an embodiment, memory device 420 can be in an error detection mode of operation during an active mode of operation of memory device 420. These active modes of operation can include, for example, a read and write mode of operation. In another embodiment, memory device 420 can be in an error detection mode of operation when memory device resources are inactive. In these modes of operation, memory device resources such as, for example, data bus 130_7-130_0, A/C bus 140_15-140_0, and write clock signal 150 are not being used by computer system 400 for a read or write mode of operation.

[0049] For ease of explanation, the following description of method 500 is described in the context of an error detection mode where memory device resources are inactive (e.g., data bus 130_7-130_0, A/C bus 140_15-140_0, and write clock signal 150 are not being used for a read and/or write operation by computer system 400).
However, based on the description herein, a person of ordinary skill in the relevant art will recognize that method 500 can also be implemented in an error detection mode of operation where memory device resources are active (e.g., data bus 130_7-130_0, A/C bus 140_15-140_0, and write clock signal 150 are being used for a read and/or write operation by computer system 400).

[0050] In step 520, processing unit 410 determines a write timing window between data signals on data bus 130_7-130_0 and write clock signal 150 based on results of an error detection function. The write timing window refers to a time period in which data signals on data bus 130_7-130_0, write clock signal 150, or both the data signals on data bus 130_7-130_0 and write clock signal 150 can be phase-adjusted in relation to one another such that memory device 420 properly recovers the data on data bus 130_7-130_0. In an embodiment, the write timing window is defined by a first timing boundary and a second timing boundary. The write timing window and its associated first and second write timing boundaries are described in further detail below with respect to Figures 6-17.

[0051] The error detection function refers to an algorithm used in an error detection and correction technique that can be used to ensure that data is transmitted without errors from processing unit 410 to memory device 420. As noted above, data transmission errors can be the result of variations such as, for example, temperature and jitter in computer system 400. The error detection function, as it relates to method 500, is described in further detail below with respect to Figures 6-17.

[0052] In an embodiment, a first data pattern is used to adjust the write timing of memory device 420. The first data pattern can be, for example, an 8-bit data pattern with a random combination of logic values of 1's and 0's. In an embodiment, processing unit 410 performs an error detection function on the first data pattern to generate, for example, a parity value, a checksum value, or another type of result from an error detection function. In an embodiment, the error detection function implements a checksum scheme, where the checksum value of the first data pattern represents a hashed version of the first data pattern with a fixed-size bit-length. The result of the error detection function on the first data pattern is stored in processing unit 410 to be used for comparison purposes (as described further below) according to an embodiment of the present invention.

[0053] Parity schemes and checksum schemes, among others, are used in conjunction with error detection and correction (EDC) techniques and are known to those persons skilled in the relevant art. Based on the description herein, a person of ordinary skill in the relevant art will recognize that other error detection functions can be used in conjunction with EDC techniques. These other error detection functions are within the scope and spirit of the present invention.

[0054] In an embodiment, the first data pattern is transmitted from processing unit 410 to memory device 420, where memory device 420 samples the information in the first data pattern at an interface of memory device 420 (e.g., an I/O pin interface of memory device 420) based on write clock signal 150. After the first data pattern is received by memory device 420, memory device 420 performs the error detection function on a second data pattern. The second data pattern represents the first data pattern received at the interface of memory device 420.
In an embodiment, the second data pattern can contain different bit information from the bit information of the first data pattern transmitted from processing unit 410, since a timing skew may have occurred between write clock signal 150 and the data signals on data bus 130_7-130_0. This timing skew is similar to the timing skew described above with respect to Figure 3.

[0055] In an embodiment, memory device 420 applies the same error detection function to the second data pattern as processing unit 410 applied to the first data pattern. The result of the error detection function on the second data pattern represents a hashed version of the second data pattern with a fixed-size bit-length according to an embodiment of the present invention. The result of the error detection function on the second data pattern is transmitted to processing unit 410 via EDC signal 160 according to an embodiment of the present invention.

[0056] Processing unit 410 compares the result of the error detection function on the first data pattern (also referred to herein as "the first result") to the result of the error detection function on the second data pattern (also referred to herein as "the second result") to determine whether the two results match each other. In an embodiment, controller 450 of processing unit 410 compares the first result to the second result, where controller 450 compares the bit information from the first result to bit information from the second result on a bit-by-bit basis. In other words, each bit in the first result is compared to a corresponding bit in the second result to determine whether the first and second results match each other.

[0057] In an embodiment, controller 450 determines a first timing boundary and a second timing boundary of the write timing window based on the comparison of the first and second results. The following description of the determination of the first and second timing boundaries of the write timing window is described in the context of two scenarios: (1) a scenario when the first and second results match each other, as described in Figures 6-11; and (2) a scenario when the first and second results do not match each other, as described in Figures 12-17. For ease of explanation, the following description of the first and second timing boundaries of the write timing window is based on data signal 130_0 of data bus 130_7-130_0. A person of ordinary skill in the relevant art will recognize that, based on the description herein, the flowcharts and exemplary timing diagrams described below are equally applicable to the other data signals on data bus 130_7-130_0.

[0058] Figure 6 is an illustration of an embodiment of a flowchart 600 to determine the first timing boundary of the write timing window when the first and second results match each other. An exemplary timing diagram 700 of Figure 7 will be used to facilitate the explanation of flowchart 600. In reference to timing diagram 700, timing diagrams I and III represent data signal 130_0 and write clock signal 150, respectively. Here, memory device 420 can properly recover data from data signal 130_0 since write clock signal 150 has a sufficient amount of time to sample data signal 130_0 (e.g., a sufficient amount of time for memory device 420 to latch data signal 130_0). This is similar to the timing relationship between data signal 130_0 and clock signal 150 described above with respect to Figure 2. Timing diagram II of Figure 7 is a phase-shifted representation of data signal 130_0 and will be described below with respect to flowchart 600.
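As a concrete illustration of the checksum exchange in paragraphs [0052]-[0056], the sketch below hashes an 8-bit data pattern to a fixed-size result and compares two results bit by bit. The particular checksum (a ones'-complement nibble sum) is an assumption made for illustration; the description only requires some error detection function, such as a parity or checksum scheme.

```python
# Minimal sketch of the error detection exchange described above. The specific
# checksum (a 4-bit ones'-complement sum of the two nibbles) is an illustrative
# assumption; any parity or checksum scheme would serve.

def edc_checksum(pattern: int) -> int:
    """Hash an 8-bit data pattern down to a fixed-size (4-bit) result."""
    total = (pattern & 0x0F) + ((pattern >> 4) & 0x0F)
    total = (total & 0x0F) + (total >> 4)   # fold any carry back in
    return (~total) & 0x0F                  # ones'-complement of the sum

def results_match(first_result: int, second_result: int, width: int = 4) -> bool:
    """Bit-by-bit comparison of the first result (computed by the processing
    unit) and the second result (returned by the memory device via EDC 160)."""
    return all(((first_result >> b) & 1) == ((second_result >> b) & 1)
               for b in range(width))

# The processing unit computes and stores the first result locally...
first = edc_checksum(0b10110010)
# ...and later compares it with the second result from the memory device.
assert results_match(first, edc_checksum(0b10110010))      # no timing skew
assert not results_match(first, edc_checksum(0b10110011))  # corrupted pattern
```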
[0059] In reference to flowchart 600 of Figure 6, the starting point of flowchart 600 considers the situation when the first result matches the second result. This is the case since, as noted above with respect to timing diagram 700, memory device 420 has a sufficient amount of time to sample data signal 130_0 (in timing diagram I of Figure 7). Accordingly, the result of the error detection function on the first data pattern is identical to the result of the error detection function on the second data pattern, according to an embodiment of the present invention. In an embodiment, processing unit 410 executes the steps of flowchart 600 when determining the first timing boundary of the write timing window.

[0060] In step 610, processing unit 410 introduces a positive incremental phase shift to the first data pattern. In an embodiment, the incremental phase shift is defined as a fraction of a cycle of write clock signal 150. For instance, the fraction can be 1/10, 1/5, 3/10, or 2/5 of a cycle of write clock signal 150. Further, in reference to timing diagram 700 of Figure 7, the positive incremental phase shift is defined as an incremental phase shift in the "(+)" direction, according to an embodiment of the present invention.

[0061] In step 620, the phase-shifted first data pattern is transmitted to memory device 420.

[0062] In step 630, processing unit 410 receives a result from the error detection function applied to a second data pattern. The second data pattern represents the phase-shifted first data pattern received at an interface of memory device 420. In an embodiment, memory device 420 performs the same error detection function on the second data pattern as the error detection function applied by processing unit 410 on the first data pattern. The result of the error detection function on the second data pattern is transmitted from memory device 420 to processing unit 410 via EDC signal 160 according to an embodiment of the present invention.

[0063] In step 640, processing unit 410 compares a first result of the error detection function applied to the first data pattern to a second result of the error detection function applied to the second data pattern, where controller 450 stores the bit information of the first result. Controller 450 compares the bit information from the first result to bit information from the second result, where the second result is indicative of the phase-shifted first data pattern received at an interface of memory device 420, according to an embodiment of the present invention. In an embodiment, the first and second results are compared to each other on a bit-by-bit basis.

[0064] In step 650, if the bit information from the first result matches the bit information from the second result, processing unit 410 introduces an additional positive incremental phase shift in the first data pattern (step 610) and steps 620-640 are repeated.

[0065] In step 660, if the bit information from the first and second results do not match each other, then phase shift information from the prior phase-shifted first data pattern is stored in processing unit 410. In reference to timing diagram 700 of Figure 7, timing diagram II represents a positive phase-shifted data signal 130_0 (i.e., the positive phase-shifted first data pattern). A marker 710 in timing diagrams I and II represents a reference point on data signal 130_0 to indicate the positive incremental phase shifts in data signal 130_0.
Further, a marker 730 indicates a boundary condition for a relative phase shift between data signal 130_0 and write clock signal 150, where if additional increments of positive phase shift are introduced to data signal 130_0 (in timing diagram II of Figure 7), write clock signal 150 cannot sample a valid data signal 130_0. This is because, with any additional positive increments in the phase shift of data signal 130_0, write clock signal 150 would not have a sufficient amount of time to sample data signal 130_0 (e.g., an insufficient amount of time for memory device 420 to latch data signal 130_0).

[0066] In reference to timing diagram 700 of Figure 7, a time period 720 represents the first timing boundary of the write timing window according to an embodiment of the present invention. In particular, time period 720 is a boundary condition for a relative phase shift between data signal 130_0 and write clock signal 150, in which data signal 130_0 cannot have an additional positive increment in phase shift without risk of improper data recovery by memory device 420. In an embodiment, in reference to an original phase position of data signal 130_0 in relation to write clock signal 150 (e.g., marker 710 in timing diagram I of Figure 7), data signal 130_0 cannot have a positive phase shift of more than time period 720 without risk of improper data recovery by memory device 420.

[0067] Figure 8 is an illustration of an embodiment of a flowchart 800 to determine the second timing boundary of the write timing window when the first and second results match each other. An exemplary timing diagram 900 of Figure 9 will be used to facilitate the explanation of flowchart 800. In reference to timing diagram 900, timing diagrams I and III represent data signal 130_0 and write clock signal 150, respectively. Here, similar to the timing relationship between data signal 130_0 and clock signal 150 described above with respect to Figure 2, memory device 420 can properly recover data from data signal 130_0 since write clock signal 150 has a sufficient amount of time to sample data signal 130_0 (e.g., a sufficient amount of time for memory device 420 to latch data signal 130_0). Timing diagram II of Figure 9 is a phase-shifted representation of data signal 130_0 and will be described below with respect to flowchart 800.

[0068] The steps of flowchart 800 are similar to the steps of flowchart 600, except that the incremental phase shifts in the first data pattern are in the "(-)" direction. In particular, in step 810, processing unit 410 introduces a negative incremental phase shift to the first data pattern. Steps 820-840 perform similar functions as steps 620-640 of flowchart 600, respectively.

[0069] In step 850, if the bit information from the first result matches the bit information from the second result, processing unit 410 introduces an additional negative incremental phase shift in the first data pattern (step 810) and steps 820-840 are repeated.

[0070] In step 860, if the bit information from the first and second results do not match each other, then phase shift information from the prior phase-shifted first data pattern is stored in processing unit 410. In reference to timing diagram 900 of Figure 9, timing diagram II represents a negative phase-shifted data signal 130_0 (i.e., the negative phase-shifted first data pattern). Marker 710 in timing diagrams I and II represents a reference point on data signal 130_0 to indicate the negative incremental phase shifts in data signal 130_0.
Further, a marker 930 indicates a boundary condition for a relative phase shift between data signal 130_0 and write clock signal 150, where if additional increments of negative phase shift are introduced to data signal 130_0 (in timing diagram II of Figure 9), write clock signal 150 will not have a sufficient amount of time to sample data signal 130_0 (e.g., an insufficient amount of time for memory device 420 to latch data signal 130_0).

[0071] In reference to timing diagram 900 of Figure 9, a time period 920 represents the second timing boundary of the write timing window according to an embodiment of the present invention. In particular, time period 920 is a boundary condition for a relative phase shift between data signal 130_0 and write clock signal 150, in which data signal 130_0 cannot have an additional negative increment in phase shift without risk of improper data recovery by memory device 420. In an embodiment, in reference to an original phase position of data signal 130_0 in relation to write clock signal 150 (e.g., marker 710 in timing diagram I of Figure 9), data signal 130_0 cannot have a negative phase shift of more than time period 920 without risk of improper data recovery by memory device 420.

[0072] To summarize, with respect to Figures 6-9, the first and second timing boundaries of the write timing window have been defined in terms of a phase shift of data signal 130_0 in relation to write clock signal 150. In an embodiment, from an original phase position of data signal 130_0 in relation to write clock signal 150, the write timing window is bounded by the first and second timing boundaries. In an embodiment, the first timing boundary is defined as a maximal positive phase shift of data signal 130_0 from its original phase position without improper data recovery by memory device 420. Further, the second timing boundary is defined as a maximal negative phase shift of data signal 130_0 from its original phase position without improper data recovery by memory device 420, according to an embodiment of the present invention.

[0073] Based on the description above, in an embodiment, write clock signal 150 can also be used to determine the first and second boundaries of the write timing window. The following description of Figures 10 and 11 will be used to facilitate the explanation of how steps similar to those of flowcharts 600 and 800, respectively, can be applied to write clock signal 150 when determining the first and second boundaries of the write timing window, according to an embodiment of the present invention.

[0074] Figure 10 is an illustration of an exemplary write timing diagram 1000 that will be used to facilitate the explanation of how steps similar to those in flowchart 600 of Figure 6 can be used to determine the first timing boundary of the write timing window based on write clock signal 150, according to an embodiment of the present invention. In reference to timing diagram 1000, timing diagrams I and II represent data signal 130_0 and write clock signal 150, respectively. Here, similar to the timing relationship between data signal 130_0 and clock signal 150 described above with respect to Figure 2, memory device 420 can properly recover data from data signal 130_0 since write clock signal 150 has a sufficient amount of time to sample data signal 130_0 (e.g., a sufficient amount of time for memory device 420 to latch data signal 130_0).
Timing diagram III of Figure 10 is a phase-shifted representation of write clock signal 150 and will be described in further detail below.

[0075] Similar to step 610 of Figure 6, processing unit 410 introduces a positive incremental phase shift to write clock signal 150. Next, the transmission, receiving, and comparison steps of steps 620-640 can be applied to the positive phase-shifted write clock signal 150.

[0076] In reference to timing diagrams II and III of Figure 10, a marker 1010 in timing diagrams II and III represents a reference point on write clock signal 150 to indicate the positive incremental phase shifts in write clock signal 150. Further, a marker 1030 indicates a boundary condition for a relative phase shift between data signal 130_0 and write clock signal 150, where if additional increments of positive phase shift are introduced to write clock signal 150 (in timing diagram III of Figure 10), write clock signal 150 will not have a sufficient amount of time to sample data signal 130_0 (e.g., an insufficient amount of time for memory device 420 to latch data signal 130_0).

[0077] In reference to timing diagram 1000 of Figure 10, a time period 1020 represents the first timing boundary of the write timing window, according to an embodiment of the present invention. In particular, time period 1020 is a boundary condition for a relative phase shift between data signal 130_0 and write clock signal 150, in which write clock signal 150 cannot have an additional positive increment in phase shift without risk of improper data recovery by memory device 420. In an embodiment, in reference to an original phase position of write clock signal 150 in relation to data signal 130_0 (e.g., marker 1010 in timing diagram II of Figure 10), write clock signal 150 cannot have a positive phase shift of more than time period 1020 without risk of improper data recovery by memory device 420.

[0078] Figure 11 is an illustration of an exemplary write timing diagram 1100 that will be used to facilitate the explanation of how steps similar to those in flowchart 800 of Figure 8 can be used to determine the second timing boundary of the write timing window based on write clock signal 150, according to an embodiment of the present invention. In reference to timing diagram 1100, timing diagrams I and II represent data signal 130_0 and write clock signal 150, respectively. Here, similar to the timing relationship between data signal 130_0 and clock signal 150 described above with respect to Figure 2, memory device 420 can properly recover data from data signal 130_0 since write clock signal 150 has a sufficient amount of time to sample data signal 130_0 (e.g., a sufficient amount of time for memory device 420 to latch data signal 130_0). Timing diagram III of Figure 11 is a phase-shifted representation of write clock signal 150 and will be described in further detail below.

[0079] Similar to step 810 of Figure 8, processing unit 410 introduces a negative incremental phase shift to write clock signal 150. Next, the transmission, receiving, and comparison steps of steps 820-840 can be applied to the negative phase-shifted write clock signal 150.

[0080] In reference to timing diagrams II and III of Figure 11, marker 1010 in timing diagrams II and III represents a reference point on write clock signal 150 to indicate the negative incremental phase shifts in write clock signal 150.
Further, a marker 1130 indicates a boundary condition for a relative phase shift between data signal 130_0 and write clock signal 150, where if additional increments of negative phase shift are introduced to write clock signal 150 (in timing diagram III of Figure 11), write clock signal 150 will not have a sufficient amount of time to sample a valid data signal 130_0 (e.g., an insufficient amount of time for memory device 420 to latch data signal 130_0).

[0081] In reference to timing diagram 1100 of Figure 11, a time period 1120 represents the second timing boundary of the write timing window, according to an embodiment of the present invention. In particular, time period 1120 is a boundary condition for a relative phase shift between data signal 130_0 and write clock signal 150, in which write clock signal 150 cannot have an additional negative increment in phase shift without risk of improper data recovery by memory device 420. In an embodiment, in reference to an original phase position of write clock signal 150 in relation to data signal 130_0 (e.g., marker 1010 in timing diagram II of Figure 11), write clock signal 150 cannot have a negative phase shift of more than time period 1120 without risk of improper data recovery by memory device 420.

[0082] To summarize, with respect to Figures 10 and 11, the first and second timing boundaries of the write timing window have been defined in terms of a phase shift of write clock signal 150 in relation to data signal 130_0. In an embodiment, from an original phase position of write clock signal 150 in relation to data signal 130_0, the write timing window is bounded by the first and second timing boundaries. In an embodiment, the first timing boundary is defined as a maximal positive phase shift of write clock signal 150 from its original phase position without improper data recovery by memory device 420. Further, the second timing boundary is defined as a maximal negative phase shift of write clock signal 150 from its original phase position without improper data recovery by memory device 420, according to an embodiment of the present invention.

[0083] The description above, with respect to Figures 6-11, describes techniques for determining the first and second boundaries of the write timing window when the first and second results match each other. In the embodiments described above, either data signal 130_0 or write clock signal 150 is adjusted by incremental phase shifts such that the relative phase alignment between the two signals allows proper data recovery by memory device 420. Based on the description herein, a person of ordinary skill in the art will recognize that data signal 130_0 and write clock signal 150 can both be adjusted with, for example, a proper combination of positive and negative incremental phase shifts such that the relative phase alignment between the two signals allows proper data recovery by memory device 420.

[0084] The following description with respect to Figures 12-17 describes the determination of the first and second timing boundaries of the write timing window when the first and second results do not match each other.

[0085] Figure 12 is an illustration of an embodiment of a flowchart 1200 to determine the first timing boundary of the write timing window when the first and second results do not match each other. An exemplary timing diagram 1300 of Figure 13 will be used to facilitate the explanation of flowchart 1200.
In reference to timing diagram 1300, timing diagrams I and III represent data signal 130_0 and write clock signal 150, respectively. Here, memory device 420 does not properly recover data signal 130_0 since write clock signal 150 does not have a sufficient amount of time to sample data signal 130_0 (e.g., an insufficient amount of time for memory device 420 to latch data signal 130_0). This is similar to the timing relationship between data signal 130_0 and clock signal 150 described above with respect to Figure 3. Timing diagram II of Figure 13 is a phase-shifted representation of data signal 130_0 and will be described below with respect to flowchart 1200.

[0086] In reference to flowchart 1200 of Figure 12, the starting point of flowchart 1200 considers the situation when the first result does not match the second result. This is the case since, as noted above with respect to timing diagram 1300, memory device 420 does not have a sufficient amount of time to sample data signal 130_0. Accordingly, the result of the error detection function on the first data pattern is not identical to the result of the error detection function on the second data pattern, according to an embodiment of the present invention. In an embodiment, processing unit 410 executes the steps of flowchart 1200 when determining the first timing boundary of the write timing window.

[0087] In step 1210, processing unit 410 introduces a positive incremental phase shift to the first data pattern.

[0088] In step 1220, the phase-shifted first data pattern is transmitted to memory device 420 and stored in memory device 420.

[0089] In step 1230, processing unit 410 receives a result from the error detection function applied to a second data pattern. The second data pattern represents the phase-shifted first data pattern received at an interface of memory device 420. In an embodiment, memory device 420 performs the same error detection function on the second data pattern as the error detection function applied by processing unit 410 on the first data pattern. The result of the error detection function on the second data pattern is transmitted from memory device 420 to processing unit 410 via EDC signal 160 according to an embodiment of the present invention.

[0090] In step 1240, processing unit 410 compares a first result from the error detection function applied to the phase-shifted first data pattern to a second result from the error detection function applied to the second data pattern, where controller 450 stores the bit information of the first result. Controller 450 compares the bit information from the first result to bit information from the second result, where the second result is indicative of the phase-shifted first data pattern received at an interface of memory device 420, according to an embodiment of the present invention. In an embodiment, the first and second results are compared to each other on a bit-by-bit basis.

[0091] In step 1250, if the bit information from the first result does not match the bit information from the second result, processing unit 410 introduces an additional positive incremental phase shift in the first data pattern (step 1210) and steps 1220-1240 are repeated.

[0092] In step 1260, if the bit information from the first and second results match each other, then phase shift information from the phase-shifted first data pattern is stored in processing unit 410.
In reference to timing diagram 1300 of Figure 13, timing diagram II represents a positive phase-shifted data signal 130_0 (i.e., the positive phase-shifted first data pattern). A marker 1310 in timing diagrams I and II represents a reference point on data signal 130_0 to indicate the positive incremental phase shifts in data signal 130_0. Further, a marker 1330 indicates a boundary condition for a relative phase shift between data signal 130_0 and write clock signal 150, where if additional increments of positive phase shift are introduced to data signal 130_0 (in timing diagram II of Figure 13), write clock signal 150 can be used to sample a valid data signal 130_0. This is because, with any additional positive increments in the phase shift of data signal 130_0, write clock signal 150 would have sufficient time to sample a valid data signal 130_0.

[0093] In reference to timing diagram 1300 of Figure 13, a time period 1320 represents the first timing boundary of the write timing window, according to an embodiment of the present invention. In particular, time period 1320 is a boundary condition for a relative phase shift between data signal 130_0 and write clock signal 150, in which data signal 130_0 can have an additional positive increment in phase shift and memory device 420 can properly recover data signal 130_0. In an embodiment, in reference to an original phase position of data signal 130_0 in relation to write clock signal 150 (e.g., marker 1310 in timing diagram I of Figure 13), data signal 130_0 is required to have at least a positive phase shift of time period 1320 in order for memory device 420 to properly recover data signal 130_0.

[0094] Figure 14 is an illustration of an embodiment of a flowchart 1400 to determine the second timing boundary of the write timing window when the first and second results do not match each other. An exemplary timing diagram 1500 of Figure 15 will be used to facilitate the explanation of flowchart 1400. In reference to timing diagram 1500, timing diagrams I and III represent data signal 130_0 and write clock signal 150, respectively. Here, similar to the timing relationship between data signal 130_0 and clock signal 150 described above with respect to Figure 3, memory device 420 does not properly recover data from data signal 130_0 since write clock signal 150 does not have a sufficient amount of time to sample data signal 130_0 (e.g., an insufficient amount of time for memory device 420 to latch data signal 130_0). Timing diagram II of Figure 15 is a phase-shifted representation of data signal 130_0 and will be described below with respect to flowchart 1400.

[0095] In an embodiment, the starting point for flowchart 1400 is the positive phase shift of data signal 130_0 corresponding to the first write timing boundary described above with respect to flowchart 1200 of Figure 12 and timing diagram 1300 of Figure 13.

[0096] The steps of flowchart 1400 are similar to the steps of flowchart 1200. In an embodiment, steps 1410-1440 perform similar functions as steps 1210-1240 of flowchart 1200, respectively.

[0097] In step 1450, if bit information from a first result of the error detection function applied to the first data pattern matches bit information from a second result of the error detection function applied to the second data pattern, processing unit 410 introduces an additional positive incremental phase shift in the first data pattern (step 1410) and steps 1420-1440 are repeated.
[0098] In step 1460, if the bit information from the first and second results do not match each other, then phase shift information from the prior phase-shifted first data pattern is stored in processing unit 410. In reference to timing diagram 1500 of Figure 15, timing diagram II represents a positive phase-shifted data signal 130_0 (i.e., the positive phase-shifted first data pattern). Marker 1310 in timing diagrams I and II represents a reference point on data signal 130_0 to indicate the positive incremental phase shifts in data signal 130_0. Further, a marker 1530 indicates a boundary condition for a relative phase shift between data signal 130_0 and write clock signal 150, where if additional increments of positive phase shift are introduced to data signal 130_0 (in timing diagram II of Figure 15), write clock signal 150 will sample an invalid data signal 130_0 (e.g., a transition state of data signal 130_0).

[0099] In reference to timing diagram 1500 of Figure 15, a time period 1520 represents the second timing boundary of the write timing window, according to an embodiment of the present invention. In particular, time period 1520 is a boundary condition for a relative phase shift between data signal 130_0 and write clock signal 150, in which data signal 130_0 cannot have an additional positive increment in phase shift without risk of improper data recovery by memory device 420. In an embodiment, in reference to an original phase position of data signal 130_0 in relation to write clock signal 150 (e.g., marker 1310 in timing diagram I of Figure 15), data signal 130_0 cannot have a positive phase shift of more than time period 1520 without risk of improper data recovery by memory device 420.

[00100] To summarize, with respect to Figures 12-15, the first and second timing boundaries of the write timing window have been defined in terms of a phase shift of data signal 130_0 in relation to write clock signal 150. In an embodiment, from an original phase position of data signal 130_0 in relation to write clock signal 150, the write timing window is bounded by the first and second timing boundaries. In an embodiment, the first timing boundary is defined as a minimal positive phase shift of data signal 130_0 from its original phase position with proper data recovery by memory device 420. Further, the second timing boundary is defined as a maximal positive phase shift of data signal 130_0 from its original phase position with proper data recovery by memory device 420, according to an embodiment of the present invention.

[00101] Based on the description above, in an embodiment, write clock signal 150 can also be used to determine the first and second boundaries of the write timing window. The following description of Figures 16 and 17 will be used to facilitate the explanation of how steps similar to those of flowcharts 1200 and 1400, respectively, can be applied to write clock signal 150 when determining the first and second boundaries of the write timing window, according to an embodiment of the present invention.

[00102] Figure 16 is an illustration of an exemplary write timing diagram 1600 that will be used to facilitate the explanation of how steps similar to those in flowchart 1200 of Figure 12 can be used to determine the first timing boundary of the write timing window based on write clock signal 150, according to an embodiment of the present invention. In reference to timing diagram 1600, timing diagrams I and II represent data signal 130_0 and write clock signal 150, respectively.
Here, similar to the timing relationship between data signal 130_0 and clock signal 150 described above with respect to Figure 3, memory device 420 does not properly recover data from data signal 130_0 since write clock signal 150 does not have a sufficient amount of time to sample data signal 130_0 (e.g., an insufficient amount of time for memory device 420 to latch data signal 130_0). Timing diagram III of Figure 16 is a phase-shifted representation of write clock signal 150 and will be described in further detail below.

[00103] Similar to step 1210 of Figure 12, processing unit 410 introduces a positive incremental phase shift to write clock signal 150. Next, the transmission, receiving, and comparison steps of steps 1220-1240 can be applied to the positive phase-shifted write clock signal 150.

[00104] In reference to timing diagrams II and III of Figure 16, a marker 1610 in timing diagrams II and III represents a reference point on write clock signal 150 to indicate the positive incremental phase shifts in write clock signal 150. Further, a marker 1630 indicates a boundary condition for a relative phase shift between data signal 130_0 and write clock signal 150, where if additional increments of positive phase shift are introduced to write clock signal 150 (in timing diagram III of Figure 16), write clock signal 150 will have a sufficient amount of time to sample data signal 130_0 (e.g., a sufficient amount of time for memory device 420 to latch data signal 130_0).

[00105] In reference to timing diagram 1600 of Figure 16, a time period 1620 represents the first timing boundary of the write timing window, according to an embodiment of the present invention. In particular, time period 1620 is a boundary condition for a relative phase shift between data signal 130_0 and write clock signal 150, in which write clock signal 150 can have an additional positive increment in phase shift and memory device 420 can properly recover data signal 130_0. In an embodiment, in reference to an original phase position of write clock signal 150 in relation to data signal 130_0 (e.g., marker 1610 in timing diagram II of Figure 16), write clock signal 150 is required to have at least a positive phase shift of time period 1620 in order for memory device 420 to properly recover data signal 130_0.

[00106] Figure 17 is an illustration of an exemplary write timing diagram 1700 that will be used to facilitate the explanation of how steps similar to those in flowchart 1400 of Figure 14 can be used to determine the second timing boundary of the write timing window based on write clock signal 150, according to an embodiment of the present invention. In reference to timing diagram 1700, timing diagrams I and II represent data signal 130_0 and write clock signal 150, respectively. Here, similar to the timing relationship between data signal 130_0 and clock signal 150 described above with respect to Figure 3, memory device 420 does not properly recover data from data signal 130_0 since write clock signal 150 does not have a sufficient amount of time to sample data signal 130_0 (e.g., an insufficient amount of time for memory device 420 to latch data signal 130_0). Timing diagram III of Figure 17 is a phase-shifted representation of write clock signal 150 and will be described in further detail below.
[00107] In an embodiment, similar to flowchart 1400, the starting point for write timing diagram 1700 is the positive phase shift of write clock signal 150 corresponding to the first write timing boundary described above with respect to timing diagram 1600 of Figure 16.

[00108] Similar to step 1410 of Figure 14, processing unit 410 introduces a positive incremental phase shift to write clock signal 150. Next, the transmission, receiving, and comparison steps of steps 1420-1440 can be applied to the positive phase-shifted write clock signal 150.

[00109] In reference to timing diagrams II and III of Figure 17, marker 1610 in timing diagrams II and III represents a reference point on write clock signal 150 to indicate the positive incremental phase shifts in write clock signal 150. Further, a marker 1730 indicates a boundary condition for a relative phase shift between data signal 130_0 and write clock signal 150, where if additional increments of positive phase shift are introduced to write clock signal 150 (in timing diagram III of Figure 17), write clock signal 150 will not have a sufficient amount of time to sample a valid data signal 130_0 (e.g., an insufficient amount of time for memory device 420 to latch data signal 130_0).

[00110] In reference to timing diagram 1700 of Figure 17, a time period 1720 represents the second timing boundary of the write timing window, according to an embodiment of the present invention. In particular, time period 1720 is a boundary condition for a relative phase shift between data signal 130_0 and write clock signal 150, in which write clock signal 150 cannot have an additional positive increment in phase shift without risk of improper data recovery by memory device 420. In an embodiment, in reference to an original phase position of write clock signal 150 in relation to data signal 130_0 (e.g., marker 1610 in timing diagram II of Figure 17), write clock signal 150 cannot have a positive phase shift of more than time period 1720 without risk of improper data recovery by memory device 420.

[00111] To summarize, with respect to Figures 16 and 17, the first and second timing boundaries of the write timing window have been defined in terms of a phase shift of write clock signal 150 in relation to data signal 130_0. In an embodiment, from an original phase position of write clock signal 150 in relation to data signal 130_0, the write timing window is bounded by the first and second timing boundaries. In an embodiment, the first timing boundary is defined as a minimal positive phase shift of write clock signal 150 from its original phase position with proper data recovery by memory device 420. Further, the second timing boundary is defined as a maximal positive phase shift of write clock signal 150 from its original phase position with proper data recovery by memory device 420, according to an embodiment of the present invention.

[00112] The description above, with respect to Figures 12-17, describes techniques for determining the first and second boundaries of the write timing window when the first and second results do not match each other. In the embodiments described above, either data signal 130_0 or write clock signal 150 is adjusted by positive incremental phase shifts such that the relative phase alignment between the two signals allows proper data recovery by memory device 420.
Based on the description herein, a person of ordinary skill in the art will recognize that data signal 130_0 and write clock signal 150 can each be adjusted by negative incremental phase shifts to determine the write timing window and to achieve a proper phase alignment between the two signals. In addition, based on the description herein, a person of ordinary skill in the art will recognize that data signal 130_0 and write clock signal 150 can both be adjusted with, for example, a proper combination of positive and negative incremental phase shifts such that the relative phase alignment between the two signals allows proper data recovery by memory device 420.

[00113] The description above of the determination of the write timing window, with respect to step 520 of Figure 5, assumes that resources of memory device 420 (e.g., data bus 130_7-130_0, A/C bus 140_15-140_0, and write clock signal 150) are not being used for a read and/or write mode of operation. However, based on the description herein, a person of ordinary skill in the relevant art will recognize that the determination of the write timing window can also be made during a write mode of operation of computer system 400.

[00114] In an embodiment, during a write mode of operation, the data written to memory device 420 can be used to determine the write timing window. The write data (e.g., the first data pattern) can be phase-adjusted, as described above with respect to step 520 of Figure 5, such that the first and second boundaries of the write timing window can be determined, according to an embodiment of the present invention. In phase-adjusting the write data to determine the write timing window, write errors may occur in computer system 400 due to memory device 420 not having a sufficient amount of time to sample data signals 130_7-130_0. Thus, in using the write data to determine the write timing window, a person of ordinary skill in the art will recognize that a threshold condition exists in computer system 400, where a certain number of write errors may noticeably impact the performance of computer system 400 (e.g., graphics on a display monitor may stutter). In this situation, it may be desirable to shorten the number of phase-adjustment iterations in the EDC mode of operation such that the performance of computer system 400 is not affected. The number of read/write operations can be a predetermined value based on the performance of computer system 400, where the predetermined value does not affect system performance, according to an embodiment of the present invention. In an embodiment, the number of read/write operations can be based on a predetermined value that ensures an appropriate timing window from a reference point (e.g., a predetermined number of incremental phase shifts in both the "(+)" and "(-)" directions from the original timing position of the EDC data pattern).

[00115] In an embodiment, a boundary of the write timing window can be defined by a relative phase difference, between the data signals on data bus 130_7-130_0 and write clock signal 150, that generates a predetermined number of write errors (also referred to herein as a "programmable threshold value"). That is, if a predetermined number of incorrect checksum values occur as a result of repeated write operations between processing unit 410 and memory device 420, then the relative phase difference between the data signals on data bus 130_7-130_0 and write clock signal 150 is defined as the boundary of the write timing window.
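Putting the pieces together, the sketch below is one illustrative rendering of step 520 and step 530: it sweeps the relative phase in fractional-cycle increments to locate the two timing boundaries (flowcharts 600/800 when the initial alignment recovers data, flowcharts 1200/1400 when it does not), treats a phase setting as failing only after the programmable threshold value of write errors, and then programs the phase toward the middle of the window. The helper names, the step size, and the threshold are assumptions, not details fixed by the description; paragraph [00116] below walks through the repeat-and-verify idea in prose.

```python
# Illustrative sketch only. write_pattern_and_compare(phase, pattern) models
# one write of the first data pattern at the given relative phase (in
# fractions of a write-clock cycle) followed by the EDC result comparison;
# it, STEP, THRESHOLD, and MAX_STEPS are assumptions.

STEP = 0.1        # incremental phase shift, e.g., 1/10 of a clock cycle
THRESHOLD = 4     # programmable threshold value (write errors per setting)
MAX_STEPS = 20    # safety limit on increments per direction

def recovers(write_pattern_and_compare, phase, pattern):
    """Treat a phase setting as failing only after THRESHOLD repeated write
    errors for the same pattern (the repeat-and-verify idea of [00115])."""
    errors = 0
    while errors < THRESHOLD:
        if write_pattern_and_compare(phase, pattern):
            return True
        errors += 1
    return False

def find_write_timing_window(write_pattern_and_compare, pattern):
    """Step 520: return (first_boundary, second_boundary) phase offsets."""
    if recovers(write_pattern_and_compare, 0.0, pattern):
        # Matching case (flowcharts 600 and 800): step outward in the "(+)"
        # and "(-)" directions until recovery fails; the prior settings are
        # the boundaries.
        upper = lower = 0.0
        while (upper < MAX_STEPS * STEP
               and recovers(write_pattern_and_compare, upper + STEP, pattern)):
            upper += STEP
        while (lower > -MAX_STEPS * STEP
               and recovers(write_pattern_and_compare, lower - STEP, pattern)):
            lower -= STEP
    else:
        # Non-matching case (flowcharts 1200 and 1400): shift positively until
        # recovery first succeeds (first boundary), then continue until it
        # fails again (second boundary).
        phase = 0.0
        while (phase < MAX_STEPS * STEP
               and not recovers(write_pattern_and_compare, phase, pattern)):
            phase += STEP
        lower = phase
        while (phase < MAX_STEPS * STEP
               and recovers(write_pattern_and_compare, phase + STEP, pattern)):
            phase += STEP
        upper = phase
    return lower, upper

def adjust_write_timing(write_pattern_and_compare, set_relative_phase, pattern):
    """Step 530: program the relative phase toward the middle of the window
    (e.g., via the data-path and clock-path phase delay circuits)."""
    lower, upper = find_write_timing_window(write_pattern_and_compare, pattern)
    center = (lower + upper) / 2.0
    set_relative_phase(center)
    return center
```

Centering the phase between the two boundaries maximizes the margin against the temperature- and jitter-induced skew variations noted earlier, which is why a midpoint setting is a natural (though not the only possible) choice.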
[00116] For instance, at a relative phase difference between the data signals on data bus 130_7-130_0 and write clock signal 150, a write error (e.g., an incorrect checksum value) can occur for a particular data pattern transmitted from processing unit 410 to memory device 420. The same data pattern can be transmitted from processing unit 410 to memory device 420 to verify whether another write error occurs. If another write error occurs, this process of transmitting the same data pattern to memory device 420 and verifying the checksum value can be repeated. If a predetermined number of write errors has occurred after this iterative process (e.g., the programmable threshold value has been reached), then the relative phase difference between the data signals on data bus 130_7-130_0 and write clock signal 150 can be defined as the boundary of the write timing window. In an embodiment, the predetermined number of write errors (e.g., the programmable threshold value) can be based on the performance of computer system 400, where the predetermined value does not affect system performance (e.g., graphics on a display monitor do not stutter).

[00117] In reference to method 500 of Figure 5, in step 530, processing unit 410 adjusts a phase difference between data signals on data bus 130_7-130_0 and write clock signal 150 based on the write timing window determined in step 520. As described above with respect to step 520, the write timing window refers to a time period in which the data signals on data bus 130_7-130_0, write clock signal 150, or both the data signals on data bus 130_7-130_0 and write clock signal 150 can be phase-adjusted in relation to one another such that memory device 420 properly recovers the data signals on data bus 130_7-130_0.

[00118] In reference to Figure 4, based on the write timing window for the first data pattern, controller 450 can adjust the phase delay in the transmission of data signals on data bus 130_7-130_0 and write clock signal 150, via phase delay circuits 430 and 440, respectively, according to an embodiment of the present invention. In an embodiment, the transmission of the data signals on data bus 130_7-130_0 can be adjusted, the transmission of write clock signal 150 can be adjusted, or the transmission of both the data signals on data bus 130_7-130_0 and write clock signal 150 can be adjusted such that the relative phase difference between the data signals on data bus 130_7-130_0 and write clock signal 150 is within the write timing window.

[00119] After the relative phase difference between the data signals on data bus 130_7-130_0 and write clock signal 150 has been adjusted based on step 530, processing unit 410 performs write operations on memory device 420 based on the relative phase difference setting, according to an embodiment of the present invention. In an embodiment, the steps of method 500 to adjust the write timing of memory device 420 can be performed on a periodic basis or on an "as-needed" basis as required by computer system 400.

[00120] Various aspects of the present invention may be implemented in software, firmware, hardware, or a combination thereof. Figure 18 is an illustration of an example computer system 1800 in which embodiments of the present invention, or portions thereof, can be implemented as computer-readable code. For example, the method illustrated by flowchart 500 of Figure 5 can be implemented in computer system 1800. Various embodiments of the present invention are described in terms of this example computer system 1800.
After reading this description, it will become apparent to a person skilled in the relevant art how to implement embodiments of the present invention using other computer systems and/or computer architectures.

[00121] It should be noted that the simulation, synthesis and/or manufacture of various embodiments of this invention may be accomplished, in part, through the use of computer-readable code, including general programming languages (such as C or C++), hardware description languages (HDL) such as, for example, Verilog HDL, VHDL, Altera HDL (AHDL), or other available programming and/or schematic capture tools (such as circuit capture tools). This computer-readable code can be disposed in any known computer-usable medium including a semiconductor, magnetic disk, or optical disk (such as CD-ROM or DVD-ROM). As such, the code can be transmitted over communication networks including the Internet. It is understood that the functions accomplished and/or structure provided by the systems and techniques described above can be represented in a core (such as a GPU core) that is embodied in program code and can be transformed to hardware as part of the production of integrated circuits.

[00122] Computer system 1800 includes one or more processors, such as processor 1804. Processor 1804 may be a special-purpose or a general-purpose processor (e.g., a GPU). Processor 1804 is connected to a communication infrastructure 1806 (e.g., a bus or network).

[00123] Computer system 1800 also includes a main memory 1808, preferably random access memory (RAM), and may also include a secondary memory 1810. Secondary memory 1810 can include, for example, a hard disk drive 1812, a removable storage drive 1814, and/or a memory stick. Removable storage drive 1814 can include a floppy disk drive, a magnetic tape drive, an optical disk drive, a flash memory, or the like. The removable storage drive 1814 reads from and/or writes to a removable storage unit 1818 in a well-known manner. Removable storage unit 1818 can comprise a floppy disk, magnetic tape, optical disk, etc., which is read by and written to by removable storage drive 1814. As will be appreciated by persons skilled in the relevant art, removable storage unit 1818 includes a computer-usable storage medium having stored therein computer software and/or data.

[00124] In alternative implementations, secondary memory 1810 can include other similar devices for allowing computer programs or other instructions to be loaded into computer system 1800. Such devices can include, for example, a removable storage unit 1822 and an interface 1820. Examples of such devices can include a program cartridge and cartridge interface (such as those found in video game devices), a removable memory chip (e.g., EPROM or PROM) and associated socket, and other removable storage units 1822 and interfaces 1820 which allow software and data to be transferred from the removable storage unit 1822 to computer system 1800.

[00125] Computer system 1800 can also include a communications interface 1824. Communications interface 1824 allows software and data to be transferred between computer system 1800 and external devices. Communications interface 1824 can include a modem, a network interface (such as an Ethernet card), a communications port, a PCMCIA slot and card, or the like. Software and data transferred via communications interface 1824 are in the form of signals, which may be electronic, electromagnetic, optical, or other signals capable of being received by communications interface 1824.
These signals are provided to communications interface 1824 via a communications path 1826. Communications path 1826 carries signals and can be implemented using wire or cable, fiber optics, a phone line, a cellular phone link, an RF link, or other communications channels.

[00126] In this document, the terms "computer program medium" and "computer-usable medium" are used to generally refer to media such as removable storage unit 1818, removable storage unit 1822, and a hard disk installed in hard disk drive 1812. Computer program medium and computer-usable medium can also refer to memories, such as main memory 1808 and secondary memory 1810, which can be memory semiconductors (e.g., DRAMs, etc.). These computer program products provide software to computer system 1800.

[00127] Computer programs (also called computer control logic) are stored in main memory 1808 and/or secondary memory 1810. Computer programs may also be received via communications interface 1824. Such computer programs, when executed, enable computer system 1800 to implement embodiments of the present invention as discussed herein. In particular, the computer programs, when executed, enable processor 1804 to implement processes of embodiments of the present invention, such as the steps in the method illustrated by flowchart 500 of Figure 5, discussed above. Accordingly, such computer programs represent controllers of the computer system 1800. Where embodiments of the present invention are implemented using software, the software can be stored in a computer program product and loaded into computer system 1800 using removable storage drive 1814, interface 1820, hard drive 1812, or communications interface 1824.

[00128] Embodiments of the present invention are also directed to computer program products including software stored on any computer-usable medium. Such software, when executed in one or more data processing devices, causes the data processing device(s) to operate as described herein. Embodiments of the present invention employ any computer-usable or -readable medium, known now or in the future. Examples of computer-usable mediums include, but are not limited to, primary storage devices (e.g., any type of random access memory), secondary storage devices (e.g., hard drives, floppy disks, CD-ROMs, ZIP disks, tapes, magnetic storage devices, optical storage devices, MEMS, nanotechnological storage devices, etc.), and communication mediums (e.g., wired and wireless communications networks, local area networks, wide area networks, intranets, etc.).

[00129] While various embodiments of the present invention have been described above, it should be understood that they have been presented by way of example only, and not limitation. It will be understood by persons skilled in the relevant art that various changes in form and details can be made therein without departing from the spirit and scope of the invention as defined in the appended claims. It should be understood that the invention is not limited to these examples. The invention is applicable to any elements operating as described herein. Accordingly, the breadth and scope of the present invention should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.
Systems and methods are directed to selectively bypassing allocation of cache lines in a cache. A bypass predictor table is provided with reuse counters to track reuse characteristics of cache lines, based on memory regions to which the cache lines belong in memory. A contender reuse counter provides an indication of a likelihood of reuse of a contender cache line in the cache pursuant to a miss in the cache for the contender cache line, and a victim reuse counter provides an indication of a likelihood of reuse for a victim cache line that will be evicted if the contender cache line is allocated in the cache. A decision whether to allocate the contender cache line in the cache or bypass allocation of the contender cache line in the cache is based on the contender reuse counter value and the victim reuse counter value. |
1. A method of managing allocations in a cache, the method comprising: determining, pursuant to a miss in the cache for a contender cache line, a contender reuse counter value indicating a likelihood of reuse of the contender cache line in the cache; determining a victim reuse counter value indicating a likelihood of reuse of a victim cache line that will be evicted if the contender cache line is allocated in the cache; and determining whether to allocate the contender cache line in the cache or to bypass allocation of the contender cache line in the cache based on the contender reuse counter value and the victim reuse counter value.

2. The method of claim 1, wherein reuse counter values of memory regions comprising cache lines are organized in entries of a bypass prediction table.

3. The method of claim 2, wherein, if the bypass prediction table does not include the contender reuse counter value in a first entry corresponding to a first memory region comprising the contender cache line, the first entry is created and the contender reuse counter value is incremented by a first amount.

4. The method of claim 2, wherein, if the bypass prediction table does not include the victim reuse counter value in a second entry corresponding to a second memory region comprising the victim cache line, the victim reuse counter value is determined based on a global eviction counter.

5. The method of claim 4, wherein the global eviction counter comprises a running average of reuse counters of evicted memory regions, the evicted memory regions comprising evicted cache lines.

6. The method of claim 2, further comprising tagging each entry of the bypass prediction table with at least a portion of a memory address belonging to the memory region corresponding to the entry.

7. The method of claim 2, wherein the bypass prediction table is an untagged structure and two or more memory regions interfere at a single entry of the bypass prediction table.

8. The method of claim 7, wherein the interference is one of constructive interference or destructive interference.

9. The method of claim 2, wherein a memory region comprises two or more consecutive physical addresses.

10. The method of claim 2, further comprising: tracking, in a miss counter associated with each entry, a number of consecutive misses for the memory region corresponding to the entry, and, if the number of consecutive misses is greater than a pre-specified threshold, preventing bypassing of allocation of contender cache lines of the memory region in the cache until a hit for the memory region is observed in the cache.

11. The method of claim 1, further comprising determining to bypass allocation of the contender cache line in the cache if the contender reuse counter value is less than the victim reuse counter value.

12. The method of claim 1, further comprising decrementing the victim reuse counter value by a second amount to produce a decremented victim reuse counter value, and determining to bypass allocation of the contender cache line in the cache if the contender reuse counter value is less than the decremented victim reuse counter value multiplied by a third amount.

13. The method of claim 12, further comprising reducing the third amount to increase a restriction on bypassing allocation of the contender cache line in the cache.

14. The method of claim 13, comprising setting the third amount to a low value if the contender cache line is a prefetched cache line.

15. The method of claim 1, further comprising: determining whether to allocate the contender cache line in the cache or to bypass allocation of the contender cache line in the cache further based on whether the miss in the cache for the contender cache line is an instruction cache miss, a demand load miss, or a prefetch miss.

16. The method of claim 1, comprising: dividing sets of the cache into groups and flexibly enabling or disabling bypassing of allocations in the cache based on set contention among the groups, wherein a first leader group has bypassing enabled, a second leader group has bypassing disabled, and a follower group is assigned the bypass policy of one of the first leader group or the second leader group based on respective performance of the first leader group and the second leader group.

17. An apparatus comprising: a cache; a bypass prediction table, wherein the bypass prediction table comprises: a contender reuse counter configured to indicate, pursuant to a miss in the cache for a contender cache line, a likelihood of reuse of the contender cache line in the cache; and a victim reuse counter configured to indicate a likelihood of reuse of a victim cache line that will be evicted if the contender cache line is allocated in the cache; and a cache controller configured to determine whether to allocate the contender cache line in the cache or to bypass allocation of the contender cache line in the cache based on the contender reuse counter value and the victim reuse counter value.

18. The apparatus of claim 17, wherein an entry of the bypass prediction table comprises a reuse counter value of a memory region, the memory region comprising cache lines.

19. The apparatus of claim 18, wherein, if the bypass prediction table does not include the contender reuse counter value in a first entry corresponding to a first memory region comprising the contender cache line, the cache controller is configured to insert the first entry into the bypass prediction table and increment the contender reuse counter value by a first amount.

20. The apparatus of claim 19, further comprising a global eviction counter, wherein the cache controller is configured to determine the victim reuse counter value based on the global eviction counter if the bypass prediction table does not include the victim reuse counter value in a second entry corresponding to a second memory region comprising the victim cache line.

21. The apparatus of claim 20, wherein the global eviction counter comprises a running average of reuse counters of evicted memory regions, the evicted memory regions comprising evicted cache lines.

22. The apparatus of claim 18, wherein each entry of the bypass prediction table further comprises a tag, wherein the tag comprises at least a portion of a memory address belonging to the memory region corresponding to the entry.

23. The apparatus of claim 18, wherein the bypass prediction table is an untagged structure and two or more memory regions interfere at a single entry of the bypass prediction table.

24. The apparatus of claim 18, wherein each entry further comprises a miss counter configured to track a number of consecutive misses for the memory region corresponding to the entry, and wherein, if the number of consecutive misses is greater than a pre-specified threshold, the cache controller is configured to prevent bypassing of allocation of contender cache lines of the memory region in the cache until a hit for the memory region is observed in the cache.

25. The apparatus of claim 17, wherein the cache controller is configured to bypass allocation of the contender cache line in the cache if the contender reuse counter value is less than a third amount multiplied by the victim reuse counter value decremented by a second amount.

26. The apparatus of claim 25, wherein the cache controller is configured to reduce the third amount to increase a restriction on bypassing allocation of the contender cache line in the cache.

27. The apparatus of claim 17, wherein the cache controller is further configured to determine whether to allocate the contender cache line in the cache or to bypass allocation of the contender cache line in the cache based on whether the miss in the cache for the contender cache line is an instruction cache miss, a demand load miss, or a prefetch miss.

28. The apparatus of claim 17, wherein the cache controller is configured to apply set contention among sets of the cache divided into groups, wherein a first leader group has bypassing enabled, a second leader group has bypassing disabled, and a follower group is assigned the bypass policy of one of the first leader group or the second leader group based on respective performance of the first leader group and the second leader group.

29. An apparatus comprising: a cache; means for indicating, pursuant to a miss in the cache for a contender cache line, a likelihood of reuse of the contender cache line in the cache; means for indicating a likelihood of reuse of a victim cache line that will be evicted if the contender cache line is allocated in the cache; and means for determining whether to allocate the contender cache line in the cache or to bypass allocation of the contender cache line in the cache based on the means for indicating the likelihood of reuse of the contender cache line and the means for indicating the likelihood of reuse of the victim cache line.

30. A non-transitory computer readable storage medium comprising code which, when executed by a processor, causes the processor to perform operations for managing allocations in a cache, the non-transitory computer readable storage medium comprising: code for determining, pursuant to a miss in the cache for a contender cache line, a contender reuse counter value indicating a likelihood of reuse of the contender cache line in the cache; code for determining a victim reuse counter value indicating a likelihood of reuse of a victim cache line that will be evicted if the contender cache line is allocated in the cache; and code for determining whether to allocate the contender cache line in the cache or to bypass allocation of the contender cache line in the cache based on the contender reuse counter value and the victim reuse counter value. |
Selective Bypassing of Allocation in a Cache

Claim of Priority Under 35 U.S.C. §119

This patent application claims the benefit of U.S. Provisional Patent Application No. 62/320,384, filed on Apr. 8, 2016, entitled "SELECTIVE BYPASSING OF ALLOCATION IN A CACHE," which is hereby incorporated by reference in its entirety.

Technical Field

The disclosed aspects relate to processing systems. More specifically, exemplary aspects relate to selectively bypassing allocations in a cache (e.g., a last-level cache in a memory hierarchy of a processing system).

Background

A processing system can include one or more processors that can make requests for data stored in a memory (e.g., a main memory implemented using dynamic random access memory (DRAM) technology in a double data rate (DDR) implementation). The memory requests generated by a processor can display temporal locality, which refers to a request for data that is related to a recent request and correspondingly means that the same data may be requested again in the near future. To exploit temporal locality, one or more caches may be provided to store data that is determined to have a likelihood of future use. The caches can be designed to be small in size to enable high speed (e.g., access times on the order of tens of clock cycles, compared to memory access speeds that can be on the order of hundreds or thousands of clock cycles).

If the requested data is present in a cache, a cache hit results and the data can be read directly from the cache that produced the cache hit. On the other hand, if the requested data is not present in the cache, a cache miss results and a backing storage location, such as another cache or ultimately the main memory, is accessed to retrieve the requested data. Since caches are designed to be small, the limited storage space in a cache may fill up, which means that some cache lines (referred to as victim cache lines) may need to be evicted to accommodate incoming cache lines (referred to as contender cache lines). Cache replacement policies are known in the art for evicting a victim cache line and replacing it with a contender cache line. The process of choosing which cache line to evict is called victim selection.

The last-level cache (e.g., an L3 cache) is the backing storage location before the main memory for a processor and its higher-level caches (e.g., L1 and L2 caches). Studies of the access characteristics of last-level caches reveal that many cache lines inserted into the last-level cache (e.g., after displacing a victim cache line) may never be reused or re-referenced before they are themselves evicted. Such cache lines are therefore not useful during their residency in the last-level cache. Moreover, these unused cache lines may have displaced victim lines that would have been more useful had they not been evicted (i.e., victim lines that may have been reused). Conventional victim selection policies are designed to select, as eviction candidates, victim cache lines with a lower or minimal likelihood of future use, but victim selection does not consider the relative usefulness or reuse potential of the contender cache line that displaces the victim cache line.
Therefore, there may be scenarios in which more useful cache lines are displaced by less useful cache lines, for example in the last-level cache, and conventional cache management policies do not prevent these scenarios.

Summary

Exemplary aspects of the invention relate to systems and methods for cache management. In particular, the disclosed systems and methods involve selectively bypassing the allocation of cache lines in a cache. A bypass prediction table is provided with reuse counters to track the reuse characteristics of cache lines, based on the memory regions in memory to which the cache lines belong. A contender reuse counter provides an indication of the likelihood of reuse of a contender cache line in the cache pursuant to a miss in the cache for the contender cache line, and a victim reuse counter provides an indication of the likelihood of reuse of a victim cache line that will be evicted if the contender cache line is allocated in the cache. The decision whether to allocate the contender cache line in the cache or to bypass allocation of the contender cache line in the cache is based on the contender reuse counter value and the victim reuse counter value.

For example, an exemplary aspect relates to a method of managing allocations in a cache. The method includes determining, pursuant to a miss in the cache for a contender cache line, a contender reuse counter value indicating a likelihood of reuse of the contender cache line in the cache, and determining a victim reuse counter value indicating a likelihood of reuse of a victim cache line that will be evicted if the contender cache line is allocated in the cache. The contender reuse counter value and the victim reuse counter value are used to determine whether to allocate the contender cache line in the cache or to bypass allocation of the contender cache line in the cache.

Another exemplary aspect relates to an apparatus that includes a cache and a bypass prediction table. The bypass prediction table includes a contender reuse counter configured to indicate, pursuant to a miss in the cache for a contender cache line, a likelihood of reuse of the contender cache line in the cache, and a victim reuse counter configured to indicate a likelihood of reuse of a victim cache line that will be evicted if the contender cache line is allocated in the cache.
The apparatus also includes a cache controller configured to determine whether to allocate the contender cache line in the cache or to bypass allocation of the contender cache line in the cache based on the contender reuse counter value and the victim reuse counter value.

Another exemplary aspect relates to an apparatus comprising: a cache; means for indicating, pursuant to a miss in the cache for a contender cache line, a likelihood of reuse of the contender cache line in the cache; means for indicating a likelihood of reuse of a victim cache line that will be evicted if the contender cache line is allocated in the cache; and means for determining whether to allocate the contender cache line in the cache or to bypass allocation of the contender cache line in the cache based on the means for indicating the likelihood of reuse of the contender cache line and the means for indicating the likelihood of reuse of the victim cache line.

Another exemplary aspect relates to a non-transitory computer readable storage medium comprising code which, when executed by a processor, causes the processor to perform operations for managing allocations in a cache, the non-transitory computer readable storage medium including: code for determining, pursuant to a miss in the cache for a contender cache line, a contender reuse counter value indicating a likelihood of reuse of the contender cache line in the cache; code for determining a victim reuse counter value indicating a likelihood of reuse of a victim cache line that will be evicted if the contender cache line is allocated in the cache; and code for determining whether to allocate the contender cache line in the cache or to bypass allocation of the contender cache line in the cache based on the contender reuse counter value and the victim reuse counter value.

Brief Description of the Drawings

The accompanying drawings are presented to aid in the description of aspects of the invention and are provided solely for illustration of the aspects and not limitation thereof.

FIG. 1A depicts a block diagram of a processing system configured in accordance with aspects of the present invention.

FIG. 1B illustrates aspects of selectively bypassing allocation in a cache in accordance with aspects of the present invention.

FIG. 2 illustrates a bypass prediction table configured in accordance with aspects of the present invention.

FIG. 3 depicts an exemplary method for cache management in accordance with aspects of the present invention.

FIG. 4 depicts an exemplary computing device in which aspects of the invention may be advantageously employed.

Detailed Description

Aspects of the invention are disclosed in the following description and the related drawings directed to specific aspects of the invention. Alternative aspects can be devised without departing from the scope of the invention. In some instances, well-known elements of the invention are not described in detail, or are omitted, so as not to obscure the relevant details of the invention.

The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects. Likewise, the term "aspects of the invention" does not require that all aspects of the invention include the discussed feature, advantage, or mode of operation.

The terminology used herein is for the purpose of describing particular aspects only and is not intended to limit the invention.
As used herein, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes," and/or "including," when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.

Further, many aspects are described in terms of sequences of actions to be performed by, for example, elements of a computing device. It will be recognized that the various actions described herein can be performed by specific circuits (e.g., an application-specific integrated circuit (ASIC)), by program instructions being executed by one or more processors, or by a combination of both. Additionally, the sequences of actions described herein can be considered to be embodied entirely within any form of computer-readable storage medium having stored therein a corresponding set of computer instructions that, upon execution, would cause an associated processor to perform the functionality described herein. Thus, the various aspects of the invention may be embodied in a number of different forms, all of which are contemplated to be within the scope of the claimed subject matter. In addition, for each of the aspects described herein, the corresponding form of any such aspect may be described herein as, for example, "logic configured to" perform the described action.

In aspects of the invention, it is observed that the number of cache misses in a cache can be reduced by bypassing (i.e., not allocating in the cache) cache lines that are unlikely to be accessed (or reused/re-referenced) in the future. To bypass allocation of such cache lines with lower likelihoods of future access or use, an exemplary mechanism is provided to track and remember the reuse characteristics of cache lines in the cache. In this way, when a cache line is encountered in the future, a decision can be made, based on the reuse characteristics or future access potential, as to whether the cache line should be allocated in the cache or whether allocation of the cache line in the cache can be bypassed.

In the present invention, an exemplary mechanism is provided for tracking the reuse patterns of cache lines at the size or granularity of contiguous address regions, referred to as "memory regions," and for using those patterns to make bypass decisions. It is observed that although cache lines reaching the last-level cache (e.g., an L3 cache) have had the temporal and spatial locality around them filtered by caches closer to the processor (e.g., L1 and L2 caches, which may also be referred to as higher-level or inner caches), there may still be a high degree of locality with respect to the memory regions to which the cache lines of the last-level cache belong. In other words, the reuse or re-reference characteristics of cache lines belonging to the same memory region can be similar or identical. Accordingly, some aspects relate to selectively bypassing allocation of cache lines in the last-level cache based on the reuse characteristics of the memory regions to which the cache lines belong, rather than tracking the reuse characteristics of individual cache lines when making selective bypass decisions, which results in lower implementation cost.

In some aspects, a "memory region" as used in this disclosure refers to a contiguous range of memory addresses, or a sequence of contiguous memory address locations.
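Purely as an illustration of this granularity (C++ is used for all code sketches in this description), the following non-limiting sketch shows one plausible way a memory-region identifier could be derived from a physical address. The 4 KiB region size and the helper name regionOf are assumptions made for illustration and are not specified by this disclosure.

```cpp
#include <cstdint>

// Hypothetical sketch: deriving a memory-region identifier from a physical
// address, assuming a region is a power-of-two block of contiguous
// addresses (here 4 KiB, i.e., 64 cache lines of 64 bytes each).
constexpr uint64_t kRegionBits = 12;  // log2(assumed region size in bytes)

uint64_t regionOf(uint64_t physicalAddress) {
    // All cache lines whose addresses share these upper bits belong to the
    // same memory region and would share one reuse counter in the BPT.
    return physicalAddress >> kRegionBits;
}
```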
Thus, a higher-reuse memory region refers to a memory region whose addresses correspond to a higher likelihood of reuse of its cache lines in the last-level cache, and a lower-reuse memory region refers to a memory region whose addresses correspond to a lower likelihood of reuse of its cache lines in the last-level cache. For example, cache lines from higher-reuse memory regions may be preferentially allocated in the last-level cache compared to cache lines from lower-reuse memory regions. To identify the higher-reuse and lower-reuse memory regions, a bypass prediction table (BPT) is used to track the reuse history of addresses in the memory regions.

It will be appreciated that if an aggressive cache bypass approach is implemented, performance may suffer, for example, if the access pattern of the memory addresses within a memory region is not uniform, i.e., some addresses in the memory region see a large number of reuses while other addresses in the same memory region see a small number of reuses. To mitigate the potential performance degradation of the selective bypass techniques herein, a set-contention-based approach for flexibly enabling or disabling the selective cache bypass techniques is disclosed. Moreover, in some aspects, the selective cache bypass decision can also be based on the type of instruction, where different instruction types can produce different selective bypass decisions.

It should be noted that although the last-level cache is used as a specific example in the present invention, the disclosed aspects are not limited to the last-level cache, but may extend to any cache (e.g., caches closer to the processor, such as L1, L2, etc.).

Referring to FIG. 1A, a processing system 100 configured in accordance with exemplary aspects is illustrated. Processing system 100 can include one or more processors, representatively shown as processors 102a through 102b, with respective higher-level L1 caches 104a through 104b and L2 caches 106a through 106b. The L3 cache 108 is shown as a cache shared between the processors 102a through 102b and, in this description, is a last-level cache coupled to memory 110 (e.g., main memory), keeping in mind that various other arrangements and cache organizations are possible within the scope of the invention. Cache controller 109 is shown as a block surrounding L3 cache 108 to convey logic or functionality related to managing selective allocation bypass in L3 cache 108, but it should be understood that this representation of cache controller 109 is not intended to be limiting; rather, any alternative implementation of the functionality for cache management of L3 cache 108 described herein (particularly the selective allocation bypass of the exemplary aspects) is possible.

In FIG. 1B, another view of processing system 100, featuring L3 cache 108 and memory 110, is shown in greater detail to explain exemplary aspects related to selectively bypassing allocations in L3 cache 108. In this regard, two scenarios are depicted: one in which there is a cache hit in L3 cache 108 for an incoming cache line A, and another in which there is a cache miss for the incoming cache line A. In the event of a cache miss for incoming cache line A, a determination is made whether to allocate cache line A in L3 cache 108 to service the cache miss, or whether the allocation can be selectively bypassed in accordance with exemplary aspects of the invention.
In other words, in the event of a cache miss for incoming cache line A, the incoming cache line A is treated as a contender cache line, and a line already present in L3 cache 108 (e.g., cache line B) is evaluated as a victim cache line, i.e., a potential candidate for eviction from L3 cache 108 to accommodate the contender cache line if the contender cache line is to be allocated in L3 cache 108. In this regard, the reuse characteristics of the contender cache line (incoming cache line A) are compared with the reuse characteristics of the potential victim cache line (cache line B).

As discussed previously, in the present invention the reuse or re-reference characteristics of a cache line may be determined based on the memory region to which the cache line belongs. Memory 110 is shown to include several memory regions, where each memory region is a block of contiguous data lines. Some memory regions are shown with a first shading associated with the annotation "frequently reused," which represents frequent reuse of the cache lines in the corresponding memory regions. Other memory regions are shown with a second shading associated with the annotation "rarely reused," which represents rare reuse of the cache lines in the corresponding memory regions. In the present invention, it is understood that cache lines belonging to a memory region with frequent-reuse characteristics are likely to be reused in L3 cache 108, while cache lines belonging to a memory region with rare-reuse characteristics are unlikely to be reused in L3 cache 108. In some examples, a contender cache line that is unlikely to be reused in L3 cache 108 can be bypassed in L3 cache 108 and is thus prevented from displacing a victim cache line that is likely to be reused.

In an exemplary aspect, a bypass prediction table (BPT) 200 is configured to track the reuse characteristics of the memory regions of main memory 110. In some aspects, BPT 200 can be implemented as part of cache controller 109. Selected aspects of BPT 200 are shown in FIG. 1B, and a more detailed description of BPT 200 is provided in the following sections with reference to FIG. 2. As shown in FIG. 1B, BPT 200 can be a structure that includes several entries, where each entry can include at least a counter, referred to as a reuse counter 206, that is associated with a respective memory region of memory 110. For simplicity, the case is considered in which both the incoming cache line A and the potential victim cache line B of FIG. 1B have corresponding reuse counters 206 established in associated entries of BPT 200 (the case in which one or both of these entries is not present in BPT 200 is discussed in greater detail in later sections). In particular, the entry in BPT 200 associated with incoming cache line A has a first reuse counter denoted "cntr1," which may be a means for indicating a likelihood of reuse of the incoming cache line A; thus, this entry provides an indication of the number of reuses of incoming cache line A. The entry in BPT 200 associated with the potential victim cache line B has a second reuse counter denoted "cntr2," which may be a means for indicating a likelihood of reuse of the potential victim cache line B; thus, this entry provides an indication of the number of reuses of potential victim cache line B.

In the event of a hit for incoming cache line A, the first reuse counter is incremented by, for example, a first amount (e.g., denoted by the symbol "++cntr1").
In the event of a miss for incoming cache line A, the incoming cache line A is a contender cache line; the first reuse counter for incoming cache line A is referred to as the contender reuse counter, and the second reuse counter for potential victim cache line B is referred to as the victim reuse counter. To evaluate whether incoming cache line A will be allocated in L3 cache 108 by displacing or evicting the potential victim cache line B, the contender reuse counter (cntr1) and the victim reuse counter (cntr2) are compared by comparator 202. In some aspects, the second counter can be decremented by a second amount (e.g., denoted by the symbol "--cntr2") to produce a decremented victim reuse counter, and, prior to the comparison, the decremented victim reuse counter can be multiplied by a third amount, referred to as a multiplicative factor "f." In other words, the decremented victim reuse counter, with the multiplicative factor "f" applied, is compared with the contender reuse counter. Based on the comparison, if the contender reuse counter is less than the decremented victim reuse counter with the multiplicative factor applied, e.g., as denoted by the expression cntr1 < f*(--cntr2), then allocation of the incoming cache line A can be bypassed, because the potential victim cache line B can be determined to have a higher reuse likelihood, and thus remains in L3 cache 108 rather than being evicted to accommodate cache line A, which has a lower reuse likelihood. Accordingly, some cache lines can be selectively bypassed in L3 cache 108 based on the reuse counters associated with the cache lines.

The bypass prediction table (BPT) 200 configured in accordance with aspects of the present invention is now described in greater detail in conjunction with FIGS. 1B and 2. As previously discussed, BPT 200 can be configured to record information (also referred to as "metadata") that includes the reuse characteristics of memory regions. While tracking the reuse of each individual cache line, to provide a more accurate indication of the reuse characteristics of each cache line, is within the scope of the present invention, in exemplary aspects cache lines belonging to the same memory region are grouped together and the reuse characteristics of the memory region are tracked instead, in order to reduce tracking resources. Accordingly, BPT 200 includes a plurality of entries, representatively shown as n entries 210a through 210n, each entry corresponding to a memory region. In addition to the reuse counter 206 discussed with respect to FIG. 1B, each entry 210a through 210n may include a valid flag, shown as valid 202, and optional fields shown as tag 204 and miss counter 208. The entries 210a through 210n may be accessed, for example, using a hash function implemented by hash block 212 on one or more bits (e.g., high-order bits) of the memory address 214 associated with a cache access. It is possible to configure BPT 200 as a tagged structure or an untagged structure, and correspondingly, the tag 204 may be present in, or excluded from, the entries 210a through 210n.
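Before turning to the untagged variant, the following sketch models one plausible organization of a tagged BPT with the fields just described (valid, tag, reuse counter, and miss counter). The table size, region granularity, hash function, and all identifiers (BptEntry, BypassPredictionTable) are illustrative assumptions rather than parameters fixed by this disclosure.

```cpp
#include <array>
#include <cstddef>
#include <cstdint>

// Hypothetical sketch of a tagged bypass prediction table. Sizes and the
// hash are assumptions chosen only to make the structure concrete.
struct BptEntry {
    bool     valid = false;
    uint64_t tag = 0;          // upper address bits identifying the region
    uint32_t reuseCounter = 0; // counts reuses (hits) for the region
    uint32_t missCounter = 0;  // counts consecutive misses for the region
};

class BypassPredictionTable {
    static constexpr std::size_t kEntries = 256;  // assumed table size
    static constexpr uint64_t kRegionBits = 12;   // assumed region granularity
    std::array<BptEntry, kEntries> entries_{};

    static std::size_t indexOf(uint64_t address) {
        uint64_t region = address >> kRegionBits;
        return static_cast<std::size_t>(region ^ (region >> 8)) % kEntries;
    }

public:
    // Returns the entry for the region containing `address`, or nullptr if
    // no valid entry with a matching tag is present (tagged lookup).
    BptEntry* lookup(uint64_t address) {
        BptEntry& e = entries_[indexOf(address)];
        bool match = e.valid && e.tag == (address >> kRegionBits);
        return match ? &e : nullptr;
    }

    // Inserts (or overwrites) the entry for the region containing `address`;
    // in a full implementation the displaced entry's reuse counter would
    // feed the global eviction counter described later.
    BptEntry& insert(uint64_t address) {
        BptEntry& e = entries_[indexOf(address)];
        e = BptEntry{true, address >> kRegionBits, 0, 0};
        return e;
    }
};
```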
In particular, if BPT 200 is implemented as an untagged structure, then tag 204 may be excluded, because one or more bits of memory address 214 (or a hash of some or all of memory address 214) can directly index the entries of BPT 200 without an additional tag comparison to determine that the indexed entry corresponds to a particular memory region. Thus, in an untagged implementation, multiple memory regions can map to the same entry of BPT 200. Valid 202 may indicate whether the corresponding entry 210a through 210n is valid. Correspondingly, the reuse counter 206 in each entry (e.g., the first counter cntr1 and the second counter cntr2 discussed with reference to FIG. 1B) may be modified based on the reuse characteristics of the multiple memory regions represented by the entry, which can lead to interference. The interference may be constructive in nature, wherein the multiple memory regions affect the reuse counter 206 of the corresponding entry 210a through 210n in the same way, which may result in faster training of the reuse counter 206 to represent the similar reuse characteristics of the multiple memory regions. On the other hand, the interference may also be destructive in nature, wherein the multiple memory regions mapped to the same entry 210a through 210n have different reuse characteristics, such that the reuse counter 206 of the shared entry may not be a correct indication of the reuse behavior of one or more of the multiple memory regions. Accordingly, it will be appreciated that while an untagged implementation of BPT 200 may consume fewer resources, and thus be cheaper than a tagged implementation of BPT 200, a choice between the tagged and untagged implementations of BPT 200 can be made based on the above considerations of specific design goals and possible interference.

With continued reference to FIG. 2, the tagged implementation of BPT 200 will now be described in greater detail. As mentioned previously, if BPT 200 is implemented as a tagged structure, then tag 204 can be a field present in entries 210a through 210n. The tag 204 can include some bits of the memory address 214 (e.g., excluding the least significant bits corresponding to the block offset, and the set index bits that point to the entries 210a through 210n). The tag 204 can be used to confirm whether the entry 210a through 210n corresponding to the particular memory region indexed by memory address 214 resides in BPT 200 (thus providing more specificity than an untagged structure, in which multiple memory regions can point to the same entry).

In addition to the tag 204, each entry 210a through 210n can include a valid field (valid 202, which can be a valid flag, such as a valid bit) to indicate whether the corresponding entry 210a through 210n is valid. As previously discussed, the reuse counter 206 in entries 210a through 210n can be configured as a saturating counter that counts the number of reuses (e.g., per cache hits) of the cache lines in the memory region corresponding to the entry 210a through 210n.
A miss counter 208, also referred to as a region miss counter, can be another optional field, discussed in more detail below, configured to count the number of consecutive cache misses incurred for the memory region corresponding to the entry 210a through 210n.

Considering an example implementation of BPT 200 in more detail (e.g., as implemented in cache controller 109), if there is a hit in L3 cache 108 for a cache line (e.g., cache line A in FIG. 1B), the memory address 214 of the hitting cache line is used to access the corresponding entry 210a through 210n, and the reuse counter 206 of that entry is incremented, for example, by a first amount, which may be an empirically determined constant (as shown in FIG. 1B, this increment is denoted ++cntr1).

If there is a miss in L3 cache 108 for an incoming cache line (e.g., cache line A in FIG. 1B), and allocation of the incoming cache line in L3 cache 108 is not bypassed, then the incoming or contender cache line can potentially evict an existing cache line in L3 cache 108 (e.g., the victim cache line B in FIG. 1B). Thus, for the incoming cache line, the reuse counter 206 of the entry 210a through 210n corresponding to the memory region including the incoming cache line is read; this is referred to as the contender reuse counter. The reuse counter 206 corresponding to the memory region of the victim line is also read; this is referred to as the victim reuse counter. The victim reuse counter is decremented by the second amount, which can be another empirically determined constant (this decrement is generally denoted --cntr2 in FIG. 1B). The decremented victim reuse counter can then be multiplied by the third amount, which can be another empirically determined constant (referred to as "f"), and the decremented victim reuse counter is then compared with the contender reuse counter.

In the above aspect, the victim reuse counter may be decremented as mentioned above so that a memory region corresponding to the victim cache line (referred to as a "victim memory region") is gradually aged out of BPT 200 if no reuse occurs in the victim memory region (i.e., the victim reuse counter value falls to a sufficiently low value to indicate that the corresponding entry 210a through 210n, or the victim memory region, can be evicted). Furthermore, adjusting the third amount, or multiplicative factor "f," can change the nature of the bypass decision: for example, if the multiplicative factor is higher (e.g., "1" on a normalized scale from 0 to 1), then an aggressive bypass scheme is said to apply, in which more decisions are made to bypass allocation; and if the multiplicative factor is lower (e.g., "0.5" on the same normalized scale), then a less aggressive or more restrictive bypass scheme is said to apply, in which fewer decisions are made to bypass allocation.
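The hit-increment and miss-comparison behavior just described can be summarized in the following minimal sketch. The saturation limit and the first and second amounts are assumed constants, and shouldBypass is a hypothetical helper name, not a name used by this disclosure.

```cpp
#include <algorithm>
#include <cstdint>

// Hypothetical saturating reuse counter; the width (4 bits) and the
// increment/decrement amounts are illustrative assumptions.
struct ReuseCounter {
    uint32_t value = 0;
    static constexpr uint32_t kMax = 15;  // assumed saturation limit

    void increment(uint32_t firstAmount = 1) {   // "++cntr1" on a hit
        value = std::min(value + firstAmount, kMax);
    }
    void decrement(uint32_t secondAmount = 1) {  // "--cntr2" on a miss
        value = (value > secondAmount) ? value - secondAmount : 0;
    }
};

// Returns true if allocation of the contender line should be bypassed:
// the contender's counter is compared against the victim's decremented
// counter scaled by the multiplicative factor f, i.e., cntr1 < f*(--cntr2).
bool shouldBypass(ReuseCounter& contender, ReuseCounter& victim, double f) {
    victim.decrement();
    return contender.value < f * victim.value;
}
```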
With continued reference to BPT 200 of FIG. 2, if the memory region corresponding to the potential victim cache line does not have an entry in BPT 200 (i.e., the victim reuse counter is not available), then a global eviction counter (GEC) can be relied upon as a substitute. Aspects of the GEC (not explicitly shown) will now be explained.

Pursuant to each cache access for an incoming cache line (regardless of whether it causes a hit or a miss), BPT 200 is accessed to determine whether the entry 210a through 210n corresponding to the memory region including the incoming cache line is present in BPT 200. As previously discussed, if an entry for the incoming or contender cache line does not exist, an entry 210a through 210n is inserted into BPT 200, e.g., by cache controller 109, and correspondingly, the contender reuse counter created in the entry is incremented (e.g., incremented by the first amount). If the cache access causes a miss, the entry 210a through 210n corresponding to the memory region including the potential victim cache line is also looked up, e.g., by cache controller 109; however, the entry corresponding to the victim cache line may not be present in BPT 200, for example because it may have been evicted or deallocated. In such cases, where the entry 210a through 210n corresponding to the victim memory region is not in BPT 200 (keeping in mind that the entry corresponding to the memory region of the contender cache line, or "contender memory region," will be present in BPT 200, because if it was not present it was inserted into BPT 200 at the time of the cache access), the GEC is used to make the bypass decision as follows.

The GEC maintains a running average of the reuse counters 206 of the entries 210a through 210n of memory regions that have been evicted from BPT 200. Pursuant to an eviction from BPT 200, which may occur when an entry is displaced during the insertion of another entry into BPT 200, the GEC is updated by averaging the current value of the GEC with the reuse counter 206 of the evicted entry. Pursuant to a cache miss for which the entry corresponding to the victim memory region does not exist in BPT 200, the GEC can act as a proxy for the victim reuse counter, with the GEC and the contender reuse counter compared in the same manner as explained previously in order to make the bypass decision.
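One plausible reading of the GEC update is sketched below; the disclosure calls for a running average of the reuse counters of evicted entries without fixing an exact formula, so the simple halving average used here is an assumption.

```cpp
#include <cstdint>

// Hypothetical global eviction counter (GEC) maintaining a running average
// of the reuse counters of entries evicted from the BPT.
class GlobalEvictionCounter {
    uint32_t value_ = 0;

public:
    // Called when a BPT entry is displaced to make room for a new entry.
    void onBptEviction(uint32_t evictedReuseCounter) {
        value_ = (value_ + evictedReuseCounter) / 2;  // assumed averaging
    }

    // Serves as a proxy for the victim reuse counter when the victim's
    // memory region has no entry in the BPT.
    uint32_t value() const { return value_; }
};
```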
With continued reference to BPT 200 of FIG. 2, the miss counter 208 will now be considered in greater detail. The miss counter 208 of an entry 210a through 210n, or the corresponding region miss counter of a memory region, can be used to limit bypassing, for example where bypassing could adversely affect performance. The miss counter 208 can be used in addition to, or in conjunction with, the set contention method for enabling or disabling the exemplary allocation bypass techniques discussed below. The miss counter 208 of an entry 210a through 210n may count the consecutive misses incurred in L3 cache 108 for the memory region corresponding to that entry. When the number of consecutive misses becomes greater than a pre-specified threshold, bypassing of allocations for cache lines belonging to that memory region in L3 cache 108 may be turned off until a cache hit is observed for the memory region. Once a cache hit is observed, the miss counter 208 can be reset and the above process can be repeated.

As mentioned previously, in some exemplary aspects a set contention method (e.g., applied by cache controller 109) may be used to flexibly enable or disable the selective bypass techniques described above. In the set contention method, a certain number of sets in the cache (e.g., in a set-associative implementation of L3 cache 108) are divided into multiple set groups, such as three groups. In an example, two of the three groups each contain a small number of sets, and these groups are referred to as leader groups.

The two leader groups are assigned specific policies for enabling or disabling selective bypassing (the policies may be referred to as "bypassing enabled" and "bypassing disabled"). The first leader group of the two leader groups may be assigned the "bypassing enabled" policy, and the second leader group of the two leader groups may be assigned the "bypassing disabled" policy. Cache misses in the first and second leader groups can be continuously monitored, for example using a saturating counter: a cache miss in the first leader group increments the saturating counter, while a cache miss in the second leader group decrements the saturating counter.

In the set contention method, the third group of sets of L3 cache 108 is referred to as the "follower" group, which includes the follower sets. The follower sets are assigned the bypass policy of whichever of the first leader group or the second leader group has the lower number of cache misses. The saturating counter can be used to determine which of the first and second leader groups has the lower number of cache misses. Thus, if the first leader group has the lower number of cache misses, bypassing is enabled for the sets in the follower group; otherwise, bypassing is disabled for the sets in the follower group. In this manner, by using the set contention method, the selective bypass techniques using BPT 200 described above can be enabled or disabled for the majority of the sets of L3 cache 108 (the follower group) based on observed performance benefits (e.g., fewer cache misses suggest better performance). Viewed another way, in some aspects, where set contention is used, the cache allocation for each set of L3 cache 108 can be based on whether bypassing is enabled or disabled for that set, and if it is enabled (e.g., for the first leader group, and for the follower group if the first leader group shows better performance), then the above-described selective bypassing based on the reuse counters of BPT 200 can be performed.
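The set contention mechanism lends itself to a saturating policy-selection counter, as in the following sketch; the leader-group sampling (one leader set out of every 32 sets per policy) and the counter bounds are illustrative assumptions, not values from the disclosure.

```cpp
#include <cstdint>

// Hypothetical set-contention (set-dueling) mechanism: a saturating counter
// tracks relative misses in the two leader groups, and follower sets adopt
// the policy of the leader that accumulated fewer misses.
class SetContention {
    int32_t psel_ = 0;                    // policy-selection counter
    static constexpr int32_t kMax = 511;  // assumed saturation bounds
    static constexpr int32_t kMin = -512;

public:
    enum class Group { BypassLeader, NoBypassLeader, Follower };

    // Assumed group assignment by set index modulo; any fixed sampling works.
    static Group groupOf(uint32_t setIndex) {
        if (setIndex % 32 == 0) return Group::BypassLeader;
        if (setIndex % 32 == 1) return Group::NoBypassLeader;
        return Group::Follower;
    }

    void onMiss(uint32_t setIndex) {
        switch (groupOf(setIndex)) {
            case Group::BypassLeader:   if (psel_ < kMax) ++psel_; break;
            case Group::NoBypassLeader: if (psel_ > kMin) --psel_; break;
            default: break;  // follower misses do not train the counter
        }
    }

    // Bypassing is enabled for a set if it is in the bypass-enabled leader
    // group, or if it is a follower and the bypass leader saw fewer misses.
    bool bypassEnabled(uint32_t setIndex) const {
        Group g = groupOf(setIndex);
        if (g == Group::BypassLeader)   return true;
        if (g == Group::NoBypassLeader) return false;
        return psel_ <= 0;  // bypass leader accumulated fewer (or equal) misses
    }
};
```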
In some aspects, cache misses can be divided into multiple categories, and the bypass mechanism can be specialized per category. For example, the bypass mechanism can be defined separately for three categories: (a) instruction cache misses; (b) demand load misses; and (c) prefetch misses.

For the first category, instruction cache misses, bypassing allocation can be avoided altogether, because instruction cache misses are expensive and an aggressive or incorrect bypass decision can be detrimental to performance.

Between the second and third categories, demand loads and prefetch loads, more restrictive criteria can be used in the bypass decision for the third category (prefetch loads). The restrictive criteria may involve reducing the aforementioned third amount, or multiplicative factor, applied to the victim reuse counter (e.g., applying a multiplicative factor f of "0.5" to make bypassing more restrictive). Since a dedicated prefetch engine can be employed in the case of prefetch loads, issuing prefetch requests for memory addresses intended for future use, a cache line prefetched from a memory address has a higher probability of being used in the future (assuming the prefetch engine is accurate or well trained). Therefore, for prefetch loads, the value of the victim reuse counter is likely to be significantly higher than the value of the contender reuse counter, compared to the corresponding relative values of the victim reuse counter and the contender reuse counter for the second category (demand load misses).

Moreover, if a prefetch request for a cache line issued to the L2 caches 106a through 106b also misses, for example, in L3 cache 108, then allocation of the cache line corresponding to the prefetch request may be bypassed in L3 cache 108 (assuming that a memory controller, not shown in the figures, will supply the prefetch request to the L2 caches 106a through 106b). This scenario can arise when the timing of the prefetch engine is inaccurate. For example, a prefetch request for a particular address may be issued earlier or later than required by L3 cache 108, such that when a miss for the same address from the L2 caches 106a through 106b reaches L3 cache 108, the cache line prefetched for that address may have already been evicted (if the prefetch request was issued too early) or may not yet have arrived at L3 cache 108 (if the prefetch request was issued too late).
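A simple per-category choice of the multiplicative factor f, consistent with the policy just described, could look like the following sketch; the numeric values are assumptions for illustration only (a factor of 0 can never satisfy cntr1 < f*cntr2 for non-negative counters, so instruction misses are never bypassed).

```cpp
// Hypothetical mapping from miss category to the multiplicative factor f
// applied to the victim reuse counter: never bypass on instruction misses,
// a permissive factor for demand loads, a restrictive one for prefetches.
enum class MissType { Instruction, DemandLoad, Prefetch };

double bypassFactor(MissType type) {
    switch (type) {
        case MissType::Instruction: return 0.0;  // bypassing avoided entirely
        case MissType::DemandLoad:  return 1.0;  // aggressive bypassing
        case MissType::Prefetch:    return 0.5;  // restrictive bypassing
    }
    return 1.0;  // defensive default
}
```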
Thus, it can be seen that the exemplary aspects have several features that are advantageous, for example, compared to previous approaches. The following is a summary of some key aspects and related advantages of the present invention.

In an exemplary aspect, the set contention for enabling or disabling bypassing avoids the performance degradation that can arise when such flexible enabling/disabling is not provided, since its absence can allow poor bypass decisions to go uncorrected.

In an exemplary aspect, as described above, the memory region counters may be incremented only pursuant to cache hits (i.e., reuse counter increments are due to cache hits), rather than incrementing the memory region counters on every cache access. It will be appreciated that cache hits better represent the reuse behavior of a memory region than arbitrary cache accesses, which also include cache misses.

In the exemplary BPT 200, a bypass decision can be made whether or not an entry corresponding to the contender or victim cache line is present in BPT 200, as discussed above. For example, exemplary aspects can include the global eviction counter (GEC) described above for making a bypass decision when the victim memory region does not have an entry in BPT 200. Additionally, BPT 200 can be implemented as a tagged structure or an untagged structure based on specific design needs.

Yet another advantage of the exemplary aspects relates to the miss counter 208 described above for counting the number of consecutive misses incurred for a memory region. When the miss counter 208 reaches a pre-specified threshold (say, a number "N"), bypassing of allocations for cache lines belonging to the memory region is prevented in L3 cache 108, even where a cache line otherwise satisfies the bypass criteria (for example, based on the comparison of the reuse counters of the incoming cache line and the potential victim cache line). The region miss counter 208 can be reset when a hit is observed for the memory region.

In an exemplary aspect, in making bypass decisions, the multiple categories related to instruction cache misses, demand load misses, and prefetch misses can be handled in different ways, which increases the effectiveness of the bypass policy.

Accordingly, it will be appreciated that the exemplary aspects include various methods for performing the processes, functions, and/or algorithms disclosed herein. For example, FIG. 3 illustrates a method 300 of managing allocations in a cache (e.g., L3 cache 108).

In block 302, pursuant to a miss in the cache for a contender cache line (e.g., cache line A in FIG. 1B), a contender reuse counter value indicating a likelihood of reuse of the contender cache line in the cache is determined (e.g., the first counter in FIG. 1B).

Block 304 includes determining a victim reuse counter value indicating a likelihood of reuse of a victim cache line (e.g., cache line B) that will be evicted if the contender cache line is allocated in the cache (e.g., the second counter in FIG. 1B).

Block 306 includes determining (e.g., by cache controller 109 using BPT 200), based on the contender reuse counter value and the victim reuse counter value, whether to allocate the contender cache line in the cache or to bypass allocation of the contender cache line in the cache (e.g., L3 cache 108). For example, in some aspects, bypassing allocation of the contender cache line in the cache can be determined if the contender reuse counter value is less than the victim reuse counter value; and some aspects can further involve decrementing the victim reuse counter value by a second amount to produce a decremented victim reuse counter value, and determining to bypass allocation of the contender cache line in the cache if the contender reuse counter value is less than the decremented victim reuse counter value multiplied by a third amount (e.g., a multiplicative factor "f").

Moreover, cache controller 109 can be configured to reduce the third amount, which increases the restriction on bypassing allocation of the contender cache line in the cache; thus, setting the third amount to a low value when the contender cache line is a prefetched cache line can be an option. In addition, determining whether to allocate the contender cache line in the cache or to bypass allocation of the contender cache line in the cache may be further based on whether the miss in the cache for the contender cache line is an instruction cache miss, a demand load miss, or a prefetch miss.

As previously discussed, the above method 300 can be implemented, for example, with the bypass scheme enabled based on the set contention method. For example, method 300 may initially include dividing the sets of the cache into groups and flexibly enabling or disabling the bypassing of allocations in the cache according to method 300 of FIG. 3 based on set contention among the groups, wherein a first leader group has bypassing enabled, a second leader group has bypassing disabled, and a follower group is assigned the bypass policy of one of the first leader group or the second leader group based on the respective performance of the first leader group and the second leader group.

Moreover, as discussed with respect to FIGS. 1B and 2, method 300 can also include organizing the reuse counter values (e.g., reuse counters 206) of memory regions including cache lines in entries (e.g., entries 210a through 210n) of a bypass prediction table (e.g., BPT 200), where a memory region includes two or more consecutive physical addresses. If the bypass prediction table does not include the contender reuse counter value in a first entry corresponding to a first memory region including the contender cache line, method 300 may include creating the first entry and incrementing the contender reuse counter value by a first amount. On the other hand, if the bypass prediction table does not include the victim reuse counter value in a second entry corresponding to a second memory region including the victim cache line (e.g., there is no cntr2 for victim cache line B in FIG. 1B), then method 300 can include determining the victim reuse counter value based on a global eviction counter (GEC), wherein, as previously described, the global eviction counter includes a running average of the reuse counters of evicted memory regions, the evicted memory regions including evicted cache lines.

Moreover, method 300 can also include, for example in a tagged implementation of BPT 200, tagging each entry of the bypass prediction table with at least a portion of a memory address belonging to the memory region corresponding to the entry (e.g., using tag 204). Alternatively, an untagged implementation may be selected, in which the bypass prediction table is an untagged structure and two or more memory regions interfere at a single entry of the bypass prediction table, where the interference is one of constructive interference or destructive interference.

In some aspects, method 300 can also include tracking, in a miss counter associated with each entry (e.g., miss counter 208), a number of consecutive misses for the memory region corresponding to the entry, and, if the number of consecutive misses is greater than a pre-specified threshold, preventing bypassing of allocation of contender cache lines of that memory region in the cache (e.g., by cache controller 109) until a hit for the memory region is observed in the cache.
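Tying the pieces together, the following sketch composes the hypothetical helpers from the earlier sketches (BypassPredictionTable, GlobalEvictionCounter, SetContention, and bypassFactor) into one possible rendering of method 300. The miss threshold is an assumed constant, and decrementing a local copy of the victim counter is a simplification of the in-table decrement described above.

```cpp
#include <cstdint>

// Hypothetical end-to-end rendering of method 300, reusing the sketches
// above. Returns true to bypass allocation of the contender cache line,
// false to allocate it normally. Called pursuant to a miss in the cache.
constexpr uint32_t kMissThreshold = 8;  // assumed consecutive-miss limit

bool shouldBypassAllocation(BypassPredictionTable& bpt,
                            GlobalEvictionCounter& gec,
                            SetContention& contention,
                            uint32_t setIndex,
                            uint64_t contenderAddr,
                            uint64_t victimAddr,
                            MissType type) {
    contention.onMiss(setIndex);
    if (!contention.bypassEnabled(setIndex)) return false;  // allocate

    // Block 302: contender reuse counter (create the entry on first miss
    // and increment it by the first amount).
    BptEntry* contender = bpt.lookup(contenderAddr);
    if (contender == nullptr) {
        contender = &bpt.insert(contenderAddr);
        contender->reuseCounter += 1;
    }

    // Region miss counter: after too many consecutive misses, stop
    // bypassing this region (the counter would be reset on a hit).
    if (++contender->missCounter > kMissThreshold) return false;

    // Block 304: victim reuse counter, with the GEC as a proxy when the
    // victim's region has no BPT entry; decremented by the second amount.
    BptEntry* victim = bpt.lookup(victimAddr);
    uint32_t victimCounter = victim ? victim->reuseCounter : gec.value();
    if (victimCounter > 0) --victimCounter;

    // Block 306: bypass if the contender is less likely to be reused than
    // the decremented victim counter scaled by the per-category factor f.
    return contender->reuseCounter < bypassFactor(type) * victimCounter;
}
```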
Example devices that may utilize exemplary aspects of the present invention will now be discussed with reference to FIG. 4. FIG. 4 shows a block diagram of computing device 400. Computing device 400 may correspond to an implementation of the processing system 100 shown in FIG. 1A and may be configured to perform the method 300 of FIG. 3. In the depiction of FIG. 4, computing device 400 is shown to include processor 102a, L1 cache 104a, L2 cache 106a, and L3 cache 108, with the BPT 200 of FIG. 1B communicatively coupled to L3 cache 108 for use in determining allocations in L3 cache 108 in accordance with exemplary aspects. Various other details of the components discussed with reference to FIGS. 1A through 2 have been omitted from FIG. 4 for the sake of clarity. The memory 110 of computing device 400 can be configured in a manner similar to the main memory 110 discussed with reference to FIG. 1A. In FIG. 4, processor 102a is exemplarily shown coupled to memory 110 with three levels of cache comprising L1 cache 104a, L2 cache 106a, and L3 cache 108, but it should be understood that other memory configurations known in the art may also be supported by computing device 400.

FIG. 4 also shows display controller 426, which is coupled to processor 102a and to display 428. In some cases, computing device 400 can be used for wireless communication, and FIG. 4 also shows optional blocks in dashed lines, such as a coder/decoder (codec) 434 (e.g., an audio and/or voice codec) coupled to processor 102a, with speaker 436 and microphone 438 coupled to codec 434; and a wireless antenna 442 coupled to a wireless controller 440, which is coupled to processor 102a.
In a particular aspect, processor 102a, display controller 426, memory 110, and (where one or more of the optional blocks are present) wireless controller 440 are included in a system-in-package or system-on-chip device 422. In a particular aspect, input device 430 and power supply 444 are coupled to the system-on-chip device 422. Moreover, in a particular aspect, as illustrated in FIG. 4, display 428, input device 430, speaker 436, microphone 438, wireless antenna 442, and power supply 444 (where one or more of the optional blocks are present) are external to the system-on-chip device 422. However, each of display 428, input device 430, speaker 436, microphone 438, wireless antenna 442, and power supply 444 can be coupled to a component of the system-on-chip device 422, such as an interface or a controller.

It should be noted that although FIG. 4 generally depicts a computing device including processor 102a and memory 110, these may also be integrated into a set-top box, server, music player, video player, entertainment unit, navigation device, personal digital assistant (PDA), fixed location data unit, computer, laptop, tablet, communications device, mobile phone, or other similar device.

Those skilled in the art will appreciate that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.

Further, the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the aspects disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. The described functionality may be implemented in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the invention.

The methods, sequences, and/or algorithms described in connection with the aspects disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor.

Accordingly, aspects of the invention may include a computer-readable medium embodying a method for managing allocations in a cache.
Thus, the present invention is not limited to the illustrated examples, and any means for performing the functionality described herein are included in aspects of the invention.

While the foregoing disclosure shows illustrative aspects of the invention, it should be noted that various changes and modifications could be made herein without departing from the scope of the invention as defined by the appended claims. The functions, steps, and/or actions of the method claims in accordance with the aspects of the invention described herein need not be performed in any particular order. Furthermore, although elements of the invention may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated. |
The invention relates to a microelectronic device, an electronic system, and related methods. The microelectronic device includes: a stack comprising an alternating sequence of dielectric structures and conductive structures; and a channel structure within an opening extending vertically through the stack and comprising a first semiconductor material having a first band gap. The microelectronic device also includes a conductive plug structure within the opening and in direct contact with the channel structure, and a band offset structure within the opening and in direct physical contact with the channel structure and the conductive plug structure. The band offset structure comprises a second semiconductor material having a second band gap different from the first band gap. The microelectronic device also includes a conductive line structure electrically coupled to the conductive plug structure. |
1. A microelectronic device comprising:
a channel material extending vertically through a stack of vertically alternating conductive and dielectric materials;
a conductive line adjacent to the channel material;
a conductive plug in direct physical contact with the channel material along a vertical interface therebetween, the conductive plug in direct physical contact with the conductive line; and
a band offset material in direct physical contact with the channel material along a single vertical interface therebetween, the band offset material in direct physical contact with the conductive plug and not in direct physical contact with the conductive line, the band offset material having a band gap that is different from an additional band gap of the channel material.

2. The microelectronic device of claim 1, wherein the conductive plug includes a lower portion and an upper portion having a larger lateral extent than the lower portion, the band offset material laterally surrounding the lower portion of the conductive plug.

3. The microelectronic device of claim 1, wherein an outer sidewall of the band offset material is aligned with an outer sidewall of the conductive plug.

4. The microelectronic device of claim 1, wherein the band offset material extends laterally between inner sidewalls of the channel material, and the entire conductive plug is located above an upper surface of the band offset material.

5. The microelectronic device of claim 1, further comprising:
an inner oxide material adjacent to the channel material;
an outer oxide material adjacent to the conductive materials of the stack; and
a nitride material located between the inner oxide material and the outer oxide material,
wherein upper surfaces of each of the channel material, the inner oxide material, the nitride material, and the outer oxide material are coplanar with one another.

6. The microelectronic device of claim 1, wherein the conductive plug comprises a first conductive material and the band offset material comprises a second, different conductive material.

7. The microelectronic device of claim 1, wherein the conductive plug comprises a first dopant and the band offset material comprises a second, different dopant.

8. A microelectronic device comprising:
a semiconductor structure comprising a channel material extending vertically through a stack of vertically alternating conductive and dielectric materials;
a conductive line structure vertically overlying the semiconductor structure;
a band offset structure in direct physical contact with the semiconductor structure along a single vertically extending inner surface of the semiconductor structure, the band offset structure comprising a material having a band gap that is different from an additional band gap of the semiconductor structure; and
a conductive plug structure in direct physical contact with each of the semiconductor structure, the conductive line structure, and the band offset structure.

9. The microelectronic device of claim 8, wherein the conductive plug structure is vertically interposed between the conductive line structure and the band offset structure.

10. The microelectronic device of claim 8, wherein an upper boundary of the band offset structure is vertically located below upper boundaries of the semiconductor structure and the conductive plug structure.

11.
The microelectronic device of claim 8, further comprising a dielectric material laterally surrounded by the semiconductor structure, horizontal centers of the band offset structure and the conductive plug structure being aligned with a horizontal center of the dielectric material.

12. The microelectronic device of claim 11, wherein the band offset structure is vertically interposed between the conductive plug structure and the dielectric material.

13. The microelectronic device of claim 11, wherein an upper boundary of the band offset structure is coplanar with a lowermost surface of the conductive plug structure, and the band offset structure is in direct contact with an upper surface of the dielectric material.

14. An electronic system comprising:
a processor; and
a microelectronic device operatively coupled to the processor, the microelectronic device comprising:
structures extending vertically through a stack of vertically alternating conductive and dielectric materials, each of the structures comprising:
a semiconductor channel material;
a plug material extending directly between and in physical contact with inner sidewalls of the semiconductor channel material; and
a band offset material directly beneath a portion of the plug material, the band offset material extending directly between and in physical contact with the inner sidewalls of the semiconductor channel material; and
a conductive line over the structures and in physical contact with an upper surface of the plug material.

15. The electronic system of claim 14, wherein a first band gap of the band offset material is relatively smaller than a second band gap of the plug material and a third band gap of the semiconductor channel material.

16. The electronic system of claim 15, wherein the second band gap of the plug material is equal to the third band gap of the semiconductor channel material.

17. The electronic system of claim 14, wherein the plug material comprises polysilicon, and the band offset material comprises one or more of silicon germanium, germanium, and indium gallium arsenide.

18. The electronic system of claim 14, wherein a height of a vertical interface between the band offset material and the semiconductor channel material is greater than a height of an additional vertical interface between the plug material and the semiconductor channel material.

19. The electronic system of claim 14, wherein the band offset material at least partially vertically overlaps an uppermost conductive gate material within the stack of vertically alternating conductive and dielectric materials, and the plug material is located at a vertical height above the uppermost conductive gate material.

20. The electronic system of claim 14, wherein the microelectronic device comprises a 3D NAND flash memory device including at least one memory cell array, at least some of the conductive materials of the stack being configured as access lines for individual memory cells of the at least one memory cell array. |
Microelectronic devices and electronic systems

Divisional Application Information

This application is a divisional application of the invention patent application with a filing date of September 6, 2019, application number 201910843545.1, and the title "Semiconductor devices, electronic systems and related methods".

Technical Field

Embodiments of the present invention relate to the field of semiconductor device design and fabrication. More particularly, embodiments of the present invention relate to semiconductor devices including vertical memory cell strings, and to related electronic systems and methods.

Background

A continuing goal of the semiconductor industry has been to increase the memory density (e.g., the number of memory cells per memory die) of memory devices, such as non-volatile memory devices (e.g., NAND flash memory devices). One method of increasing memory density in non-volatile memory devices is to utilize an architecture that includes an array of vertical memory cell strings. An example of a conventional vertical memory cell string includes a semiconductive material (e.g., channel material) extending vertically through an opening in a stack of alternating conductive gate materials (e.g., word lines, control gates, access lines) and dielectric materials, and an oxide-nitride-oxide (ONO) structure positioned laterally between the stack and the semiconductive material. Each memory cell of the vertical string includes one of the conductive gate materials and the portions of the ONO structure and the semiconductive material laterally adjacent to that conductive gate material. This configuration permits a larger number of memory cells to be positioned in a given unit of die surface area by building the memory cell array upward (e.g., vertically) on the die, compared to a structure having a conventional planar (e.g., two-dimensional) arrangement of cells.

As technology advances in 3D memory devices, arrays of vertical memory cell strings are being designed with an increased number of alternating conductive gate materials and dielectric materials to increase the number of memory cell access devices (e.g., transistors). This increase results in a stack with a larger height, and in larger vertical memory cell strings extending through the stack with the larger height. The semiconductor material (e.g., channel material) in such a larger vertical memory cell string may need to carry an increased current, the so-called "string current", to effectively operate all the memory cells in the vertical string. In addition, the erase function in such strings depends primarily on band-to-band tunneling ("BTBT") at the select gate drain ("SGD") (e.g., the top select gate near the data line), and conventional polysilicon (also referred to as "poly") material located between the conductive connection (e.g., bit line) and the channel material may result in gate induced drain leakage ("GIDL") current that is insufficient for the erase function on such a long vertical memory string. Therefore, polysilicon or silicon nitride channel materials alone may not be sufficient to generate sufficient GIDL current within a reasonable time frame in a stack with a larger height. Band offset materials, such as low band gap ("LBG") materials located between the conductive connection and the channel material, can achieve increased GIDL current due to enhanced BTBT generation to facilitate erase operations.
However, LBG materials often have detrimental effects (e.g., increased defects and traps) that result in reduced string current. Therefore, there is a need for new semiconductor devices, such as 3D non-volatile memory devices (e.g., 3D NAND flash memory devices), that exhibit improved GIDL current to facilitate erase operations of the corresponding vertical memory strings without degrading string current, as well as for electronic systems including such devices.

Summary of the Invention

In one embodiment, a semiconductor device includes: a stack comprising an alternating sequence of dielectric structures and conductive structures; and a channel structure within an opening extending vertically through the stack and comprising a first semiconductor material having a first band gap. The semiconductor device also includes: a conductive plug structure within the opening and in direct contact with the channel structure; and a band offset structure within the opening and in direct physical contact with the channel structure and the conductive plug structure. The band offset structure comprises a second semiconductor material having a second band gap different from the first band gap. The semiconductor device also includes a conductive line structure electrically coupled to the conductive plug structure.

In another embodiment, a method of forming a semiconductor device includes forming an opening extending vertically through a stack of alternating conductive gate materials and dielectric materials. The method includes forming a channel material within the opening. The method also includes forming a band offset material within the opening and adjacent to the channel material. The method also includes forming a plug material within the opening and electrically coupled to the channel material. The band offset material is electrically coupled to the channel material and the plug material, and the band gap of the band offset material is different from the band gap of each of the channel material and the plug material.

In yet another embodiment, an electronic system includes a processor and a semiconductor device electrically coupled to the processor. The semiconductor device includes vertical structures within an opening extending vertically through a stack of vertically alternating conductive materials and dielectric materials. Each of the vertical structures includes a channel material, a plug material adjacent to the channel material, and a band offset material in direct contact with each of the channel material and the plug material. The band gap of the band offset material is different from the band gap of each of the channel material and the plug material. The semiconductor device also includes a data line above the opening extending vertically through the stack, and an uppermost conductive gate material laterally adjacent to the opening. The plug material at least partially vertically overlaps the uppermost conductive gate material.

BRIEF DESCRIPTION OF THE DRAWINGS

FIGS. 1A-1E are simplified partial cross-sectional views illustrating an embodiment of a method of forming a semiconductor device structure, in accordance with an embodiment of the present invention.

FIG. 1F is a simplified partial cross-sectional side view of a portion of a vertical string of memory cells of the semiconductor device structure of FIG.
1E.

FIGS. 2A-2E are simplified partial cross-sectional views illustrating embodiments of methods of forming semiconductor device structures, in accordance with additional embodiments of the present invention.

FIG. 3 is a partial cross-sectional perspective view of a vertical memory device including a semiconductor device structure having a stair-type structure, in accordance with an embodiment of the present invention.

FIG. 4 is a schematic block diagram of an electronic system, in accordance with an embodiment of the present invention.

DETAILED DESCRIPTION

Semiconductor device structures, and related semiconductor devices and electronic systems, are described. In some embodiments, a semiconductor device includes: a channel region including a first semiconductor material having a first band gap; a plug region directly contacting the channel region; a conductive connector electrically coupled to the plug region; and a band offset region including a second semiconductor material having a second band gap different from the first band gap. The band offset region may be in direct contact with each of the channel region and the plug region.

The following description provides specific details, such as material compositions and processing conditions, in order to provide a thorough description of embodiments of the present invention. However, a person of ordinary skill in the art will understand that embodiments of the present invention may be practiced without these specific details. Indeed, embodiments of the present invention may be practiced in conjunction with conventional semiconductor fabrication techniques employed in the industry. In addition, the description provided below does not form a complete process flow for manufacturing a semiconductor device (e.g., a memory device), and the semiconductor device structures described below do not form a complete semiconductor device. Only those process acts and structures necessary to understand the embodiments of the present disclosure are described in detail below. Additional acts to form a complete semiconductor device from the semiconductor device structures may be performed by conventional fabrication techniques.

The drawings presented herein are for illustrative purposes only and are not intended to be actual views of any particular material, component, structure, device, or system. Variations from the shapes depicted in the drawings, for example as a result of manufacturing techniques and/or tolerances, are to be expected. Therefore, the embodiments described herein should not be construed as being limited to the particular shapes or regions as illustrated, but rather include deviations in shape that result, for example, from manufacturing. For example, a region illustrated or described as box-shaped may have rough and/or nonlinear features, and a region illustrated or described as round may include some rough and/or linear features. Moreover, angles that are illustrated as sharp may be rounded, and vice versa. Thus, the regions illustrated in the figures are schematic in nature, and their shapes are not intended to illustrate the precise shape of a region and do not limit the scope of the present claims. The drawings are not necessarily drawn to scale. Additionally, elements common between figures may retain the same numerical designation.

As used herein, the terms "vertical," "longitudinal," "horizontal," and "lateral" are in reference to a major plane of a structure and are not necessarily defined by the Earth's gravitational field.
A "horizontal" or "lateral" direction is a direction generally parallel to a principal plane of a structure, while a "vertical" or "longitudinal" direction is a direction generally perpendicular to a principal plane of a structure. A principal plane of a structure is defined by a surface of the structure having a relatively large area compared to other surfaces of the structure.As used herein, spatially relative terms such as "below," "beneath," "lower," "bottom," "above," "upper," "top," "front," "rear," "left," "right," and similar may be used to conveniently describe the relationship of one element or feature to another, as illustrated in the figures. Unless otherwise specified, spatially relative terms are intended to encompass different orientations of materials in addition to the orientation depicted in the drawings. For example, if the materials in the drawings are reversed, an element described as being "below," "below," "under," or "on the bottom" of other elements or features would be oriented "above," or "on the top" of the other elements or features. Thus, the term "below" may encompass both above and below orientations, depending on the context in which the term is used, as will be apparent to one of ordinary skill in the art. The material may be oriented in other ways (e.g., rotated 90 degrees, inverted, flipped), and the spatially relative descriptors used herein may be interpreted accordingly.As used herein, the terms "forming" and "formed" mean and include any method of producing, building, depositing and/or patterning a material. For example, the forming may be achieved by atomic layer deposition (ALD), chemical vapor deposition (CVD), physical vapor deposition (PVD), sputtering, co-sputtering, spin coating, diffusion, deposition, growth, or any other technique known in the semiconductor manufacturing art. The material may be formed and/or patterned into various shapes and configurations using known techniques, such as isotropic etching, anisotropic etching, chemical mechanical polishing (CMP), stripping, etc. Depending on the specific material to be formed, the technique for forming the material may be selected by one of ordinary skill in the art.As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.As used herein, "and/or" includes any and all combinations of one or more of the associated listed items.As used herein, the term "configured" refers to the size, shape, material composition, orientation, and arrangement of one or more of at least one structure and at least one device to facilitate operation of the one or more of the structures and the devices in a predetermined manner.As used herein, the phrase "coupled to" refers to structures that are operatively connected to each other, such as through a direct resistive connection or through an indirect connection (eg, electrically connected via another structure).As used herein, the term "substantially" with respect to a given parameter, characteristic, or condition means and encompasses the degree to which a given parameter, characteristic, or condition is met with a degree of variance (e.g., within an acceptable tolerance) as would be understood by one of ordinary skill in the art. 
By way of example, depending on the particular parameter, property, or condition that is substantially met, the parameter, property, or condition may be at least 90.0% met, at least 95.0% met, at least 99.0% met, at least 99.9% met, or even 100.0% met.

As used herein, "about" or "approximately" in reference to a numerical value for a particular parameter is inclusive of the numerical value and a degree of variance from the numerical value that one of ordinary skill in the art would understand is within acceptable tolerances for the particular parameter. For example, "about" or "approximately" in reference to a numerical value may include additional numerical values within a range of from 90.0% to 110.0% of the numerical value, such as within a range of from 95.0% to 105.0% of the numerical value, within a range of from 97.5% to 102.5% of the numerical value, within a range of from 99.0% to 101.0% of the numerical value, within a range of from 99.5% to 100.5% of the numerical value, or within a range of from 99.9% to 100.1% of the numerical value.

As used herein, the term "substrate" means and includes a base material or structure upon which additional materials are formed. The substrate may be a semiconductor substrate, a base semiconductor layer on a supporting structure, a metal electrode, or a semiconductor substrate having one or more layers, structures, or regions formed thereon. The substrate may be a conventional silicon substrate or other bulk substrate comprising a layer of semiconductive material. As used herein, the term "bulk substrate" means and includes not only silicon wafers, but also silicon-on-insulator (SOI) substrates, such as silicon-on-sapphire (SOS) substrates and silicon-on-glass (SOG) substrates, epitaxial layers of silicon on a base semiconductor foundation, and other semiconductor or optoelectronic materials, such as silicon germanium, germanium, gallium arsenide, gallium nitride, and indium phosphide. The substrate may be doped or undoped. By way of non-limiting example, a substrate may comprise at least one of silicon, silicon dioxide, silicon with a native oxide, silicon nitride, carbon-containing silicon nitride, glass, a semiconductor, a metal oxide, a metal, titanium nitride, carbon-containing titanium nitride, tantalum, tantalum nitride, carbon-containing tantalum nitride, niobium, niobium nitride, carbon-containing niobium nitride, molybdenum, molybdenum nitride, carbon-containing molybdenum nitride, tungsten, tungsten nitride, carbon-containing tungsten nitride, copper, cobalt, nickel, iron, aluminum, and a noble metal.

FIGS. 1A-1E illustrate a method of forming a semiconductor device structure including an opening for a vertical string of memory cells, at various stages of the method, in accordance with an embodiment of the present invention. For simplicity, the formation of a single opening is illustrated, but one skilled in the art will appreciate that the method may include simultaneously forming multiple (e.g., more than one, an array of) such openings.

Referring to FIG. 1A, a semiconductor device structure 100 includes a stack 104 of alternating levels of a conductive gate material 106 and a dielectric material 108 overlying a substrate 102 (e.g., a conductive line, such as a source line). An opening 110 may extend vertically through the stack 104. An outer oxide liner 112 may be formed in the opening 110 laterally adjacent to sidewalls of the stack 104. A nitride liner 114 may be formed inwardly adjacent to the outer oxide liner 112 in the opening 110.
An inner oxide liner 116 may be formed inwardly adjacent to the nitride liner 114 in the opening 110. A channel material 118 may be formed inwardly adjacent to the inner oxide liner 116 in the opening 110. In some embodiments, the channel material 118 may comprise a liner having a thickness of less than about 25 nm, such as within a range of from about 5 nm to about 20 nm. The channel material 118 may or may not exhibit a substantially homogeneous distribution of its elements. A bottom plug material 122 (e.g., a source contact plug material) may be formed within the opening 110 between the substrate 102 and the channel material 118. The channel material 118 may be formed along inner sidewalls of the inner oxide liner 116 and over an exposed upper surface of the bottom plug material 122, as illustrated in FIG. 1A. The bottom plug material 122 may extend upward from the substrate 102 to at least partially vertically overlap a lowermost conductive gate material 106A. An uppermost conductive gate material 106B may be located distal from the lowermost conductive gate material 106A and proximate to an upper surface of the stack 104. A central dielectric material 130 may be formed within the opening 110 adjacent to the channel material 118. An upper surface of the central dielectric material 130 may be lower than a lower surface of the uppermost conductive gate material 106B. The central dielectric material 130 may be or include, for example, an oxide material or an air-filled void.

The alternating conductive gate materials 106 and dielectric materials 108 of the stack 104 may each be individually formed using conventional material processes, which are not described in detail herein. As a non-limiting example, the conductive gate materials 106 and the dielectric materials 108 may each be individually formed by one or more conventional deposition processes (e.g., a PVD process, a CVD process, an ALD process, a spin-coating process) to form the stack 104. As another non-limiting example, an initial stack including a vertically alternating sequence of sacrificial dielectric material and dielectric material may be formed by a conventional process (e.g., a conventional deposition process, such as one or more of PVD, CVD, and ALD), and portions of the sacrificial dielectric material may then be removed and replaced with the conductive gate material 106 to form the stack 104 by a so-called "replacement gate" process. To remove the sacrificial dielectric material, one or more narrow slots can be formed through the initial stack to laterally expose the sacrificial dielectric material, an isotropic etch can be performed to selectively remove portions of the sacrificial dielectric material and form gaps (e.g., undercuts) between the dielectric materials 108, and a conductive material (e.g., one or more of titanium, titanium nitride, tantalum, tantalum nitride, tungsten, or tungsten nitride) can then be deposited within the gaps to form the conductive gate material 106.

As shown in FIG. 1A, an individual (e.g., single, one) conductive gate material 106 of the stack 104, together with the portions of the outer oxide liner 112, the nitride liner 114, the inner oxide liner 116, and the channel material 118 laterally adjacent to that individual conductive gate material 106, can form an individual vertical memory cell 120 having a so-called metal-oxide-nitride-oxide-semiconductor ("MONOS") configuration.
The vertical stack of multiple (e.g., more than one) vertical memory cells 120 within the opening 110 can, in turn, form a vertical string (e.g., a vertical series connection) of the memory cells 120. The channel material 118 can be undoped, or can include a p-type dopant or an n-type dopant.

Referring next to FIG. 1B, a band offset material 125 may be formed (e.g., conformally formed) over exposed surfaces (e.g., exposed upper surfaces, exposed side surfaces) of the semiconductor device structure 100 inside and outside of the opening 110. The band offset material 125 may be in direct physical contact with the channel material 118 and electrically coupled thereto. In some embodiments, the band offset material 125 may be in direct physical contact with the channel material 118 along a single interface, such as along a vertical interface therebetween. By way of example and not limitation, the band offset material 125 may include one or more of silicon germanium (which has a room-temperature band gap of approximately 0.85 eV), germanium (which has a room-temperature band gap of approximately 0.66 eV), and indium gallium arsenide (which has a room-temperature band gap of approximately 0.7 eV). In some embodiments, the band offset material 125 may include a p-type dopant. In other embodiments, the band offset material 125 may be undoped. As described in further detail below, the band gap of the band offset material 125 may be different from (e.g., lower than) the band gap of the channel material 118. The band offset material 125 may be formed by a conformal deposition process, such as CVD or ALD. The band offset material 125 may alternatively be epitaxially grown within the opening 110.

Referring next to FIG. 1C, portions of the band offset material 125 outside of the opening 110 and over at least a central portion of an upper surface of the central dielectric material 130 within the opening 110 may be selectively removed, for example, by etching. An uppermost surface of the remaining portion of the band offset material 125 within the opening 110 may be below an uppermost surface of the stack 104 and may be above an upper surface of the uppermost conductive gate material 106B. Additionally, the remaining portion of the band offset material 125 may be positioned such that a lower surface thereof extends below a lower surface of the uppermost conductive gate material 106B. The band offset material 125 may at least partially (e.g., substantially) vertically overlap the uppermost conductive gate material 106B.

Next, referring to FIG. 1D, a top plug material 124 (e.g., a drain contact plug material) may be formed within, and may fill a remainder of, the opening 110. The top plug material 124 may be electrically coupled to the channel material 118. The top plug material 124 may include a semiconductor material, such as one or more of polysilicon, silicon germanium, and germanium. The top plug material 124 may be conductively doped. By way of non-limiting example, the top plug material 124 may include a first concentration of an n-type dopant, and the channel material 118 may include a second concentration of an n-type dopant that is relatively lower than the first concentration. The process for forming the top plug material 124 may be, for example, CVD or ALD. The band gap of the top plug material 124 may be different from (e.g., greater than) the band gap of the band offset material 125.
By way of example and not limitation, the top plug material 124 and/or the channel material 118 may exhibit a room-temperature band gap of at least about 1.40 eV, and the band offset material 125 may exhibit a room-temperature band gap of less than about 1.10 eV. Forming the band offset material 125 from a material exhibiting a band gap of less than about 1.10 eV may increase gate induced drain leakage ("GIDL") current in a vertical string erase operation of a resulting device (e.g., a memory device), compared to providing only the top plug material 124 exhibiting a larger band gap. In other embodiments, the band gap of the top plug material 124 may be smaller than the band gap of the band offset material 125. In such embodiments, for example, the top plug material 124 may include a germanium-containing material, while the band offset material 125 includes one or more larger band gap materials, such as polysilicon. The band gap of the top plug material 124 may be similar to (e.g., substantially the same as) the band gap of the channel material 118.

In addition to within the opening 110, the top plug material 124 may also initially be formed above the upper surface of the stack 104. The portion of the top plug material 124 extending vertically beyond the plane of the upper surface of the stack 104 may subsequently be removed, for example, by CMP or etching. The remaining portion of the top plug material 124 may be in direct physical contact with, and electrically coupled to, each of the band offset material 125 and the channel material 118. For example, a portion of the top plug material 124 may vertically overlie and be in direct physical contact with upper and side surfaces of the remaining portion of the band offset material 125, while being inwardly adjacent to and in direct physical contact with sidewalls of the channel material 118. In such embodiments, the top plug material 124 may include a lower portion and an upper portion having a larger lateral extent than the lower portion, while the band offset material 125 may be laterally adjacent to (e.g., laterally surround) the lower portion of the top plug material 124. Additionally, a side surface of the upper portion of the top plug material 124 can be in direct physical contact with the channel material 118, and a bottom surface of the lower portion of the top plug material 124 can be in direct physical contact with the central dielectric material 130. In other embodiments, the remainder of the top plug material 124 vertically overlies a portion of the channel material 118.

The uppermost conductive gate material 106B may have a vertical thickness TL that is greater than the corresponding thicknesses of the other conductive gate materials 106 of the stack 104. The relatively large vertical thickness TL of the uppermost conductive gate material 106B may provide a relatively large margin of error when forming the combined extent of the band offset material 125 and the top plug material 124 to at least partially vertically overlap the uppermost conductive gate material 106B.
By way of example and not limitation, the vertical thickness TL of the uppermost conductive gate material 106B may be greater than or equal to approximately 45 nm, while the corresponding vertical thicknesses of the other conductive gate materials 106 may be approximately 35 nm.

Referring next to FIG. 1E, a capping dielectric material 128 may be formed on or over the upper surfaces of the stack 104, the outer oxide liner 112, the nitride liner 114, and the inner oxide liner 116; and a data line 126 (e.g., bit line, digit line) may be formed on or over an uppermost surface of the top plug material 124. The capping dielectric material 128 may include one or more dielectric materials, such as one or more of silicon oxide (e.g., silicon dioxide) and silicon nitride. The data line 126 may provide electrical access to the vertical string of memory cells 120 without being in direct physical contact with the band offset material 125, and without being electrically coupled to the band offset material 125.

The capping dielectric material 128 and the data line 126 can be formed using conventional processes (e.g., conventional deposition processes, conventional material removal processes) and conventional processing equipment, which are not described in detail herein. For example, the capping dielectric material 128 can be deposited (e.g., by one or more of CVD, PVD, ALD, and spin coating) over the upper surfaces of the stack 104, the outer oxide liner 112, the nitride liner 114, and the inner oxide liner 116, and over the upper surface of the top plug material 124; a portion of the capping dielectric material 128 overlying the top plug material 124 can be removed (e.g., by conventional photolithographic patterning and etching processes) to form a plug opening overlying the top plug material 124; a conductive material (e.g., tungsten, tungsten nitride, titanium, titanium nitride) can be deposited into the plug opening; and a portion of the conductive material can be removed (e.g., by a CMP process) to form the data line 126. The data line 126 can extend laterally perpendicular to the conductive gate materials 106 of the stack 104.

With continued reference to FIG. 1E, the band offset material 125 and the top plug material 124 may be located between the channel material 118 and the data line 126 (e.g., a bit line). One or more of the top plug material 124 and the band offset material 125 may at least partially vertically overlap the uppermost conductive gate material 106B. One or more (e.g., from one to five) of the lowermost conductive gate materials 106, 106A may be configured as a select gate source ("SGS"). One or more (e.g., from one to five) of the uppermost conductive gate materials 106, 106B may be configured as a select gate drain ("SGD"). The conductive gate materials 106 between the select gate source and the select gate drain may be configured as access lines (e.g., word lines). There may be any suitable number of access lines in the stack 104, such as about 32, about 64, about 72, about 96, or about 128. As shown in FIG. 1E, the opening 110 may be a linear elongated opening (e.g., an orifice, a through hole) exhibiting one end at the uppermost surface of the stack 104 and another end at the lowermost surface of the stack 104. In additional embodiments, the opening 110 may exhibit a so-called "U-shaped" configuration having a pair of ends at the uppermost surface of the stack 104.

FIG. 1F illustrates a simplified partial cross-sectional side view of a portion of the semiconductor device structure 100 shown in FIG. 1E. As shown in FIG.
1F, the data line 126 and the top plug material 124 may be coupled to each other along an interface 144. Additionally, the top plug material 124 and the channel material 118 may be coupled to each other along an interface 146. The interface 146 may be a so-called "homojunction," wherein the materials of the top plug material 124 and the channel material 118 are substantially similar (e.g., identical) and thus exhibit similar (e.g., equal) band gaps on each side of the interface 146. By way of example and not limitation, each of the top plug material 124 and the channel material 118 may include a polycrystalline silicon (also referred to as "polysilicon") material, the two exhibiting substantially equal band gaps. Alternatively, the channel material 118 may include a nitride (e.g., silicon nitride) material. Additionally, the band offset material 125 and the channel material 118 may be coupled to each other along an interface 142. The interface 142 may be a so-called "heterojunction," wherein the materials of the band offset material 125 and the channel material 118 are different and/or exhibit one or more of different dopant concentrations and different dopant distributions. Thus, the band offset material 125 and the channel material 118 may have band gaps that are different (e.g., unequal) from each other.

During operation, current may flow between the material of the top plug material 124 and the channel material 118 (e.g., polysilicon material) while remaining proximate to, but outside of, the region of material containing the band offset material 125 (e.g., silicon germanium or germanium material), because that region exhibits a different (e.g., smaller) band gap. One skilled in the art will appreciate that providing a current path across the homojunction at the interface 146, while not providing a current path through the heterojunction at the interface 142, may be based on the regions adjoining the interface 142 having one or more of different materials, different dopant species, different dopant concentrations, and different dopant distributions. Providing the band offset material 125 proximate to, but outside of, the current path provides increased surface area along the interface 142, allowing a larger cross-sectional area to generate GIDL current. In other words, the orientation of the interface 142 (e.g., a vertical orientation) provides an extended region in which the GIDL-generated holes for bulk erase of the memory cells will be generated, as opposed to a lateral orientation (e.g., a horizontal orientation) of such a region within the channel material 118, which would be limited by the width of the channel material 118.

During operation of the semiconductor device structure 100, current may be applied to the data line 126, thereby establishing a flow of current (e.g., string current) through at least a portion of the top plug material 124 and to the channel material 118, as indicated by the dashed directional arrow 150 in FIG. 1F. In some embodiments, current does not flow through the band offset material 125, due at least in part to the band offset material 125 having a different (e.g., smaller) band gap. When current flows from the data line 126 through the top plug material 124 to the channel material 118, a generation region 140 may be established along the interface 142 located between the band offset material 125 and the channel material 118.
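Parenthetically, the band gap dependence of BTBT generation that underlies this generation region can be made quantitative with the standard Kane model of band-to-band tunneling. The expression below is textbook semiconductor physics rather than part of this disclosure; A and B are material-dependent constants, E is the local electric field, and E_g is the band gap:

    $G_{\text{BTBT}} = A\,\frac{E^{2}}{\sqrt{E_g}}\,\exp\!\left(-\frac{B\,E_g^{3/2}}{E}\right)$

Because E_g enters the exponential with a 3/2 power, reducing the band gap from about 1.1 eV (silicon) toward about 0.66 eV (germanium) can increase the generation rate by orders of magnitude at a given field, which is consistent with the enhanced GIDL attributed here to the band offset material 125.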
When current flows proximate to the interface 142 while bypassing the band offset material 125 during a GIDL mode, band-to-band tunneling ("BTBT") may be generated (e.g., enhanced) in the generation region 140 located along the interface 142. Because the current flows proximate to, but outside of, the band offset material 125, the flow of current is not reduced (e.g., attenuated) during a sensing operation. In other words, current may flow from the data line 126 through the top plug material 124 without flowing through the band offset material 125. Thus, due at least in part to the band offset material 125 having a band gap different from that of each of the top plug material 124 and the channel material 118, BTBT may be established or increased without reducing the current supplied to the vertical string of memory cells 120 (FIG. 1E). The increased GIDL current allows a more reliable flow of charge into the channel material 118 to bias the body regions of the individual memory cells 120. A reliable bias voltage is desirable in several memory operations in which a large voltage differential is used, such as an erase operation.

During a programming operation, a boost operation may be used to bias the channel material 118 of unselected strings to inhibit the charge storage structures of memory cells 120 (FIG. 1E) that are not selected for the programming operation from being programmed. In the boost operation, a voltage may be applied to the channel material 118 at least in part through capacitive coupling of the channel material 118 with an applied voltage on the respective gates of the individual memory cells 120. For example, a voltage (e.g., about 10 volts) may be applied to a gate, and some amount of the bias voltage (e.g., about 7 volts) may be transferred to the channel material 118 by the coupling. In some embodiments, the applied voltage may be a negative voltage, such as a negative voltage applied to the uppermost conductive gate material 106B. Using the boost operation, charge may be maintained within the channel material 118; accordingly, a low GIDL current is desirable during the boost operation. Thus, using materials having different band gaps as described above may provide reliable biasing of the channel material 118 during an erase operation, while also providing reliable charge maintenance in the channel material 118 during a boost operation.

Those skilled in the art will appreciate that the features and feature configurations described above with respect to FIGS. 1A to 1F can readily be adapted to the design needs of different semiconductor devices (e.g., different memory devices) in accordance with additional embodiments of the present invention. By way of non-limiting example, FIGS. 2A to 2E illustrate simplified partial cross-sectional views of a method of forming a semiconductor device structure having a configuration different from that of the semiconductor device structure 100, in accordance with additional embodiments of the present invention. Throughout the remaining description and drawings, functionally similar features (e.g., structures, devices) are referenced with similar reference numerals incremented by 100. To avoid repetition, not all features shown in the remaining figures (including FIGS. 2A to 2E) are described in detail herein.
Rather, unless otherwise described below, a feature designated by a reference numeral that is a 100 increment of the reference numeral of a previously described feature (whether the previously described feature is first described before or first described after the present paragraph) is to be understood as being substantially similar to the previously described feature.

FIG. 2A illustrates a simplified partial cross-sectional view of a semiconductor device structure 200. At the processing stage depicted in FIG. 2A, the semiconductor device structure 200 may be substantially similar to the semiconductor device structure 100 at the processing stage depicted in FIG. 1A.

Referring to FIG. 2B, a band offset material 225 may be formed (e.g., non-conformally formed) over exposed surfaces of the semiconductor device structure 200 inside and outside of an opening 210 extending vertically through a stack 204. The band offset material 225 may be in direct physical contact with a channel material 218. The band offset material 225 may include substantially the same materials, properties, and band gaps as the band offset material 125 described above with reference to FIG. 1B. In some embodiments, the band offset material 225 may be epitaxially grown within the opening 210. The band offset material 225 may substantially completely fill a remainder of the opening 210 (e.g., a cylindrical opening) so as to extend completely laterally within the opening 210 between inner sidewalls of the channel material 218.

Referring to FIG. 2C, the band offset material 225 may be vertically recessed. Portions of the band offset material 225 on the uppermost surfaces of the stack 204, an outer oxide liner 212, a nitride liner 214, and an inner oxide liner 216 may be removed, and the band offset material 225 may be vertically recessed within the opening 210. The remaining portion of the band offset material 225 within the opening 210 may be positioned such that a bottom surface thereof extends below a bottom surface of an uppermost conductive gate material 206B. Additionally, an upper surface of the remaining portion of the band offset material 225 within the opening 210 may extend above an upper surface of the uppermost conductive gate material 206B. In other words, the band offset material 225 may at least partially (e.g., substantially) vertically overlap the uppermost conductive gate material 206B.

Referring to FIG. 2D, a top plug material 224 may be formed within the opening 210 over the band offset material 225 and may be electrically coupled to the channel material 218. The top plug material 224 may include substantially the same materials and properties (e.g., band gap) as the top plug material 124 described above with reference to FIG. 1D, except that the top plug material 224 may not extend to an upper surface of a central dielectric material 230. Rather, the band offset material 225 may extend completely over the upper surface of the central dielectric material 230, and the top plug material 224 may be formed over the exposed upper surface of the band offset material 225. The top plug material 224 may be formed using processes (e.g., deposition and material removal processes) substantially similar to those previously described with respect to the formation of the top plug material 124 (FIG.
1D).

The uppermost conductive gate material 206B may have a vertical thickness TL that is greater than the corresponding thicknesses of the other conductive gate materials 206 of the stack 204, to provide a larger margin of error when forming the top plug material 224 and/or the extent of a portion of the top plug material 224 to at least partially vertically overlap the uppermost conductive gate material 206B. By way of example and not limitation, the vertical thickness TL of the uppermost conductive gate material 206B may be greater than or equal to approximately 45 nm, while the corresponding vertical thicknesses of the other conductive gate materials 206 may be approximately 35 nm.

Referring next to FIG. 2E, a capping dielectric material 228 may be formed on or over the upper surfaces of the stack 204, the outer oxide liner 212, the nitride liner 214, and the inner oxide liner 216; and a data line 226 (e.g., bit line, digit line) may be formed on or over an uppermost surface of the top plug material 224. The capping dielectric material 228 and the data line 226 may be substantially similar to the capping dielectric material 128 and the data line 126 previously described with reference to FIG. 1E, and may be formed in substantially the same manner.

FIG. 3 illustrates a partial cross-sectional perspective view of a portion of a semiconductor device 300 (e.g., a vertical memory device, such as a 3D NAND flash memory device) including a semiconductor device structure 302 that includes levels 304 of conductive and insulating structures defining a stepped structure 306, and contact structures 308 electrically connected to steps of the stepped structure 306. Although a vertical memory device such as a 3D NAND flash memory device is shown by way of example, one skilled in the art will appreciate that increasing GIDL current through enhanced BTBT generation, by utilizing a band offset material 125 in combination with a top plug material 124 and a channel material 118 (FIG. 1F), is not dependent on a particular storage medium, and that the band offset material 125 may be used in any such memory device including similar materials and processes. In the present example, the semiconductor device structure 302 (e.g., the levels 304 of conductive and insulating structures, the stepped structure 306, and the contact structures 308) can be substantially similar to the semiconductor device structures 100, 200 (e.g., the levels including the conductive gate materials 106, 206 and the dielectric materials 108, 208) previously described with respect to FIGS. 1A-1E and 2A-2E, respectively, and can be formed in substantially the same manner. The semiconductor device 300 can further include vertical strings 312 of memory cells 320 coupled in series with one another, data lines 326 (e.g., bit lines), a source level 318, access lines 310, first select gates 314 (e.g., upper select gates, drain select gates (SGDs)), select lines 322, a second select gate 324 (e.g., a lower select gate, a source select gate (SGS)), and additional contact structures 316.
The vertical strings 312 of memory cells 320 extend vertically and orthogonally to the conductive lines and levels (e.g., the data lines 326, the source level 318, the levels 304 of the semiconductor device structure 302, the access lines 310, the first select gates 314, the select lines 322, and the second select gate 324), and the contact structures 308 and the additional contact structures 316 can electrically couple components to one another as shown (e.g., the select lines 322 to the first select gates 314, and the access lines 310 to the levels 304 of the semiconductor device structure 302). The semiconductor device 300 can also include a control unit 328, which can include one or more of string driver circuitry, pass gates, circuitry for selecting gates, circuitry for selecting conductive lines (e.g., the data lines 326, the access lines 310), circuitry for amplifying signals, and circuitry for sensing signals. The control unit 328 may be electrically coupled to the data lines 326, the source level 318, the access lines 310, the first select gates 314, and the second select gate 324, for example.

Semiconductor devices including device structures (e.g., the semiconductor device structures 100, 200) in accordance with embodiments of the present invention may be used in embodiments of electronic systems of the present invention. For example, FIG. 4 is a block diagram of an illustrative electronic system 400 according to embodiments of the present invention. The electronic system 400 may include, for example, a computer or computer hardware component, a server or other networking hardware component, a cellular telephone, a digital camera, a personal digital assistant (PDA), a portable media (e.g., music) player, a tablet computer with Wi-Fi or cellular capabilities such as an iPad® tablet computer, an e-book, a navigation device, etc. The electronic system 400 includes at least one memory device 420. The memory device 420 may include, for example, an embodiment of one or more of the semiconductor device structures (e.g., the semiconductor device structures 100, 200) previously described herein. The electronic system 400 may further include at least one electronic signal processor device 410 (often referred to as a "microprocessor"). The electronic signal processor device 410 may, optionally, include an embodiment of a semiconductor device structure (e.g., the semiconductor device structures 100, 200) previously described herein. The electronic system 400 may further include one or more input devices 430 for inputting information into the electronic system 400 by a user, such as, for example, a mouse or other pointing device, a keyboard, a touchpad, a button, or a control panel. The electronic system 400 may further include one or more output devices 440 for outputting information (e.g., visual or audio output) to a user, such as, for example, a monitor, a display, a printer, an audio output jack, a speaker, etc. In some embodiments, the input device 430 and the output device 440 may include a single touchscreen device that can be used both to input information to the electronic system 400 and to output visual information to a user. The input device 430 and the output device 440 may communicate electrically with one or more of the memory device 420 and the electronic signal processor device 410.

The band offset materials disclosed herein can provide enhanced current transport in 3D memory arrays, which can be suitable for use with devices having an increased number of stacked transistors.
The different (e.g., smaller) band gap of the disclosed band offset materials can result in increased GIDL current values for improved string erase operations compared to using only conventional bit line plug materials such as polysilicon. Additionally, the band offset materials are applicable to all 3D memory architectures, including those with select gate source and select gate drain transistors. Although the present invention is susceptible to various modifications and alternative forms, specific embodiments have been shown in the drawings by way of example and have been described in detail herein. However, the present invention is not limited to the specific forms disclosed; rather, it covers all modifications, equivalents, and alternatives falling within the scope of the appended claims and their legal equivalents.
A method and computing device for enabling selective enforcement of complex task dependencies. The method allows a computing device to determine whether to enforce task dependencies based on programmer or end-user goals concerning efficiency and quality of the runtime experience. A computing device may be configured to schedule execution of a first task, identify an operation (e.g., a "+>" operation) of the first task as being selectively dependent on a second task finishing execution, and determine whether to enforce the dependency of the first task on the second task based on an evaluation of one or more enforcement conditions. If the enforcement conditions are not met, the computing device may enforce the dependency, execute the second task, and withhold execution of the first task until execution of the second task has finished. If the enforcement conditions are met, the computing device may commence execution of the first task prior to, or in parallel with, the second task finishing execution.
CLAIMSWhat is claimed is:1. A method of executing tasks in a computing device, comprising implementing a first operation for selective enforcement of intertask execution dependencies.2. The method of claim 1, further comprising implementing a second operation for mandatory enforcement of intertask execution dependencies.3. The method of claim 1, wherein implementing a first operation for selective enforcement of intertask execution dependencies comprises: commencing execution of a first task via a first thread of a thread pool in the computing device; identifying whether there exists a second task ready for execution such that an operation of the second task identifies the second task as either being dependent on the first task finishing execution or being selectively dependent on the first task finishing execution; identifying whether there exists a third task ready for execution such that an operation of the second task identifies the second task as either being dependent on the third task finishing execution or being selectively dependent on the third task finishing execution; commencing execution of the second task via a second thread of the thread pool only after determining that the third task has finished execution in response to identifying that there exists a third task ready for execution such that an operation of the second task identifies the second task as being dependent on the third task finishing execution; determining whether to enforce the selective dependency of the second task on the third task by determining whether one or more enforcement conditions are satisfied in response to identifying that there exists a third task ready for execution such that an operation of the second task identifies the second task as being selectively dependent on the third task finishing execution; ignoring the selective dependency, and commencing execution of the second task via a second thread of the thread pool in response to determining that the one or more enforcement conditions are met; and commencing execution of the second task via a second thread of the thread pool only after determining that the third task has finished execution in response to determining that the one or more enforcement conditions are not met, thereby enforcing the selective dependency.4. A computing device, comprising: a processor configured with processor-executable instructions to perform operations comprising implementing a first operation for selective enforcement of intertask execution dependencies.5. The computing device of claim 4, wherein the processor is configured with processor-executable instructions to perform operations further comprising implementing a second operation for mandatory enforcement of intertask execution dependencies.6.
The computing device of claim 4, wherein the processor is configured with processor-executable instructions to perform operations such that implementing a first operation for selective enforcement of intertask execution dependencies comprises: commencing execution of a first task via a first thread of a thread pool in the computing device; identifying whether there exists a second task ready for execution such that an operation of the second task identifies the second task as either being dependent on the first task finishing execution or being selectively dependent on the first task finishing execution; identifying whether there exists a third task ready for execution such that an operation of the second task identifies the second task as either being dependent on the third task finishing execution or being selectively dependent on the third task finishing execution; commencing execution of the second task via a second thread of the thread pool only after determining that the third task has finished execution in response to identifying that there exists a third task ready for execution such that an operation of the second task identifies the second task as being dependent on the third task finishing execution; determining whether to enforce the selective dependency of the second task on the third task by determining whether one or more enforcement conditions are satisfied in response to identifying that there exists a third task ready for execution such that an operation of the second task identifies the second task as being selectively dependent on the third task finishing execution; ignoring the selective dependency, and commencing execution of the second task via a second thread of the thread pool in response to determining that the one or more enforcement conditions are met; and commencing execution of the second task via a second thread of the thread pool only after determining that the third task has finished execution in response to determining that the one or more enforcement conditions are not met, thereby enforcing the selective dependency.7. The computing device of claim 6, wherein the processor is configured with processor-executable instructions to perform operations such that implementing a first operation for selective enforcement of intertask execution dependencies comprises: identifying whether any additional operations of the second task are either dependent or selectively dependent on any additional tasks other than the first and third tasks finishing execution.8. The computing device of claim 6, wherein the processor is configured with processor-executable instructions to perform operations such that implementing a first operation for selective enforcement of intertask execution dependencies comprises: commencing execution of the second task via a second thread of the thread pool only after determining that the additional tasks have finished execution in response to identifying that there exist additional tasks ready for execution such that an operation of the second task identifies the second task as being dependent on the additional tasks finishing execution.9.
The computing device of claim 8, wherein the processor is configured with processor-executable instructions to perform operations such that implementing a first operation for selective enforcement of intertask execution dependencies further comprises: in response to identifying that there exist additional tasks ready for execution such that an operation of the second task identifies the second task as being selectively dependent on the additional tasks finishing execution: determining whether to enforce the selective dependency of the second task on the additional tasks by determining whether the one or more enforcement conditions are satisfied; ignoring the selective dependency and commencing execution of the second task via a second thread of the thread pool in response to determining that the one or more enforcement conditions are met; and enforcing the dependency and commencing execution of the second task via a second thread of the thread pool only after determining that the additional tasks have finished execution in response to determining that the one or more enforcement conditions are not met.10. The computing device of claim 6, wherein the processor is configured with processor-executable instructions to perform operations such that: commencing execution of the first task via the first thread of the thread pool comprises executing the first task in a first processing core of the computing device; and commencing execution of the second task via the second thread of the thread pool comprises executing the second task in a second processing core of the computing device prior to or concurrent with execution of the first task in the first processing core.11. The computing device of claim 6, wherein the processor is configured with processor-executable instructions to perform operations such that the one or more enforcement conditions are evaluated at the time of execution.12. The computing device of claim 6, wherein the processor is further configured with processor-executable instructions to perform operations comprising: receiving execution preference information prior to commencing execution of the first task; and setting the one or more enforcement conditions in response to receiving the execution preference information and based upon the execution preference information.13. A non-transitory processor-readable storage medium having stored thereon processor-executable instructions to cause a processor to perform operations comprising implementing a first operation for selective enforcement of intertask execution dependencies.14. The non-transitory processor-readable storage medium of claim 13, wherein the processor-executable instructions are configured to cause a processor to perform operations further comprising implementing a second operation for mandatory enforcement of intertask execution dependencies.15.
The non-transitory processor-readable storage medium of claim 13, wherein the processor-executable instructions are configured to cause a processor of a computing device to perform operations such that implementing a first operation for selective enforcement of intertask execution dependencies comprises: commencing execution of a first task via a first thread of a thread pool in the computing device; identifying whether there exists a second task ready for execution such that an operation of the second task identifies the second task as either being dependent on the first task finishing execution or being selectively dependent on the first task finishing execution; identifying whether there exists a third task ready for execution such that an operation of the second task identifies the second task as either being dependent on the third task finishing execution or being selectively dependent on the third task finishing execution; commencing execution of the second task via a second thread of the thread pool only after determining that the third task has finished execution in response to identifying that there exists a third task ready for execution such that an operation of the second task identifies the second task as being dependent on the third task finishing execution; determining whether to enforce the selective dependency of the second task on the third task by determining whether one or more enforcement conditions are satisfied in response to identifying that there exists a third task ready for execution such that an operation of the second task identifies the second task as being selectively dependent on the third task finishing execution; ignoring the selective dependency, and commencing execution of the second task via a second thread of the thread pool in response to determining that the one or more enforcement conditions are met; and commencing execution of the second task via a second thread of the thread pool only after determining that the third task has finished execution in response to determining that the one or more enforcement conditions are not met, thereby enforcing the selective dependency.16. The non-transitory processor-readable storage medium of claim 15, wherein the processor-executable instructions are configured to cause a processor of a computing device to perform operations such that implementing a first operation for selective enforcement of intertask execution dependencies comprises: identifying whether any additional operations of the second task are either dependent or selectively dependent on any additional tasks other than the first and third tasks finishing execution.17. The non-transitory processor-readable storage medium of claim 16, wherein the processor-executable instructions are configured to cause a processor of a computing device to perform operations such that implementing a first operation for selective enforcement of intertask execution dependencies comprises: commencing execution of the second task via a second thread of the thread pool only after determining that the additional tasks have finished execution in response to identifying that there exist additional tasks ready for execution such that an operation of the second task identifies the second task as being dependent on the additional tasks finishing execution.18.
The non-transitory processor-readable storage medium of claim 16, wherein the processor-executable instructions are configured to cause a processor of a computing device to perform operations such that implementing a first operation for selective enforcement of intertask execution dependencies further comprises: in response to identifying that there exist additional tasks ready for execution such that an operation of the second task identifies the second task as being selectively dependent on the additional tasks finishing execution: determining whether to enforce the selective dependency of the second task on the additional tasks by determining whether the one or more enforcement conditions are satisfied; ignoring the selective dependency and commencing execution of the second task via a second thread of the thread pool in response to determining that the one or more enforcement conditions are met; and enforcing the dependency and commencing execution of the second task via a second thread of the thread pool only after determining that the additional tasks have finished execution in response to determining that the one or more enforcement conditions are not met.19. The non-transitory processor-readable storage medium of claim 15, wherein the processor-executable instructions are configured to cause a processor of a computing device to perform operations such that: commencing execution of the first task via the first thread of the thread pool comprises executing the first task in a first processing core of the computing device; and commencing execution of the second task via the second thread of the thread pool comprises executing the second task in a second processing core of the computing device prior to or concurrent with execution of the first task in the first processing core.20. The non-transitory processor-readable storage medium of claim 15, wherein the processor-executable instructions are configured to cause a processor of a computing device to perform operations further comprising: receiving execution preference information prior to commencing execution of the first task; and setting the one or more enforcement conditions in response to the receiving and based upon the execution preference information.
TITLEDevices and Methods Implementing Operations for Selective Enforcement of Task DependenciesRELATED APPLICATIONS[0001] This application claims the benefit of priority to U.S. Provisional Application No. 62/100,690 entitled "Programmatic Specification and Enforcement of Complex Task Dependencies in Parallel Programs" filed January 7, 2015, the entire contents of which are hereby incorporated by reference.BACKGROUND[0002] Mobile and wireless technologies have seen explosive growth over the past several years. This growth has been fueled by better communications, hardware, and more reliable protocols. Wireless service providers are now able to offer their customers an ever-expanding array of features and services, and provide users with unprecedented levels of access to information, resources, and communications. To keep pace with these enhancements, mobile electronic devices (e.g., cellular phones, watches, headphones, remote controls, etc.) have become more complex than ever, and now commonly include multiple processors, system-on-chips (SoCs), and other resources that allow mobile device users to execute complex and power-intensive software applications (e.g., video streaming, video processing, etc.) on their mobile devices.[0003] Due to these and other improvements, smartphones and tablet computers have grown in popularity, and are replacing laptops and desktop machines as the platform of choice for many users. As mobile devices continue to grow in popularity, improved processing solutions that better utilize the multiprocessing capabilities of the mobile devices will be desirable to consumers. SUMMARY[0004] The methods and apparatuses of various embodiments provide circuits and methods for managing the execution of tasks in a manner that exploits the concurrency and parallelism enabled by modern multiprocessor architectures to generate and execute software applications that achieve fast response times, high performance, and high user interface responsiveness. The method may include scheduling execution of tasks via threads in particular processor cores, identifying an operation (e.g., a "+>" operation) of a first task as being selectively dependent on a second task finishing execution, and determining whether to enforce the dependency of the first task on the second task based on an evaluation of a set of enforcement conditions.
Embodiment methods may include implementing a first operation (e.g., a "+>" operation) for selective enforcement of intertask execution dependencies.[0005] Embodiment methods may further include implementing a second operation (e.g., a "->" operation) for mandatory enforcement of intertask execution dependencies.[0006] Embodiment methods may include commencing execution of a first task via a first thread of a thread pool in the computing device, identifying whether there exists a second task ready for execution such that an operation of the second task identifies the second task as either being dependent on the first task finishing execution or being selectively dependent on the first task finishing execution, and identifying whether there exists a third task ready for execution such that an operation of the second task identifies the second task as either being dependent on the third task finishing execution or being selectively dependent on the third task finishing execution. Embodiment methods may further include commencing execution of the second task via a second thread of the thread pool only after determining that the third task has finished execution in response to identifying that there exists a third task ready for execution such that an operation of the second task identifies the second task as being dependent on the third task finishing execution, and determining whether to enforce the selective dependency of the second task on the third task by determining whether one or more enforcement conditions are satisfied in response to identifying that there exists a third task ready for execution such that an operation of the second task identifies the second task as being selectively dependent on the third task finishing execution. An embodiment method may also include ignoring the selective dependency and commencing execution of the second task via a second thread of the thread pool in response to determining that the one or more enforcement conditions are met, and commencing execution of the second task via a second thread of the thread pool only after determining that the third task has finished execution in response to determining that the one or more enforcement conditions are not met, thus enforcing the selective dependency.[0007] Embodiment methods may further include identifying whether any additional operations of the second task are either dependent or selectively dependent on any additional tasks other than the first and third tasks finishing execution.[0008] In some embodiment methods, identifying whether any additional operations of the second task are either dependent or selectively dependent on any additional tasks other than the first and third tasks finishing execution may include commencing execution of the second task via a second thread of the thread pool only after determining that the additional tasks have finished execution in response to identifying that there exist additional tasks ready for execution such that an operation of the second task identifies the second task as being dependent on the additional tasks finishing execution.[0009] In some embodiments, identifying whether any additional operations of the second task are either dependent or selectively dependent on any additional tasks other than the first and third tasks finishing execution may include, in response to identifying that there exist additional tasks ready for execution such that an operation of the second task identifies the second task as being selectively dependent on the additional
tasks finishing execution, determining whether to enforce the selective dependency of the second task on the additional tasks by determining whether the one or more enforcement conditions are satisfied, ignoring the selective dependency and commencing execution of the second task via a second thread of the thread pool in response to determining that the one or more enforcement conditions are met, and enforcing the dependency and commencing execution of the second task via a second thread of the thread pool only after determining that the additional tasks have finished execution in response to determining that the one or more enforcement conditions are not met.[0010] In some embodiments, commencing execution of the first task via the first thread of the thread pool may include executing the first task in a first processing core of the computing device, and commencing execution of the second task via the second thread of the thread pool may include executing the second task in a second processing core of the computing device prior to or concurrent with execution of the first task in the first processing core.[0011] In some embodiments, the one or more enforcement conditions are evaluated at the time of execution.[0012] Embodiment methods may further include receiving execution preference information prior to commencing execution of the first task, and setting the one or more enforcement conditions in response to the receiving and based upon the execution preference information.[0013] Embodiments include a computing device having a processor configured with processor-executable instructions to perform operations of one or more of the embodiment methods described above. Such embodiments may include a computing device having a processor configured with processor-executable instructions to perform operations comprising implementing a first operation for selective enforcement of intertask execution dependencies.[0014] Embodiments include a non-transitory processor-readable medium having stored thereon processor-executable software instructions to cause a processor to perform operations of one or more of the embodiment methods described above. Some embodiments may include a non-transitory processor-readable storage medium having stored thereon processor-executable instructions to cause a processor to perform operations comprising implementing a first operation for selective enforcement of intertask execution dependencies.BRIEF DESCRIPTION OF THE DRAWINGS[0015] The accompanying drawings, which are incorporated herein and constitute part of this specification, illustrate exemplary embodiments of the claims, and together with the general description given above and the detailed description given below, serve to explain the features of the claims.[0016] FIG. 1 is an architectural diagram of an example system on chip suitable for implementing the various embodiments.[0017] FIGs. 2A through 2C are illustrations of example prior art solutions for displaying data fetched from many remote sources.[0018] FIGs. 3 through 7 are illustrations of procedures suitable for executing tasks in accordance with various embodiments.[0019] FIG. 8 is a block diagram illustrating state transitions of a task in accordance with various embodiments.[0020] FIG. 9A is an illustration of a procedure that uses the +> statement to optionally decouple task execution dependencies in accordance with an embodiment.[0021] FIG. 9B is a timing diagram illustrating operations of the tasks of the procedure illustrated in FIG. 9A.[0022]
FIG. 10 is a process flow diagram illustrating a method of executing tasks in accordance with an embodiment.[0023] FIG. 11 is a block diagram of an example laptop computer suitable for use with the various embodiments.[0024] FIG. 12 is a block diagram of an example smartphone suitable for use with the various embodiments.[0025] FIG. 13 is a block diagram of an example server computer suitable for use with the various embodiments.DETAILED DESCRIPTION[0026] The various embodiments will be described in detail with reference to the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts. References made to particular examples and implementations are for illustrative purposes, and are not intended to limit the scope of the claims.[0027] In overview, the various embodiments include methods, and computing devices configured to perform the methods, that use techniques exploiting the concurrency/parallelism enabled by modern multiprocessor architectures to generate and execute software applications in order to achieve fast response times, high performance, and high user interface responsiveness. In the various aspects, a computing device may be configured to schedule execution of a first task via a first thread (e.g., in a first processing core), identify an operation (e.g., a "+>" operation) of the first task as being selectively dependent on a second task finishing execution, and determine whether to enforce the dependency of the first task on the second task based on an evaluation of a set of enforcement conditions. The computing device may be configured to enforce the dependency by commencing execution of the second task via a second thread (e.g., in a second processing core) and withholding execution of the first task until execution of the second task has finished if the enforcement conditions are not met, and to commence execution of the first task prior to or in parallel with the second task finishing execution if the enforcement conditions are met. In other words, the computing device may implement a first operation for selective enforcement of intertask execution dependencies.[0028] In various embodiments, the enforcement conditions for ignoring a task dependency may include expiration of a timer, evaluation of runtime system resources, evaluation of task execution time and resource requirements against a predefined quality of end-user experience, evaluation of task execution time and resource requirements against a user-specified quality of end-user experience as defined at runtime, and incorporation of other quality-of-experience metrics.[0029] By enabling selective enforcement of task execution dependencies (as opposed to waiting for a first task to finish prior to beginning execution of a second task), the various embodiments allow the computing device to determine whether to enforce task dependencies based on programmer or end-user goals concerning efficiency and quality of the runtime experience. These operations improve the functioning of the computing device by potentially reducing the latencies associated with executing software applications on the device, or by improving software application output quality and reducing the need to execute an application multiple times to achieve desired results. A minimal sketch of this scheduling decision is given below.
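The following is a minimal C++ sketch of this scheduling decision, under the assumption that an enforcement condition can be represented as a callable returning true when the conditions for ignoring the dependency are met; the names EnforcementCondition, may_ignore_dependency, and timer_expired are hypothetical and do not come from any real runtime API.

#include <chrono>
#include <functional>

// Assumed representation: an enforcement condition is any callable that is
// evaluated when the dependent task is considered for scheduling.
using EnforcementCondition = std::function<bool()>;

// Returns true when the enforcement conditions are met, i.e., when the
// runtime may ignore the selective dependency and start the successor
// task before its predecessor finishes.
bool may_ignore_dependency(const EnforcementCondition& conditions_met) {
    return conditions_met && conditions_met();
}

// Example condition from the timer case above: the dependency may be
// ignored once a deadline has passed.
EnforcementCondition timer_expired(std::chrono::steady_clock::time_point deadline) {
    return [deadline] { return std::chrono::steady_clock::now() >= deadline; };
}

Under this reading, a condition that always returns false behaves like a mandatory dependency, while a condition that always returns true removes the dependency entirely.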
These operations may also improve the functioning of the computing device by improving its efficiency, performance, and power consumption characteristics.[0030] The terms "computing system" and "computing device" are used generically herein to refer to any one or all of servers, personal computers, and mobile devices, such as cellular telephones, smartphones, tablet computers, laptop computers, netbooks, ultrabooks, palm-top computers, personal digital assistants (PDAs), wireless electronic mail receivers, multimedia Internet-enabled cellular telephones, Global Positioning System (GPS) receivers, wireless gaming controllers, and similar personal electronic devices which include a programmable processor. While the various embodiments are particularly useful in mobile devices, such as smartphones, which have limited processing power and battery life, the embodiments are generally useful in any computing device that includes a programmable processor.[0031] The term "system on chip" (SOC) is used herein to refer to a single integrated circuit (IC) chip that contains multiple resources and/or processors integrated on a single substrate. A single SOC may contain circuitry for digital, analog, mixed-signal, and radio-frequency functions. A single SOC may also include any number of general-purpose and/or specialized processors (digital signal processors, modem processors, video processors, etc.), memory blocks (e.g., ROM, RAM, Flash, etc.), and resources (e.g., timers, voltage regulators, oscillators, etc.). SOCs may also include software for controlling the integrated resources and processors, as well as for controlling peripheral devices.[0032] The term "system in a package" (SIP) may be used herein to refer to a single module or package that contains multiple resources, computational units, cores and/or processors on two or more IC chips or substrates. For example, a SIP may include a single substrate on which multiple IC chips or semiconductor dies are stacked in a vertical configuration. Similarly, the SIP may include one or more multi-chip modules (MCMs) on which multiple ICs or semiconductor dies are packaged onto a unifying substrate. A SIP may also include multiple independent SOCs coupled together via high-speed communication circuitry and packaged in close proximity, such as on a single motherboard or in a single mobile computing device. The proximity of the SOCs facilitates high-speed communications and the sharing of memory and resources. A SOC may include multiple multicore processors, and each processor in an SOC may be referred to as a core. The term "multiprocessor" is used herein to refer to a system or device that includes two or more processing units configured to read and execute program instructions.[0033] The term "context" is used herein to refer to any information available to a process or thread running in a host operating system (e.g., Android, Windows 8, LINUX, etc.), and may include operational state data and permissions and/or access restrictions that identify the operating system services, libraries, file systems, and other resources that the process or thread may access.[0034] In an embodiment, a process may be a software representation of a software application. Processes may be executed on a processor in short time slices so that it appears that multiple applications are running simultaneously on the same processor (e.g., by using time-division multiplexing techniques).
When a process is removed from a processor at the end of a time slice, information pertaining to the current operating state of the process (i.e., the process's operational state data) is stored in memory so the process may seamlessly resume its operations when it returns to execution on the processor.[0035] A process's operational state data may include the process's address space, stack space, virtual address space, register set image (e.g., program counter, stack pointer, instruction register, program status word, etc.), accounting information, permissions, access restrictions, and state information. The state information may identify whether the process is in a running state, a ready or ready-to-run state, or a blocked state. A process is in the ready-to-run state when all of its dependencies or prerequisites for execution have been met (e.g., memory and resources are available, etc.), and is waiting to be assigned to the next available processing unit. A process is in the running state when its procedure is being executed by a processing unit. A process is in the blocked state when it is waiting for the occurrence of an event (e.g., input/output completion event, etc.).[0036] A process may spawn other processes, and the spawned process (i.e., a child process) may inherit some of the permissions and access restrictions (i.e., context) of the spawning process (i.e., the parent process). A process may also be a heavy-weight process that includes multiple lightweight processes or threads, which are processes that share all or portions of their context (e.g., address space, stack, permissions and/or access restrictions, etc.) with other processes/threads. Thus, a single process may include multiple threads that share, have access to, and/or operate within a single context (e.g., a processor, process, or software application's context).[0037] A multiprocessor system may be configured to execute multiple threads concurrently or in parallel to improve a process's overall execution time. In addition, a software application, operating system, runtime system, scheduler, or another component in the computing system may be configured to create, destroy, maintain, manage, schedule, or execute threads based on a variety of factors or considerations. For example, to improve parallelism, the system may be configured to create a thread for every sequence of operations that could be performed concurrently with another sequence of operations.[0038] Creating and managing threads may require that the computing system perform complex operations that consume a significant amount of time, processor cycles, and device resources (e.g., processing, memory, or battery resources, etc.). As such, software applications that maintain a large number of idle threads, or frequently destroy and create new threads, often have a significant negative or user-perceivable impact on the responsiveness, performance, or power consumption characteristics of the computing device.[0039] To reduce the number of threads that are created and/or maintained by the computing system, a software application or multiprocessor system may be configured to generate, use, and/or maintain a thread pool that includes approximately one thread for each of the available processing units. For example, a four-core processor system may be configured to generate and use a thread pool that maintains four threads - one for each of its four processing cores. One possible shape of such a pool is sketched below.
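The following is a small, self-contained C++ sketch of such a pool, sized to the available processing units; it is an illustration of the general concept, not the runtime system of the embodiments.

#include <algorithm>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class ThreadPool {
public:
    ThreadPool() {
        // Approximately one worker thread per available processing unit.
        unsigned n = std::max(1u, std::thread::hardware_concurrency());
        for (unsigned i = 0; i < n; ++i)
            workers_.emplace_back([this] { run(); });
    }
    ~ThreadPool() {
        { std::lock_guard<std::mutex> lock(mutex_); done_ = true; }
        wakeup_.notify_all();
        for (std::thread& worker : workers_) worker.join();
    }
    void submit(std::function<void()> task) {
        { std::lock_guard<std::mutex> lock(mutex_); tasks_.push(std::move(task)); }
        wakeup_.notify_one();
    }
private:
    void run() {
        for (;;) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lock(mutex_);
                wakeup_.wait(lock, [this] { return done_ || !tasks_.empty(); });
                if (done_ && tasks_.empty()) return;
                task = std::move(tasks_.front());
                tasks_.pop();
            }
            task(); // execute one ready task on this worker thread
        }
    }
    std::vector<std::thread> workers_;
    std::queue<std::function<void()>> tasks_;
    std::mutex mutex_;
    std::condition_variable wakeup_;
    bool done_ = false;
};

Because the pool size tracks the number of processing units, submitted tasks can run in parallel without the cost of repeatedly creating and destroying threads.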
A process scheduler or runtime system of the computing device may schedule these threads to execute in any of the available processing cores, which may include physical cores, virtual cores, or a combination thereof. As such, each thread may be a software representation of a physical execution resource (e.g., processing core, etc.) that is provided by the hardware platform of the computing device (e.g., for the execution of a process or software application).[0040] To provide adequate levels of parallelism without requiring the creation or maintenance of a large number of threads, the software application or multiprocessor system may implement or use a task-parallel programming model or solution. Such solutions allow the computing system to split the computation of a software application into tasks, assign the tasks to the thread pool that maintains a near-constant number of threads (e.g., one for each processing unit), and execute assigned tasks via the threads of the thread pool. A process scheduler or runtime system of the computing system may schedule tasks for execution on the processing units, similar to how more conventional solutions schedule threads for execution.[0041] A task may include any procedure, unit of work, or sequence of operations that may be executed in a processing unit via a thread. A task may be independent of other tasks, or it may depend on other tasks. For example, a first task may be dependent on another task (i.e., a predecessor task) finishing execution, and other tasks (i.e., successor tasks) may depend on the first task finishing execution. These relationships are known as inter-task dependencies.[0042] Tasks may be unrelated to each other except via their inter-task dependencies. The runtime system of a computing device may be configured to enforce these inter-task dependencies (e.g., by executing tasks after their predecessor tasks have finished execution). A task may finish execution by successfully completing its procedure (i.e., by executing all of its operations) or by being canceled. In an embodiment, the runtime system may be configured to cancel dependent (successor) tasks if a task finishes execution as a result of being canceled.[0043] A task may include state information that identifies whether the task is launched, ready, or finished. In an embodiment, the state information may also identify whether the task is in an "executed" state. A task is in the launched state when it has been assigned to a thread pool and is waiting for a predecessor task to finish execution and/or for other dependencies or prerequisites for execution to be met. A task is in the ready state when all of its dependencies or prerequisites for execution have been met (e.g., all of its predecessors have finished execution), and is waiting to be assigned to the next available thread. A task may be marked as finished after its procedure has been executed by a thread or after being canceled. These states and the associated bookkeeping are illustrated in the sketch below.
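The following is a hypothetical C++ sketch of the task states and the predecessor/successor bookkeeping just described; the type and field names are assumptions made for this illustration.

#include <atomic>
#include <functional>
#include <vector>

enum class TaskState { Launched, Ready, Finished };

struct Task {
    std::function<void()> procedure;   // the work this task performs
    std::vector<Task*> successors;     // tasks that depend on this one
    std::atomic<int> pending_predecessors{0};
    TaskState state = TaskState::Launched;
};

// Called when a predecessor finishes (or is canceled): a task becomes
// ready once its last outstanding dependency is resolved.
void on_predecessor_finished(Task& task) {
    if (task.pending_predecessors.fetch_sub(1) == 1)
        task.state = TaskState::Ready; // eligible for the next free thread
}

A runtime would assign a Ready task to the next available thread of the pool and mark it Finished once its procedure returns or the task is canceled.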
[0044] Task-parallel programming solutions may be used to build high-performance software applications that are responsive, efficient, and which otherwise improve the user experience. These software applications may be executed or performed in a variety of computing devices and system architectures, an example of which is illustrated in FIG. 1.[0045] FIG. 1 illustrates an example system-on-chip (SOC) 100 architecture that may be included in an embodiment computing device configured to run software applications that implement the task-parallel programming model and/or to execute tasks in accordance with the various embodiments. The SOC 100 may include a number of heterogeneous processors, such as a digital signal processor (DSP) 102, a modem processor 104, a graphics processor 106, and an application processor 108. The SOC 100 may also include one or more coprocessors 110 (e.g., vector coprocessor) connected to one or more of the heterogeneous processors 102, 104, 106, 108. In an embodiment, the graphics processor 106 may be a graphics processing unit (GPU).[0046] Each processor 102, 104, 106, 108, 110 may include one or more cores (e.g., processing cores 108a, 108b, 108c, and 108d illustrated in the application processor 108), and each processor/core may perform operations independent of the other processors/cores. The SOC 100 may include a processor that executes an operating system (e.g., FreeBSD, LINUX, OS X, Microsoft Windows 8, etc.) which may include a scheduler configured to schedule sequences of instructions, such as threads, processes, or data flows, to one or more processing cores for execution.[0047] The SOC 100 may also include analog circuitry and custom circuitry 114 for managing sensor data, analog-to-digital conversions, wireless data transmissions, and for performing other specialized operations, such as processing encoded audio and video signals for rendering in a web browser. The SOC 100 may further include system components and resources 116, such as voltage regulators, oscillators, phase-locked loops, peripheral bridges, data controllers, memory controllers, system controllers, access ports, timers, and other similar components used to support the processors and software programs running on a computing device.[0048] The system components and resources 116 and/or custom circuitry 114 may include circuitry to interface with peripheral devices, such as cameras, electronic displays, wireless communication devices, external memory chips, etc. The processors 102, 104, 106, 108 may communicate with each other, as well as with one or more memory elements 112, system components and resources 116, and custom circuitry 114, via an interconnection/bus module 124, which may include an array of reconfigurable logic gates and/or implement a bus architecture (e.g., CoreConnect, AMBA, etc.). Communications may be provided by advanced interconnects, such as high-performance networks-on-chip (NoCs).[0049] The SOC 100 may further include an input/output module (not illustrated) for communicating with resources external to the SOC, such as a clock 118 and a voltage regulator 120. Resources external to the SOC (e.g., clock 118, voltage regulator 120) may be shared by two or more of the internal SOC processors/cores (e.g., a DSP 102, a modem processor 104, a graphics processor 106, an application processor 108, etc.).[0050] In addition to the SOC 100 discussed above, the various embodiments (including, but not limited to, embodiments discussed below with respect to FIGs. 3-7, 8, 9A, 9B and 10) may be implemented in a wide variety of computing systems, which may include multiple processors, multicore processors, or any combination thereof.
[0051] FIGs. 2A through 2C illustrate different prior art procedures 202, 204, 206 for accomplishing the operations of fetching multiple webpages from remote servers and building a composite display of the webpages. Each of these procedures 202, 204, 206 includes functions or sequences of instructions that may be executed by a processing core of a computing device, including a fetch function, a render function, a display_webpage function, and a compose_webpages function.[0052] The procedure 202 illustrated in FIG. 2A is a sequential procedure that performs the operations of the functions one at a time. For example, the compose_webpages function sequentially calls the display_webpage function for each URL in a URL array. By performing these operations sequentially, the illustrated procedure 202 does not exploit the parallel processing capabilities of the computing device or provide the system, programmer, or end-user with any flexibility in modifying the order of task execution based on runtime parameters.[0053] The procedure 204 illustrated in FIG. 2B implements a conventional task-parallel programming model by splitting some of the functions (modularly) into tasks and identifying task dependencies. For example, FIG. 2B illustrates that the compose_webpages function creates and uses the tasks to execute the display_webpage function for each URL in the URL array. Each of the tasks may be executed in parallel with the other tasks (if they have no inter-task dependencies) without creating new threads.[0054] While procedure 204 is an improvement over the sequential procedure 202 (illustrated in FIG. 2A), it does not provide any flexibility in the order of task execution or in optionally enforcing inter-task dependencies. This is because procedure 204 uses the 'all' statements to respect the semantics of sequential synchronous function calls and synchronize tasks correctly. The 'all' statement establishes an inter-task dependency by blocking task execution until all predecessor tasks and their inter-task dependencies are resolved.[0055] For example, the display_webpage function of procedure 204 is not finished until tasks 'fd', 'fs', and 'r' are finished. The presence of the 'all' statement requires that all the tasks 'fd' and 'fs' must finish execution before task 'r' can execute and the display_webpage function can be marked as finished.[0056] Such waiting may adversely affect the responsiveness of the application (and thus the computing device). The 'all' statement blocks the thread executing the task (i.e., by causing the thread to enter a blocked state), which may result in the computing device spawning new threads (i.e., to execute other tasks that are ready for execution). As discussed above, the creation/spawning of a large number of threads may have a negative impact on the performance and power-consumption characteristics of the computing device. For all these reasons, procedure 204 is not an adequate solution for handling complex task dependencies on a computing device.[0057] The procedure 206 illustrated in FIG. 2C implements a task-parallel programming model that uses the parent-child relationships among tasks to avoid redundant dependency operations. For example, when the display_webpage function of procedure 206 is invoked inside a task created in the compose_webpages function, any task that it further creates is deemed to be its child task. Within the display_webpage function, an 'any' statement is employed to define the dependency of task 'r' on the completion of any of tasks 'fd' or 'fs', as in the sketch following this paragraph.
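The following is a rough C++ rendering of these prior-art semantics; the wait_all, wait_any, and run helpers are hypothetical stand-ins for the 'all' and 'any' statements, not a real library API.

#include <initializer_list>

struct Task { /* procedure, state, etc. */ };

// Hypothetical blocking primitives standing in for 'all' and 'any'.
void wait_all(std::initializer_list<Task*>) { /* block until all finish */ }
void wait_any(std::initializer_list<Task*>) { /* block until one finishes */ }
void run(Task&) { /* execute the task's procedure */ }

// Procedure 204 style: rendering waits for both fetches to finish.
void display_webpage_all(Task& fd, Task& fs, Task& r) {
    wait_all({&fd, &fs}); // blocks the executing thread unconditionally
    run(r);
}

// Procedure 206 style: rendering may start after either fetch finishes.
void display_webpage_any(Task& fd, Task& fs, Task& r) {
    wait_any({&fd, &fs}); // still blocks; the choice is fixed when the code is written
    run(r);
}

In both forms the decision to wait is fixed in the program text; neither form can be toggled at runtime by enforcement conditions, which is the gap the '+>' statement addresses.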
Procedure 206 is insufficient to adequately address the need for selective handling of complex task dependencies. In the example, rendering of a webpage may occur prior to completion of the data and style fetching tasks, because the programmer deemed expediency of deliverable results more important than the completeness of a displayed webpage. This approach does not permit modification of the dependencies based on changes in outcome goals.[0058] FIG. 3 illustrates an embodiment procedure 302 that uses tasks to fetch multiple webpages from remote servers and to build a composite display of multiple webpages. The procedure 302 may be performed by one or more processing units of a multiprocessor system. The code, instructions, and/or statements of procedure 302 are similar to those of the procedure 204 described with reference to FIG. 2B, except that the 'any' and 'all' statements have been replaced by '+>' and '->' statements.[0059] When performing the procedure 302, the thread that executes the render task 'r' may not be forced to wait on all of the fetch tasks 'fd' and 'fs' to finish before beginning execution of task 'r.' Inter-task dependencies may be established via the '+>' statement and optionally by a '->' statement. The '->' statement may establish that the dependency of the task 'r' on the task 'fd' is mandatory and must be honored; therefore task 'fd', data fetching, must finish execution prior to execution of the task 'r.' Conversely, the statement '+>' may establish that the dependency of task 'r' on task 'fs' is selectively enforceable according to whether the enforcement condition represented by the 'timer_exp' parameter returns true. Thus, if the enforcement conditions established in a function returning 'timer_exp' are met (e.g., a function having a countdown timer that returns true if the timer has expired when the render call is made), 'timer_exp' may return true, resulting in the ignoring of the dependency between the task 'r' and the task 'fs.' If the dependency between tasks is ignored due to satisfaction of a condition, task 'r' may execute during, or prior to, task 'fs' finishing. If the condition is not satisfied (i.e., timer_exp returns false), the dependency of task 'r' on task 'fs' may be enforced in the same or similar manner as mandatory dependencies.[0060] Enforcement conditions for ignoring an inter-task dependency established by the '+>' statement may be determined by a software programmer, or by a computer or code-generator tool, or may be provided as options selectable by an end-user of the application. In an embodiment, a timer may be set to allow for a predecessor task to finish execution. If the timer expires, the task dependency may be ignored and the successor task allowed to execute in parallel with the predecessor task. In various embodiments, conditions may amount to an evaluation of runtime system resource conditions, in which an inter-task dependency is honored only if the running of tasks in parallel would require substantial resources. In various embodiments, inter-task dependencies may be ignored if linear execution would require a substantial length of wait time. In various embodiments, one or more end-users may provide input to a function based on desired characteristics of their user experience, and inter-task dependencies may be ignored or enforced according to the time and resource allocation needs necessary to meet the desired experience characteristics.
[0061] This is in contrast to procedure 204 (illustrated in FIG. 2B), in which the thread executing the task 'r' will be blocked at the 'all' operation and forced to wait on completion of tasks 'fd' and 'fs' regardless of exterior conditions, quality-of-experience metrics, or end-user execution expectations.[0062] Thus, in contrast to the 'all' or 'any' statements, the +> statement selectively enforces inter-task dependencies, adds little or no overhead to the runtime system, and allows a software designer or end-user to specify the conditions required for a task to achieve desired execution. The +> statement also allows the computing system to perform more complex operations on tasks than solutions that use parent-child relationships of tasks (e.g., procedure 206 illustrated in FIG. 2C).[0063] In addition, the +> statement may be used to create modular and composable selective task dependency programming solutions, and to overcome any or all of the above-described limitations of conventional solutions. For example, the +> statement allows a programmer to programmatically specify and selectively enforce complex task dependencies.[0064] The +> statement also empowers the programmer to relate tasks to each other in several useful ways. For example, FIG. 4 illustrates that the +> statement may be used to identify a task as selectively dependent on multiple tasks, and further specify that the multiple task dependencies are selectively honored according to the same conditions. As another example, FIG. 5 illustrates that the +> statement may be used to identify a task as selectively dependent on a group of tasks. As a further example, FIG. 6 illustrates that the +> statement may be used to identify a current task as selectively dependent on tasks that were not created or spawned by the current task. As a further example, FIG. 7 illustrates that the +> statement may be used by multiple tasks to identify that they are selectively dependent on the same task, but subject to different enforcement conditions. These and other capabilities provided by the +> statement and its corresponding operations are new capabilities not provided by conventional solutions (e.g., solutions that require all or any of the predecessor tasks to finish execution, etc.), and they have the potential to improve the functioning and performance of computing devices implementing software using the statement.[0065] The +> statement may also be used by a computing system to better implement the 'all' relationship among tasks. For example, when a first task (task A) is selectively dependent on a second task (task B) and a third task (task C), the runtime system can internally mark the first task (task A) as finishing after the second task and the third task (e.g., via a +>(B, C, A) operation). The first task (task A) will execute after the second task (task B) and the third task (task C) finish, absent satisfaction of the enforcement condition, giving the exact same semantics as those provided by the 'all' statement. In an embodiment, a mandatory dependency statement '->' may be included, as illustrated in the sketch below.
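Because '+>' is not expressible C++ syntax, the following sketch models the +>(B, C, A) operation as a hypothetical selective_depend function; the type and function names are assumptions made for this illustration.

#include <functional>
#include <utility>
#include <vector>

struct Task;
using EnforcementCondition = std::function<bool()>;

// One record per '+>' statement: the predecessors it names and the
// condition under which the runtime may ignore them.
struct SelectiveDependency {
    std::vector<Task*> predecessors;
    EnforcementCondition may_ignore; // evaluated when the task is scheduled
};

struct Task {
    std::vector<SelectiveDependency> selective_deps;
    // procedure, state, and successor bookkeeping as in the earlier sketch
};

// Models +>(B, C, A): task a becomes selectively dependent on b and c.
void selective_depend(Task& b, Task& c, Task& a, EnforcementCondition cond) {
    a.selective_deps.push_back({{&b, &c}, std::move(cond)});
}

Emulating the 'all' statement is then a matter of passing a condition that never holds, e.g., selective_depend(B, C, A, [] { return false; });, so that the dependency of task A on tasks B and C is always enforced.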
Similar results may be obtained through use of the mandatory dependency statement '->', such that the system internally marks the first task (task A) as being dependent on the second task (task B) and the third task (task C) finishing execution.[0066] By using the +> statement, a software designer is able to specify and selectively enforce complex task dependencies in a task-parallel programming model in a modular and composable manner, while enabling extraction of maximum performance from the parallel hardware.[0067] FIG. 8 illustrates state transitions for a task having one or more inter-task dependencies. Specifically, FIG. 8 illustrates that the task transitions from the launched state to the ready state when all of its predecessors, mandatory or selective, have finished execution or been ignored. After transition to the ready state, the task may transition from the ready state to the finished state after its procedure is executed by a thread. The decision whether to execute predecessor tasks whose optional dependencies are ignored by the successor task may be determined based on programmer specifications, runtime parameters, or end-user input.[0068] FIG. 9A illustrates a procedure 900 that uses the +> and the -> statements to define selective and mandatory task dependencies in accordance with the various embodiments. The procedure 900 creates four tasks (Tasks A-D). Task B includes a '->' statement that indicates it will not be completely finished until Task A finishes execution. Task D is selectively dependent on Task C according to the enforcement condition represented by the parameter 'user_select_speed', and thus becomes ready for execution after Task C is marked as finished, unless the end-user indicates that speedy return of results is highly desired and the enforcement condition evaluates true, in which case Task D may be allowed to execute in parallel with or prior to Task C finishing. Similarly, Task D is selectively dependent on Task B according to the enforcement condition represented by the parameter 'user_select_more_results', which will return true if the end-user indicates that the quantity of useful results returned is more important than the execution speed, in which case the dependency will be ignored. In this manner, the end-user may toggle the quality of their user experience at software application runtime by providing input that triggers selective enforcement of the inter-task dependencies established via the +> statement.[0069] FIG. 9B is an illustration of a timeline of the execution of the tasks of the procedure 900 via a first thread (Thread 1) and a second thread (Thread 2). In block 902, Task A becomes ready for execution. In block 906, Task A begins execution via the first thread. Task B has the predecessor Task A, as established by the '->' mandatory dependency statement, and waits for Task A to finish execution before becoming ready. In block 904, Task C becomes ready for execution. Task D has a selective dependency on Task B and Task C, and evaluates the enforcement conditions represented by the parameters 'user_select_speed' and 'user_select_more_results' prior to determining whether the predecessor dependencies should be honored. In block 908, Task C begins execution via the second thread. In block 910, Task C finishes executing its procedure.[0070] In block 914, Task A finishes execution. In block 912, Task B, which has a mandatory dependence on Task A and thus waited for Task A to finish execution at block 914, becomes ready, and in block 916 it begins execution.
Task B finishes execution in block 924.[0071] In the example illustrated in FIG. 9B, the enforcement condition represented by the parameter 'user_select_more_results' returned true, indicating that Task D may ignore the inter-task dependency on Task B. Task C has already finished execution at block 910; thus all of Task D's dependencies are resolved. In block 918, Task D becomes ready (since its dependencies on Task C and Task B have been resolved). In block 920, Task D begins execution, and in block 922, Task D finishes execution without waiting for Task B to finish execution.[0072] While in many instances the first and second tasks will be from different threads, there are cases in which the first and second tasks may be part of the same thread. An example of such an instance is illustrated in the following sequence:
task A = create_task([] {});
task B = create_task([&] {+>(A, enforce_cond);});
launch(A);
launch(B);
[0073] FIG. 10 illustrates a method 1000 of executing tasks in a computing device according to various embodiments. The method may include implementing a first operation (e.g., a "+>" operation) for selective enforcement of intertask execution dependencies. The method may further include implementing a second operation (e.g., a "->" operation) for mandatory enforcement of intertask execution dependencies. The method 1000 may be performed by one or more processing cores of the computing device even though the method is described with reference to a single processing core. In block 1002, the processing core may commence the execution of a first task via a first thread of a thread pool of the computing device. In some embodiments, commencing the execution of a first task may include the execution and finishing of the first task. In some embodiments, the finished task may be added to a scheduling queue to enable scheduling of successor tasks.[0074] In block 1004, the processor may identify an operation (e.g., a +> or a -> statement) of the second task as being dependent on the first task finishing execution. Thus, the processor may identify any successor tasks such that the successor (a second task) is either selectively dependent (e.g., via a +> statement) or mandatorily dependent (e.g., via a -> statement) on the first task finishing execution. In some embodiments, any identified successors may be added to a ready queue to await execution, because their dependency on the first task has been resolved at block 1002.[0075] In block 1006, the processor may identify an operation (e.g., a +> or -> statement) of the second task as being dependent on a third task finishing execution. Thus, the processor may identify whether there exist any predecessor tasks to the second task such that execution of the second task is either selectively dependent (e.g., via a '+>' statement) or mandatorily dependent (e.g., via a '->' statement) on the predecessor task(s) (a third task). In some embodiments, if no predecessor tasks exist, the second task may remain in the ready queue to await execution. In some embodiments, if predecessor tasks do exist, the second task may be removed from the ready queue until all its inter-task dependencies, mandatory or selective, are resolved.[0076] In block 1008, the processor executing the tasks via one or more threads of a thread pool may determine whether the inter-task dependencies identified at block 1006 are selective, i.e., linked to enforcement conditions, or mandatory.
[0076] In block 1008, the processor executing the tasks via one or more threads of a thread pool may determine whether the inter-task dependencies identified at block 1006 are selective, i.e., linked to enforcement conditions, or mandatory. If the dependency is mandatory, the third task may be moved to the ready queue (if it is not already in the ready queue) to await commencing of execution via an open thread of the thread pool, as in block 1014. In some embodiments, commencing the execution of the third task may occur similarly to the process described in block 1002. If the dependency is selective, the processor may determine whether to enforce the dependency.[0077] In block 1010, the processor may determine whether to enforce the inter-task dependency by determining whether the one or more enforcement conditions are satisfied. In an embodiment, the enforcement conditions may be an evaluation of runtime conditions to determine the most resource-efficient order of task execution, which instructs the scheduler to enforce a dependency only if it is efficient to do so. In an embodiment, the enforcement conditions may incorporate metrics of user satisfaction, such as quality-of-output, latency, throughput, power, etc., to allow the scheduler to decide whether a selective dependency should be enforced or ignored. In an embodiment, the enforcement condition may be a timer during which the successor may wait for the predecessor to finish execution, and which may instruct the scheduler to enforce the dependency so long as time is left on the timer, and to ignore the dependency after the expiration of the timer. In an embodiment, the enforcement conditions may be one or more options provided to an end-user by the software application at runtime, wherein the end-user's selection may determine which dependencies will be enforced. For example, an end-user may be provided with a sliding toggle offering a range between graphics quality and character motion speed in a game. When the end-user adjusts the sliding toggle, the scheduler determines whether or not enforcing task dependencies comports with the enforcement condition set by the end-user. In another embodiment, the end-user may be prompted by the software application at the time of execution to decide whether to enforce the dependency.[0078] If one or more of the enforcement conditions associated with the selective dependency are not satisfied, the dependency may be enforced and the third task may commence execution as in block 1014. Thus, the second task may not be allowed to begin execution until the third task has finished. [0079] If the one or more enforcement conditions are satisfied, the second task's dependency on the third task may be ignored. In block 1012, the processor may determine whether any unexecuted predecessor dependencies of the second task remain unresolved. The determination at block 1012 may also be performed after the third task finishes execution.[0080] If no unexecuted predecessor dependencies remain, the process may return to block 1002, where the second task begins execution via an available thread of the thread pool and may be added to the scheduling queue so that the second task's successors may be identified.[0081] In an embodiment, commencing the execution of the first task in block 1002 includes executing the first task in a first processing core of the computing device, and commencing execution of the second task includes executing the second task in a second processing core of the computing device prior to or concurrent with the first task due to ignoring of a selective dependency whose enforcement conditions have been satisfied.
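As a non-authoritative example, two of the enforcement-condition styles described in the preceding paragraph could be modeled as callables compatible with the sketch above; the expired_after helper and the user_select_speed flag are hypothetical names introduced here, not part of the patent's interface:
#include <atomic>
#include <chrono>
#include <functional>

// Toggle-style condition: set from a runtime UI control; returning true
// signals that the selective dependency may be ignored.
std::atomic<bool> user_select_speed{false};

// Timer-style condition: enforce the dependency (return false) while time
// remains on the timer, and ignore it (return true) once the timer expires.
std::function<bool()> expired_after(std::chrono::milliseconds wait_budget) {
  const auto deadline = std::chrono::steady_clock::now() + wait_budget;
  return [deadline] { return std::chrono::steady_clock::now() >= deadline; };
}
Evaluated at block 1010, such a timer condition would honor the dependency early in the wait window and release the successor once the budget is exhausted.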
[0082] The various embodiments (including but not limited to embodiments discussed above with respect to FIGs. 1, 3-7, 8, 9A, 9B and 10) may be implemented on a variety of computing devices, examples of which are illustrated in FIGs. 11-13.[0083] Computing devices will have in common the components illustrated in FIG. 11, which illustrates an example personal laptop computer 1100. Such a personal computer 1100 generally includes a multi-core processor 1101 coupled to volatile memory 1102 and a large capacity nonvolatile memory, such as a disk drive 1104. The computer 1100 may also include a compact disc (CD) and/or DVD drive 1108 coupled to the processor 1101. The personal laptop computer 1100 may also include a number of connector ports coupled to the processor 1101 for establishing data connections or receiving external memory devices, such as a network connection circuit for coupling the processor 1101 to a network. The personal laptop computer 1100 may have a radio/antenna 1110 for sending and receiving electromagnetic radiation that is connected to a wireless data link coupled to the processor 1101. The computer 1100 may further include a keyboard 1118, a pointing device such as a mouse pad 1120, and a display 1122 as is well known in the computer arts. The multi-core processor 1101 may include circuits and structures similar to those described above and illustrated in FIG. 1.[0084] Various embodiments may include a computing device having a processor configured with processor-executable instructions to perform operations comprising implementing a first operation for selective enforcement of inter-task execution dependencies. FIG. 12 illustrates an exemplary computing device, a smartphone 1200, that includes a multi-core processor 1201 coupled to internal memory 1204, a display 1212, and a speaker 1214. Additionally, the smartphone 1200 may include an antenna for sending and receiving electromagnetic radiation that may be connected to a wireless data link and/or cellular telephone transceiver 1208 coupled to the processor 1201. Smartphones 1200 typically also include menu selection buttons or rocker switches 1220 for receiving user inputs. A typical smartphone 1200 also includes a sound encoding/decoding (CODEC) circuit 1206, which digitizes sound received from a microphone into data packets suitable for wireless transmission and decodes received sound data packets to generate analog signals that are provided to the speaker to generate sound. Also, one or more of the processor 1201, transceiver 1208, and CODEC 1206 may include a digital signal processor (DSP) circuit (not shown separately).[0085] The various embodiments may also be implemented on any of a variety of commercially available server devices, such as the server 1300 illustrated in FIG. 13. Such a server 1300 typically includes multiple processor systems, one or more of which may be or include a multi-core processor 1301. The processor 1301 may be coupled to volatile memory 1302 and a large capacity nonvolatile memory, such as a disk drive 1303. The server 1300 may also include a floppy disc drive, compact disc (CD) or DVD disc drive 1304 coupled to the processor 1301.
The server 1300 may also include network access ports 1306 coupled to the processor 1301 for establishing data connections with a network 1308, such as a local area network coupled to other broadcast system computers and servers.[0086] The processors 1101, 1201, 1301 may be any programmable multi-core multiprocessor, microcomputer, or multiple processor chips that can be configured by software instructions (applications) to perform a variety of functions, including the functions and operations of the various embodiments described herein. Multiple processors may be provided, such as one processor dedicated to wireless communication functions and one processor dedicated to running other applications. Typically, software applications may be stored in the internal memory 1102, 1204, 1302 before they are accessed and loaded into the processor 1101, 1201, 1301. In some mobile computing devices, additional memory chips (e.g., a Secure Digital (SD) card) may be plugged into the mobile device and coupled to the processor 1101, 1201, 1301. The internal memory 1102, 1204, 1302 may be a volatile or nonvolatile memory, such as flash memory, or a mixture of both. For the purposes of this description, a general reference to memory refers to all memory accessible by the processor 1101, 1201, 1301, including internal memory, removable memory plugged into the mobile device, and memory within the processor 1101, 1201, 1301 itself.[0087] Computer program code or "code" for execution on a programmable processor for carrying out operations of the various embodiments may be written in a high-level programming language such as C, C++, C#, Smalltalk, Java, JavaScript, Visual Basic, a Structured Query Language (e.g., Transact-SQL), Perl, or in various other programming languages. Program code or programs stored on a computer-readable storage medium as used herein refer to machine language code (such as object code) whose format is understandable by a processor.[0088] Computing devices may include an operating system kernel that is organized into a user space (where non-privileged code runs) and a kernel space (where privileged code runs). This separation is of particular importance in Android® and other general public license (GPL) environments where code that is part of the kernel space must be GPL licensed, while code running in the user space may not be GPL licensed. It should be understood that the various software components discussed in this application may be implemented in either the kernel space or the user space, unless expressly stated otherwise.[0089] As used in this application, the terms "component," "module," and the like are intended to include a computer-related entity, such as, but not limited to, hardware, firmware, a combination of hardware and software, software, or software in execution, which are configured to perform particular operations or functions. For example, a component may be, but is not limited to, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a computing device and the computing device may be referred to as a component. One or more components may reside within a process and/or thread of execution, and a component may be localized on one processor or core and/or distributed between two or more processors or cores.
In addition, these components may execute from various non-transitory computer readable media having various instructions and/or data structures stored thereon. Components may communicate by way of local and/or remote processes, function or procedure calls, electronic signals, data packets, memory read/writes, and other known computer, processor, and/or process-related communication methodologies.[0090] The foregoing method descriptions and the process flow diagrams are provided merely as illustrative examples and are not intended to require or imply that the blocks of the various embodiments must be performed in the order presented. As will be appreciated by one of skill in the art, the order of blocks in the foregoing embodiments may be performed in any order. Words such as "thereafter," "then," "next," etc. are not intended to limit the order of the blocks; these words are simply used to guide the reader through the description of the methods. Further, any reference to claim elements in the singular, for example, using the articles "a," "an" or "the" is not to be construed as limiting the element to the singular.[0091] The various illustrative logical blocks, modules, circuits, and algorithm blocks described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and operations have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the claims.[0092] The hardware used to implement the various illustrative logics, logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but, in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Alternatively, some operations or methods may be performed by circuitry that is specific to a given function. [0093] In one or more exemplary embodiments, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions or code on a non-transitory computer-readable medium or non-transitory processor-readable medium.
The operations of a method or algorithm disclosed herein may be embodied in a processor-executable software module which may reside on a non-transitory computer-readable or processor-readable storage medium. Non-transitory computer-readable or processor-readable storage media may be any storage media that may be accessed by a computer or a processor. By way of example but not limitation, such non-transitory computer-readable or processor-readable media may include RAM, ROM, EEPROM, FLASH memory, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to store desired program code in the form of instructions or data structures and that may be accessed by a computer. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of non-transitory computer-readable and processor-readable media. Additionally, the operations of a method or algorithm may reside as one or any combination or set of codes and/or instructions on a non-transitory processor-readable medium and/or computer-readable medium, which may be incorporated into a computer program product. The preceding description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the claims. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the scope of the claims. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the following claims and the principles and novel features disclosed herein. |
The disclosure relates to a system for and a method of forming a local interconnect in an integrated circuit using microcontact printing. An exemplary method of the disclosure can include applying an active agent to a stamp, stamping the stamp on a portion of an integrated circuit wafer to form an aperture in a layer of material on the integrated circuit wafer, and providing a conductive material in the aperture formed by the stamp. The stamp preferably has a wedge-shaped extrusion with a length corresponding to a length of an interconnect to be formed in the portion of the integrated circuit wafer. The conductive material in the aperture defines the interconnect. In one example, the interconnect can be as narrow as 20 to 50 nanometers (nm). |
What is claimed is: 1. A method of forming local interconnect in an integrated circuit using microcontact printing, the method comprising:applying a material to a stamp, the stamp having a wedge-shaped extrusion, the wedge-shaped extrusion having a length corresponding to a length of an interconnect to be formed in a portion of an integrated circuit wafer; stamping the stamp on the portion of the integrated circuit wafer to form an aperture in a layer of material on the integrated circuit wafer; and providing a conductive material in the aperture formed by the stamp, the conductive material in the aperture defining the interconnect. 2. The method of claim 1, wherein the step of applying a material to a stamp comprises covering a surface of the stamp which comes into contact with the integrated circuit wafer during the step of stamping the stamp.3. The method of claim 1, wherein the material serves as a catalytic colloid.4. The method of claim 1, further comprising transferring the material to the integrated circuit wafer.5. The method of claim 4, wherein the transferred material provides a seeding layer for electroless copper (Cu) deposition.6. The method of claim 5, wherein the conductive material provided in the providing a conductive material in the aperture step comprises copper (Cu).7. The method of claim 1, wherein the step of providing a conductive material in the aperture formed by the stamp comprises electroless copper (Cu) deposition.8. The method of claim 1, wherein the stamp is a poly(dimethylsiloxane) (PDMS) stamp.9. The method of claim 1, wherein the interconnect has a width of between 20 and 50 nanometers (nm).10. A method of fabricating an integrated circuit, the method comprising:providing a dielectric layer over an integrated circuit wafer; selectively forming a trench in the dielectric layer by stamping using an extrusion from a stamp; and filling the trench with a conductive material to form an interconnect. 11. The method of claim 10, wherein the step of selectively forming a trench in the dielectric layer using an extrusion from a stamp comprises stamping the extrusion of the stamp in a location of the dielectric layer where an interconnect is to be formed.12. The method of claim 11, wherein the location in the dielectric layer comprises over an active region of the integrated circuit wafer.13. The method of claim 12, wherein the active region comprises a source region.14. The method of claim 10, further comprising providing a material to the stamp, the material serving as a catalytic colloid.15. The method of claim 14, further comprising transferring at least a portion of the material from the stamp to the integrated circuit wafer, the material serving as a seeding layer for electroless copper (Cu) deposition.16. The method of claim 10, wherein the trench formed in the step of selectively forming a trench in the dielectric layer using an extrusion from a stamp has a width of between 20 and 50 nanometers (nm).17. The method of claim 10, wherein the stamp is a poly(dimethylsiloxane) (PDMS) stamp.18. The method of claim 10, wherein the conductive material comprises copper (Cu).19. The method of claim 10, wherein the extrusion has a wedge shape.20. The method of claim 19, wherein the extrusion is between about 250 and about 750 nm long. |
FIELD OF THE INVENTIONThe present specification relates generally to the field of integrated circuits and to methods of manufacturing integrated circuits (ICs). More particularly, the present specification relates to a system for and a method of forming local interconnects using microcontact printing.BACKGROUND OF THE INVENTIONThe semiconductor industry desires to manufacture integrated circuits (ICs) with higher and higher densities of devices on a smaller chip area to achieve greater functionality and to reduce manufacturing costs. This desire for large-scale integration has led to a continued shrinking of the circuit dimensions and features of the devices.The ability to reduce the size of structures, such as gate lengths in field-effect transistors, is driven by lithographic technology, which is, in turn, dependent upon the wavelength of light used to expose the photoresist. In current commercial fabrication processes, optical devices expose the photoresist using light having a wavelength of 248 nm (nanometers). Research and development laboratories are experimenting with the 193 nm wavelength to reduce the size of structures. Further, advanced lithographic technologies are being developed that utilize radiation having a wavelength of 157 nm and even shorter wavelengths, such as those used in Extreme Ultra-Violet (EUV) lithography (e.g., 13 nm).One challenge facing lithographic technology is fabricating features below 100 nm. Although photolithography is the most widely used technology in IC fabrication, other fabrication technologies are being explored. One such technology is "soft lithography", which is a non-photolithographic strategy based on such techniques as self-assembly, replica molding, and stamping. Examples are provided in U.S. Pat. Nos. 5,512,131 (Kumar et al.), 5,900,160 (Whitesides et al.), and 6,060,121 (Hidber et al.), and also in Xia, Y. and Whitesides, G., "Soft Lithography", Annu. Rev. Mater. Sci. 1998, 28:153-84.As explained by Xia and Whitesides, soft lithography utilizes an elastomeric block or stamp with patterned relief structures on its surface. The elastomeric block is cast molded, coated with a self-assembled monolayer (SAM), and then printed onto a suitable medium, such as Au or Ag, resulting in a thin monolayer of material having a desired chemical property. Soft lithography has been proposed for such applications as microcontact printing of SAMs, patterned SAMs as resists in selective wet etching, patterned SAMs as templates in selective deposition, micromolding, and related techniques.One area of lithography which requires further development is the area of local interconnects. Certain integrated circuits (ICs) and IC fabrication processes utilize local interconnects to electrically couple transistor elements. Local interconnects can connect a drain, source, or gate of one transistor to a drain, source, or gate of another transistor. Additionally, local interconnects can connect the drain, source, or gate of one transistor to the drain, source, or gate of the same transistor or to other circuits or conductors within the IC. Generally, conventional local interconnects are formed below a first aluminum (Al) or metal layer associated with an IC (e.g., at the same level or below the top surface of a first thick insulating layer over the semiconductor substrate).Local interconnects can be created in a trench etch and fill process before the first metal layer is provided over the first thick insulating layer.
Local interconnects are generally formed after transistors are formed on the semiconductor substrate and covered by the first thick insulating layer. The thick insulating layer is etched to form trenches that connect the various circuit and transistor elements in accordance with the particular design of the IC. The trenches are filled with a conductive material, such as polysilicon, tungsten, or another metal, to complete the local interconnect. In this way, connections between transistors, nodes, and other elements can be achieved locally without using the first metal layer. As device sizes continue to decrease, the reduction in local interconnect size has remained an obstacle.Thus, there is a need for microcontact printing for interconnects. Further, there is a need for narrower interconnects. Yet further, there is a need for a system for and method of forming local interconnects using microcontact printing.The teachings hereinbelow extend to those embodiments which fall within the scope of the appended claims, regardless of whether they accomplish one or more of the above-mentioned needs.SUMMARY OF THE INVENTIONAn exemplary embodiment is related to a method of forming a local interconnect in an integrated circuit using microcontact printing. This method can include applying an active agent to a stamp, stamping the stamp on a portion of an integrated circuit wafer to form an aperture in a layer of material on the integrated circuit wafer, and providing a conductive material in the aperture formed by the stamp. The stamp preferably has a wedge-shaped extrusion with a length corresponding to a length of an interconnect to be formed in the portion of the integrated circuit wafer. The conductive material in the aperture defines the interconnect. In one example, the interconnect can be as narrow as 20 to 50 nanometers (nm).Another exemplary embodiment is related to a method of fabricating an integrated circuit. This method can include providing a dielectric layer over an integrated circuit wafer, selectively forming a trench in the dielectric layer using an extrusion from a stamp, and filling the trench with a conductive material to form an interconnect.Another embodiment is related to a system for forming local interconnect using microcontact printing. This system can include a stamp having a wedge-shaped extrusion with a length corresponding to the length of an interconnect to be formed in a portion of an integrated circuit.Other principal features and advantages of the present invention will become apparent to those skilled in the art upon review of the following drawings, the detailed description, and the appended claims.BRIEF DESCRIPTION OF THE DRAWINGSThe exemplary embodiments will hereafter be described with reference to the accompanying drawings, wherein like numerals denote like elements, and:FIG. 1 is a schematic representation of a stamp in accordance with an exemplary embodiment;FIG. 2 is a schematic cross-sectional view of a portion of an integrated circuit and a stamp in accordance with another exemplary embodiment;FIG. 3 is a schematic representation of a stamp and an integrated circuit wafer configured for use with the stamp in accordance with yet another exemplary embodiment;FIG. 4 is a flow diagram of a method of forming local interconnect in an integrated circuit using microcontact printing in accordance with an exemplary embodiment;FIG. 5 is a schematic representation of an exemplary step in the formation of a stamp;
FIG. 6 is a schematic representation of an exemplary etching step in the formation of a stamp; and FIG. 7 is a schematic representation of an exemplary step in the formation of a stamp.DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTSThe description made below with reference to FIGS. 1-7 provides details regarding exemplary embodiments. These exemplary embodiments are intended only to illustrate examples of various configurations, process steps, dimensions, materials, and uses of the invention defined by the claims. Therefore, details describing the exemplary embodiments should not be construed in a limiting manner.Referring first to FIG. 1, a stamp 10 is illustrated. In an exemplary embodiment, stamp 10 comprises an elastomer, such as poly(dimethylsiloxane) (PDMS), a silicone rubber, a polyurethane, a polyimide, or other elastomers. Stamp 10 can include a base portion 12 and a stamping surface 14. In an exemplary embodiment, stamp 10 has the size of a whole die. Stamping surface 14 is configured in this exemplary embodiment with a wedge-shaped extrusion 16. Wedge-shaped extrusion 16 can have a sharp edge, which allows for a cutting function. In an exemplary embodiment, wedge-shaped extrusion 16 can have a length corresponding to the length of an interconnect to be formed in an integrated circuit. The length can be the thickness from the highest point of the integrated circuit to the bottom surface of the dielectric layer. In an exemplary embodiment, the length of wedge-shaped extrusion 16 is 250-750 nm. Accordingly, stamp 10 is suitable for printing an interconnect without a planar process. The elasticity of stamp 10 enables patterning on non-planar surfaces. In alternative embodiments, extrusion 16 can have any of a variety of different configurations that allow a trench or aperture to be formed. For example, instead of a wedge shape, a trapezoidally shaped extrusion could be used.Stamp 10 can be formed by making a groove in silicon (Si) using KOH. In an exemplary embodiment, stamp 10 has the same size as a die. Wedge-shaped extrusion 16 has a wedge length that defines the interconnect width, which can be 10-30 nm. In an alternative embodiment, stamp 10 can include more than one wedge-shaped extrusion. Advantageously, stamp 10 provides for the formation of trenches with widths of 100 nm, which corresponds to an interconnect pitch (i.e., minimum line width + spacing to the next line) of less than 200 nm.Stamp 10 can have fiducials like reticles. Once stamp 10 is hardened in the stamp formation process, it can be dipped into a catalytic colloid in a controlled way. For example, only the tips of the wedges are dipped into the colloid. Stamp 10 is very elastic and can be easily shaped conformally with the topography. Advantageously, stamping is very inexpensive. From one wafer, it is possible to get hundreds of stamps, and each one can be used for tens of thousands of stamping processes. An exemplary process of stamp formation is described with reference to FIGS. 5-7 below.FIG. 2 illustrates a portion 20 of an integrated circuit, including a substrate 22, active regions 24, trench 26, spacers 30, gates 32, gate oxide layers 34, and dielectric layer 36. Substrate 22 can be an entire IC wafer or part of an IC wafer. Substrate 22 can be part of an integrated circuit, such as a memory, a processing unit, an input/output device, etc.
Active regions 24 can include a source, drain, source extension, drain extension, or any other structure doped for electrical activity within portion 20.Trench 26 can be filled with any of a variety of insulative materials. Trench 26 can be a shallow trench isolation (STI) structure and is located to electrically isolate sections in portion 20 from each other. Spacers 30 can be any dielectric material, such as silicon nitride, silicon oxynitride, and silicon-rich nitride, and are located abutting lateral sides of gates 32. Gates 32 can be aluminum or any other electrically conductive or semiconductive material. Gates 32 are separated from substrate 22 by oxide layers 34. Oxide layers 34 can be any of a variety of dielectric materials.Dielectric layer 36 is a layer of dielectric material disposed over active regions 24, trench 26, spacers 30, and gates 32. In an exemplary embodiment, an aperture 40 is created in dielectric layer 36 using stamp 10. Aperture 40 can have a width of 120 nm and a depth of 100 nm. In alternative embodiments, aperture 40 has a width of 80-160 nm and a depth of 100-250 nm.Referring still to FIG. 2, stamp 10 is shown from a perspective view. As shown from this view, stamp 10 includes extrusion 16 having a length 17. In an exemplary embodiment, length 17 can be between 250 and 750 nm long. As described above with reference to FIG. 1, the length of extrusion 16 corresponds to the length of aperture 40 or the length of the interconnect. In an exemplary embodiment, the stamping surface is covered by an active agent 19, such as palladium (for Cu) or cobalt (for Ni), which serves as a catalytic colloid. For copper (Cu) electroless deposition, palladium can be used.During formation of the local interconnect in aperture 40, stamp 10 transfers the layer of active agent onto the wafer. The transferred layer of active agent serves as a seeding layer for electroless copper (Cu) deposition. Electroless copper deposition is one way to fill aperture 40 with a conductive material and form an interconnect. Electroless copper deposition can include plating copper, nickel, or silver by a redox reaction of complexed metal in formaldehyde without any electrical current. As such, the deposition can be done on dielectric layers, too. In other embodiments, different processes can be utilized to fill aperture 40 with a variety of different materials.FIG. 3 illustrates alignment of stamp 10 to portion 20. Alignment structures 60 can be used to align portion 20 during lithography processes and during stamping. Since poly(dimethylsiloxane) (PDMS) is transparent for wavelengths of light or radiation less than 300 nanometers (nm), it is possible to use the same alignment structures to achieve a good alignment with stamp 10 as used in conventional lithography. Such alignment structures can include any of a variety of fiducials, such as alignment marks. In an exemplary embodiment, alignment involves shining laser light through fiducials on the stamp onto the wafer. Alignment marks can be well-controlled trenches with a depth that maximizes reflected intensity. Reflected laser light is collected by a photodetector.Advantageously, stamping utilizing stamp 10 described with reference to FIGS. 1-3 simplifies the process of interconnect formation and allows very narrow (e.g., 20-50 nanometers (nm)) local interconnects. Thus, stamping simultaneously forms aperture 40 for the local interconnect and seeds the aperture for the conductive material of the local interconnect.
FIG. 4 is a flowchart 80 illustrating steps in an exemplary method of forming local interconnect in an integrated circuit using microcontact printing. In a step 82, an active agent is applied to the surface of stamp 10 described with reference to FIGS. 1-3. After step 82, a step 84 is performed in which stamp 10 is stamped or depressed against portion 20 of the integrated circuit. Stamping of stamp 10 against portion 20 creates a trench or aperture in a layer of portion 20 from wedge-shaped extrusion 16. In an exemplary embodiment, the trench or aperture is created in dielectric layer 36 (described with reference to FIG. 2).After step 84, a step 86 is performed in which the trench or aperture is filled with a conductive material. Once the trench or aperture is filled, it forms an interconnect. Advantageously, wedge-shaped extrusion 16 has a small width, resulting in a narrow interconnect. In an exemplary embodiment, the trench (and thus the interconnect) can be as narrow as 20 to 50 nanometers (nm).FIG. 5 illustrates an exemplary stamp formation technique involving a fabrication portion 100. Portion 100 includes a silicon wafer 110 and a patterned hard mask 120. Patterned hard mask 120 can be a nitride layer having an aperture or opening 130. Referring now to FIG. 6, an etching process is performed to form wedge structures on the stamp. In an exemplary embodiment, a KOH etch makes a groove 140 in silicon wafer 110 due to preferential etching. By controlling the etch time, it is possible to determine the length of the wedge, which defines the width of the local interconnect. To achieve a groove height of approximately 100 nm, for example, it is necessary to have trenches in nitride hard mask 120 of comparable sizes.Once groove 140 is formed, patterned hard mask 120 is removed and a PDMS stamp material layer 150 is provided (FIG. 7). Groove 140 in silicon wafer 110 creates a wedge-shaped extrusion in PDMS stamp material layer 150.While the exemplary embodiments illustrated in the FIGURES and described above are presently preferred, it should be understood that these embodiments are offered by way of example only. Other embodiments may include, for example, stamps having a variety of different shaped extrusions or trench-forming structures. The invention is not limited to a particular embodiment, but extends to various modifications, combinations, and permutations that nevertheless fall within the scope and spirit of the appended claims. |
A memory array comprises a vertical stack comprising alternating insulative tiers and wordline tiers. The wordline tiers comprise gate regions of individual memory cells. The gate regions individually comprise part of a wordline in individual of the wordline tiers. Channel material extends elevationally through the insulative tiers and the wordline tiers. The individual memory cells comprise a memory structure laterally between the gate region and the channel material. Individual of the wordlines comprise laterally-outer longitudinal-edge portions and a respective laterally-inner portion laterally adjacent individual of the laterally-outer longitudinal-edge portions. The individual laterally-outer longitudinal-edge portions project upwardly and downwardly relative to its laterally-adjacent laterally-inner portion. Methods are disclosed. |
1. A method for forming a memory array, comprising:forming a stack comprising vertically alternating insulating layers and word line layers, the insulating layers comprising sacrificial material, the insulating layers and word line layers comprising opposing longitudinal edges having a longitudinal shape of a longitudinal profile of respective word lines to be formed in each of the word line layers, the word line layers comprising a first conductive material of the respective word lines to be formed;selectively depositing a second conductive material laterally from the first conductive material beyond the opposing longitudinal edges of the insulating layers, the selectively deposited second conductive material projecting upwardly and downwardly into each of the insulating layers and comprising portions of the respective word lines;after the selective deposition, removing the sacrificial material from the insulating layers; andafter said removal, forming an insulator material to line and partially fill each insulating layer.2. The method of claim 1, comprising forming the insulator material to extend fully vertically between upwardly projecting portions and downwardly projecting portions of the selectively deposited second conductive material of vertically adjacent word line layers to form a longitudinally elongated void in each insulating layer.3. The method of claim 1, wherein upon initiating the selective deposition, the first conductive material is laterally recessed from the opposing longitudinal edges of the insulating layers.4. The method of claim 1, wherein the first conductive material and the second conductive material have the same composition relative to each other.5. The method of claim 1, comprising:forming a channel material extending vertically through the insulating layers and the word line layers; andforming each memory cell of the array to comprise a gate region and a memory structure located laterally between the gate region and the channel material.6. The method of claim 5, comprising forming the memory structure to include:a charge blocking region of each memory cell vertically along each gate region;charge storage material of each memory cell vertically along each of the charge blocking regions; andan insulating charge transfer material located laterally between the channel material and the charge storage material.7. A method for forming a memory array, comprising:forming a stack comprising vertically alternating insulating layers and word line layers, the insulating layers and word line layers comprising opposing longitudinal edges having a longitudinal shape of a longitudinal profile of respective word lines to be formed in each of the word line layers, the word line layers comprising a first sacrificial material, and the insulating layers comprising a second material having a different composition than the first sacrificial material;selectively depositing a third sacrificial material laterally from the first sacrificial material beyond the opposing longitudinal edges of the insulating layers, the selectively deposited third sacrificial material projecting upwardly and downwardly into each of the insulating layers, the third sacrificial material having a different composition than the composition of the second material;forming a fourth material projecting upwardly and downwardly into each insulating layer directly above and directly below the selectively deposited third sacrificial material,
the fourth material having a composition different from the compositions of the first and third sacrificial materials;selectively removing the first and third sacrificial materials relative to the second and fourth materials to form: a) cavities extending upwardly and downwardly in the fourth material, and b) word line layer voids;forming conductive material in the cavities and in the word line layer voids and forming the respective word lines to comprise the conductive material in the cavities and in the word line layer voids; andforming the fourth material to line and partially fill each insulating layer.8. The method of claim 7, wherein the first sacrificial material and the third sacrificial material have the same composition relative to each other.9. The method of claim 7, wherein the second material and the fourth material have the same composition relative to each other.10. The method of claim 7, wherein the conductive material completely fills the cavities and the word line layer voids.11. The method of claim 7, comprising selectively removing the second and fourth materials relative to the conductive material after said forming of the conductive material.12. The method of claim 7, comprising:forming a channel material extending vertically through the insulating layers and the word line layers; andforming each memory cell of the array to comprise a gate region and a memory structure located laterally between the gate region and the channel material.13. The method of claim 12, comprising forming the memory structure to include:a charge blocking region of each memory cell vertically along each gate region;charge storage material of each memory cell vertically along each of the charge blocking regions; andan insulating charge transfer material located laterally between the channel material and the charge storage material.14. A memory array, comprising:a vertical stack comprising alternating insulating layers and word line layers, the word line layers including gate regions of respective memory cells, the gate regions respectively including portions of word lines in each of the word line layers;a channel material extending vertically through the insulating layers and the word line layers;the respective memory cells including a memory structure laterally located between the gate region and the channel material;each of the word lines including laterally outer longitudinal edge portions and a respective laterally inner portion laterally adjacent each laterally outer longitudinal edge portion, the respective laterally outer longitudinal edge portions projecting upwardly and downwardly relative to their laterally adjacent laterally inner portions; andan insulator material lining and partially filling the insulating layers.15. The memory array of claim 14, wherein the insulating layers each comprise a longitudinally elongated void.16. The memory array of claim 15, comprising the insulator material extending completely vertically between the respective laterally outer longitudinal edge portions of vertically adjacent word line layers.17. The memory array of claim 15, wherein the longitudinally elongated void is laterally and circumferentially surrounded by insulator material that extends fully vertically between the laterally outer longitudinal edge portions of vertically adjacent word line layers.18. The memory array of claim 14, wherein each word line has a generally horizontal I-beam shape in a vertical cross-section orthogonal to a primary longitudinal orientation of the respective word line.19.
The memory array of claim 14, wherein each of the laterally outer longitudinal edge portions includes an upper projection projecting upwardly at an angle from a laterally adjacent upper surface and a lower projection projecting downwardly at an angle from a laterally adjacent lower surface.20. The memory array of claim 19, wherein each of the angles is 90°.21. The memory array of claim 19, wherein each of the laterally adjacent upper and lower surfaces is horizontal.22. The memory array of claim 21, wherein each of the angles is 90°.23. The memory array of claim 19, wherein the upper and lower projections respectively project the same maximum amount from their respective laterally adjacent upper and lower surfaces.24. The memory array of claim 19, having a total number of upper projections and a total number of lower projections, the total numbers being the same as each other.25. The memory array of claim 19, having one and only one upper projection and one and only one lower projection.26. The memory array of claim 19, having a plurality of upper projections and a plurality of lower projections.27. A memory array comprising:a vertical stack comprising alternating insulating layers and word line layers, the word line layers including gate regions of respective memory cells, the gate regions respectively including portions of word lines in each of the word line layers;a channel material extending vertically through the insulating layers and the word line layers;the respective memory cells including a memory structure laterally located between the gate region and the channel material;each of the word lines including laterally outer longitudinal edge portions and a respective laterally inner portion laterally adjacent each of the memory structures, the respective laterally outer longitudinal edge portions being vertically taller than each laterally inner portion; andan insulator material lining and partially filling the insulating layers.28. A memory array comprising:a vertical stack comprising alternating insulating layers and word line layers, the word line layers comprising control gate regions of respective memory cells, the control gate regions respectively comprising portions of word lines in respective word line layers;a charge blocking region of each memory cell vertically along each of the control gate regions;charge storage material of each memory cell vertically along each of the charge blocking regions;a channel material extending vertically through the insulating layers and the word line layers;an insulating charge transfer material located laterally between the channel material and the charge storage material;each of the word lines including laterally outer longitudinal edge portions and a respective laterally inner portion laterally adjacent each laterally outer longitudinal edge portion, the respective laterally outer longitudinal edge portions projecting upwardly and downwardly relative to their laterally adjacent laterally inner portions; andan insulator material lining and partially filling the insulating layers.29.
A memory array comprising:a vertical stack comprising alternating insulating layers and word line layers, the word line layers including control gate regions of respective memory cells, the control gate regions respectively including portions of word lines in each of the word line layers;a charge blocking region of each memory cell vertically along each of the control gate regions;charge storage material of each memory cell vertically along each of the charge blocking regions;a channel material extending vertically through the insulating layers and the word line layers;an insulating charge transfer material located laterally between the channel material and the charge storage material;each of the word lines including laterally outer longitudinal edge portions and a respective laterally inner portion laterally adjacent each of the memory cells, the respective laterally outer longitudinal edge portions being vertically taller than each laterally inner portion; andan insulator material lining and partially filling the insulating layers. |
Memory Arrays and Methods for Forming Memory ArraysTechnical FieldEmbodiments disclosed herein relate to memory arrays and to methods for forming memory arrays.BackgroundMemory is one type of integrated circuitry and is used in computer systems to store data. Memory may be fabricated as one or more arrays of individual memory cells. Digit lines (which may also be referred to as bit lines, data lines, or sense lines) and access lines (which may also be referred to as word lines) may be used to write to and read from memory cells. Sense lines may conductively interconnect memory cells along columns of the array, and access lines may conductively interconnect memory cells along rows of the array. Each memory cell is uniquely addressable through a combination of sense and access lines.Memory cells may be volatile, semi-volatile, or non-volatile. Non-volatile memory cells can store data for long periods of time without power. Non-volatile memory is typically specified as having a retention time of at least about 10 years. Volatile memory dissipates and is therefore refreshed/rewritten to maintain data storage. Volatile memory can have a retention time of a few milliseconds or less. Regardless, the memory cells are configured to retain or store memory in at least two different selectable states. In a binary system, the states are considered as either a "0" or a "1". In other systems, at least some individual memory cells may be configured to store more than two levels or states of information.Field-effect transistors are one type of electronic component that can be used in memory cells. These transistors include a pair of conductive source/drain regions with a semiconductive channel region therebetween. A conductive gate is adjacent to the channel region and separated from the channel region by a thin gate insulator. Applying a suitable voltage to the gate allows current to flow from one of the source/drain regions to the other through the channel region. When the voltage is removed from the gate, current flow through the channel region is largely prevented. Field-effect transistors may also include additional structures, such as a reversibly programmable charge storage region as part of the gate construction between the gate insulator and the conductive gate.Flash memory is one type of memory used extensively in modern computers and devices. For example, modern personal computers can have the BIOS stored on a flash memory chip. As another example, it is increasingly common for computers and other devices to utilize flash memory in the form of solid-state drives in place of traditional hard drives. As yet another example, flash memory is popular in wireless electronic devices because it enables manufacturers to support new communication protocols as they become standardized and to provide the ability to remotely upgrade the devices for enhanced features.NAND may be a basic architecture of integrated flash memory. A NAND cell unit includes at least one selection device coupled in series with a series combination of memory cells (the series combination commonly referred to as a NAND string). NAND architectures may be configured in a three-dimensional arrangement that includes vertically stacked memory cells that individually include vertical transistors that are reversibly programmable. Control or other circuitry may be formed beneath the vertically stacked memory cells.
Other volatile or non-volatile memory array architectures may also include vertically stacked memory cells that individually include transistors.Description of the DrawingsFIG. 1 is a schematic cross-sectional view of a portion of a substrate in process according to an embodiment of the present invention.FIG. 1A is an enlarged view of a portion of FIG. 1.FIG. 2 is a view of the substrate of FIG. 1 taken at a process step subsequent to the process step shown in FIG. 1 and through line 2-2 in FIG. 3.FIG. 3 is a view taken through line 3-3 in FIG. 2.FIG. 4 is a view of the substrate of FIG. 3 at a process step subsequent to the process step shown in FIG. 3.FIG. 5 is a view of the substrate of FIG. 4 taken at a process step subsequent to the process step shown in FIG. 4 and through line 5-5 in FIG. 6.FIG. 6 is a view taken through line 6-6 in FIG. 5.FIG. 7 is a view of the substrate of FIG. 6 at a process step subsequent to the process step shown in FIG. 6.FIG. 8 is a view of the substrate of FIG. 7 at a process step subsequent to the process step shown in FIG. 7.FIG. 9 is a view of the substrate of FIG. 8 taken at a process step subsequent to the process step shown in FIG. 8 and through line 9-9 in FIG. 10.FIG. 10 is a view taken through line 10-10 in FIG. 9.FIG. 11 is an enlarged view of a portion of FIG. 10.FIG. 12 is a view of the substrate of FIG. 10 at a process step subsequent to the process step shown in FIG. 10.FIG. 12A is an enlarged view of a portion of FIG. 12.FIG. 13 is a view of the substrate of FIG. 12 at a process step subsequent to the process step shown in FIG. 12.FIG. 13A is an enlarged view of a portion of FIG. 13.FIG. 14 is a view of the substrate of FIG. 13 at a process step subsequent to the process step shown in FIG. 13.FIG. 14A is an enlarged view of a portion of FIG. 14.FIG. 14B is an enlarged view of a portion of FIG. 14.FIG. 15 is a view of the substrate of FIG. 14 at a process step subsequent to the process step shown in FIG. 14.FIG. 15A is an enlarged view of a portion of FIG. 15.FIG. 16 is a cross-sectional schematic and edited view of a portion of the substrate of FIG. 15.FIG. 17 is a schematic cross-sectional view of a portion of a substrate in process according to an embodiment of the present invention.FIG. 17A is an enlarged view of a portion of FIG. 17.FIG. 18 is a view of the substrate of FIG. 17A at a process step subsequent to the process step shown in FIG. 17A.FIG. 19 is a view of the substrate of FIG. 18 at a process step subsequent to the process step shown in FIG. 18.FIG. 20 is a view of the substrate of FIG. 19 at a process step subsequent to the process step shown in FIG. 19.FIG. 21 is a view of the substrate of FIG. 20 at a process step subsequent to the process step shown in FIG. 20.FIG. 22 is a view of the substrate of FIG. 21 at a process step subsequent to the process step shown in FIG. 21.FIG. 23 is a view of the substrate of FIG. 22 at a process step subsequent to the process step shown in FIG. 22.FIG. 24 is a schematic cross-sectional view of a portion of a substrate in accordance with an embodiment of the present invention.Detailed DescriptionEmbodiments of the present invention encompass methods for forming arrays of transistors and/or memory cells, such as arrays of NAND or other memory cells with under-array peripheral control circuitry (e.g., under-array CMOS).
Embodiments of the present invention encompass so-called "gate last" or "replacement gate" processes, so-called "gate first" processes, and other processes, whether existing or developed in the future, that are independent of the time at which the transistor gates are formed. Embodiments of the present invention also encompass arrays of transistors and/or memory cells (e.g., NAND or other memory cells) independent of the manufacturing method. A first example method embodiment is described with reference to FIGS. 1-15 (including FIGS. 1A, 12A, 13A, 14A, 14B, and 15A), which may be considered a "gate last" or "replacement gate" process.FIGS. 1 and 1A illustrate a substrate construction 10 in process during a method of forming a vertically extending string array 12 of transistors and/or memory cells (not shown). Substrate construction 10 includes a base substrate 11 of any one or more conductive/conductor/conducting (i.e., electrically conductive herein), semiconductive/semiconductor/semiconducting, or insulating/insulator/insulative (i.e., electrically insulating herein) materials. Various materials have been formed vertically above the base substrate 11. The materials can be alongside, vertically inward, or vertically outward of the materials depicted in FIGS. 1 and 1A. For example, other partially or fully fabricated components of the integrated circuit system may be provided somewhere over, around, or within base substrate 11. Controls and/or other peripheral circuitry for operating components within a vertically extending string array of memory cells (e.g., array 12) may also be fabricated and may or may not be fully or partially within the array or sub-array. Additionally, multiple sub-arrays may be fabricated and operated independently of each other, sequentially, or otherwise. In this document, "sub-arrays" are also considered arrays.The substrate construction 10 includes a stack 18 that includes vertically alternating insulating layers 20 and wordline layers 22 directly above an example conductively doped semiconductor material 16 (e.g., conductively doped polysilicon above a metallic material). Wordline layers 22 may not include conductive material and insulating layers 20 may not include insulating material or be insulating at this point in the process. Only a few layers 20 and 22 are shown, whereas the stack 18 will more likely include dozens, a hundred, or more such layers 20 and 22. Wordline layer 22 includes a first material 26 (e.g., silicon nitride), which may be fully or partially sacrificial. The insulating layer 20 includes a second material 24 (e.g., silicon dioxide) that is of a different composition than the first material 26 and may be fully or partially sacrificial. In one embodiment, material 26 may be considered a first sacrificial material 26, and in one embodiment material 24 may be considered a second sacrificial material 24. Conductive material 16 may include part of the control circuitry (e.g., peripheral circuitry underlying the array) that controls read and write access to transistors and/or memory cells to be formed within array 12. Other circuitry that may or may not be part of peripheral and/or control circuitry (not shown) may be between conductive material 16 and stack 18. For example, multiple vertically alternating layers of conductive and insulating materials (not shown) of such circuitry may be below the lowest word line layer 22 and/or above the highest word line layer 22.
Referring to FIGS. 2 and 3, channel openings 25 have been formed (e.g., by dry anisotropic etching) into alternating layers 20 and 22. By way of example only, channel openings 25 are shown arranged in groups or columns of staggered rows of four openings 25 per row. Any alternate existing or future-developed arrangement and construction may be used. Channel openings 25 may extend into conductive material 16 as shown, or may stop atop it (not shown).

In one embodiment, transistor channel material is formed in each channel opening to extend vertically through the insulating layers and the word line layers, and each memory cell of the array is formed to comprise a gate region (e.g., a control gate region) and a memory structure located laterally between the gate region and the channel material. In one such embodiment, the memory structure is formed to comprise a charge blocking region, charge storage material, and insulating charge transfer material. The charge storage material of each memory cell (e.g., floating gate material such as doped or undoped silicon, or charge trapping material such as silicon nitride, metal dots, etc.) is vertically along each charge blocking region. The insulating charge transfer material (e.g., a bandgap-engineered structure having a nitrogen-containing material [e.g., silicon nitride] sandwiched between two insulator oxides [e.g., silicon dioxide]) is located laterally between the channel material and the charge storage material.

FIG. 4 shows an embodiment in which charge blocking material 31/30, charge storage material 32, and charge transfer material 34 have been formed in each channel opening 25, vertically along insulating layers 20 and word line layers 22. Transistor materials 31/30, 32, and 34 (e.g., memory cell materials) may be formed by, for example, depositing respective thin layers of those materials over stack 18 and within each channel opening 25, and subsequently planarizing them back at least to the uppermost surface of stack 18. A punch etch may be conducted to remove materials 31/30, 32, and 34 from the bases of channel openings 25 to expose conductive material 16. Channel material 36 has then been formed in channel openings 25, vertically along insulating layers 20 and word line layers 22. Example channel materials 36 include suitably doped crystalline semiconductor materials, such as one or more of silicon, germanium, and so-called III/V semiconductor materials (e.g., GaAs, InP, GaP, and GaN). An example thickness for each of materials 30, 32, 34, and 36 is 25 to 100 Angstroms. Channel openings 25 are shown comprising a radially central solid dielectric material 38 (e.g., spin-on dielectric, silicon dioxide, and/or silicon nitride). Alternatively, and by way of example only, the radially central portion within channel openings 25 may contain void space (not shown) and/or be free of solid material (not shown).

Referring to FIGS. 5 and 6, horizontally elongated trenches 40 have been formed (e.g., by anisotropic etching) into stack 18 and, in one embodiment, into conductive material 16 (at least to material 16). Insulating layers 20 and word line layers 22 are thereby formed to comprise opposing longitudinal edges 17, 19 (e.g., pairs of such edges) that comprise the longitudinal shape of a longitudinal outline 23 of the individual word lines to be formed in the respective word line layers 22.
Only one complete longitudinal outline 23 is shown with respect to two opposing longitudinal edges 17, 19; with respect to one longitudinal edge 17 and one longitudinal edge 19, only portions of the longitudinal outlines of the two laterally adjacent word lines to be formed are visible adjacent the complete longitudinal outline 23. The word lines to be formed may protrude laterally outward or be recessed laterally inward relative to longitudinal edges 17 and 19, as will be apparent from the continuing discussion.

Referring to FIG. 7, first material 26 (not shown) of word line layers 22 has been etched selectively relative to second material 24 (e.g., using liquid or vapor H3PO4 as the primary etchant where material 26 is silicon nitride and material 24 is silicon dioxide).

Referring to FIG. 8, conductive material 48 has been formed into word line layers 22 through trenches 40 and will comprise the conductive material of the individual word lines to be formed. Any suitable conductive material may be used, such as one or both of a metallic material and/or a conductively doped semiconductor material.

Referring to FIGS. 9-11, first conductive material 48 has been removed from each trench 40. This has resulted in the formation of word lines 29 and vertically extending strings 49 of individual transistors and/or memory cells 56. Approximate locations of transistors and/or memory cells 56 are indicated with brackets in FIG. 11, and some are indicated with dashed outlines in FIG. 10, with transistors and/or memory cells 56 being essentially annular or ring-shaped in the depicted example. First conductive material 48 may be considered as having terminal ends 50 corresponding to control gate regions 52 of individual transistors and/or memory cells 56 (FIG. 11). In the depicted embodiment, control gate regions 52 comprise portions of the respective word lines 29. Materials 31/30, 32, and 34 may be considered a memory structure 65 located laterally between control gate region 52 and channel material 36.

A charge blocking region (e.g., charge blocking material 31/30) is between charge storage material 32 and each control gate region 52. The charge block may have the following functions in a memory cell: in a program mode, the charge block may prevent charge carriers from passing out of the charge storage material (e.g., floating gate material, charge trapping material, etc.) toward the control gate; and in an erase mode, the charge block may prevent charge carriers from flowing from the control gate into the charge storage material. Accordingly, the charge block may function to block charge migration between the control gate region and the charge storage material of each memory cell. The example charge blocking region as shown comprises insulator material 31/30. As a further example, the charge blocking region may comprise a laterally (e.g., radially) outer portion of the charge storage material (e.g., material 32), where such charge storage material is insulative (e.g., in the absence of any material of different composition between insulating charge storage material 32 and conductive material 48). Regardless, as an additional example, the interface of the charge storage material and the conductive material of the control gate may be sufficient to function as a charge blocking region in the absence of any separate-composition insulator material 31/30.
Additionally, the interface of conductive material 48 with material 31/30 (when present), in combination with insulator material 31/30, may function as a charge blocking region, as may, alternately or additionally, a laterally outer region of an insulating charge storage material (e.g., silicon nitride material 32). An example material 31 is hafnium silicon oxide, and an example material 30 is silicon dioxide and/or silicon nitride.

Referring to FIGS. 12 and 12A, second conductive material 37 has been selectively deposited laterally from first conductive material 48 (i.e., selectively with respect to the other outwardly exposed materials) to beyond the opposing longitudinal edges 17, 19 of insulating layers 20, and thereby projects upwardly and downwardly into each adjacent insulating layer 20 and comprises portions of the respective word lines 29. First conductive material 48 and second conductive material 37 may be of the same composition or of different compositions from one another. In one embodiment, and as shown, first conductive material 48 is laterally recessed from the opposing longitudinal edges 17, 19 of insulating layers 20 (FIG. 10) when the selective deposition of second conductive material 37 begins. Any existing or future-developed selective deposition/growth technique may be used. As one example technique, where conductive materials 48 and 37 comprise elemental tungsten and/or aluminum and the other exposed materials comprise silicon dioxide and/or silicon nitride, see U.S. Patent No. 5,043,299, issued to Chang et al. on August 27, 1991.

In one embodiment in which the insulating layers at least initially comprise a sacrificial material (e.g., material 24, whether insulative, semiconductive, or conductive), method embodiments of the present invention further include removing such sacrificial material after the selective deposition shown in FIGS. 12 and 12A. By way of example only, FIGS. 13 and 13A show that all sacrificial material 24 (not shown) has been removed, such as by wet isotropic etching conducted selectively relative to the other exposed materials. Where, for example, material 24 comprises silicon dioxide, materials 37 and 48 comprise elemental tungsten, and material 31 comprises hafnium silicon oxide, an example wet etching chemistry is liquid or vapor HF.

Referring to FIGS. 14, 14A, and 14B, an insulator material 51 (e.g., silicon nitride, silicon oxynitride, aluminum oxide, hafnium oxide, combinations thereof, etc.) has been formed to extend completely vertically between the upwardly and downwardly projecting portions of the selectively deposited second conductive material 37 in vertically adjacent word line layers 22. In one such embodiment, and as shown, longitudinally elongated voids 53 are thereby formed in each insulating layer 20 (extending into and out of the plane of the page on which FIGS. 14, 14A, and 14B lie).

Referring to FIGS. 15 and 15A, another material 57 (e.g., dielectric and/or a silicon-containing material such as polysilicon) has been formed in each trench 40, vertically along and across the insulator material 51 therein.

Referring to FIGS. 14, 14A, 15, and 15A, each example word line 29 that has been formed may be considered as comprising laterally outer longitudinal edge portions 35 and 43 and corresponding laterally inner portions 39 or 41 laterally adjacent the respective laterally outer longitudinal edge portions 35 and 43, with each laterally outer longitudinal edge portion projecting upwardly and downwardly relative to its laterally adjacent laterally inner portion 39 or 41.
In one embodiment, insulator material 51 extends completely vertically between the laterally outer longitudinal edge portions 35 and 43 of vertically adjacent word line layers 22. In one embodiment in which longitudinally elongated voids 53 are formed, such voids may be laterally and circumferentially surrounded by insulator material 51 as shown. In one embodiment, each laterally outer longitudinal edge portion 35 and 43 may be considered as comprising an upper projection 45 that projects upwardly at an angle Θ (FIG. 14A) from a laterally adjacent upper surface 61, and a lower projection 47 that projects downwardly at an angle Φ from a laterally adjacent lower surface 63. In one such embodiment, each of angles Θ and Φ is 90°, and in one embodiment each of the laterally adjacent upper and lower surfaces 61, 63, respectively, is horizontal. In one embodiment, upper projections 45 and lower projections 47 project by the same maximum amount (amount A1) from their respective laterally adjacent upper surface 61 or lower surface 63. In one embodiment, each laterally outer longitudinal edge portion 35 and 43 (dimension T1) is taller than each laterally inner portion 39 or 41 (dimension T2).

In one embodiment, each word line 29 has a generally horizontal I-beam shape in a vertical cross-section orthogonal to the primary longitudinal orientation (i.e., direction) of the respective word line 29. FIG. 16 shows this example I-beam shape for individual word lines 29, with the channel openings and materials therein not shown so that the general I-beam shape can be clearly perceived.

Any other attributes or aspects shown and/or described herein with respect to other embodiments may be used with respect to the above-described embodiments.

An alternate example method of forming memory array 12 is next described with reference to FIGS. 17-23 (including FIG. 17A). The same reference numerals as used for the embodiments described above are used where appropriate, with some construction differences indicated by the suffix "a" or by different reference numerals.

Referring to FIGS. 17 and 17A, an example alternative to the processing depicted by FIG. 7 is shown. In such an embodiment, materials 26 and 24 of FIG. 6 may be considered as comprising a first sacrificial material 26 and a second material 24 of different composition from first sacrificial material 26 (e.g., material 24 may also be sacrificial). A third sacrificial material 67 has been selectively deposited laterally from first sacrificial material 26 (i.e., selectively with respect to the other outwardly exposed materials) to beyond the opposing longitudinal edges 17, 19 of insulating layers 20, and projects upwardly and downwardly into each adjacent insulating layer 20. Third sacrificial material 67 is of different composition from second material 24. First sacrificial material 26 and third sacrificial material 67 may be of the same composition or of different compositions from one another.
For example, and by way of example only, where materials 26 and 67 are silicon nitride and second material 24 is silicon dioxide, silicon nitride material 67 may be selectively grown from silicon nitride material 26 by initially causing the silicon dioxide to be hydroxyl terminated (e.g., by exposure to an H2 plasma or to water). The substrate is then exposed to Si(CH3)3N(CH3)2, which selectively combines with the silicon dioxide to form (CH3)3SiO thereon and will preclude subsequent deposition of silicon nitride thereon. Accordingly, silicon nitride deposited by any existing or future-developed technique will effectively deposit selectively on the exposed silicon nitride.

Referring to FIG. 18, a fourth material 71 of different composition from the compositions of first sacrificial material 26 and third sacrificial material 67 has been formed directly above and directly below the selectively deposited third sacrificial material 67, with the fourth material projecting upwardly and downwardly into the respective insulating layers 20. In one embodiment, fourth material 71 is sacrificial. Regardless, and in one embodiment, FIG. 18 also shows fourth material 71 formed laterally over longitudinal edges 55 of third sacrificial material 67 within trenches 40.

FIG. 19 shows removal of fourth material 71 from longitudinal edges 55 by a short isotropic etch, for example using HF selectively relative to third sacrificial material 67 (where material 71 is silicon dioxide and third sacrificial material 67 is silicon nitride).

Referring to FIG. 20, first sacrificial material 26 (not shown) and third sacrificial material 67 (not shown) have been removed (e.g., by wet isotropic etching) selectively relative to second material 24 and fourth material 71 to form (a) upwardly extending cavities 73 and downwardly extending cavities 75 in fourth material 71, and (b) word line layer voids 77.

Referring to FIG. 21, first conductive material 48 has been formed in cavities 73, 75 and in word line layer voids 77. In one embodiment, and as shown, first conductive material 48 completely fills cavities 73, 75 and word line layer voids 77.

Referring to FIG. 22, first conductive material 48 has been removed from trenches 40, and individual word lines 29 have thereby been formed that comprise the first conductive material 48 located in cavities 73 and 75 and in word line layer voids 77.

Referring to FIG. 23, and in one embodiment, after first conductive material 48 has been formed, second material 24 (not shown) has been removed (e.g., by wet isotropic etching) selectively relative to first conductive material 48 and fourth material 71. Any other attributes or aspects shown and/or described herein with respect to other embodiments may be used.

Embodiments of the present invention encompass memory arrays independent of the method of manufacture. Nevertheless, such memory arrays may have any of the attributes described herein in the method embodiments. Likewise, the method embodiments described above may incorporate and form any of the attributes described with respect to the device embodiments.

In one embodiment, a memory array (e.g., 12) independent of the method of manufacture comprises a vertical stack (e.g., 18) of alternating insulating layers (e.g., 20) and word line layers (e.g., 22). The word line layers comprise gate regions (e.g., 52) of individual memory cells (e.g., 56).
The gate regions individually comprise portions of word lines (e.g., 29) in the respective word line layers. Channel material (e.g., 36) extends vertically through the insulating layers and the word line layers. Individual memory cells comprise a memory structure (e.g., 65) between the gate region and the channel material. In one embodiment, each word line comprises laterally outer longitudinal edge portions (e.g., 35 and 43) and corresponding laterally inner portions (e.g., 39 or 41) laterally adjacent the respective laterally outer longitudinal edge portions. Each laterally outer longitudinal edge portion projects upwardly and downwardly relative to its laterally adjacent laterally inner portion. In one embodiment, each laterally outer longitudinal edge portion is taller (e.g., T1) than each laterally inner portion (e.g., T2). Any other attributes or aspects shown and/or described herein with respect to other embodiments may be used.

The embodiments described above show example methods of producing one and only one upper projection 45 and one and only one lower projection 47, and structures having the same. An alternate example embodiment is described with respect to FIG. 24. The same reference numerals as used for the embodiments described above are used where appropriate, with the suffix "b" indicating some construction differences. Construction 10b has multiple upper projections 45 and multiple lower projections 47. Regardless, in one embodiment, the total number of upper projections and the total number of lower projections are the same as one another (e.g., even where such total is only 1). The FIG. 24 embodiment may be formed, for example, by conducting multiple iterations of the selective deposition and fourth-material formation shown by FIGS. 17, 17A, 18, and 19.

The assemblies and structures discussed above may be used in integrated circuits/circuitry and may be incorporated into electronic systems. Such electronic systems may be used in, for example, memory modules, device drivers, power modules, communication modems, processor modules, and application-specific modules, and may include multilayer, multichip modules. The electronic systems may be any of a broad range of systems, such as cameras, wireless devices, displays, chipsets, set-top boxes, games, lighting systems, vehicles, clocks, televisions, cellular phones, personal computers, automobiles, industrial control systems, aircraft, etc.

In this document, unless otherwise indicated, the terms "vertical", "higher", "upper", "lower", "top", "bottom", "above", "below", "under", "upward", and "downward" generally refer to the vertical direction. "Horizontal" refers to a general direction along a primary substrate surface (i.e., within 10 degrees thereof) relative to which the substrate is processed during fabrication, and "vertical" is a direction generally orthogonal thereto. "Exactly horizontal" is the direction along the primary substrate surface (i.e., at no angle thereto) relative to which the substrate is processed during fabrication. Further, "vertical" and "horizontal" as used herein are generally perpendicular directions relative to one another, independent of the orientation of the substrate in three-dimensional space. Additionally, "vertically extending" and "extend(ing) vertically" refer to a direction that is angled at least 45° away from exactly horizontal.
Furthermore, "vertically extending", "vertically extending", horizontally extending and horizontally extending with respect to a field effect transistor refers to the trench of the transistor along which current flows between the source/drain regions during operation. Orientation of track length. For a bipolar junction transistor, "extend vertically", "extend vertically", extend horizontally and extend horizontally refer to the orientation along the length of the base along which current flows between the emitter and collector during operation. In some embodiments, any components, features, and/or regions that extend vertically extend vertically or within 10° of vertical.Furthermore, "directly above" and "directly below" require at least some lateral overlap (ie, horizontally) of the two stated regions/materials/components with respect to each other. Furthermore, the use of "above" without "directly" only requires that a portion of a stated area/material/component that is above another stated area/material/component be vertically directed from the other stated area/material/component. (i.e. regardless of whether there is any lateral overlap between the two stated areas/materials/components). Similarly, use of "below" without "positive" only requires that some portion of a stated area/material/component below another stated area/material/component be vertically vertical from the other stated area/material/component. Inwardly (i.e. regardless of whether there is any lateral overlap between the two stated areas/materials/components).Any of the materials, regions, and structures described herein may be uniform or non-uniform, and in any event may be continuous or discontinuous over any overlying material. When one or more example compositions are provided for any material, the material may include, consist essentially of, or consist of such one or more compositions . Furthermore, unless otherwise stated, each material may be formed using any suitable or yet to be developed technology, examples of which are atomic layer deposition, chemical vapor deposition, physical vapor deposition, epitaxial growth, diffusion doping, and ion injection.Additionally, "thickness" (preceded by a non-directional adjective) used alone is defined as the average straight-line distance perpendicularly through a given material or zone from the nearest surface of an immediately adjacent material or zone of different composition. Additionally, various materials or regions described herein may have a substantially constant thickness or have variable thicknesses. If there is a variable thickness, then unless otherwise indicated, the thickness refers to the average thickness and the material or region will have a certain minimum thickness and a certain maximum thickness due to the variable thickness. As used herein, "different compositions" requires only that those portions of two stated materials or regions that are directly adjacent one another are chemically and/or physically different, such as where such materials or regions are not homogeneous Down. If two stated materials or regions are not directly in contact with each other, then to the extent such materials or regions are not homogeneous, "different composition" requires only that those portions of the two stated materials or regions that are closest to each other are chemically physically and/or physically different. 
In this document, a material, region, or structure is "directly against" another when the stated materials, regions, or structures are in at least some physical touching contact with one another. In contrast, "over", "on", "adjacent", "along", and "against" not preceded by "directly" encompass "directly against" as well as constructions in which intervening materials, regions, or structures result in the stated materials, regions, or structures not being in physical touching contact with one another.

Herein, regions/materials/components are "electrically coupled" relative to one another if, in normal operation, electric current is capable of continuously flowing from one to the other, and does so predominantly by the movement of subatomic positive and/or negative charges when such are sufficiently generated. Another electronic component may be between and electrically coupled to the regions/materials/components. In contrast, when regions/materials/components are referred to as being "directly electrically coupled", no intervening electronic component (e.g., no diode, transistor, resistor, transducer, switch, fuse, etc.) is between the directly electrically coupled regions/materials/components.

Additionally, a "metallic material" is any one or combination of an elemental metal, a mixture or alloy of two or more elemental metals, and any conductive metal compound.

As used herein, "selective" with respect to etch, etching, removing, removal, depositing, forming, and/or formation is such an act of one stated material relative to another stated material or materials so acted upon at a ratio of at least 2:1 by volume. Additionally, to selectively deposit, selectively grow, or selectively form is to deposit, grow, or form one material relative to another stated material or materials at a ratio of at least 2:1 by volume for at least the first 75 Angstroms of the depositing, growing, or forming.

Unless otherwise indicated, the use of "or" herein encompasses either and both.

Conclusion

In some embodiments, a method used in forming a memory array comprises forming a stack comprising vertically alternating insulating layers and word line layers. The insulating layers comprise opposing longitudinal edges that comprise the longitudinal shape of a longitudinal outline of individual word lines to be formed in individual of the word line layers. The word line layers comprise a first conductive material of the individual word lines to be formed. A second conductive material is selectively deposited laterally from the first conductive material to beyond the opposing longitudinal edges of the insulating layers. The selectively deposited second conductive material projects upwardly and downwardly into individual of the insulating layers and comprises portions of the respective word lines.

In some embodiments, a method used in forming a memory array comprises forming a stack comprising vertically alternating insulating layers and word line layers. The insulating layers and the word line layers comprise opposing longitudinal edges that comprise the longitudinal shape of a longitudinal outline of individual word lines to be formed in individual of the word line layers. The word line layers comprise a first sacrificial material. The insulating layers comprise a second material of different composition from the first sacrificial material.
A third sacrificial material is selectively deposited laterally from the first sacrificial material to beyond the opposing longitudinal edges of the insulating layers. The selectively deposited third sacrificial material projects upwardly and downwardly into individual of the insulating layers. The third sacrificial material is of different composition from the second material. A fourth material projecting upwardly and downwardly into the respective insulating layers is formed directly above and directly below the selectively deposited third sacrificial material. The fourth material is of different composition from the first and third sacrificial materials. The first and third sacrificial materials are removed selectively relative to the second and fourth materials to form (a) upwardly and downwardly extending cavities in the fourth material, and (b) word line layer voids. Conductive material is formed in the cavities and in the word line layer voids, and individual word lines are formed to comprise the conductive material in the cavities and the word line layer voids.

In some embodiments, a memory array comprises a vertical stack comprising alternating insulating layers and word line layers. The word line layers comprise gate regions of individual memory cells. The gate regions individually comprise portions of word lines in individual of the word line layers. Channel material extends vertically through the insulating layers and the word line layers. The individual memory cells comprise a memory structure laterally between the gate region and the channel material. Individual of the word lines comprise laterally outer longitudinal edge portions and corresponding laterally inner portions laterally adjacent individual of the laterally outer longitudinal edge portions. Individual of the laterally outer longitudinal edge portions project upwardly and downwardly relative to their laterally adjacent laterally inner portions.

In some embodiments, a memory array comprises a vertical stack comprising alternating insulating layers and word line layers. The word line layers comprise gate regions of individual memory cells. The gate regions individually comprise portions of word lines in individual of the word line layers. Channel material extends vertically through the insulating layers and the word line layers. The individual memory cells comprise a memory structure laterally between the gate region and the channel material. Individual of the word lines comprise laterally outer longitudinal edge portions and corresponding laterally inner portions laterally adjacent individual of the memory structures. Individual of the laterally outer longitudinal edge portions are taller than individual of the laterally inner portions.

In some embodiments, a memory array comprises a vertical stack comprising alternating insulating layers and word line layers. The word line layers comprise control gate regions of individual memory cells. The control gate regions individually comprise portions of word lines in individual of the word line layers. A charge blocking region of individual memory cells is vertically along individual of the control gate regions. Charge storage material of individual memory cells is vertically along individual of the charge blocking regions. Channel material extends vertically through the insulating layers and the word line layers. Insulating charge transfer material is laterally between the channel material and the charge storage material.
Individual of the word lines comprise laterally outer longitudinal edge portions and corresponding laterally inner portions laterally adjacent individual of the laterally outer longitudinal edge portions. Individual of the laterally outer longitudinal edge portions project upwardly and downwardly relative to their laterally adjacent laterally inner portions.

In some embodiments, a memory array comprises a vertical stack comprising alternating insulating layers and word line layers. The word line layers comprise control gate regions of individual memory cells. The control gate regions individually comprise portions of word lines in individual of the word line layers. A charge blocking region of individual memory cells is vertically along individual of the control gate regions. Charge storage material of individual memory cells is vertically along individual of the charge blocking regions. Channel material extends vertically through the insulating layers and the word line layers. Insulating charge transfer material is laterally between the channel material and the charge storage material. Individual of the word lines comprise laterally outer longitudinal edge portions and corresponding laterally inner portions laterally adjacent individual of the memory structures. Individual of the laterally outer longitudinal edge portions are taller than individual of the laterally inner portions. |
An apparatus and method for fairly accessing a shared cache with multiple resources, such as multiple cores, multiple threads, or both, are described herein. A resource within a microprocessor sharing access to a cache is assigned a static portion of the cache and a dynamic portion. The resource is blocked from victimizing static portions assigned to other resources, yet is allowed to victimize the static portion assigned to the resource and the dynamically shared portion. If the resource does not access the cache enough times over a period of time, the static portion assigned to the resource is reassigned to the dynamically shared portion. |
1. An integrated circuit, comprising: a cache having a plurality of static portions and a dynamically shared portion; a plurality of computing resources, each computing resource operable to victimize the one of the plurality of static portions of the cache allocated to the computing resource and the dynamically shared portion; and reallocation logic to reallocate at least one way of a first static portion of the plurality of static portions to the dynamically shared portion in response to an access activity level of a first computing resource associated with the first static portion being below a predetermined threshold.

2. The integrated circuit of claim 1, wherein the plurality of computing resources comprises a first number of cores.

3. The integrated circuit of claim 1, wherein the plurality of computing resources comprises a first number of threads.

4. The integrated circuit of claim 1, wherein the plurality of computing resources comprises at least one core and at least one thread.

5. The integrated circuit of claim 1, wherein each computing resource is further operable to victimize at least one of the plurality of static portions allocated to another computing resource of the plurality of computing resources.

6. The integrated circuit of claim 1, wherein the reallocation logic comprises counting logic to count the number of times each computing resource accesses the cache over a period of time.

7. The integrated circuit of claim 6, wherein the reallocation logic to reallocate at least one way of the first static portion to the dynamically shared portion in response to the access activity level of the first computing resource associated with the first static portion being below a predetermined threshold comprises: reallocation logic to reallocate the at least one way of the first static portion to the dynamically shared portion in response to the first computing resource accessing the cache fewer than the predetermined threshold number of times during the period of time.

8. The integrated circuit of claim 7, wherein accessing the cache comprises requesting an element from the cache, the element having an associated address.

9. The integrated circuit of claim 7, wherein accessing the cache comprises requesting an element from an address that causes a cache miss.

10. The integrated circuit of claim 1, wherein each computing resource being operable to victimize the static portion of the cache allocated to the computing resource and the dynamically shared portion comprises: each computing resource being operable, based on a miss, to replace cache lines only in the static portion of the cache allocated to the computing resource and in the dynamically shared portion, and not being operable, based on a miss, to replace cache lines in a static portion allocated to another computing resource.

11. The integrated circuit of claim 10, wherein each computing resource is further operable to hit a cache line present in the static portion of the cache allocated to the computing resource, in a static portion allocated to another computing resource, or in the dynamically shared portion.

12. The integrated circuit of claim 1, wherein the plurality of static portions are equal in number to the number of the plurality of computing resources, and wherein the size, in ways, of the dynamically shared portion of the cache is equal to the number of the plurality of computing resources.

13. The integrated circuit of claim 12, wherein the cache has a size of 16 ways, the number of the plurality of computing resources is equal to 8, the dynamically shared portion has a size equal to 8 ways of the cache, and there are 8 static portions, each having a size equal to one way of the cache.

14. A microprocessor, comprising: a first resource having an associated first resource identifier; a second resource having an associated second resource identifier; a cache logically organized into a plurality of ways; a blocking mechanism to block the second resource, based at least in part on the second resource identifier, from victimizing a first number of the plurality of ways, to block the first resource, based at least in part on the first resource identifier, from victimizing a second number of the plurality of ways, and to allow the first and second resources to victimize a third number of the plurality of ways; and a dynamic allocation mechanism to assign at least a first way of the first number of ways to the third number of ways in response to a low activity level associated with the first resource, and to assign at least a second way of the second number of ways to the third number of ways in response to a low activity level associated with the second resource.

15. The microprocessor of claim 14, wherein the first and second resources are cores, and wherein the first and second resource identifiers are core identifiers associated with the first and second cores, respectively.

16. The microprocessor of claim 14, wherein the first and second resources are threads, and wherein the first and second resource identifiers are thread identifiers associated with the first and second threads, respectively.

17. The microprocessor of claim 14, wherein the first resource is a core and the second resource is a thread, and wherein the first resource identifier is a core identifier associated with the first core and the second resource identifier is a thread identifier associated with the second thread.

18. The microprocessor of claim 14, wherein the blocking mechanism comprises a mask generated based at least in part on the first resource identifier when the first resource initiates a cache lookup, and a mask generated based at least in part on the second resource identifier when the second resource initiates a cache lookup.

19. The microprocessor of claim 18, wherein the mask comprises a plurality of mask bits, each mask bit corresponding to one of the plurality of ways.

20. The microprocessor of claim 19, wherein each of the plurality of mask bits corresponding to the first number of the plurality of ways has a first value to block the second resource, based at least in part on the second resource identifier, from victimizing the first number of ways when a cache lookup is initiated by the second resource.

21. The microprocessor of claim 20, wherein each of the plurality of mask bits corresponding to the second and third numbers of ways has a second value to allow the second resource, based at least in part on the second resource identifier, to victimize the second and third numbers of ways.

22. The microprocessor of claim 21, wherein the cache is logically organized into 8 ways, the first number of ways is 2, the second number of ways is 2, the third number of ways is 4, and the mask comprises 8 mask bits.

23. The microprocessor of claim 22, wherein the two mask bits corresponding to the first number of 2 ways have a logic value of 0 to block the second resource from victimizing the first number of 2 ways when the second resource initiates a cache lookup, the two mask bits corresponding to the second number of 2 ways have a logic value of 1 to allow the second resource to victimize the second number of 2 ways when the second resource initiates a cache lookup, and the four mask bits corresponding to the third number of 4 ways have a logic value of 1 to allow the second resource to victimize the third number of 4 ways when the second resource initiates a cache lookup.

24. The microprocessor of claim 14, wherein the dynamic allocation mechanism comprises: a first counter to count a first number of accesses to the cache associated with the first resource during a period of time; a second counter to count a second number of accesses to the cache associated with the second resource during the period of time; and comparison logic to determine that the first resource is associated with a low activity level in response to the first number of accesses being less than a predetermined number of accesses, and to determine that the second resource is associated with a low activity level in response to the second number of accesses being less than the predetermined number of accesses.

25. An apparatus for sharing access to a cache, comprising: a cache; a first computing resource to access a first statically allocated portion of the cache and a dynamic portion of the cache; a second computing resource to access a second statically allocated portion of the cache and the dynamic portion of the cache; a counter to count a first number of accesses to the cache by the first computing resource during a period of time and a second number of accesses to the cache by the second computing resource during the period of time; and logic operable to reduce the size of the first statically allocated portion of the cache and increase the size of the dynamic portion of the cache if the first number of accesses at the end of the period of time is less than a predetermined number, and to reduce the size of the second statically allocated portion of the cache and increase the size of the dynamic portion of the cache if the second number of accesses at the end of the period of time is less than the predetermined number.

26. The apparatus of claim 25, wherein the first and second computing resources are selected from a group consisting of cores, hardware threads, and software threads.

27. The apparatus of claim 25, wherein the cache is organized into a plurality of ways, and wherein the first statically allocated portion of the cache comprises a first way of the plurality of ways and the second statically allocated portion of the cache comprises a second way of the plurality of ways.

28. The apparatus of claim 27, wherein reducing the size of the first statically allocated portion of the cache and increasing the size of the dynamic portion of the cache comprises reallocating the first way to the dynamic portion of the cache, and wherein reducing the size of the second statically allocated portion and increasing the size of the dynamic portion of the cache comprises reallocating the second way to the dynamic portion of the cache.

29. The apparatus of claim 28, wherein the logic is further operable to allocate the first way back to the first statically allocated portion of the cache based on a predetermined number of cache misses by the first computing resource, and wherein the logic is further operable to allocate the second way back to the second statically allocated portion of the cache based on the predetermined number of cache misses by the second computing resource.

30. The apparatus of claim 25, wherein accessing the cache comprises generating an address of an element to be retrieved and comparing a portion of the address with tag values in the cache.

31. The apparatus of claim 25, wherein accessing the cache comprises requesting an element from an address that causes a cache miss.

32. A system for sharing access to a cache, comprising: a system memory comprising a plurality of memory locations to store elements, each memory location referenced by a physical address; and a microprocessor coupled to the system memory, comprising an address translation unit to translate a virtual memory address into a physical address referencing the plurality of memory locations, a cache logically organized into a plurality of ways to store elements recently fetched from the plurality of memory locations, a plurality of resources assigned to dynamically share a first number of the plurality of ways, wherein each resource is also assigned a static second number of the plurality of ways, and logic to reallocate at least one of the static second number of ways assigned to a first resource of the plurality of resources to the dynamically shared first number of ways if the first resource has not accessed the cache a predetermined number of times within a period of time.

33. The system of claim 32, wherein the system memory is a random access memory chip.

34. The system of claim 32, wherein the elements are selected from a group consisting of instructions, operands, and data operands.

35. The system of claim 32, wherein the address translation unit comprises a translation lookaside buffer (TLB).

36. The system of claim 32, wherein the cache is logically organized as a set associative cache.

37. The system of claim 32, wherein the plurality of resources are a plurality of multi-threaded cores.

38. The system of claim 37, wherein there are four multi-threaded cores and the cache has 8 ways.

39. The system of claim 38, wherein the dynamically shared first number of ways is equal to 4, and wherein the static second number of ways assigned to each of the four multi-threaded cores is equal to 1.

40. A method for sharing access to a cache, comprising: generating an address associated with an instruction scheduled for execution on a first resource, the address referencing a memory location of an element; requesting the element from the cache; determining whether the element is present in the cache; if the element is not present in the cache, allowing the first resource to victimize at least a first way of the cache allocated to the first resource and at least a second way of the cache shared by at least the first resource and a second resource, and blocking the first resource from victimizing at least a third way of the cache allocated to the second resource; and allocating the first way of the cache to the second resource in response to the first resource underutilizing the cache.

41. The method of claim 40, wherein allowing the first resource to victimize at least the first way and at least the second way is based at least in part on a first resource identifier associated with the first resource.

42. The method of claim 41, wherein blocking the first resource from victimizing at least the third way is based at least in part on the first resource identifier.

43. The method of claim 42, wherein allowing the first resource to victimize at least the first way and at least the second way based at least in part on the first resource identifier comprises: determining, based at least in part on a resource identifier of the requesting resource, whether the first resource is requesting the element from the cache; and generating at least first and second mask bits corresponding to the first and second ways, respectively, the first and second mask bits having a first logic value to allow the first resource to victimize the first and second ways.

44. The method of claim 43, wherein blocking the first resource, based at least in part on the first resource identifier, from victimizing at least the third way of the cache allocated to the second resource comprises: generating a third mask bit corresponding to the third way, the third mask bit having a second logic value to block the first resource from victimizing the third way when the first resource requests the element.

45. The method of claim 43, wherein blocking the first resource from victimizing the third way comprises not allowing the first resource to allocate a miss to the third way, and wherein allowing the first resource to victimize the first and second ways comprises allowing the first resource to allocate a miss to the first or second way.

46. The method of claim 44, wherein the first value is a logic 1 and the second value is a logic 0.

47. The method of claim 40, wherein the first and second resources are cores.

48. The method of claim 40, wherein the first and second resources are threads.

49. The method of claim 40, further comprising returning the element to the first resource if the element is present in the first, second, or third way.

50. The method of claim 40, wherein the element is selected from a group consisting of instructions, operands, data operands, and binary values.

51. A method for sharing access to a cache, comprising: associating a first way of the cache with a first computing resource of a plurality of computing resources; associating a shared number of ways of the cache with the plurality of computing resources, the shared number of ways not including the first way; counting a number of times the first computing resource accesses the cache during a first period of time; and reallocating the first way to the dynamically shared number of ways if the number of accesses by the first computing resource during the first period of time is less than a predetermined number.

52. The method of claim 51, further comprising, after reallocating the first way to the dynamically shared number of ways, allocating the first way from the dynamically shared number of ways back to the first computing resource if a first number of cache misses by the first computing resource occur during a second period of time.

53. The method of claim 52, wherein the first number of misses is one.

54. The method of claim 51, wherein the plurality of computing resources are selected from a group consisting of single-threaded cores, multi-threaded cores, and threads.

55. The method of claim 51, wherein accessing the cache comprises requesting elements from the cache.

56. The method of claim 51, wherein accessing the cache comprises the first computing resource missing the cache. |
Fair Sharing of Cache in Multi-Core/Multi-Threaded Processors

Technical Field

The present invention relates to the field of cache memories, and in particular to shared caches in a multi-resource environment.

Background

Advances in semiconductor processing and logic design have allowed an increase in the number of logic circuits that can be present on integrated circuit devices. As a result, computer system configurations have evolved from multiple integrated circuits in one system to a single integrated circuit capable of storing multiple architectural states, which allows parallel execution of multiple threads. Therefore, a single die can have multiple resources, such as multiple cores and/or multiple threads, to execute code in parallel.

Typically, a thread refers to the ability of an integrated circuit to store a separate architectural state/context for each thread, where the threads may share execution resources. In addition, a thread can refer to an independent application, program, or software thread executing on a hardware thread or core. A core, on the other hand, typically refers to an independent architectural state associated with dedicated execution resources, which may be physically adjacent yet logically isolated, or physically separate. However, both cores and threads can share some level of cache in the memory hierarchy, as well as other units, such as a bus interface, to communicate with external devices.

The use of one or more cache memories in a computer's memory hierarchy is a well-known technique for improving computer performance. Traditionally, three types of cache structures have been used: fully associative, k-way set associative, and direct-mapped cache structures. In a fully associative cache structure, every item of information from main system memory can be stored in any cache entry. In contrast, in a k-way set associative cache, the cache is logically broken into k banks of memory, i.e., k ways. Based on a memory location's offset within a memory page, the set associative cache "associates" that location to a corresponding cache line in each of the k ways. Therefore, each memory location corresponds to a "set" of cache lines, one in each of the k ways. Similarly, a direct-mapped cache is effectively a one-way set associative cache, associating each memory location to a cache line in the single way of the direct-mapped cache.

During a memory transfer, a resource or processor generates a memory address that references the location of an element. The term resource refers to a core, execution core, hardware thread, software thread, or other threading technology. An element is an instruction or an operand. The cache associated with the resource or processor is examined to determine whether the element is present in the cache or must be retrieved from system memory. Common cache techniques, such as tag lookup and indexing, are used to determine whether an element is present in the cache. A cache hit refers to a determination that the element is present in the cache. Alternatively, if the requested element is not present in the cache, a cache miss occurs, and the element is retrieved from system memory to replace the contents of a cache line within the cache. The process of replacing an existing line to make room for the most recent miss is also referred to as victimizing a cache line.
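As a concrete illustration of the set associative indexing just described, the sketch below decomposes an address into a tag, a set index, and a line offset. The geometry (64-byte lines, 1024 sets) and all names are hypothetical choices for illustration only and are not taken from the embodiments described herein.

#include <stdint.h>

/* Hypothetical geometry: 64-byte cache lines, 1024 sets, k ways per set. */
enum { LINE_BITS = 6, SET_BITS = 10 };

typedef struct {
    uint64_t tag;     /* compared against the stored tag in each of the k ways */
    uint32_t set;     /* selects the set of k candidate cache lines            */
    uint32_t offset;  /* byte within the selected cache line                   */
} cache_addr;

static cache_addr decode_address(uint64_t addr)
{
    cache_addr a;
    a.offset = (uint32_t)(addr & ((1u << LINE_BITS) - 1));
    a.set    = (uint32_t)((addr >> LINE_BITS) & ((1u << SET_BITS) - 1));
    a.tag    = addr >> (LINE_BITS + SET_BITS);
    return a;
}

A lookup computes decode_address(addr), reads the k lines of set a.set, and compares a.tag against each stored tag; a match in any way is a hit, and no match is a miss.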
A cache shared among multiple resources allows different independent program threads to share data and instructions without duplicating cache misses. However, multiple resources sharing a cache can result in destructive interference if one resource victimizes a large amount of cache state belonging to another resource. An example of multiple resources sharing a single cache is illustrated in FIG. 1. Integrated circuit 140 includes resource 145, resource 150, and Nth resource 155. Resources 145-155 share access to cache 160, which is organized as a four-way set associative cache having ways 165-168. As can be seen, one of resources 145-155, such as resource 150, can begin to monopolize cache 160 and victimize a large amount of cache state belonging to resource 145. Therefore, ensuring fairness among multiple resources becomes an important consideration.

Brief Description of the Drawings

The present invention is illustrated by way of example and is not intended to be limited by the figures of the accompanying drawings.

FIG. 1 illustrates a prior-art embodiment of an integrated circuit with N resources sharing access to a cache.
FIG. 2 illustrates an embodiment of an integrated circuit having N resources that fairly share a cache.
FIG. 3a illustrates an embodiment of an integrated circuit including two resources that use a blocking mechanism to share access to a cache.
FIG. 3b illustrates an embodiment of an integrated circuit including two resources that use a mask as a blocking mechanism to share access to a cache.
FIG. 4 illustrates an embodiment of an integrated circuit including two resources that use a mask as a blocking mechanism to share access to a cache, where counters and logic are used to reallocate portions of the cache.
FIG. 5 illustrates an embodiment of an integrated circuit including four cores that use masks to share access to a cache.
FIG. 6 illustrates an embodiment of a system with a microprocessor having two cores, each core having two threads, coupled to a memory controller and system memory, where the four threads use masks to share access to a cache.
FIG. 7 illustrates an embodiment of a flowchart of a method of sharing access to a cache.
FIG. 8 illustrates an embodiment of a flowchart of a method of sharing access to a cache.

Detailed Description

In the following description, numerous specific details are set forth, such as specific numbers of resources, specific sizes and organizations of caches, and example logic placements, in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that these specific details need not be employed to practice the present invention. In other instances, well-known components or methods, such as specific implementations of threads/cores and techniques for multithreading, have not been described in detail in order to avoid unnecessarily obscuring the present invention.

Described herein are an apparatus and method for fairly accessing a shared cache with multiple resources, such as multiple cores, multiple threads, or both. The method and apparatus can be implemented at any level of a memory hierarchy. As an example, they may be implemented in a microprocessor having two multi-threaded cores, i.e., first and second multi-threaded cores. The term multi-threaded core indicates that each core can execute multiple threads. Each multi-threaded core has a dedicated lower-level cache. In this example, the apparatus and methods described herein can be used to ensure fair sharing of the first core's dedicated cache among the multiple threads executing on the first core.
In addition, the first and second cores can share access to a higher-level cache, and the methods and apparatus described herein can likewise be used to ensure fair sharing of the higher-level cache between the first and second cores.

Turning to FIG. 2, an integrated circuit 200 including N resources is illustrated, where the N resources share a cache 220. Examples of the integrated circuit 200 include a microprocessor, a co-processor, an embedded processor, or another processor that includes multiple computing resources and a cache. In one embodiment, the integrated circuit 200 is a microprocessor capable of speculative out-of-order execution. The microprocessor can execute independently of other microprocessors, or can operate in conjunction or cooperation with other processors.

As shown in FIG. 2, the integrated circuit 200 includes a resource 205, a resource 210, and an Nth resource 215. Examples of the number of resources present in the integrated circuit 200 include 2, 4, 6, 8, 12, and so on. However, as will become apparent throughout the discussion, there can be any number of resources. The term resource, also referred to as a computing resource, processing resource, etc., generally refers to a core, an execution core, a hardware thread, a software thread, an implicit thread, an explicit thread, or another threading technology. The term core generally includes the logical ability of an integrated circuit to maintain an independent architectural state, also referred to as a context, where the independent architectural state is associated with dedicated execution resources. The execution resources need not be physically separate; in fact, the execution resources can be partitioned within the core.

In contrast, a thread refers to the ability of a core or processor to execute two or more "threads" of control. Each thread on the integrated circuit can therefore store an architectural state/context associated with shared execution resources. In addition, a thread may refer to an independent application, program, or software thread executing on a hardware thread or core. It is therefore apparent that the integrated circuit 200 may be a multi-threaded processor, a multi-core processor, or a multi-threaded multi-core processor, any of which can execute multiple software threads.

As depicted, at least resources 205 and 210 share access to the cache 220. The cache 220 may be a cache at any level of the memory hierarchy of the integrated circuit 200, as described above. The cache 220 has a plurality of static portions, including a static portion 225 and a static portion 230, and a dynamically shared portion 235. The term static portion refers to a dedicated portion of the cache, such as at least one way of the cache, allocated to one or more resources, such as cores or threads. The cache 220 as illustrated is organized as a set associative cache having eight ways, including ways 226, 227, 231, 232, and 236-239. However, the cache 220 is not so limited. Typically, the cache 220 is a static random access memory (SRAM) or another memory having a faster access time than main system memory. The cache 220 may therefore be physically organized in any manner, and logically organized as a set associative cache or in another organization.

Here, the static portion 225 is allocated to the resource 205, and the static portion 230 is allocated to the resource 210. In addition, the dynamically shared portion 235 is allocated to both the resource 205 and the resource 210.
The dynamically shared portion 235 is discussed in more detail below. The static portion 225 includes a way 226 and a way 227, and the static portion 230 includes ways 231 and 232; however, a static portion may contain any number of ways. As shown by the lines in FIG. 2 indicating the allocation of the static portion 225 and the static portion 230, the static portion 225 is not allocated to the resource 210, and the static portion 230 is not allocated to the resource 205.

In one embodiment, the resource 205 can access the static portion 230 allocated to the resource 210 to request and receive elements from the static portion 230; however, the resource 205 is not operable to victimize the static portion 230.

As an example, the resource 205 generates a linear address that references the location of an element in main system memory. A portion of the linear address is compared against the tag values in all of the ways of the cache 220, including ways 231 and 232, to see whether the element is present in the cache 220, i.e., whether there is a cache "hit". If there is a cache hit in any way, including ways 231 and 232, the element is returned to the resource 205, or to the path associated with the resource 205, for execution. However, if the element is not present in the cache 220, i.e., the cache "misses", the element is retrieved from main memory. Since the static portion 230 is allocated to the resource 210, in this example the resource 205 is not allowed to victimize the static portion 230. Therefore, when a way is selected for replacement of a cache line with the element fetched from system memory, the static portion 230 is blocked from being allocated to the miss, i.e., from being victimized by the resource 205. More specifically, when the way to be victimized is selected for a cache lookup initiated by the resource 205, the lines in ways 231 and 232 may not be selected for replacement with the element fetched from system memory.

The blocking of the resource 205 can be layered on top of many well-known cache replacement algorithms to prevent victimization of ways 231 and 232. For example, when a cache miss occurs, a cache replacement algorithm, such as a time-based algorithm, is used to select the way of the cache 220 in which to replace a cache line, i.e., the way to which the miss is allocated. Stated another way, the replacement algorithm is used to select the way of the cache 220 to be victimized. Therefore, if the static portion 230 is blocked from victimization by the resource 205, then when a way of the cache 220 is to be victimized for a miss from the resource 205, a known replacement algorithm makes the selection from the static portion 225 and the dynamically shared portion 235, the ways of the static portion 230 being excluded from the selection.

Although the previously described embodiments utilize only two resources, the integrated circuit 200 can have any number of resources. In addition, it is possible to allow a resource to access or victimize a static portion allocated to another resource. As an example, if ways 236 and 237 are allocated to the Nth resource 215, leaving only ways 238 and 239 as the dynamic portion 235, the static portion 225 and the static portion 230 may both be allocated to the resource 205 and the resource 210. Therefore, if the Nth resource 215 initiates a cache lookup that causes a cache miss, then only ways 236-237 (the static portion allocated to the Nth resource 215) and ways 238-239 (the dynamic portion) may be victimized.
However, if the resource 205 or the resource 210 initiates a cache lookup that causes a cache miss, then ways 226, 227, 231, and 232 (the static portions allocated to resources 205 and 210) and ways 238-239 (the dynamic portion) may be victimized. In addition, static portions can overlap. For example, the static portion 225 allocated to the resource 205 may include ways 226, 227, and 231, while the static portion 230 allocated to the resource 210 includes ways 227, 231, and 232. As a result, the ways 227 and 231 overlap between the static portions, allowing the resources 205 and 210 to victimize ways 227 and 231, but not allowing the Nth resource 215 to victimize them.

As mentioned in the example above, all resources can access the dynamically shared portion 235. However, it is not required that the dynamically shared portion 235 be accessible to all resources. For example, the dynamically shared portion 235 can be available for victimization by all resources except the resource 210. In one embodiment, the number of ways in the dynamically shared portion 235 is equal to the number of resources present in the integrated circuit 200. As an example, the integrated circuit 200 has 8 resources, in any combination of cores and/or threads, and the cache 220 is 16 ways wide. Each of the 8 resources is allocated 1 way as a static portion, for a total of 8 static ways, and the dynamic portion has a size of 8 ways. Therefore, when any one of the 8 resources initiates a cache lookup that causes a cache miss, 9 ways of the cache (the 1 allocated way and the 8 dynamically shared ways) are available for victimization.

Turning to FIG. 3a, an embodiment of an integrated circuit 300 is illustrated, which has two resources, a resource 305 and a resource 310, that use a blocking mechanism 340 to share access to a cache 320. The blocking mechanism 340 is used to block a resource from victimizing a static portion that is not allocated to the resource initiating the cache lookup. The blocking mechanism 340 is shown as residing outside the cache 320; however, the blocking mechanism may reside in the cache 320 or in cache control logic, not depicted.

In addition, the blocking mechanism 340 is illustrated as being arranged between the resources of the integrated circuit 300 and the ways of the cache 320. In one embodiment, the blocking mechanism 340 is part of the cache lookup and allocation process, where the blocking mechanism 340 does not physically gate resource access to the cache 320, but rather allows the cache replacement algorithm to select a victim way only from the static and dynamic portions allocated to the resource that requested the cache lookup. The blocking mechanism 340 may therefore be implemented as a logic circuit, software, or firmware.

As an example, a request from the resource 305 is allowed to look up all ways of the cache 320. Therefore, if a cache hit occurs, even in the static portion 330 allocated to the resource 310, the cache line holding the data is returned to the resource 305 for operation. In contrast to a hit, a miss occurs if the requested element is not present in the cache, or if the cache line containing the element is in a cache state that requires the line to be updated, such as an invalid or modified cache state. In the case of a miss, the blocking mechanism 340 blocks the resource 305 from victimizing the static portion 330 allocated to the resource 310.
Complementing the blocking, the blocking mechanism allows the resource 305 to victimize, i.e., allocate misses to, the static portion 325 and the dynamically shared portion 335. Conversely, if the resource 310 makes a request to the cache 320 that causes a miss, the blocking mechanism blocks the resource 310 from victimizing the static portion 325 allocated to the resource 305.

Turning to FIG. 3b, an embodiment of the blocking mechanism 340 is shown. In this embodiment, a mask 345 serves as the blocking mechanism 340. The mask 345 includes multiple mask bits, such as mask bits (MB) 346-353. As shown in FIG. 3b, each mask bit corresponds to one way of the cache 320. However, the mask 345 is not so limited.

As an example, the mask 345 may include 3 bits, 1 bit for each portion shown. In that case, if the resource 305 requests an element from the cache 320, the first of the three bits, corresponding to the static portion 325, and the second of the three bits, corresponding to the dynamically shared portion 335, allow the resource 305 to victimize the static portion 325 and the dynamically shared portion 335. The third of the three bits, corresponding to the static portion 330, blocks the resource 305 from victimizing the static portion 330.

Based at least in part on a resource identifier (ID) 307, the mask 345 blocks the resource 305 from victimizing two ways of the cache 320, namely the static portion 330. In one embodiment, when the resource 305 requests an element from the cache, the mask 345 is generated based at least in part on the resource ID 307. That is, based on the resource ID 307 of the request, the mask 345 is generated to block the resource 305 from victimizing the static portion 330. In an alternative embodiment, the mask 345 is a static mask, which is not generated upon the lookup. When it is determined, based at least in part on the ID 307, that the resource 305 initiated the cache lookup, a static mask corresponding to the resource 305, stored in a register or other storage, is used with the replacement algorithm to block the resource 305 from victimizing the static portion 330.

In the example shown in FIG. 3b, the resource 305 makes a request to the cache 320. Based at least in part on the ID 307, the mask 345 is generated or loaded to block the resource 305 from victimizing the static portion 330 and to allow the resource 305 to potentially victimize the static portion 325 and the dynamically shared portion 335. Specifically, the mask 345 includes eight mask bits corresponding to the eight ways of the cache 320. The mask bits 348 and 349 have a first value to block the resource 305 from victimizing the static portion 330. The mask bits 346 and 347, corresponding to the two ways in the static portion 325, and the mask bits 350-353, corresponding to the four ways in the dynamically shared portion 335, have a second value to allow the resource 305 to potentially victimize the ways in the static portion 325 and the dynamically shared portion 335. In FIG. 3b, a mask bit with a logical 0 blocks the corresponding way from being allocated to a cache miss, while a logical 1 allows the corresponding way to be victimized. It is apparent, however, that a logical 1 or another value could block access, while a logical 0 or another value allows access.

The mask 345 likewise blocks the resource 310, based at least in part on an ID 312, from victimizing the static portion 325, and allows the resource 310 to potentially victimize the static portion 330 and the dynamically shared portion 335.
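The masked victim selection performed by the blocking mechanism 340 can be summarized in a short C sketch. This is an illustration only; the per-resource mask table, the bit assignments, and the names (way_mask, choose_masked_victim) are assumptions, with mask values chosen to mirror the eight-way example of FIG. 3b (two static ways per resource, four shared ways).

```c
#include <stdint.h>

#define NUM_WAYS 8

/* One 8-bit way mask per resource, indexed by resource ID.
 * Bit i = 1 allows the resource to victimize way i; bit i = 0 blocks it.
 * Resource 0 owns ways 0-1, resource 1 owns ways 2-3; ways 4-7 are shared. */
static const uint8_t way_mask[] = {
    0xF3,   /* resource 0: 1111 0011, its static ways plus the dynamic ways */
    0xFC,   /* resource 1: 1111 1100, its static ways plus the dynamic ways */
};

/* Ages used by a simple time-based replacement algorithm. */
static uint8_t age[NUM_WAYS];

/* On a miss from `resource_id`, pick the oldest way the mask permits. */
int choose_masked_victim(int resource_id)
{
    uint8_t mask = way_mask[resource_id];
    int victim = -1;
    for (int way = 0; way < NUM_WAYS; way++) {
        if (!(mask & (1u << way)))
            continue;                          /* blocked: another resource's static way */
        if (victim < 0 || age[way] > age[victim])
            victim = way;
    }
    return victim;                             /* never a way outside the mask */
}
```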
As illustrated, the mask 345 is shown coupled between the cache 320 and the resources of the integrated circuit 300. Nevertheless, the mask 345 may instead be coupled to the cache 320, be present in the cache 320, or be present in control logic associated with the cache 320. As mentioned above, resources 305 and 310 may be cores, hardware threads, software threads, and so on. Accordingly, the IDs 307 and 312 are the corresponding core IDs, physical thread IDs, virtual thread IDs, hardware thread IDs, software thread IDs, etc.

The embodiment of the integrated circuit 300 shown in FIGS. 3a and 3b is illustrated again in FIG. 4, further including reallocation logic 405. To further the fair sharing of a cache among resources, an idle portion of the cache that is locked against other resources but not being accessed can be reallocated to the dynamic portion of the cache, to be utilized by the other resources. A resource, such as the resource 305 or the resource 310, may independently enter a low-power state or a sleep state, or may hit its lower-level cache consistently enough, that the static portion allocated to it can be reduced or fully reallocated to the dynamic portion.

A counter 350 counts the number of cache accesses made by resources 305 and 310. In an alternative embodiment, the counter 350 tracks only the accesses made by the resource 305, while another counter, not shown, tracks the accesses made by the resource 310. A cache access may be a cache lookup, a cache hit, a cache miss, or an actual allocation of a miss to a way of the cache. Logic 355 is coupled to the counter 350 to reallocate a portion 410 from the static portion 325 to the dynamically shared portion 335 if the resource 305 has not accessed the cache 320 a sufficient number of times within a period of time.

In a specific example, the counter 350 counts the accesses made by the resource 305 and the accesses made by the resource 310 over a period of time. The period of time may be a predetermined period of time or a programmable period of time. If the number of accesses made by the resource 305 at the end of this period is less than a predetermined number, then the static portion 325 is reduced by some size and the dynamic portion 335 is increased by that size.

In FIG. 4, the reallocated portion 410 is reallocated from the static portion 325 to the dynamically shared portion 335, reducing the static portion 325 by the size of one way and increasing the dynamic portion 335 by the size of one way. As further illustrated, when the reallocated portion 410 is reallocated to the dynamic portion 335, the logic 355 flips the mask bit 346 in the mask 345 from 0 to 1. Therefore, after the reallocation, when the resource 310 initiates a cache lookup, a mask 345 with the mask bit 346 equal to 1 is generated. This allows the resource 310 to victimize the reallocated portion 410 as an effective part of the dynamically shared portion 335.

In the above example, only a single way of the static portion 325 is reallocated. However, in another embodiment, all of the static portion 325, or any portion smaller than the static portion 325, can be reallocated. In fact, if the static portion 325 contains only one way, reallocating that way to the dynamically shared portion 335 would leave no static portion allocated to the resource 305.

The counter 350 and the logic 355 are also operable to track accesses made by the resource 310, and to reallocate portions of the static portion 330 to the dynamically shared portion 335. In addition, after the portion 410 has been reallocated, the counter 350 and the logic 355 are operable to allocate the portion 410 back to the static portion 325.
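A minimal C sketch of the counter-driven reallocation performed by the counter 350 and logic 355 follows. The thresholds are invented for illustration (the patent leaves both the period and the counts programmable), and the state layout and function name are assumptions.

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative thresholds; the patent leaves both programmable. */
#define IDLE_THRESHOLD    16   /* fewer accesses than this: donate a way  */
#define ACTIVE_THRESHOLD  64   /* this many accesses/misses: reclaim it   */

/* Per-resource victim masks, as in the earlier sketch. */
static uint8_t way_mask[2] = { 0xF3, 0xFC };

struct realloc_state {
    uint32_t access_count;     /* counter 350: accesses in the current period */
    bool     way_donated;      /* has this resource's way joined the dynamic portion? */
};

/* Called by logic 355 at the end of each monitoring period for one resource.
 * `own_bit` is the mask bit of the way statically allocated to this resource;
 * `other_rid` is the other resource sharing the cache. */
void end_of_period(struct realloc_state *s, int other_rid, int own_bit)
{
    if (!s->way_donated && s->access_count < IDLE_THRESHOLD) {
        /* Resource is idle: flip the bit in the OTHER resource's mask so it
         * may now victimize this way as part of the dynamic portion. */
        way_mask[other_rid] |= (uint8_t)(1u << own_bit);
        s->way_donated = true;
    } else if (s->way_donated && s->access_count >= ACTIVE_THRESHOLD) {
        /* Resource woke up: take the way back out of the dynamic portion. */
        way_mask[other_rid] &= (uint8_t)~(1u << own_bit);
        s->way_donated = false;
    }
    s->access_count = 0;       /* start a new period */
}
```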
As part of tracking accesses, the counter 350 is operable to track cache misses. Therefore, if the counter 350 observes that the resource 305 has missed the cache 320 a sufficient number of times over a period of time, then the logic 355 allocates the portion 410 back to the static portion 325. As an example, if the portion 410 has been reallocated to the dynamically shared portion 335 after the resource 305 entered a sleep state, the resource 305 may begin to access, or miss, the cache 320 as soon as the resource 305 is awakened. If the counter 350 counts enough accesses or misses, then the portion 410 is allocated back to the resource 305.

Essentially, if for any reason the resource 305 does not access the cache 320 a predetermined number of times within a period of time, then the size of the static portion 325 is reduced by reallocating part or all of the static portion 325 to the dynamically shared portion 335. Then, if the resource 305 wakes up or begins to access the cache 320 for some reason, the size of the static portion 325 is increased by allocating back to the static portion 325 part or all of what had been reallocated to the dynamically shared portion 335.

Referring to FIG. 5, an embodiment of an integrated circuit 500 having at least four cores is described, the cores sharing access to a cache 525. Cores 505, 510, 515, and 520 are associated with core IDs 507, 512, 517, and 522, respectively. As an example, when the core 515 requests a lookup of the cache 525, a mask 560 is generated, based at least in part on the core ID 517, to block the core 515 from victimizing the static portion 530 allocated to the core 505, the static portion 535 allocated to the core 510, and the static portion 545 allocated to the core 520. However, all of the ways in the cache 525 are checked to see whether the requested element is present. If a miss occurs, the mask 560 allows the core 515 to victimize the static portion 540 allocated to the core 515 and the dynamically shared portion 550.

In addition, a counter 565 counts at least the accesses by the core 505 to the cache 525 over a period of time. If the counter 565 determines that the number of accesses by the core 505 is less than a predetermined number, then logic 570 reallocates the portion 530 to the dynamically shared portion 550. Therefore, in the above example, when the core 515 makes another request to the cache 525, or when the core 510, 515, or 520 makes a request, a mask 560 is generated with the mask bit 561 corresponding to the portion 530 set to 1. This allows the core 510, 515, or 520 to allocate a miss to the reallocated portion 530 as part of the dynamically shared portion 550. And if the counter 565 later counts a sufficient number of accesses or misses to the cache 525 by the core 505, the logic 570 flips the mask bit 561 back and allocates the portion 530 back to the core 505.

Next, FIG. 6 illustrates an embodiment of a system with a microprocessor 600 having two multi-threaded cores that share access to a cache 620. The illustrated microprocessor 600 is coupled to a memory 650 through a memory controller 640. The memory controller 640 is coupled to an I/O controller 660 via an interconnect 655. The memory controller 640 and the I/O controller 660, although often on separate integrated circuits, are commonly referred to together as a chipset. The memory 650 is any random access memory (RAM) or dynamic RAM (DRAM). As a specific example, the memory 650 is a double data rate RAM (DDR RAM).
The microprocessor 600 can execute speculatively and out of order, can execute non-speculatively, or can execute only in order. Only a small portion of the microprocessor 600 is illustrated.

In fact, the microprocessor 600 may include, but need not include, any one, any plurality, or any combination of the following: a bus interface unit for communicating and interfacing with external devices, an input/output ring for performing I/O operations, a virtual memory address translation unit/buffer for translating virtual memory addresses to physical memory addresses, an interrupt controller for handling interrupts, a branch prediction unit for predicting branches and instructions to be speculatively executed, a prefetch unit for assisting in fetching predicted instructions and/or operands, a fetch unit for fetching operands and instructions, a decode unit for decoding fetched instructions, a reorder unit for reordering instructions and micro-operations to be executed, a register file for storing operands and results, an arithmetic logic unit (ALU) for performing integer operations serially or in parallel, a floating-point unit (FPU) for performing floating-point operations serially or in parallel, operand registers for storing single or multiple integer and/or floating-point operands, as well as other logic commonly associated with microprocessors.

In one embodiment, not depicted, threads 606 and 608 fairly share a first low-level cache dedicated to the core 605, while threads 611 and 613 fairly share a second low-level cache dedicated to the core 610. In this embodiment, the cores 605 and 610 share access to a higher-level cache 620. In FIG. 6, the core 605, the core 610, and the threads 606, 608, 611, and 613 running on the cores 605 and 610 share the cache 620.

Typically, an address generation unit generates a linear address, and a virtual-memory-address-to-physical-address translator, namely a translation look-aside buffer (TLB), translates the virtual memory/linear address into a physical address in memory. Separate threads and cores may have different control registers storing different base values for the translation; therefore, the same linear address generated by threads 606 and 611 may actually reference different physical addresses. A solution using context identifiers to distinguish cache hits and misses among different threads is discussed in the copending application entitled "Use of Context Identifier in Cache Memory", Serial No. 10/104,815.

As an example of fairly sharing the cache 620, the thread 608 on the core 605 generates a linear address that references an element in the memory 650. A cache lookup is performed in the cache 620 by comparing a portion of the linear address, called the tag, with the tags stored in the cache 620. Specifically, the offset of the linear address is "associated" with a set in the cache 620, and all of the cache lines in that set are checked to see whether the element is present. If the element is present in the cache 620 and the cache line containing the element is in an operable cache state, such as an exclusive cache state, then a cache hit occurs.
The element is then placed in the operand register indicated by the thread 608, or otherwise returned for operation by the thread 608.

However, if the element is not present in the cache 620, or the cache line containing the element is in an inoperable state, such as an invalid state, then a way of the cache 620 is selected for allocation of the miss. A mask 615 is generated for the lookup, based at least in part on a thread ID 609. Only one way is allocated to the thread 608, as indicated by the logical 1 value corresponding to that way in a statically allocated portion 625. Therefore, an ordinary cache replacement algorithm can choose the victim from among the single allocated way and the four ways in a dynamically shared portion 630. The element is then fetched from the memory 650, and the corresponding cache line in the victimized way is updated.

Turning to FIG. 7, an embodiment of a flowchart of a method of sharing access to a cache is illustrated. In block 705, an address associated with an instruction scheduled to execute on a first resource is generated, the address referencing the memory location of an element. As an example, the address is a linear address that references the memory location by an offset from a value stored in a register associated with the first resource. An element is typically an instruction, an operand, or anything else commonly stored in memory. Then, in block 710, the element is requested from the cache. Requesting the element from the cache can be any action that initiates a lookup in the cache.

In block 715, it is determined whether the element is present in the cache. If a linear address was generated, determining whether the element is present in the cache includes comparing a portion of the linear address with tag values stored in the cache. In some embodiments, it is further required that the linear address be fully decoded and the cache line checked to determine whether the element is present in the cache. As an example, every way of the cache is checked to determine whether the element is present in the cache.

If the element is not present in the cache, i.e., the cache misses, then in block 725 the first resource is allowed to victimize at least a first way of the cache allocated to the first resource and at least a second way of the cache shared by at least the first resource and a second resource. Additionally, in block 730, the first resource is blocked from victimizing at least a third way of the cache allocated to the second resource. In one embodiment, allowing the first resource to victimize at least the first and second ways, and blocking the first resource from victimizing at least the third way, is based at least in part on a first resource ID: based on the resource ID of the request, a mask or other blocking mechanism allows or blocks the victimization. A common cache replacement algorithm is therefore used to select between at least the first way and the second way of the cache, the third way allocated to the second resource being blocked from selection.

Another embodiment of a flowchart of a method of sharing access to a cache is illustrated in FIG. 8. In block 805, a first way of the cache is allocated to a first computing resource of a plurality of computing resources, each computing resource being allocated at least one way of the cache.
In block 810, the dynamically shared portion of the cache is allocated to multiple computing resources. As an example, a route is assigned to resources through a locking mechanism. Whether the locking mechanism is stored statically or based on cache lookups, the locking mechanism "allocates" or associates each computing resource with the static portion and allocates the dynamic portion to multiple computing resources.In block 815, count the number of cache accesses by the first computing resource over a period of time. In one embodiment, the access is just a lookup in the cache. As another example, access is a lookup in the cache, which causes the cache to miss. This period of time is a predetermined period of time. Then in block 820, if the number of accesses by the first computing resource within the first period of time is less than the predetermined number, the first route is reallocated to the dynamic shared portion of the cache. If the counter does not count a predetermined number of accesses to the first computing resource at the end of the period of time, the counter trips, which may include sending a signal or providing an indication that the first computing resource has not accessed the cache at least a predetermined number of times Logical value. Based on the failure signal or logical value, the first route is reassigned. In one embodiment, redistributing the first route includes changing the mask bit in the mask, which allows multiple resources to all sacrifice the first route as a dynamically shared part.In addition, if the cached first number appears by the first computing resource during the second period of time, then in block 825, the first route is allocated back to the first computing resource from the dynamic shared portion. Similar to the reallocation operation, if the first computing resource accesses the specified number of caches in the second period of time, then the mask bit in the above embodiment is flipped to reallocate the first route to the dynamic shared part is flipped back The first route is allocated back to the first computing resource.Fair sharing of caches among multiple resources as described above allows different independent program threads to share data and instructions without repeated misses. However, by establishing a static portion of the cache for each resource and allowing access to the dynamic portion of the cache, destructive interference is avoided by ensuring that at least one static portion allocated to the resource is available. In addition, if one of the resources enters a low-power state or does not require many static parts allocated to it, then redistribution logic is used to reduce the static part and increase the dynamic part shared by all resources to avoid wasting cache space reservations. In addition, if the computing resources that have reduced its static portion require more access to the cache, the portion of the reallocated static portion is allocated back to the static portion to again ensure fair sharing of the cache.In the foregoing specification, the invention has been described with reference to specific exemplary embodiments thereof. However, it is obvious that various modifications and changes can be made thereto without departing from the broader spirit and scope of the invention as described in the appended claims. Therefore, this specification and the drawings should be viewed in an illustrative sense rather than a restrictive sense. |
Structures and methods for selectively applying a well bias to only those portions of a PLD where such a bias is necessary or desirable, e.g., applying a positive well bias to transistors on critical paths within a user's design. A substrate for an integrated circuit includes a plurality of wells, each of which can be independently and programmably biased with the same or a different well bias voltage. In one embodiment, FPGA implementation software automatically determines the critical paths and generates a configuration bitstream that enables positive well biasing only for the transistors participating in the critical paths, or only for programmable logic elements (e.g., CLBs or lookup tables) containing those transistors. In another embodiment, negative well biasing is selectively applied to reduce leakage current.
CLAIMS

What is claimed is:

1. A method of implementing a user circuit in a programmable logic device (PLD), comprising: selecting a first logical grouping from the user circuit based on cost criteria; selecting a second logical grouping from the user circuit based on the cost criteria; and generating a configuration data file enabling a first level of well biasing for the first logical grouping and a second level of well biasing for the second logical grouping.

2. The method of Claim 1, wherein: the first level of well biasing is a positive well bias; and the second level of well biasing is no applied well bias.

3. The method of Claim 1, wherein: the first level of well biasing is a negative well bias; and the second level of well biasing is no applied well bias.

4. The method of Claim 1, wherein: the first level of well biasing is a positive well bias; and the second level of well biasing is a negative well bias.

5. The method of Claim 1, wherein: the first and second levels of well biasing are of the same polarity but different values.

6. The method of Claim 1, wherein the cost criteria include the performance of the first logical grouping.

7. The method of Claim 1, wherein the cost criteria include the power consumption of the user circuit.

8. The method of Claim 1, wherein the second logical grouping comprises all portions of the user circuit not included in the first logical grouping.

9. A substrate for an integrated circuit, comprising: a first well formed within the substrate; first means for programmably providing a first well bias voltage to the first well; a second well formed within the substrate; and second means for programmably applying a second well bias voltage to the second well, wherein the first and second means are independent of each other.

10. The substrate of Claim 9, wherein: the first means is programmed to apply the first well bias voltage to the first well; and the second means is programmed not to apply the second well bias voltage to the second well.

11. The substrate of Claim 9, wherein: the first well bias voltage is a positive well bias; and the second well bias voltage is a negative well bias.

12. The substrate of Claim 9, wherein: the first well bias voltage is a first positive well bias; and the second well bias voltage is a second positive well bias.

13. The substrate of Claim 9, wherein: the first well bias voltage is a first negative well bias; and the second well bias voltage is a second negative well bias.

14. The substrate of Claim 9, wherein the first means programmably provides one of a plurality of supported well bias voltages to the first well.
STRUCTURES AND METHODS FOR SELECTIVELY APPLYING A WELL BIAS TO PORTIONS OF A PROGRAMMABLE DEVICE

FIELD OF THE INVENTION

The invention relates to Programmable Logic Devices (PLDs). More particularly, the invention relates to structures and methods for applying a programmable well bias to selected portions of a PLD.

BACKGROUND OF THE INVENTION

Programmable logic devices (PLDs) are a well-known type of digital integrated circuit that may be programmed by a user to perform specified logic functions. One type of PLD, the field programmable gate array (FPGA), typically includes an array of configurable logic blocks (CLBs) surrounded by a ring of programmable input/output blocks (IOBs). The CLBs and IOBs are interconnected by a programmable interconnect structure. Some FPGAs also include additional logic blocks with special purposes (e.g., DLLs, RAM, and so forth). The CLBs, IOBs, interconnect, and other logic blocks are typically programmed by loading a stream of configuration data (bitstream) into internal configuration memory cells that define how the CLBs, IOBs, and interconnect are configured. The configuration data may be read from memory (e.g., an external PROM) or written into the FPGA by an external device. The collective states of the individual memory cells then determine the function of the FPGA.

In a PLD, as in other integrated circuits (ICs), the various CLBs, IOBs, and interconnect are formed on a single substrate. Fig. 1A shows a first silicon substrate on which NMOS 101 and PMOS 102 transistors are formed. The silicon substrate 100 is positively doped (P-type). Therefore, to form a PMOS transistor, an "N-well" (negatively doped region) 112 is diffused into substrate 100, and the PMOS transistor 102 is formed within N-well 112.

Fig. 1B shows a second silicon substrate for a CMOS integrated circuit (IC) formed using a "triple-well" process. When the triple-well process is used, NMOS transistors 101 are formed within "P-wells" (positively doped regions) 111, within larger N-wells 113, which in turn reside within P-type substrate 100. Similarly, all PMOS transistors 102 are formed within N-wells 112, which also reside within P-type substrate 100. Thus, the P-wells 111 and N-wells 112 are electrically isolated, both from each other and from all other wells in the substrate.

Fig. 1C shows a third silicon substrate formed using an "SOI", or silicon-on-insulator, process. When an SOI process is used, NMOS transistors 101 are formed within P-wells 111, and PMOS transistors 102 are formed within N-wells 112. Each of P-wells 111 and N-wells 112 resides within an electrically insulating substrate 110. Thus, the substrate insulates the P-wells and N-wells from each other and from all other wells in the substrate.

Over time, IC designers are reducing the "VCC" or power high voltage level at which ICs are designed to operate. This reduction in VCC has the advantage of reducing power consumption in an IC. However, it also has the undesirable effect of reducing performance in the IC. Therefore, it is desirable to find ways to counteract this decrease in performance. One method is to apply a voltage bias to the wells in which the transistors reside. When either a triple-well or an SOI process is used, the P-wells and N-wells can be biased to voltage levels different from each other and from other wells of the same type. An applied voltage differential is referred to as a "substrate bias" or (when applied to a well) a "well bias".
Figs. 1B and 1C show examples of the application of well biasing to P-wells and N-wells. For example, for an NMOS transistor 101, a positive well bias 105 of about 0.4 to 0.6 volts can be applied to P-well 111. In other words, if P-well 111 is normally at ground (0 volts), the P-well is driven to about 0.4 to 0.6 volts. Similarly, for a PMOS transistor 102, a positive well bias 106 of about -0.4 to -0.6 volts can be applied to an N-well 112. In other words, the so-called "positive well bias" drives the N-well to a negative voltage relative to the original voltage level. For example, for a PMOS transistor 102, if the N-well is normally at VCC (power high), the N-well is driven to about VCC-0.4 to VCC-0.6 volts.

As the term is used herein, applying a more positive voltage to a P-well or a more negative voltage to an N-well is called applying a "positive well bias". Thus, applying a positive well bias effectively reduces the reverse well bias of the transistors within the well. Also as used herein, applying a more negative voltage to a P-well or a more positive voltage to an N-well is called applying a "negative well bias". Thus, applying a negative well bias effectively increases the reverse well bias of the transistors within the well.

By changing the voltage level of a well, the threshold voltage (Vt) of the transistors within the well is altered. For example, an increased positive voltage in a P-well (i.e., a positive well bias) causes a drop in the threshold voltage of the NMOS transistors within the well. This lower threshold voltage, in turn, increases the saturation drain current, which increases the performance of all of the NMOS transistors within the biased well. The reverse situation is also true. For example, a lower voltage in a P-well (i.e., a negative well bias) causes a rise in the threshold voltage of the NMOS transistors within the well, resulting in a reduced leakage current. Gitlin et al. describe one example of using a negative well bias to reduce leakage current in U.S. Patent No. 5,880,620, entitled "Pass Gate Circuit with Body Bias Control". However, the application of a negative well bias also has the effect of reducing the performance of the transistor.

While the application of a positive well bias increases the performance of a transistor, the faster operation has its price. Besides increasing the saturation drain current, the positive well bias also increases the amount of current flowing through an inactive transistor. This current is a major component of leakage current in a CMOS integrated circuit (IC). Therefore, applying a positive well bias to all the transistors on an IC certainly improves the performance of the device, but can also lead to an unacceptably large leakage current.

To address this limitation, "fixed function" logic devices (as opposed to programmable logic devices, or PLDs) can be designed with positive well bias applied only to circuits that are particularly speed-critical. By applying this technique, the speed advantage is gained only where necessary, while the increase in leakage current is kept within acceptable bounds.

However, the problem of increased leakage current with an applied positive well bias is not so easily addressed in PLDs. In PLDs, the critical circuits and paths are not limited to specific areas of the device or to specific transistors. For example, in an FPGA, a user can program any of the CLBs to perform a speed-critical function, and a path between two such CLBs can traverse any of a large number of interconnect paths.
Therefore, in the past, to take advantage of positive well biasing in a PLD would have required well biasing every transistor in the programmable areas of the device, to ensure that the critical paths used the biased transistors. As PLDs increase in size, to the point where many millions of transistors are used in each PLD, leakage currents are becoming a limiting factor in many designs. Therefore, it has not been possible to take advantage of positive well biasing in the design of large PLDs. It is therefore desirable to provide structures and methods enabling the application of well biasing techniques to large PLDs.

SUMMARY OF THE INVENTION

The invention provides a substrate for an integrated circuit that includes a plurality of wells, each of which can be independently and programmably biased with the same or a different well bias voltage. In some embodiments the integrated circuit is a programmable logic device (PLD) such as a field programmable gate array (FPGA). In one such embodiment, the bias for each well or group of wells is programmably applied from a bias generator circuit through a pass transistor controlled by a programmable memory cell. The programmable memory cells are programmed using the same configuration bitstream that controls the programming of the CLBs, IOBs, and interconnect in the FPGA. The FPGA is divided into two or more portions wherein the well biasing is separately controlled. The FPGA portions can comprise lookup tables, individual transistors such as pass transistors, multiplexers, entire CLBs, or any other portions of the device. In some embodiments, a plurality of well bias voltage levels are provided. Values stored in two or more SRAM cells are decoded to select one of the plurality of well bias values for each well.

Another aspect of the invention provides methods for selectively applying a well bias to only those portions of a PLD where such a bias is necessary or desirable, e.g., applying a positive well bias only to transistors on critical paths within a user's design. According to one embodiment of the invention, an FPGA user defines the critical paths in his or her design at the time the user circuit is defined. The FPGA implementation software (software that accepts a design description and generates a configuration bitstream implementing the described design in an FPGA) takes note of the designated critical paths and generates a configuration bitstream that enables positive well biasing only for the transistors participating in the critical paths, or only for programmable logic elements (e.g., CLBs or lookup tables) containing those transistors.

In another embodiment, the FPGA implementation software includes timing software (such as is well known in the art) that automatically determines the critical paths in the user's design. The software then enables positive well biasing for transistors on these determined critical paths. In one embodiment, the FPGA implementation software monitors the number of transistors having an applied positive well bias, and issues an error message if the number of these transistors is such that the specified maximum leakage current for the device will be exceeded.

In another embodiment, negative well biasing voltage levels are programmably provided. In other words, a P-well can be programmably biased to a lower voltage, and an N-well can be programmably biased to a higher voltage.
This negative well biasing leads to decreased performance of transistors within the well, and a concomitant decrease in leakage current. In one such embodiment, the FPGA implementation software compensates for an otherwise unacceptably large number of positively well biased transistors by negatively well biasing transistors in non-critical paths. In one embodiment, the user specifies these non-critical paths. In another embodiment, the FPGA implementation software automatically determines the least critical paths in the user's design. In another embodiment, negative well biasing is used to reduce leakage current on non-critical paths, while no positive well biasing occurs.

BRIEF DESCRIPTION OF THE DRAWINGS

The present invention is illustrated by way of example, and not by way of limitation, in the following figures, in which like reference numerals refer to similar elements.

Fig. 1A shows the substrate of an exemplary CMOS integrated circuit.

Fig. 1B shows the substrate of an exemplary CMOS integrated circuit using a triple-well process.

Fig. 1C shows the substrate of an exemplary CMOS integrated circuit using an SOI (silicon-on-insulator) process.

Figs. 2A-2H show exemplary silicon substrates to which programmable well biasing is applied in accordance with various embodiments of the invention.

Fig. 3 shows a user circuit implemented in several configurable logic blocks (CLBs) of an FPGA.

Fig. 4 shows a first method of implementing a PLD in accordance with the present invention.

Fig. 5 shows a second method of implementing a PLD in accordance with the present invention.

Fig. 6 shows a third method of implementing a PLD in accordance with the present invention.

Fig. 7 shows a fourth method of implementing a PLD in accordance with the present invention.

Fig. 8 shows a fifth method of implementing a PLD in accordance with the present invention.

Fig. 9 shows a sixth method of implementing a PLD in accordance with the present invention.

Fig. 10 shows a seventh method of implementing a PLD in accordance with the present invention.

DETAILED DESCRIPTION OF THE DRAWINGS

The present invention is applicable to a variety of programmable logic devices (PLDs). The present invention has been found to be particularly applicable and beneficial for field programmable gate arrays (FPGAs). While the present invention is not so limited, an appreciation of the present invention is presented by way of specific examples, in this instance with an FPGA programmed using SRAM cells. In the following description, numerous specific details are set forth to provide a more thorough understanding of the present invention. However, it will be apparent to one skilled in the art that the present invention may be practiced without these specific details.

Circuit Configurations

Well biasing can be used to increase transistor performance (positive well biasing) or decrease leakage current (negative well biasing). Figs. 2A-2H show various silicon substrates and exemplary biasing configurations that can be used in accordance with the various embodiments of the present invention. Fig. 2A shows a first silicon substrate to which a programmable well bias is applied. The silicon substrate 200 and the various structures formed thereon are similar to those of Fig. 1B, which is formed using a triple-well process. In the example of Fig. 2A, a positive well bias 205 can be selectively applied to P-well 211 by way of switch 203. Switch 203 is controlled by programmable memory cell 204.
In one embodiment, switch 203 is an NMOS passgate, while memory cell 204 is a configuration memory cell in an FPGA. Also in Fig. 2A, a positive well bias 206 can be selectively applied to N-well 212 (i.e., a more negative voltage is applied to the N-well) by way of switch 207. Switch 207 is controlled by programmable memory cell 208. In one embodiment, switch 207 is a PMOS passgate, while memory cell 208 is a configuration memory cell in an FPGA.

Bias generator circuits such as those labeled 205 and 206 in Fig. 2A (and those shown in Figs. 2C-2H) are well known in the art, and therefore are not described in detail here. In the embodiment of Fig. 2A, switches 203, 207 and bias generator circuits 205, 206 are also implemented within substrate 200. However, in other embodiments, a well bias is externally provided. Fig. 2B shows the same positive well biasing configuration implemented using an SOI substrate similar to that of Fig. 1C.

Fig. 2C shows the application of negative well biasing to P-wells and N-wells. In the example of Fig. 2C, a negative well bias 215 can be selectively applied to P-well 211 by way of switch 203. Switch 203 is controlled by programmable memory cell 204. Similarly, a negative well bias 216 can be selectively applied to N-well 212 (i.e., a more positive voltage is applied to the N-well) by way of switch 207. Switch 207 is controlled by programmable memory cell 208. Fig. 2D shows the same negative well biasing configuration implemented using an SOI substrate similar to that of Fig. 1C.

Fig. 2E shows another configuration of the triple-well substrate of Fig. 1B, wherein a programmable selection is made between positive well biasing and negative well biasing. Fig. 2F shows the same configuration applied to an SOI substrate.

Fig. 2G shows another configuration of the triple-well substrate of Fig. 1B, wherein a programmable selection is made among four different well bias voltages provided by well bias voltage generators 221-224. For NMOS transistor 201, the selection is made via multiplexer 220, which is controlled by two programmable memory cells 225 and 226. Note that in this embodiment, the switch circuit is implemented as a multiplexer, rather than as an NMOS or PMOS passgate as in the embodiments of Figs. 2A-2F. For example, the multiplexer can be implemented as a plurality of passgates in parallel. There are many well known types of switch circuits that can be used to implement the invention.

In addition, the switch circuits can be controlled by means other than programmable memory cells. For example, the switch circuits can be controlled by flip-flops, where the flip-flops are driven by other programmable logic within the user circuit. Thus, the switch circuits can be dynamically controlled, provided that sufficient time is allowed for the well bias to be applied. Many other types of switch controls can also be used.

The four bias voltages V1-V4 can be all positive well bias voltages, all negative, a mixture, and so forth. One of the four well bias voltages can be a zero bias, in which case one of well bias voltage generators 221-224 can be omitted. For PMOS transistor 202, the selection is made via multiplexer 230, which is controlled by two programmable memory cells 235 and 236. The four bias voltages V5-V8 can be the same as, or different from, the four bias voltages V1-V4. Fig. 2H shows the same configuration applied to an SOI substrate.
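The selection logic of Fig. 2G can be modeled in a few lines of C. This is a behavioral sketch only, not a description of the circuit itself; the function name and the voltage values (which assume the X=0.6 example given later in the specification) are illustrative.

```c
#include <stdio.h>

/* Four well bias voltages, e.g., V1-V4 of Fig. 2G for a P-well.
 * The values assume X = 0.6 V, purely for illustration. */
static const double bias_mv[4] = { 0.0, 200.0, 400.0, 600.0 };

/* Model of multiplexer 220: two programmable memory cells (225, 226)
 * form a 2-bit select that routes one generator output to the well. */
double select_bias(int cell_hi, int cell_lo)
{
    int sel = (cell_hi << 1) | cell_lo;   /* decode the two configuration bits */
    return bias_mv[sel];                  /* millivolts applied to the P-well  */
}

int main(void)
{
    /* The bitstream programs the cells to 1,0: the third level, +0.4 V. */
    printf("P-well bias: %.0f mV\n", select_bias(1, 0));
    return 0;
}
```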
Many other configurations are possible. For example, a single memory cell can serve to control multiple switches: memory cells 204 and 208 in Figs. 2A-2F can be the same memory cell. Similarly, memory cells 225 and 226 can be the same as memory cells 235 and 236.

Exemplary User Circuit

Fig. 3 shows the logic elements of an FPGA in which user logic has been mapped and placed. "Mapping" is the grouping of specific portions of a user's logic circuit into sets that will fit into logic elements in the FPGA. "Placement" is the assignment of a set to a specific logic element in a specific location within the FPGA. In Fig. 3, the pictured logic elements are configurable logic blocks, or CLBs, arranged in a regular array.

The simple example of user logic shown in Fig. 3 includes four logic sets placed in CLBs CLB02, CLB01, CLB11, and CLB21. There are two paths through the circuit. A first path 301 extends from node A through CLB02 to node B, and hence through CLB21 to node C. A second path 300 extends from node D through CLB01 to node E, through CLB11 to node F, and through CLB21 to node C. In this example, the logic in each of the CLBs traverses only one lookup table, except in CLB21, wherein the lookup table output value is stored in a flip-flop, and the flip-flop output is placed on node C.

Because the second path 300 traverses three lookup tables while the first path 301 traverses only two, clearly the second path will take longer to traverse (assuming, for simplicity, that routing delays over these short distances are relatively negligible). Therefore, it is the speed of the second path that determines the overall speed of the user circuit. Hence, the second path is called the "critical path", and the first path is called a "non-critical path".

A critical path can be designated by the user when he or she enters the design description, or this information can be supplied by the user when initiating the FPGA implementation software, by placing the information in a file, or interactively during implementation, or by some other means. However, this information can also be extracted from the design by the implementation software, thus eliminating the need for user intervention. For example, it is well known in the art of FPGA software design to extract timing information from a user design before, during, and after implementation, both to optimize the results and to report on the performance of the resulting design. This technique is commonly used, for example, by the FPGA implementation software currently available from Xilinx, Inc.

FPGA implementation software typically performs a series of steps in implementing a user circuit in the FPGA. For example, these steps can include mapping, placement, and routing. Mapping and placement were previously described. "Routing" is the assignment of the various paths to the various programmable interconnect resources available in the FPGA. Timing information (including critical path designations) is commonly used in all three steps. During the mapping step, an effort is made to group logic on critical paths together into a single logic element. During the placement step, as in the example of Fig. 3, logic on the critical path is usually placed such that the physical distance between successive logic sets is minimized. Thus, the routing delay on the critical path is minimized as much as possible, to reduce the impact of this slowest path on the performance of the user circuit. During the routing step, the fastest interconnect resources are assigned to the most critical paths.
In one embodiment, the FPGA implementation software functions as follows. The FPGA is divided into portions, each of which has separately controlled well biasing. For example, in this embodiment each lookup table (LUT) has separately controlled well biasing. Each LUT is modeled as being either fast (with positive well biasing) or slow (without well biasing). There is a cost associated with the fast model. During the placement and/or routing phase, one of the two models is selected based on delay and power constraints. For example, the router can balance the static power consumed by the positively biased well against the dynamic power from all the connections in the system. In another embodiment, an additional model is provided, the low-power model. The low-power model is associated with LUTs having an applied negative well bias. In yet another embodiment, several models with various levels of applied bias are used.

The invention provides additional methods of minimizing delays on critical paths. According to one aspect of the present invention, delays on a critical path are minimized by selectively applying a positive well bias to transistors implementing logic on the path. In the user circuit of Fig. 3, for example, a positive well bias can be applied to the entirety of CLBs CLB01, CLB11, and CLB21. Thus, the full speed advantage of the positive well bias is gained (because the limiting path is speeded up as much as possible), but the additional leakage current is limited to the CLBs on the critical path. No well bias is applied to CLBs not on the critical path, for example, CLBs CLB02, CLB12, and so forth.

In another embodiment, a positive well bias is applied to only some of the transistors on the critical path. The path need only be speeded up to the point where the specified timing requirement is met. Therefore, when the timing requirement is met by biasing only a subset of the transistors, only the transistors in that subset are positively biased. This approach minimizes the additional leakage current added by the applied positive biasing. In some embodiments, there are two or more critical paths. If only some of the transistors on each path need to be positively biased, and some transistors are shared between multiple critical paths, the wells containing the shared transistors are preferably biased first. Then, if timing requirements are not met, additional transistors on each path can be positively biased as needed.

In another embodiment, while a positive well bias is applied to CLBs on the critical path, a negative well bias is applied to CLBs on the less critical path (CLB02) and/or CLBs not used in the user circuit (CLBs CLB12, CLB22, CLB00, CLB10, and CLB20). Thus, the non-critical paths are actually slowed down, reducing leakage current to compensate for the increased leakage current resulting from the positive well bias on the critical paths.

In other embodiments, a well bias is applied only to those portions of a CLB that are actually used by the user circuit. For example, in the circuit of Fig. 3, only lookup tables (LUTs) are used in CLBs CLB02, CLB01, and CLB11, i.e., the flip-flops provided in these CLBs are not used. Therefore, in these CLBs the well bias is applied only to the LUTs in the CLB. In another embodiment, a well bias is also applied to the pass transistors allowing access to and from the LUTs and interconnect.
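A minimal C sketch of the kind of selection the implementation software might perform follows. The data structures, names, and the simple greedy strategy are assumptions made for illustration: each LUT carries a fast (positively biased) and slow (unbiased) delay model, and fast LUTs are granted only on the critical path while a leakage budget holds.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical per-LUT record; delay and leakage units are illustrative. */
typedef struct {
    double delay_slow;    /* delay with no applied well bias             */
    double delay_fast;    /* delay with positive well bias               */
    double leak_fast;     /* extra leakage if positively biased          */
    bool   on_crit_path;  /* from timing analysis of the user design     */
    bool   bias_enabled;  /* result: bit written into the bitstream      */
} lut_t;

/* Enable positive biasing for critical-path LUTs until the leakage
 * budget is exhausted; a fuller tool would order LUTs by sharing
 * between critical paths and by timing slack, as the text describes. */
void assign_bias(lut_t *luts, size_t n, double leak_budget)
{
    for (size_t i = 0; i < n; i++) {
        luts[i].bias_enabled = false;
        if (!luts[i].on_crit_path)
            continue;                         /* non-critical: stay slow  */
        if (luts[i].leak_fast <= leak_budget) {
            luts[i].bias_enabled = true;      /* use the fast model       */
            leak_budget -= luts[i].leak_fast; /* charge the budget        */
        }
    }
}
```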
In other embodiments (including those implemented in PLDs having programmable subdivisions other than CLBs and LUTs), a well bias is selectively applied to other groupings of programmable logic. In one embodiment, a positive well bias is applied to all transistors in the user logic (or a predetermined subset thereof). Thus, a large number of transistors are initially set to their fastest speed. The well bias is then sequentially removed from non-critical transistors while monitoring the projected leakage current. When the projected leakage current falls below a specified value, the positive bias is retained on the remaining biased transistors. Programmable voltage generators are known in the art of programmable logic design. For example, Lee et al. describe a number of illustrative programmable charge pump circuits in U.S. Pat. No. 5,661,685, entitled "Programmable Logic Device with Configurable Power Supply". Programmable charge pumps are generally designed to be adjustable so that voltage levels can be changed to compensate for process variations during fabrication, which can cause shifts in the output voltage of the charge pumps. However, a programmable voltage generator can be used to add another level of complexity to the present invention, by providing two or more different well biasing voltages from which to choose. Where several well bias values are available, the FPGA implementation software can calculate by how much the speed of the critical path must be increased, by comparing the difference between the timing delay on the critical path with that of the next slowest path. If only a slight increase in speed is needed, a small well bias can be applied, with its correspondingly slight increase in leakage current. If the critical path is much slower than the next most critical, the strongest available well bias is applied. When the speed of several transistors (or larger FPGA portions) is being adjusted, the implementation software can try various combinations of well bias levels on various transistors and various paths, until the optimal configuration is achieved. In one embodiment, four positive well bias voltage levels are available for a P-well: 0 volts, +X/3 volts, +2X/3 volts, and +X volts, where X is a positive value. For example, if X=0.6, the four available positive well bias voltage levels are 0 volts, 0.2 volts, 0.4 volts, and 0.6 volts. Of these selections, a positive well bias of 0 volts (i.e., no bias applied) gives the poorest performance but the lowest leakage current, while a positive well bias of 0.6 volts gives the best performance but the highest leakage current. With four selections, the choice is made by programming two memory cells (via the configuration bitstream) with appropriate bit values (a minimal sketch of this two-bit selection follows below). In some embodiments, the available well bias voltage levels are negative well bias levels. For example, for a P-well, the available values can be 0 volts, -X/3 volts, -2X/3 volts, and -X volts, where X is a positive value. In other embodiments, both positive and negative well biasing are available for a single well. For example, for a P-well, the available values can be -X volts, 0 volts, and +X volts, where X is a positive value.
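A minimal sketch of the two-configuration-bit selection just described, assuming X = 0.6 V and an illustrative bit-to-level encoding (the encoding itself is not specified in the text):

```python
# Select one of four P-well bias levels with two configuration memory cells.
# X and the bit-to-level encoding are assumptions for illustration.
X = 0.6  # volts

BIAS_LEVELS = {
    (0, 0): 0.0,        # no bias: poorest performance, lowest leakage
    (0, 1): X / 3,      # 0.2 V
    (1, 0): 2 * X / 3,  # 0.4 V
    (1, 1): X,          # 0.6 V: best performance, highest leakage
}

def well_bias_volts(cell1: int, cell0: int) -> float:
    """Return the well bias selected by two configuration bit values."""
    return BIAS_LEVELS[(cell1, cell0)]

print(round(well_bias_volts(1, 0), 2))  # 0.4 V: moderate speed/leakage trade
```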
Illustrative Methods

Figs. 4-10 show several illustrative methods encompassed by the present invention. These methods are shown and described to demonstrate some applications of the present invention; however, the invention is not limited to the variations shown. Fig. 4 shows a first embodiment of the invention. In step 401, the transistors in a first critical path of the user design are determined. In step 402, each transistor on the critical path is identified. In one embodiment, each transistor reference in the design file is "tagged" with an identifier identifying the transistor as a critical path transistor. In step 403, a configuration file is generated, where the configuration file includes information enabling positive well biasing on the transistors identified as critical path transistors. In an optional series of steps that can occur simultaneously with steps 401 and 402, transistors on a second critical path are similarly determined (step 404) and identified (step 405). In this embodiment, the configuration data file enables positive well biasing for transistors on both critical paths. In another optional series of steps, the number of critical path transistors is monitored (step 406), to avoid increasing the leakage current of the PLD to an unacceptable level. If the number of critical path transistors exceeds a predetermined acceptable number, an error or warning message is issued to the user. Fig. 5 shows a second embodiment of the invention. In step 501, the transistors in a critical path of the user design are determined. In step 502, each transistor on the critical path is identified. In a series of steps that can occur simultaneously with steps 501 and 502, transistors on a non-critical path are also determined (step 504) and identified (step 505). In step 503, a configuration file is generated, where the configuration data file includes information enabling positive well biasing on the transistors identified as critical path transistors (508), and further enabling negative well biasing on the transistors identified as non-critical path transistors (509). A transistor may be part of both the critical path and the non-critical path, for example, a transistor in CLB CLB21 of Fig. 3. In that case, the transistor is preferably treated as a critical path transistor. The embodiment shown in Fig. 6 is similar to the embodiment of Fig. 5, except that the number of transistors on the critical path is monitored (step 606), and negative well biasing for transistors on the non-critical path is only enabled if the number of critical path transistors exceeds a predetermined number. Fig. 7 shows a fourth embodiment of the invention. In step 701, a user circuit is evaluated to determine the timing delays of two paths. In step 702, the two timing delays are compared, and a faster path and a slower path are determined. In step 703, a configuration data file is generated, where the configuration data file enables well biasing (either positive or negative well biasing, or both) on at least one transistor on at least one of the paths. The embodiment of Fig. 8 is similar to that of Fig. 7, except that after the faster and slower paths are determined (step 802), a timing difference between the two paths is determined, e.g., by subtracting the timing delay of the faster path from the timing delay of the slower path (step 810). Based on this timing difference, a preferred well bias value is selected from among a group of available well bias voltage values (step 811). These values are those supported by the voltage generator circuit providing the well bias to each transistor. As previously described, such a circuit can select from among, for example, four available values based on two logic values stored in two configuration memory cells of an FPGA.
In step 812, a configuration data file is generated, where the configuration data file enables well biasing to the preferred value on at least one transistor on at least one of the paths. Fig. 9 shows a sixth embodiment of the invention. In step 901, a user circuit is evaluated to determine the timing delays of two paths. In step 902, the difference between the two timing delays is determined. In step 903, it is determined on which path each transistor belongs. As previously described, a transistor on both paths is preferably treated as belonging to the slower of the two paths. (The order of steps 902 and 903 can be reversed.) In step 904, based on the determined timing difference and the path to which each transistor belongs, a preferred well bias value is selected from among a group of available well bias voltage values. In step 905, a configuration data file is generated, where the configuration data file enables well biasing to the preferred value on at least one transistor on at least one of the paths. Fig. 10 shows a seventh embodiment of the invention. In step 1001, a user circuit is evaluated based on cost criteria that may include, for example, the increased speed gained by applying a positive well bias, the increased leakage current resulting from a positive well bias, the decreased speed resulting from an applied negative well bias, the decreased leakage current from the applied negative well bias, and other power consumption issues such as the leakage current from wiring among the various elements of the user circuit. In step 1002, first and second logical groupings are selected from the user circuit, based on the evaluation performed in step 1001. (In some embodiments, steps 1001 and 1002 are performed concurrently.) In step 1003, a configuration data file is generated, where the configuration data file enables well biasing to a first value in the first grouping and to a second value in the second grouping. In one embodiment, the first grouping has an applied positive bias, while the second grouping has no applied bias. In another embodiment, the first grouping has an applied negative bias, while the second grouping has no applied bias. In yet another embodiment, the groupings both have positive applied biases, but of different values, and so forth. Many other variations are possible using this embodiment of the invention, and will be obvious to those of ordinary skill in the art based on the disclosure herein. Those having skill in the relevant arts of the invention will now perceive various modifications and additions that may be made as a result of the disclosure herein. For example, the above text describes the structures and methods of the invention in the context of FPGAs implemented using CMOS transistors on a silicon substrate. However, the invention can also be applied to other programmable logic devices, including devices implemented on other substrates and in other types of logic, including but not limited to NMOS, PMOS, bipolar, and so forth. Further, charge pumps, programmable voltage generators, memory cells, transistors, substrates, N-wells and P-wells, and configuration data files other than those described herein can be used to implement the invention. Further, the methods of the present invention are preferably performed by computer software, but the invention is not limited thereto. Accordingly, all such modifications and additions are deemed to be within the scope of the invention, which is to be limited only by the appended claims and their equivalents. |
Low-power compute-in-memory (CIM) systems employing CIM circuits that include static random access memory (SRAM) bit cell circuits. The CIM circuits can be used for multiply-and-accumulate (MAC) operations. The CIM circuits can include five-transistor (5T) SRAM bit cells that each have a single bit line coupled to an access circuit for accessing the SRAM bit cell for read/write operations. The CIM circuit also includes a multiplication circuit (e.g., an exclusive OR (XOR)-based circuit) coupled to the SRAM bit cell. The CIM circuit is configured to perform multiplication of an input data value received by the multiplication circuit with a weight data value stored in the SRAM bit cell. Eliminating one access circuit in the 5T SRAM bit cell allows the pull-up voltage at a supply voltage rail coupled to the inverters of the 5T SRAM bit cell to be reduced, lowering standby power while providing storage stability. |
What is claimed is:1. A compute-in-memory (CIM) circuit, comprising: a bit line; a static random access memory (SRAM) bit cell circuit, comprising: a storage circuit, comprising: a true inverter circuit comprising a true inverter input node and a true inverter output node comprising a true storage node; and a complement inverter circuit comprising a complement inverter input node coupled to the true inverter output node, and a complement inverter output node comprising a complement storage node coupled only to the true inverter input node; and an access circuit coupled to the true storage node; and a multiplication circuit comprising a first multiplication input node coupled to the storage circuit, a second multiplication input node, and a multiplication output node; the multiplication circuit configured to generate on the multiplication output node, a multiplication product of a first multiplication input on the first multiplication input node and a second multiplication input on the second multiplication input node.2. The CIM circuit of claim 1, further not comprising a second access circuit coupled to the complement storage node.3. The CIM circuit of claim 2, further not comprising a second bit line coupled to the SRAM bit cell circuit.4. The CIM circuit of claim 1, wherein the multiplication circuit comprises an exclusive OR (XOR)-based circuit configured to generate the multiplication product on the multiplication output node as an XOR-based logic operation of the first multiplication input and the second multiplication input.5. The CIM circuit of claim 4, wherein: the first multiplication input node comprises: a first true multiplication input node coupled to the true storage node and configured to receive true storage data on the true storage node; and a first complement multiplication input node coupled to the complement storage node and configured to receive complement storage data on the complement storage node; the second multiplication input node comprises: a second true multiplication input node configured to receive a second true multiplication input data; and a second complement multiplication input node configured to receive a second complement multiplication input data; and the XOR-based circuit comprises: a first multiplication transistor, comprising: a first source/drain coupled to the first true multiplication input node; a first drain/source coupled to the multiplication output node; and a first gate coupled to the second true multiplication input node; and a second multiplication transistor, comprising: a second source/drain coupled to the first complement multiplication input node; a second drain/source coupled to the multiplication output node; and a second gate coupled to the second complement multiplication input node.6. 
The CIM circuit of claim 4, wherein: the first multiplication input node comprises: a first true multiplication input node coupled to the true storage node and configured to receive true storage data on the true storage node; and a first complement multiplication input node coupled to the complement storage node and configured to receive complement storage data on the complement storage node; the second multiplication input node comprises: a second true multiplication input node configured to receive a second true multiplication input data; and a second complement multiplication input node configured to receive a second complement multiplication input data; and the XOR-based circuit comprises: a first multiplication transistor, comprising: a first source/drain coupled to the second true multiplication input node; a first drain/source coupled to the multiplication output node; and a first gate coupled to the first true multiplication input node; and a second multiplication transistor, comprising: a second source/drain coupled to the second complement multiplication input node; a second drain/source coupled to the multiplication output node; and a second gate coupled to the first complement multiplication input node.7. The CIM circuit of claim 1, further comprising a non-volatile (NV) capacitor circuit coupled to the multiplication output node.8. The CIM circuit of claim 7, wherein the NV capacitor circuit comprises a ferroelectric capacitor circuit.9. The CIM circuit of claim 1, wherein: the bit line is configured to be pre-charged to a pre-charge voltage; and the access circuit is configured to pass a data value on the true storage node to the bit line in response to the access circuit being activated.10. The CIM circuit of claim 1, further comprising: a first supply voltage rail coupled to the true inverter circuit, the first supply voltage rail configured to receive a first supply voltage; and a second supply voltage rail coupled to the complement inverter circuit, the second supply voltage rail configured to receive a second supply voltage; the second supply voltage rail configured to provide a boosted voltage based on the second supply voltage in response to the access circuit being activated to perform an access operation to the SRAM bit cell circuit, the boosted voltage exceeding the first supply voltage.11. The CIM circuit of claim 1, further comprising: a first positive supply voltage rail configured to receive a first positive supply voltage; a second positive supply voltage rail configured to receive a second positive supply voltage; a first negative supply voltage rail configured to receive a first ground voltage; and a second negative supply voltage rail configured to receive a second ground voltage; wherein the true inverter circuit comprises: a true positive (P)-type field-effect transistor (FET) (PFET), comprising: a true P-type source coupled to the first positive supply voltage rail; a true P-type gate coupled to the complement inverter output node; and a true P-type drain coupled to the complement inverter input node; and
a true negative (N)-type FET (NFET), comprising: a true N-type drain coupled to the first negative supply voltage rail; a true N-type gate coupled to the complement inverter output node; and a true N-type source coupled to the complement inverter input node; and the complement inverter circuit comprises: a complement PFET, comprising: a complement P-type source coupled to the second positive supply voltage rail; a complement P-type gate coupled to the true inverter output node; and a complement P-type drain coupled to the true inverter input node; and a complement NFET, comprising: a complement N-type drain coupled to the second negative supply voltage rail; a complement N-type gate coupled to the true inverter output node; and a complement N-type source coupled to the true inverter input node.12. The CIM circuit of claim 11, wherein the second positive supply voltage rail is coupled to the first positive supply voltage rail.13. The CIM circuit of claim 11, wherein: the second positive supply voltage rail is configured to provide a positive boosted voltage based on the second positive supply voltage in response to the access circuit being activated to perform an access operation to the SRAM bit cell circuit, the positive boosted voltage exceeding the first positive supply voltage; and
the second negative supply voltage rail is configured to provide a negative boosted voltage based on the second ground voltage in response to the access circuit being activated to perform the access operation to the SRAM bit cell circuit, the negative boosted voltage negatively exceeding the first ground voltage.14. The CIM circuit of claim 1, wherein the SRAM bit cell circuit is a five-transistor (5T) SRAM bit cell circuit.15. The CIM circuit of claim 1 integrated into a device selected from the group consisting of: a set top box; an entertainment unit; a navigation device; a communications device; a fixed location data unit; a mobile location data unit; a global positioning system (GPS) device; a mobile phone; a cellular phone; a smart phone; a session initiation protocol (SIP) phone; a tablet; a phablet; a server, a computer, a portable computer; a mobile computing device; a wearable computing device; a desktop computer, a personal digital assistant (PDA); a monitor, a computer monitor; a television; a tuner, a radio; a satellite radio; a music player, a digital music player, a portable music player, a digital video player, a video player, a digital video disc (DVD) player, a portable digital video player; an automobile; a vehicle component; avionics systems; a drone; and a multicopter.16. The CIM circuit of claim 1 integrated in an integrated circuit (IC).17. A method of performing a compute-in-memory (CIM) operation, comprising: activating an access circuit to couple a bit line to a true storage node of a true inverter circuit of a static random access memory (SRAM) bit cell circuit, the SRAM bit cell circuit comprising: the true inverter circuit comprising a true inverter input node and a true inverter output node comprising the true storage node; and a complement inverter circuit comprising a complement inverter input node coupled to the true inverter output node, and a complement inverter output node comprising a complement storage node coupled only to the true inverter input node;
providing a true data value from the true storage node as a first multiplication input to a first multiplication input node of a multiplication circuit; asserting a second multiplication input to a second multiplication input node of the multiplication circuit; and generating a multiplication product on a multiplication output node of the multiplication circuit based on a multiplication of the first multiplication input and the second multiplication input.18. The method of claim 17, further comprising: providing a first positive supply voltage of a first positive supply voltage rail to the true inverter circuit; providing a second positive supply voltage of a second positive supply voltage rail to the complement inverter circuit; and positively boosting the second positive supply voltage to exceed the first positive supply voltage in response to the access circuit being activated.19. The method of claim 18, further comprising: providing a first ground voltage of a first negative supply voltage rail to the true inverter circuit; providing a second ground voltage of a second negative supply voltage rail to the complement inverter circuit; and negatively boosting the second ground voltage to negatively exceed the first ground voltage in response to the access circuit being activated.20. The method of claim 17, further comprising: providing a memory domain supply voltage to a memory array in a processor- based system; providing a first supply voltage to the true inverter circuit less than the memory domain supply voltage; and providing a second supply voltage to the complement inverter circuit.21. The method of claim 20, further comprising pre-charging the bit line to the first supply voltage.22. A memory system, comprising: a compute-in-memory (CIM) array circuit comprising a plurality of CIM circuits each comprising: a static random access memory (SRAM) bit cell circuit, comprising: a storage circuit, comprising: a true inverter circuit comprising a true inverter input node and a true inverter output node comprising a true storage node; and a complement inverter circuit comprising a complement inverter input node coupled to the true inverter output node, and a complement inverter output node comprising a complement storage node coupled only to the true inverter input node; and an access circuit coupled to the true storage node; and a multiplication circuit comprising a first multiplication input node coupled to the storage circuit, a second multiplication input node, and a multiplication output node; the multiplication circuit configured to generate on the multiplication output node, a multiplication product of a multiplication of a first multiplication input on the first multiplication input node and a second multiplication input on the second multiplication input node; a first bit line coupled to each of the access circuits of a first subset of the plurality of CIM circuits; a second bit line coupled to each of the access circuits of a second subset of the plurality of CIM circuits different from the first subset of the plurality of CIM circuits; and a bit line driver circuit coupled to the first bit line and the second bit line; the bit line driver circuit configured to:
pre-charge the first bit line to a first pre-charge voltage of true read data; and pre-charge the second bit line to a second pre-charge voltage of complement read data.23. The memory system of claim 22, wherein the bit line driver circuit is physically located in the CIM array circuit between a first end CIM circuit among the first subset of the plurality of CIM circuits and a second end CIM circuit among the second subset of the plurality of CIM circuits.24. The memory system of claim 22, wherein: a number of CIM circuits in the first subset of the plurality of CIM circuits is equal to a number of CIM circuits in the second subset of the plurality of CIM circuits; the first subset of the plurality of CIM circuits is arranged in a first linear array; the second subset of the plurality of CIM circuits is arranged in a second linear array aligned with the first linear array; and the bit line driver circuit is physically located in an area between the first linear array and the second linear array.25. The memory system of claim 22, wherein: the CIM array circuit further comprises a global bit line (GBL); and each of the multiplication output nodes of the plurality of CIM circuits in the CIM array circuit is coupled to the GBL.26. The memory system of claim 25, wherein the CIM array circuit further comprises a global bit line driver circuit configured to pre-charge the GBL to a pre-charge voltage.27. The memory system of claim 26, further comprising a memory array, wherein: the memory array is configured to receive a memory domain supply voltage; and the global bit line driver circuit is configured to pre-charge the GBL to the pre-charge voltage less than the memory domain supply voltage. |
COMPUTE-IN-MEMORY (CIM) EMPLOYING LOW-POWER CIM CIRCUITS EMPLOYING STATIC RANDOM ACCESS MEMORY (SRAM) BIT CELLS, PARTICULARLY FOR MULTIPLY-AND-ACCUMULATE (MAC) OPERATIONS

PRIORITY APPLICATION

[0001] The present application claims priority to U.S. Patent Application Serial No. 17/084,779, filed October 30, 2020 and entitled “COMPUTE-IN-MEMORY (CIM) EMPLOYING LOW-POWER CIM CIRCUITS EMPLOYING STATIC RANDOM ACCESS MEMORY (SRAM) BIT CELLS, PARTICULARLY FOR MULTIPLY-AND-ACCUMULATE (MAC) OPERATIONS," which is incorporated herein by reference in its entirety.

BACKGROUND

I. Field of the Disclosure

[0002] The technology of the disclosure relates generally to machine learning computing, and more particularly to computational compute-in-memory (CIM) circuits integrated with static random access memory (SRAM) cell circuits for machine learning computing.

II. Background

[0003] Machine learning is the ability of a computing device to progressively improve performance of a particular task. For example, machine-learning algorithms can use results from processing known data to “train” a computing device to process new data with a higher degree of accuracy. Neural networks are a framework in which machine-learning algorithms may be implemented. A neural network is modeled after the organization of a brain, with a plurality of nodes that each correspond to a brain synapse. Each node receives input signals representing input data from preceding nodes and generates an output that becomes an input signal to succeeding nodes. The nodes are organized in sequential layers such that, in a first processing stage, nodes of a first layer receive input data from an external source and generate an output that is provided to every node in a second layer. In a next processing stage, nodes of the second layer receive the outputs of each node in the first layer, and generate further outputs to be provided to every
node in a third layer as the nodes in the first layer receive and process new external inputs, and so on in subsequent processing stages.

[0004] Within each node, each input signal is uniquely weighted by multiplying the numerical input signal by an associated numerical weight. The products corresponding to the weighted input signals representing weight data are summed to generate a node output. Together, these operations are known as a multiply-and-accumulate (MAC) operation. Figure 1 is a block diagram representing the operation of a conventional node 100 that may be used in a neural network. In the node 100, each of the numerical input signals X0-XM is received and multiplied by respective numerical weight data W0-WM to generate products P0-PM. The numerical weight data W0-WM is stored and reused by the node 100 in each processing stage. The numerical weight data W0-WM may be updated in a machine-learning method using feedback based on a comparison of actual results to expected results when processing known data. The node 100 uses an accumulation or summation function to add the products P0-PM together to generate a summation signal SUM. The conventional node 100 may include an additional step in which an “activation function” may be performed on the SUM signal to produce an OUTPUT signal from the node 100. However, the activation function is beyond the scope of this disclosure and is not discussed further herein. A toy model of the node's MAC operation appears below.
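The following minimal Python sketch models the MAC operation of the node 100 described above; the input and weight values are invented for illustration, and the activation function is omitted as noted.

```python
# Toy model of the node 100 of Figure 1: multiply each input signal by its
# weight and accumulate the products. Values are invented for illustration.
def node_mac(inputs, weights):
    """Multiply-and-accumulate: sum of element-wise input*weight products."""
    assert len(inputs) == len(weights)
    products = [x * w for x, w in zip(inputs, weights)]  # P0..PM
    return sum(products)                                 # SUM signal

x = [1.0, -0.5, 2.0]   # assumed input signals X0..XM
w = [0.3, 0.8, -0.1]   # assumed stored weight data W0..WM
print(node_mac(x, w))  # 0.3 - 0.4 - 0.2, approximately -0.3
```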
[0005] The node 100 in Figure 1 performs “M” multiply operations and a summation function at each processing stage as a new set of numerical input signals X0-XM is provided. The number of operations performed by a processing device executing a neural network framework will depend on the number of external inputs to the neural network, the number of nodes in each layer, and the number of layers in the neural network. In large neural networks, the processing device must execute thousands of operations at each processing stage. When the numerical input signals and weights are high-precision floating-point values, a significant amount of processing time, power, and memory are required to perform the MAC operations, consuming a large amount of energy, much of which is dissipated as heat. In addition, large amounts of data must be transferred between memory and one or more processors performing the MAC operations, which can cause delays that increase the response time of an application. Thus, neural network applications executing on conventional processing devices may have very slow response times, occupy large amounts of memory, and cause battery-operated devices to discharge quickly.

[0006] It is desired to provide memory circuits in memory arrays in a memory system accessible by a processor to store weight data that can be efficiently multiplied by input data for performing MAC operations, including for machine learning applications.

SUMMARY OF THE DISCLOSURE

[0007] Aspects disclosed in the detailed description include low-power compute-in-memory (CIM) systems employing CIM circuits employing static random access memory (SRAM) bit cells. As a non-limiting example, the CIM circuits can be used for multiply-and-accumulate (MAC) operations, such as those employed in machine-learning applications. The CIM circuits each include an SRAM bit cell circuit that includes a storage circuit for storing data. Data can be read from the storage circuit of the SRAM bit cell circuit by pre-charging a bit line and activating an access circuit (e.g., an access transistor) coupled between the bit line and the storage circuit. Data can be written to the storage circuit of the SRAM bit cell circuit by asserting a voltage of the desired logic level on the bit line and activating the access circuit. In aspects disclosed herein, the CIM circuit can also perform a multiplication operation between input data and storage data in the storage circuit of the SRAM bit cell. In this regard, the CIM circuit includes a multiplication circuit coupled to the SRAM bit cell circuit. As examples, the multiplication circuit can be an exclusive OR (XOR)-based circuit configured to perform an XOR-based logic operation (e.g., an XOR or exclusive negative OR (XNOR) operation) to perform a multiplication operation. The CIM circuit is configured to perform multiplication of an input data on a received input signal in the multiplication circuit with a weight data from the storage data in the SRAM bit cell circuit. The CIM system can employ a large number of CIM circuits. Thus, it may be desired to reduce standby and/or dynamic power dissipation in the CIM circuits to reduce the overall power dissipation in the CIM system.

[0008] In this regard, in an exemplary aspect, the CIM system includes a bit line driver circuit configured to pre-charge a bit line coupled to an SRAM bit cell circuit of a CIM circuit for a read operation. Because the bit line driver circuit may be coupled to more than one CIM circuit, the access circuit of the SRAM bit cell circuit to be read is
also activated so that the charge stored in the storage circuit can be passed through the access circuit to the bit line. To reduce dynamic power in read operations to the CIM circuit, the bit line driver circuit can be configured to pre-charge the bit line to a reduced pre-charge voltage. For example, the CIM system may be included in a processor-based system that includes other memory arrays that are powered by a memory domain supply voltage (e.g., VDD) in a memory domain. The bit line driver circuit is configured to pre-charge the bit line to a reduced pre-charge voltage level (e.g., VDD/2) from the voltage level of the memory domain supply voltage as an example. By reducing the pre-charge voltage on the bit line, dynamic power for a read operation to the CIM circuit is reduced (a first-order estimate of this saving appears below). The voltage swings in pre-charging the bit line for read operations are also reduced, thereby further reducing dynamic power dissipated for read operations. However, reducing the bit line pre-charge voltage for a read operation can cause a read disturbance issue between the storage circuit and the access circuit of the SRAM bit cell. For example, in a complementary six-transistor (6T) SRAM bit cell circuit, a reduced bit line pre-charge voltage asserted on a bit line for a read operation may not cause a pull-down N-type field-effect transistor (FET) (NFET) in an inverter circuit reinforcing a stored logic ‘0’ value to discharge fast enough to avoid a respective access circuit causing a charge build up on its storage node. This could cause a voltage flip on the complementary storage node.

[0009] Thus, in a further exemplary aspect, the SRAM bit cell circuit in the CIM circuit can be provided as a five-transistor (5T) SRAM bit cell circuit. The 5T SRAM bit cell includes a true inverter circuit cross-coupled to a complement inverter circuit. A single access circuit is coupled between a single bit line and the true inverter circuit. The 5T SRAM bit cell circuit eliminates a complement bit line and complement access circuit (e.g., a complement access transistor) coupled to the complement inverter circuit as compared to a 6T complement SRAM bit cell circuit. By eliminating the complement access circuit in the 5T SRAM bit cell circuit of the CIM circuit, a contention that could exist between a complement access circuit and a complement inverter circuit in the 5T SRAM bit cell circuit from charging the bit line to a reduced pre-charge voltage in a read operation is reduced or avoided. Also, by eliminating a complement bit line and complement access transistor in the 5T SRAM bit cell circuit of the CIM circuit, dynamic and standby power of the CIM circuit can be further reduced.
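As a back-of-the-envelope illustration of the pre-charge saving noted above, the energy drawn from the supply to charge a bit line scales with the square of the pre-charge voltage, so pre-charging to VDD/2 cuts that energy to roughly one quarter. The capacitance and supply values in the sketch are invented assumptions.

```python
# First-order estimate of dynamic energy saved by a reduced bit line
# pre-charge voltage: E = C * V^2 drawn from the supply per charge cycle.
# The capacitance and supply voltage below are invented assumptions.
C_BITLINE = 50e-15  # assumed bit line capacitance: 50 fF
VDD = 0.9           # assumed memory domain supply voltage, in volts

def precharge_energy(c_farads: float, v_volts: float) -> float:
    """Energy drawn from the supply to charge capacitance c to voltage v."""
    return c_farads * v_volts * v_volts

full = precharge_energy(C_BITLINE, VDD)      # pre-charge to VDD
half = precharge_energy(C_BITLINE, VDD / 2)  # pre-charge to VDD/2
print(f"energy ratio: {half / full:.2f}")    # 0.25: quartered at VDD/2
```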
[0010] A bit line and complement bit line are provided in a 6T SRAM bit cell to provide differential voltages between a bit line and complement bit line to accomplish a high read sensitivity for a read operation that may not be required in the CIM circuit. Highly accurate read sensing may not be as important in applications that employ the CIM circuit as memory applications employing a 6T SRAM bit cell, for example.

[0011] Also, by reducing the pre-charge voltage asserted on the bit line coupled to the 5T SRAM bit cell circuit of the CIM circuit in a read operation, the voltage margin between the reduced bit line pre-charge voltage and the supply voltage powering the SRAM bit cell may be increased. Thus, by employing a reduced bit line pre-charge voltage for read operations to the CIM circuit, there is a voltage margin available to allow the supply voltage supplied to the 5T SRAM bit cell circuit of the CIM circuit to be reduced. Reducing the supply voltage to the CIM circuit can further reduce standby and dynamic power of the CIM circuit without increasing the likelihood of a read disturbance in its 5T SRAM bit cell circuit. The reduced supply voltage can allow a storage node in the 5T SRAM bit cell to still be discharged fast enough in a read operation to avoid a read disturbance condition, because the bit line pre-charge voltage to be discharged in the 5T SRAM bit cell is also reduced.

[0012] However, with a reduced supply voltage supplied to the 5T SRAM bit cell circuit with no complementary access transistor, writing data into the 5T SRAM bit cell circuit can be difficult. This is because of a write contention issue between a weaker NFET access transistor and a stronger pull-down NFET transistor in the 5T SRAM bit cell circuit. Thus, in further exemplary aspects, the supply voltage supplied to the 5T SRAM bit cell circuit can be boosted in a write operation to provide write assist to avoid or reduce the risk of write contention in the 5T SRAM bit cell circuit of the CIM circuit. Further, because a machine-learning application employing the CIM circuits may involve many more read operations than write operations, boosting the supply voltage for a write operation to the CIM circuit may not have a significant impact on overall dynamic power consumption of the CIM circuit. Also, if desired, the supply voltage supplied to the 5T SRAM bit cell circuit in the CIM circuit can also optionally be boosted in a read operation to provide read assist to the SRAM bit cell circuit. Providing read assist can make a read operation to the SRAM bit cell circuit faster, thus expending less dynamic power in a read operation. A small sketch of this per-operation supply selection appears below.
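To illustrate the write-assist (and optional read-assist) boosting just described, here is a hedged sketch of a per-operation supply selection; the cell voltage and boost amount are invented assumptions, not values from the disclosure.

```python
# Sketch of per-operation supply selection for a 5T SRAM bit cell with
# boosted-supply write assist (and optional read assist).
V_CELL = 0.6    # assumed reduced standby supply voltage for the bit cell
V_BOOST = 0.15  # assumed boost applied during an assisted access

def cell_supply_volts(operation: str, read_assist: bool = False) -> float:
    """Return the bit cell supply rail voltage for the given operation."""
    if operation == "write":
        return V_CELL + V_BOOST  # boost to overcome write contention
    if operation == "read" and read_assist:
        return V_CELL + V_BOOST  # optional read assist for a faster read
    return V_CELL                # standby, or read without assist

print(cell_supply_volts("write"))  # boosted: approximately 0.75
print(cell_supply_volts("read"))   # reduced supply: 0.6
```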
[0013] In another aspect, the CIM system can include one or more arrays of CIM circuits in one or more respective CIM array circuits each coupled to a common, global bit line. To reduce the line capacitance of the bit line coupled to the CIM circuits in a given CIM array circuit, the bit line driver circuit can be physically located between end CIM circuits in its respective CIM array circuit. For example, the bit line driver circuit can be physically located in the middle of the CIM array circuit to reduce the distance between the bit line driver circuit and the farthest away CIM circuit in the CIM array circuit. As an example, one bit line can be provided to half of the CIM circuits in a given CIM array circuit that is driven by the bit line driver circuit, and another bit line provided and driven by the bit line driver circuit to a second half of the CIM circuits. In this manner, the two (2) bit lines each have a length that is reduced by approximately half versus a single bit line coupled to all of the CIM circuits in the CIM array. This allows the length of the bit line driven by the bit line driver circuit to be reduced and thus reduces the line capacitance of the bit line. Reducing the line capacitance in the bit line can reduce the time to pre-charge the bit line for a read operation and assert write data for a write operation, thus reducing dynamic power expended by the CIM circuit (a back-of-the-envelope model of this halving appears at the end of this passage).

[0014] In another exemplary aspect, a capacitor circuit can be provided and coupled to a multiplication output node of the multiplication circuit in the CIM circuit. The capacitor circuit stores a charge representing the multiplication product output of the multiplication operation of the CIM circuit to be asserted and accumulated on a global bit line. The capacitor circuit can be provided as a non-volatile (NV) capacitor circuit that has the ability to retain a charge in a non-volatile manner over power cycles.

[0015] In another exemplary aspect, a global bit line driver used to pre-charge the global bit line can also be configured to pre-charge the global bit line at a reduced supply voltage (e.g., VDD/2). The global bit line may be coupled to a plurality of multiplication outputs of CIM circuits in a CIM column array circuit for example, where the charges of the multiplication outputs are accumulated on the global bit line in a multiplication operation. Before the CIM circuits are activated to perform multiplication operations, the global bit line is pre-charged. Reducing the pre-charge voltage on the global bit line can reduce dynamic power of the CIM circuits in a given CIM circuit array for multiplication operations.
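The following back-of-the-envelope sketch, referenced above, shows why driving two half-length bit lines from the middle helps: both resistance and capacitance scale with line length, so halving the length roughly quarters the lumped RC time constant. The per-length values are invented assumptions.

```python
# Back-of-the-envelope model of splitting one long bit line into two
# half-length lines driven from the middle. R and C per unit length are
# invented assumptions; both scale linearly with line length.
R_PER_MM = 100.0    # assumed ohms per millimeter of bit line
C_PER_MM = 200e-15  # assumed farads per millimeter of bit line

def rc_time_constant(length_mm: float) -> float:
    """Lumped RC time constant of a bit line of the given length."""
    return (R_PER_MM * length_mm) * (C_PER_MM * length_mm)

full = rc_time_constant(1.0)  # one bit line spanning the whole column
half = rc_time_constant(0.5)  # each half-length line driven from the middle
print(f"RC ratio: {half / full:.2f}")  # 0.25: halving length quarters RC
```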
[0016] In this regard, in one aspect, a CIM circuit is provided. The CIM circuit includes a bit line. The CIM circuit also includes an SRAM bit cell circuit. The SRAM bit cell circuit includes a storage circuit that includes a true inverter circuit comprising a true inverter input node and a true inverter output node comprising a true storage node, and a complement inverter circuit comprising a complement inverter input node coupled to the true inverter output node, and a complement inverter output node comprising a complement storage node coupled only to the true inverter input node. The SRAM bit cell circuit also includes an access circuit coupled to the true storage node. The CIM circuit also includes a multiplication circuit comprising a first multiplication input node coupled to the storage circuit, a second multiplication input node, and a multiplication output node. The multiplication circuit is configured to generate on the multiplication output node, a multiplication product of a first multiplication input on the first multiplication input node and a second multiplication input on the second multiplication input node.

[0017] In another exemplary aspect, a method of performing a CIM operation is provided. The method includes activating an access circuit to couple a bit line to a true storage node of a true inverter circuit of an SRAM bit cell circuit. The SRAM bit cell circuit comprises the true inverter circuit comprising a true inverter input node and a true inverter output node comprising the true storage node, and a complement inverter circuit comprising a complement inverter input node coupled to the true inverter output node,
and a complement inverter output node comprising a complement storage node coupled only to the true inverter input node. The method also includes providing a true data value from the true storage node as a first multiplication input to a first multiplication input node of a multiplication circuit. The method also includes asserting a second multiplication input to a second multiplication input node of the multiplication circuit. The method also includes generating a multiplication product on a multiplication output node of the multiplication circuit based on a multiplication of the first multiplication input and the second multiplication input.

[0018] In another exemplary aspect, a memory system is provided. The memory system includes a CIM array circuit comprising a plurality of CIM circuits. Each CIM circuit in a CIM array circuit includes an SRAM bit cell circuit that includes a storage circuit comprising a true inverter circuit comprising a true inverter input node and a true inverter output node comprising a true storage node, a complement inverter circuit comprising a complement inverter input node coupled to the true inverter output node, and a complement inverter output node comprising a complement storage node coupled only to the true inverter input node, and an access circuit coupled to the true storage node. Each CIM circuit in a CIM array circuit also includes a multiplication circuit comprising a first multiplication input node coupled to the storage circuit, a second multiplication input node, and a multiplication output node. The multiplication circuit is configured to generate on the multiplication output node, a multiplication product of a multiplication of a first multiplication input on the first multiplication input node and a second multiplication input on the second multiplication input node. The CIM array circuit also includes a first bit line coupled to each of the access circuits of a first subset of the plurality of CIM circuits. The CIM array circuit also includes a second bit line coupled to each of the access circuits of a second subset of the plurality of CIM circuits different from the first subset of the plurality of CIM circuits. The CIM array circuit also includes a bit line driver circuit coupled to the first bit line and the second bit line. The bit line driver circuit is configured to pre-charge the first bit line to a first pre-charge voltage of true read data and pre-charge the second bit line to a second pre-charge voltage of complement read data.

BRIEF DESCRIPTION OF THE FIGURES

[0019] Figure 1 is a diagram of exemplary multiply-and-accumulate (MAC) operations of a node of a deep neural network (DNN);

[0020] Figure 2 is a diagram of an exclusive negative OR (XNOR) truth table to show that an XNOR logic operation can be used for binary multiplication;

[0021] Figure 3 is a diagram of an exemplary compute-in-memory (CIM) circuit that includes a six-transistor (6T) static random access memory (SRAM) bit cell and a multiplication circuit in the form of an exclusive OR (XOR)-based circuit, wherein the CIM circuit is configured to perform a multiplication operation of a data value stored in the SRAM bit cell circuit with an input value provided to the multiplication circuit;

[0022] Figure 4 is a diagram of an exemplary CIM circuit that includes a five-transistor (5T) SRAM bit cell circuit configured to be coupled to a single bit line and a multiplication circuit in the form of an XOR-based circuit for generating a multiplication output representing a multiplication operation of an input data value with a storage data value in the 5T SRAM bit cell circuit, wherein the CIM circuit
is configured to be operated at a reduced supply voltage to reduce standby and dynamic power;

[0023] Figure 5 is a signal diagram for a write operation in the CIM circuit in Figure 4;

[0024] Figure 6 is a signal diagram for a read operation in the CIM circuit in Figure 4;

[0025] Figure 7 is an exemplary CIM system that includes a plurality of CIM array circuits each comprising a plurality of the CIM circuits in Figure 4, wherein each CIM circuit is configured to apply a charge representing a multiplication output to a respective global bit line to be accumulated as a MAC operation, and wherein each CIM column array circuit includes a bit line driver circuit configured to drive the bit lines of its respective CIM array circuit at a reduced voltage to reduce dynamic power;

[0026] Figure 8 is a diagram of an exemplary layout of bit lines and global bit lines of a CIM array circuit in Figure 7;

[0027] Figure 9 is a diagram of another exemplary CIM circuit like the CIM circuit in Figure 4, but with access circuits of the multiplication circuit coupled to the storage nodes of the 5T SRAM bit cell circuit therein;

[0028] Figure 10 is a block diagram of an exemplary processor-based system that can include a CIM system that includes one or more CIM circuits that each include a 5T SRAM bit cell circuit configured to be coupled to a single bit line and a multiplication circuit in the form of an XOR-based circuit for generating a multiplication output representing a multiplication operation of an input data value with a storage data value in the 5T SRAM bit cell circuit, wherein the CIM circuit is configured to be operated at a reduced supply voltage to reduce standby and dynamic power, including, but not limited to, the CIM circuits in Figures 4 and 7-9; and

[0029] Figure 11 is a block diagram of an exemplary wireless communications device that includes radio frequency (RF) components and includes a CIM system that includes one or more CIM circuits that each include a 5T SRAM bit cell circuit configured to be coupled to a single bit line and a multiplication circuit in the form of an XOR-based circuit for generating a multiplication output representing a multiplication operation of an input data value with a storage data value in the 5T SRAM bit cell circuit, wherein the CIM circuit is configured to be operated at a reduced supply voltage
to reduce standby and dynamic power, including, but not limited to, the CIM circuits in Figures 4 and 7-9.

DETAILED DESCRIPTION

[0030] With reference now to the drawing figures, several exemplary aspects of the present disclosure are described. The word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any aspect described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects.

[0031] Aspects disclosed in the detailed description include low-power compute-in-memory (CIM) systems employing CIM circuits employing static random access memory (SRAM) bit cells. As a non-limiting example, the CIM circuits can be used for multiply-and-accumulate (MAC) operations, such as those employed in machine-learning applications. The CIM circuits each include an SRAM bit cell circuit that includes a storage circuit for storing data. Data can be read from the storage circuit of the SRAM bit cell circuit by pre-charging a bit line and activating an access circuit (e.g., an access transistor) coupled between the bit line and the storage circuit. Data can be written to the storage circuit of the SRAM bit cell circuit by asserting a voltage of the desired logic level on the bit line and activating the access circuit. In aspects disclosed herein, the CIM circuit can also perform a multiplication operation between input data and storage data in the storage circuit of the SRAM bit cell. In this regard, the CIM circuit includes a multiplication circuit coupled to the SRAM bit cell circuit. As examples, the multiplication circuit can be an exclusive OR (XOR)-based circuit configured to perform an XOR-based logic operation (e.g., an XOR or exclusive negative OR (XNOR) operation) to perform a multiplication operation. The CIM circuit is configured to perform multiplication of an input data on a received input signal in the multiplication circuit with a weight data from the storage data in the SRAM bit cell circuit. The CIM system can employ a large number of CIM circuits. Thus, it may be desired to reduce standby and/or dynamic power dissipation in the CIM circuits to reduce the overall power dissipation in the CIM system.

[0032] In this regard, in an exemplary aspect, the CIM system includes a bit line driver circuit configured to pre-charge a bit line coupled to an SRAM bit cell circuit of a CIM circuit for a read operation. Because the bit line driver circuit may be coupled to
more than one CIM circuit, the access circuit of the SRAM bit cell circuit to be read is also activated so that the charge stored in the storage circuit can be passed through the access circuit to the bit line. To reduce dynamic power in read operations to the CIM circuit, the bit line driver circuit can be configured to pre-charge the bit line to a reduced pre-charge voltage. For example, the CIM system may be included in a processor-based system that includes other memory arrays that are powered by a memory domain supply voltage (e.g., VDD) in a memory domain. The bit line driver circuit is configured to pre-charge the bit line to a reduced pre-charge voltage level (e.g., VDD/2) from the voltage level of the memory domain supply voltage as an example. By reducing the pre-charge voltage on the bit line, dynamic power for a read operation to the CIM circuit is reduced. The voltage swings in pre-charging the bit line for read operations are also reduced, thereby further reducing dynamic power dissipated for read operations. However, reducing the bit line pre-charge voltage for a read operation can cause a read disturbance issue between the storage circuit and the access circuit of the SRAM bit cell. For example, in a complementary six-transistor (6T) SRAM bit cell circuit, a reduced bit line pre-charge voltage asserted on a bit line for a read operation may not cause a pull-down N-type field-effect transistor (FET) (NFET) in an inverter circuit reinforcing a stored logic ‘0’ value to discharge fast enough to avoid a respective access circuit causing a charge build up on its storage node. This could cause a voltage flip on the complementary storage node.

[0033] Thus, in a further exemplary aspect, the SRAM bit cell circuit in the CIM circuit can be provided as a five-transistor (5T) SRAM bit cell circuit. The 5T SRAM bit cell includes a true inverter circuit cross-coupled to a complement inverter circuit. A single access circuit is coupled between a single bit line and the true inverter circuit. The 5T SRAM bit cell circuit eliminates a complement bit line and complement access circuit (e.g., a complement access transistor) coupled to the complement inverter circuit as compared to a 6T complement SRAM bit cell circuit. By eliminating the complement access circuit in the 5T SRAM bit cell circuit of the CIM circuit, a contention that could exist between a complement access circuit and a complement inverter circuit in the 5T SRAM bit cell circuit from charging the bit line to a reduced pre-charge voltage in a read operation is reduced or avoided. Also, by eliminating a complement bit line and complement access transistor in the 5T SRAM bit cell circuit of the CIM circuit, dynamic and standby power of the CIM circuit can be further reduced.
[0034] A bit line and complement bit line are provided in a 6T SRAM bit cell to provide differential voltages between a bit line and complement bit line to accomplish a high read sensitivity for a read operation that may not be required in the CIM circuit. Highly accurate read sensing may not be as important in applications that employ the CIM circuit as memory applications employing a 6T SRAM bit cell, for example.

[0035] Also, by reducing the pre-charge voltage asserted on the bit line coupled to the 5T SRAM bit cell circuit of the CIM circuit in a read operation, the voltage margin between the reduced bit line pre-charge voltage and the supply voltage powering the SRAM bit cell may be increased. Thus, by employing a reduced bit line pre-charge voltage for read operations to the CIM circuit, there is a voltage margin available to allow the supply voltage supplied to the 5T SRAM bit cell circuit of the CIM circuit to be reduced. Reducing the supply voltage to the CIM circuit can further reduce standby and dynamic power of the CIM circuit without increasing the likelihood of a read disturbance in its 5T SRAM bit cell circuit. The reduced supply voltage can allow a storage node in the 5T SRAM bit cell to still be discharged fast enough in a read operation to avoid a read disturbance condition, because the bit line pre-charge voltage to be discharged in the 5T SRAM bit cell is also reduced.

[0036] However, with a reduced supply voltage supplied to the 5T SRAM bit cell circuit with no complementary access transistor, writing data into the 5T SRAM bit cell circuit can be difficult. This is because of a write contention issue between a weaker NFET access transistor and a stronger pull-down NFET transistor in the 5T SRAM bit cell circuit. Thus, in further exemplary aspects, the supply voltage supplied to the 5T SRAM bit cell circuit can be boosted in a write operation to provide write assist to avoid or reduce the risk of write contention in the 5T SRAM bit cell circuit of the CIM circuit. Further, because a machine-learning application employing the CIM circuits may involve many more read operations than write operations, boosting the supply voltage for a write operation to the CIM circuit may not have a significant impact on overall dynamic power consumption of the CIM circuit. Also, if desired, the supply voltage supplied to the 5T SRAM bit cell circuit in the CIM circuit can also optionally be boosted in a read operation to provide read assist to the SRAM bit cell circuit. Providing read assist can make a read operation to the SRAM bit cell circuit faster, thus expending less dynamic power in a read operation.
[0037] In another aspect, the CIM system can include one or more arrays of CIM circuits in one or more respective CIM array circuits each coupled to a common, global bit line. To reduce the line capacitance of the bit line coupled to the CIM circuits in a given CIM array circuit, the bit line driver circuit can be physically located between end CIM circuits in its respective CIM array circuit. For example, the bit line driver circuit can be physically located in the middle of the CIM array circuit to reduce the distance between the bit line driver circuit and the farthest away CIM circuit in the CIM array circuit. As an example, one bit line can be provided to half of the CIM circuits in a given CIM array circuit that is driven by the bit line driver circuit, and another bit line provided and driven by the bit line driver circuit to a second half of the CIM circuits. In this manner, the two (2) bit lines each have a length that is reduced by approximately half versus a single bit line coupled to all of the CIM circuits in the CIM array. This allows the length of the bit line driven by the bit line driver circuit to be reduced and thus reduces the line capacitance of the bit line. Reducing the line capacitance in the bit line can reduce the time to pre-charge the bit line for a read operation and assert write data for a write operation, thus reducing dynamic power expended by the CIM circuit.

[0038] In another exemplary aspect, a capacitor circuit can be provided and coupled to a multiplication output node of the multiplication circuit in the CIM circuit. The capacitor circuit stores a charge representing the multiplication product output of the multiplication operation of the CIM circuit to be asserted and accumulated on a global bit line. The capacitor circuit can be provided as a non-volatile (NV) capacitor circuit that has the ability to retain a charge in a non-volatile manner over power cycles.

[0039] In another exemplary aspect, a global bit line driver used to pre-charge the global bit line can also be configured to pre-charge the global bit line at a reduced supply voltage (e.g., VDD/2). The global bit line may be coupled to a plurality of multiplication outputs of CIM circuits in a CIM column array circuit for example, where the charges of the multiplication outputs are accumulated on the global bit line in a multiplication operation. Before the CIM circuits are activated to perform multiplication operations, the global bit line is pre-charged. Reducing the pre-charge voltage on the global bit line can reduce dynamic power of the CIM circuits in a given CIM circuit array for multiplication operations.
[0040] As discussed above, the CIM circuits include an XOR-based circuit that can provide a binary multiplication operation. Figure 2 is a diagram of an XNOR logic truth table 200 to show that an XNOR operation on two inputs X and Y to generate an XNOR output 202 is equivalent to a binary multiplication operation of inputs X and Y. Binary multiplication of inputs X and Y as either both ‘0’ or ‘1’ values is equal to ‘1’, which is the XNOR output 202 of an XNOR operation as shown in Figure 2. Binary multiplication of inputs X and Y as one having a value of ‘0’ and the other having a value of ‘1’ is equal to ‘0’, which is also the XNOR output 202 of an XNOR operation as shown in Figure 2. (This equivalence holds in the signed binary encoding commonly used in binarized machine-learning models, in which a logic ‘0’ represents −1 and a logic ‘1’ represents +1, so that the product of two like values is positive and the product of two unlike values is negative.) Thus, an XOR-based circuit to perform an XOR-based logic operation, such as an XNOR logic operation, can be used for binary multiplication. An XOR-based circuit can be incorporated with a memory bit cell to perform binary multiplication of a stored value in the memory bit cell with a second input value. This circuit arrangement may be particularly useful for machine-learning applications where weight values as an input value to the XOR-based circuit are stored values in the memory bit cell.
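For reference, a few lines of Python can verify this equivalence exhaustively. The mapping of logic ‘0’ to −1 below is an assumption drawn from common binarized-neural-network practice, not a feature of the exemplary circuits themselves; the sketch is illustrative only.

# Verify that XNOR(x, y) matches binary multiplication under the bipolar
# encoding where logic 0 stands for -1 and logic 1 stands for +1.

def xnor(x: int, y: int) -> int:
    return 1 - (x ^ y)  # XNOR of two bits

def to_bipolar(bit: int) -> int:
    return 1 if bit else -1

for x in (0, 1):
    for y in (0, 1):
        product = to_bipolar(x) * to_bipolar(y)   # -1 or +1
        product_bit = 1 if product == 1 else 0    # back to a logic level
        assert xnor(x, y) == product_bit
        print(f"X={x} Y={y}  XNOR={xnor(x, y)}  bipolar product={product:+d}")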
[0041] Before discussing examples of CIM circuits that include memory bit cells each with an integrated multiplication circuit configured to perform a low-power multiplication operation starting at Figure 4, an exemplary CIM circuit that includes a memory bit cell in the form of a 6T SRAM bit cell circuit is first described with regard to Figure 3 below.

[0042] In this regard, Figure 3 is a diagram of an exemplary CIM circuit 300 that includes a memory bit cell circuit 302 in the form of a 6T SRAM bit cell circuit 304 coupled to a multiplication circuit 306. In this example, the multiplication circuit 306 is in the form of an XOR-based circuit 308 that is configured to perform an XOR-based logic operation (e.g., an XNOR logic operation). The CIM circuit 300 is configured to perform a binary multiplication of a stored data value as storage data in the 6T SRAM bit cell circuit 304 with an input data value X provided to the multiplication circuit 306. The input data value X is signified by the label ‘X’ in Figure 3. The complement data value of the input data value X is signified by the label ‘XB’ in Figure 3. A plurality of the CIM circuits 300 can be arranged in a memory array in row and column format to provide a MAC operation. For example, CIM circuits 300 can be arranged in a column array, where each of the outputs of the CIM circuits 300 is coupled to a common, global bit line (GBL) 310 to provide a binary multiplication output 312 from the multiplication circuit 306 on a multiplication output node 314 in the form of a charge on the GBL 310 as a multiplication product. In this manner, the charges of the binary multiplication outputs 312 asserted from each of the CIM circuits 300 on the GBL 310 can be accumulated as an accumulated charge representing a dot product to provide an accumulate operation as part of a MAC operation.

[0043] With continuing reference to Figure 3, the 6T SRAM bit cell circuit 304 includes a storage circuit 316 that includes a true inverter circuit 318T and a complement inverter circuit 318C. The true inverter circuit 318T and complement inverter circuit 318C each include respective pull-up true and complement positive (P)-type field-effect transistors (FETs) (PFETs) PT, PC and pull-down true and complement negative (N)-type FETs (NFETs) NT, NC for a total of four (4) transistors. The true and complement PFETs PT, PC are coupled to a positive supply voltage rail 320P configured to receive a supply voltage VDD. The true and complement NFETs NT, NC are coupled to a negative supply voltage rail 320N, which is a ground node in this example and is configured to receive a ground voltage VSS. The true inverter circuit 318T has a true inverter input node 322T-I that is configured to receive an input signal to generate an output signal, on a true inverter output node 324T-O, of a logic value opposite that of the input signal. For example, if an input signal on the true inverter input node 322T-I is a voltage of the positive supply voltage VDD, the true inverter circuit 318T is configured to generate an output signal on the true inverter output node 324T-O based on the ground voltage VSS. If an input signal on the true inverter input node 322T-I is a voltage of the ground voltage VSS, the true inverter circuit 318T is configured to generate an output signal on the true inverter output node 324T-O based on the positive supply voltage VDD. The complement inverter circuit 318C is configured to generate an output signal on its complement inverter output node 324C-O that has a logic value opposite of the output signal generated by the true inverter circuit 318T on the true inverter output node 324T-O.

[0044] The true inverter circuit 318T and the complement inverter circuit 318C are cross-coupled to each other by the true inverter input node 322T-I being coupled to the complement inverter output node 324C-O, and the complement inverter input node 322C-I being coupled to the true inverter output node 324T-O. The complement inverter input node 322C-I being coupled to the true inverter output node 324T-O forms a true storage node 326T.
The true inverter input node 322T-I being coupled to the complement inverter output node 324C-O forms a complement storage node 326C. The 6T SRAM bit cell circuit 304 enforces the voltage at the true storage node 326T to represent the complement of the logic value of the voltage at the complement storage node 326C. The cross-coupling of the true and complement inverter circuits 318T, 318C keeps the voltage on the true and complement storage nodes 326T, 326C reinforced for retention until a write operation occurs that changes the stored voltages on the true and complement storage nodes 326T, 326C.

[0045] To perform a multiplication operation of storage data in the storage circuit 316 in the 6T SRAM bit cell circuit 304 of the CIM circuit 300, the multiplication circuit 306 in the form of an XNOR circuit 328 in this example is configured to perform an XNOR logic operation like shown in Figure 2. The XNOR circuit 328 includes true and complement PFETs 330T, 330C that include respective gates G coupled to respective true and complement multiplication input nodes 332T, 332C. The gates G of the true and complement PFETs 330T, 330C are configured to receive respective input data signals represented as input data X and XB. The storage data at only one of the respective true and complement storage nodes 326T, 326C is passed by the respective PFETs 330T, 330C of the XNOR circuit 328 to the multiplication output node 314 at a time, because only one of the PFETs 330T, 330C will be active at a time based on the complementary voltages of the input data X and XB. The XNOR circuit 328 is configured to perform an XNOR operation between the respective input data XB, X and the respective storage data on the true and complement storage nodes 326T, 326C as additional respective multiplication input nodes to generate the multiplication output 312 on the multiplication output node 314. For example, the data stored on the true storage node 326T may be weight data that is multiplied by the input data XB by the multiplication circuit 306 for a machine-learning application. The multiplication operation in the CIM circuit 300 is similar to a read operation in a 6T SRAM memory bit cell in that the storage data stored at the true and complement storage nodes 326T, 326C is also discharged to the bit line BL and complement bit line BLB.
[0046] The CIM circuit 300 also includes a capacitor circuit 334 that is configured to store a charge to latch the multiplication output 312. The CIM circuit 300 also includes a pass gate 336 that acts as a selection device to control current flow in the CIM circuit 300 to perform a multiplication operation and to control the passing of the latched charge in the capacitor circuit 334 to the GBL 310. In this manner, a charge representing the multiplication output 312 that is passed to the GBL 310 can be accumulated with other charges representing multiplication outputs from other CIM circuits as dot products to provide a MAC operation.

[0047] To read data from the storage circuit 316 of the 6T SRAM bit cell circuit 304, a bit line driver circuit 338 is also provided. The bit line driver circuit 338 is configured to pre-charge the bit line BL and complement bit line BLB to complementary voltage levels based on the supply voltage VDD powering the SRAM bit cell circuit 304 to read data stored in the storage circuit 316 in a read operation. A word line (WL) coupled to the gates G of true and complement access circuits 340T, 340C (which are NFETs in this example, providing a total of six (6) transistors in the SRAM bit cell circuit 304) is asserted to evaluate the differential voltages on the true storage node 326T and complement storage node 326C. For example, if a voltage representing a logic value of ‘1’ is stored at the true storage node 326T, and a voltage representing a logic value of ‘0’ is stored at the complement storage node 326C, the PFET PT maintains the charge on the true storage node 326T. The true access circuit 340T passes the charge on the true storage node 326T to the bit line BL to represent a stored logic ‘1’ value in the true storage node 326T. The voltage representing a logic value of ‘1’ stored at the true storage node 326T causes the complement NFET NC to discharge the pre-charge voltage on the complement bit line BLB to represent a logic ‘0’ value in the complement storage node 326C.

[0048] To write data to the storage circuit 316 of the 6T SRAM bit cell circuit 304, the bit line driver circuit 338 is also configured to assert write data and complement write data as a write voltage and complement write voltage on the respective bit line BL and complement bit line BLB based on the supply voltage VDD powering the 6T SRAM bit cell circuit 304. This causes the write voltages on the bit line BL and complement bit line BLB to be passed to the respective true and complement storage nodes 326T, 326C through the activated true and complement access circuits 340T, 340C as written data.
[0049] The active or dynamic power dissipated by the CIM circuit 300 in its operation is a function of the voltage level of the supply voltage VDD on the positive supply voltage rail 320P, the voltage swings in pre-charging and discharging of the bit line BL and complement bit line BLB, and the capacitance of the bit line BL and complement bit line BLB. The voltage levels of the supply voltages VDD, VSS at the positive and negative supply voltage rails 320P, 320N affect the active power dissipated by the true and complement inverter circuits 318T, 318C in the 6T SRAM bit cell circuit 304 and the multiplication circuit 306 in a multiplication operation. The voltage levels of the supply voltages VDD, VSS at the positive and negative supply voltage rails 320P, 320N also determine the swing voltage level when pre-charging the bit line BL and complement bit line BLB for a write operation to the 6T SRAM bit cell circuit 304. The line capacitance of the bit line BL and complement bit line BLB, which increases as a function of their length, also affects dynamic power dissipation by the CIM circuit 300 for multiplication and write operations. An increased line capacitance of the bit line BL and complement bit line BLB results in an increase in charge time and thus results in an increase in dynamic power to pre-charge and discharge the bit line BL and complement bit line BLB. Dynamic power of the CIM circuit 300 is also consumed by the multiplication operation causing the stored voltage in the true and complement storage nodes 326T, 326C to be discharged to the bit line BL and complement bit line BLB, similar to a read operation in a conventional 6T SRAM memory bit cell.

[0050] The voltage levels of the supply voltages VDD, VSS at the positive and negative supply voltage rails 320P, 320N also affect standby (i.e., idle) power dissipated by the CIM circuit 300. The supply voltage VDD at the positive supply voltage rail 320P provides power to the true and complement inverter circuits 318T, 318C of the 6T SRAM bit cell circuit 304 during standby operation to reinforce the voltage as storage data at the true and complement storage nodes 326T, 326C in the storage circuit 316. The voltage levels of the supply voltages VDD, VSS at the positive and negative supply voltage rails 320P, 320N also affect the amount of leakage current in the true and complement access circuits 340T, 340C, thus affecting power dissipation of the 6T SRAM bit cell circuit 304.

[0051] It is desired to reduce the power consumption of the CIM circuit 300, particularly if employed in lower-power applications, such as mobile devices. For example, a memory array may contain a large number of CIM circuits 300. Thus, the active and standby power dissipated by a CIM circuit 300 is multiplied by the number of CIM circuits 300 present in a memory array.
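The scaling concern above lends itself to a small numerical sketch. The model below multiplies a per-cell dynamic and leakage estimate by the array size to show how per-cell savings compound; all constants are assumptions for illustration, not measured values from the exemplary circuits.

# Hedged sketch: array power ~ N_cells * (dynamic + leakage) per cell.
# alpha * C * V^2 * f is the textbook dynamic-power model; leakage is
# modeled as a fixed per-cell current at the supply voltage.

def cell_power(v_supply: float, f_hz: float,
               c_switched: float = 5e-15,      # assumed switched capacitance
               alpha: float = 0.1,             # assumed activity factor
               i_leak: float = 10e-12) -> float:  # assumed leakage current
    dynamic = alpha * c_switched * v_supply ** 2 * f_hz
    leakage = i_leak * v_supply
    return dynamic + leakage

N_CELLS = 512 * 512   # example array size
VDD = 0.9             # assumed full supply voltage (volts)

p_full = N_CELLS * cell_power(VDD, 1e9)
p_reduced = N_CELLS * cell_power(VDD / 2, 1e9)
print(f"array at VDD:   {p_full * 1e3:.1f} mW")
print(f"array at VDD/2: {p_reduced * 1e3:.1f} mW")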
[0052] In this regard, Figure 4 is a diagram of an exemplary CIM circuit 400 that is included in a CIM system 401. The CIM system 401 can include a plurality of other CIM circuits 400 (not shown). The CIM circuit 400 includes a memory bit cell circuit 402 that includes a 5T SRAM bit cell circuit 404, as opposed to the 6T SRAM bit cell circuit 304 in the CIM circuit 300 in Figure 3. The 5T SRAM bit cell circuit 404 is coupled to a multiplication circuit 406, which is an XOR-based circuit 408 in this example, to perform a multiplication operation of storage data in the 5T SRAM bit cell circuit 404 with input data provided to the multiplication circuit 406. The 5T SRAM bit cell circuit 404 eliminates a complement access circuit like the complement access circuit 340C present in the 6T SRAM bit cell circuit 304 in Figure 3. The 5T SRAM bit cell circuit 404 also eliminates a complement bit line like the complement bit line BLB present in the 6T SRAM bit cell circuit 304 in Figure 3. As discussed in more detail below, providing the 5T SRAM bit cell circuit 404 in this manner in the CIM circuit 400 facilitates operating the CIM circuit 400 to perform multiplication (i.e., read) and write operations while consuming less dynamic and standby power than the CIM circuit 300 in Figure 3.

[0053] In this regard, as illustrated in Figure 4, the 5T SRAM bit cell circuit 404 includes a storage circuit 416 that includes a true inverter circuit 418T and a complement inverter circuit 418C. The true inverter circuit 418T and complement inverter circuit 418C each include respective pull-up true and complement PFETs PT, PC and pull-down true and complement NFETs NT, NC for a total of four (4) transistors. The true PFET PT is coupled to a first positive supply voltage rail 420P(1) configured to receive a first supply voltage VDL(1). The complement PFET PC is coupled to a second positive supply voltage rail 420P(2) configured to receive a second supply voltage VDL(2). The second supply voltage VDL(2) may be equal to or based on the first supply voltage VDL(1) in idle/standby operation to retain the storage data in the true and complement storage nodes 426T, 426C. The true NFET NT is coupled to a first negative supply voltage rail 420N(1) configured to receive a first ground voltage VSL(1), which is a ground voltage in this example. The complement NFET NC is coupled to a second negative supply voltage rail 420N(2) configured to receive a second ground voltage VSL(2), which is a ground voltage in this example. The second ground voltage VSL(2) may be equal to or based on the first ground voltage VSL(1) in idle/standby operation to retain the storage data in the true and complement storage nodes 426T, 426C.
[0054] The true inverter circuit 418T has a true inverter input node 422T-I that is configured to receive an input signal to generate an output signal, on a true inverter output node 424T-O, of a logic value opposite that of the input signal. For example, if an input signal on the true inverter input node 422T-I is a voltage based on the first positive supply voltage VDL(1), the true inverter circuit 418T is configured to generate an output signal on the true inverter output node 424T-O based on the first ground voltage VSL(1). If an input signal on the true inverter input node 422T-I is a voltage based on the first ground voltage VSL(1), the true inverter circuit 418T is configured to generate an output signal on the true inverter output node 424T-O based on the first positive supply voltage VDL(1).

[0055] The complement inverter circuit 418C has a complement inverter input node 422C-I that is configured to receive an input signal to generate an output signal, on a complement inverter output node 424C-O, of a logic value opposite that of the input signal. For example, if an input signal on the complement inverter input node 422C-I is a voltage based on the second positive supply voltage VDL(2), the complement inverter circuit 418C is configured to generate an output signal on the complement inverter output node 424C-O based on the second ground voltage VSL(2). If an input signal on the complement inverter input node 422C-I is a voltage based on the second ground voltage VSL(2), the complement inverter circuit 418C is configured to generate an output signal on the complement inverter output node 424C-O based on the second positive supply voltage VDL(2).

[0056] The true inverter circuit 418T and the complement inverter circuit 418C are cross-coupled to each other by the true inverter input node 422T-I being coupled to the complement inverter output node 424C-O, and the complement inverter input node 422C-I being coupled to the true inverter output node 424T-O. The complement inverter input node 422C-I being coupled to the true inverter output node 424T-O forms a true storage node 426T. The true inverter input node 422T-I being coupled to the complement inverter output node 424C-O forms a complement storage node 426C. The voltage at the true storage node 426T is the complement of the voltage at the complement storage node 426C. The cross-coupling of the true and complement inverter circuits 418T, 418C keeps the voltage on the true and complement storage nodes 426T, 426C reinforced for retention until a write operation occurs that changes the stored voltages.
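A logic-level view of the cross-coupled storage described above can be captured in a few lines of Python. The class and method names below are invented for illustration; the sketch abstracts away all of the analog contention effects discussed in later paragraphs.

# Logic-level model of two cross-coupled inverters: each node is the
# complement of the other, so the pair holds a stable bit until something
# external forces a node to a new value.

class CrossCoupledLatch:
    def __init__(self, true_node: int = 0):
        self.true_node = true_node
        self.comp_node = 1 - true_node  # complement node is enforced

    def settle(self) -> None:
        """Re-evaluate both inverters; a consistent state is unchanged."""
        self.comp_node = 1 - self.true_node   # complement inverter
        self.true_node = 1 - self.comp_node   # true inverter

    def write_true(self, value: int) -> None:
        """Force the true storage node (as an access transistor would)."""
        self.true_node = value
        self.settle()

latch = CrossCoupledLatch()
latch.write_true(1)
print(latch.true_node, latch.comp_node)  # 1 0, retained until the next write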
[0057] To perform a multiplication operation of storage data in the storage circuit 416 in the 5T SRAM bit cell circuit 404 of the CIM circuit 400 in Figure 4, the multiplication circuit 406 in the form of an XNOR circuit 428 in this example is configured to perform an XNOR logic operation like shown in Figure 2. The XNOR circuit 428 includes true and complement PFETs 430T, 430C that include respective gates G coupled to respective true and complement multiplication input nodes 432T, 432C. Drains D of the true and complement PFETs 430T, 430C are coupled to the respective complement and true storage nodes 426C, 426T. Sources S of the true and complement PFETs 430T, 430C are coupled to a multiplication output node 414 of the XNOR circuit 428. The gates G of the true and complement PFETs 430T, 430C are configured to receive respective input signals represented as input data X and XB. The storage data at only one of the complement and true storage nodes 426C, 426T is passed by the respective true and complement PFETs 430T, 430C of the XNOR circuit 428 to the multiplication output node 414 at a time, because only one of the PFETs 430T, 430C will be active at a time based on the complementary voltages of the input data X and XB. The XNOR circuit 428 is configured to perform an XNOR logic operation between the respective input signals X, XB and the respective storage data on the complement and true storage nodes 426C, 426T as additional respective multiplication input nodes to generate a multiplication output 412 on the multiplication output node 414. For example, the data stored on the true storage node 426T may be weight data multiplied by the input data X for a machine-learning application. The multiplication operation performed by the CIM circuit 400 is similar to a read operation for an SRAM bit cell circuit in that the storage data on the true and complement storage nodes 426T, 426C is also discharged to the bit line BL.
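One way to see why the two PFETs implement an XNOR is to model them as complementary pass switches. The sketch below is a simplified behavioral abstraction of the circuit just described: the PFETs are idealized as switches that conduct when their gate is at logic ‘0’, and charge sharing and voltage drops are ignored.

# Behavioral model of the two-PFET XNOR multiplication circuit: each PFET
# conducts when its gate input is 0, passing one of the two complementary
# storage nodes to the shared multiplication output node.

def xnor_mult(stored_true: int, x: int) -> int:
    """Output of the XNOR circuit for one stored bit and input data X."""
    stored_comp = 1 - stored_true   # complement storage node 426C
    xb = 1 - x                      # complement input data XB
    if x == 0:                      # true PFET 430T (gate = X) conducts
        return stored_comp
    assert xb == 0                  # complement PFET 430C (gate = XB) conducts
    return stored_true

for stored in (0, 1):
    for x in (0, 1):
        out = xnor_mult(stored, x)
        assert out == 1 - (stored ^ x)   # matches XNOR(stored, X)
        print(f"stored={stored} X={x} -> output={out}")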
[0058] The CIM circuit 400 in Figure 4 also includes a capacitor circuit 434 that is configured to store a charge to latch the multiplication output 412. The capacitor circuit 434 could be a capacitor circuit that will eventually lose its charge in the absence of the CIM circuit 400 being powered. Alternatively, the capacitor circuit 434 could be a non-volatile (NV) capacitor circuit that is configured to retain charge even in the absence of power. In this manner, if the capacitor circuit 434 is provided as an NV capacitor circuit, the capacitor circuit 434 will retain the charge representing the multiplication output 412 of the CIM circuit 400 even through a power cycle of the CIM circuit 400. For example, the capacitor circuit 434 could be a ferroelectric capacitor that can store a charge by the polarization of a thin ferroelectric film by an external electric field and that remains polarized even with the external electric field removed. The CIM circuit 400 also includes a transmission gate 436 that acts as a selection device to control current flow in the CIM circuit 400 to perform a multiplication operation and to control passing of the latched charge in the capacitor circuit 434 to a GBL 410. In this manner, a charge representing the multiplication output 412 that is passed to the GBL 410 can be accumulated with other charges representing multiplication outputs from other CIM circuits to provide a MAC operation.

[0059] To read data from the storage circuit 416 of the 5T SRAM bit cell circuit 404, a bit line driver circuit 438 is also provided in the CIM system 401. The bit line driver circuit 438 is configured to pre-charge the bit line BL to a pre-charge voltage for a read operation. To perform a read operation, a pre-charge voltage is applied by the bit line driver circuit 438 to the bit line BL. A word line (WL) coupled to the gate G of an access circuit 440 (which is provided as an NFET in this example, providing a total of five (5) transistors in the SRAM bit cell circuit 404) is also asserted to activate the access circuit 440 to electrically couple the bit line BL to the true storage node 426T. For example, in a read operation, if a voltage representing a logic value of ‘1’ is stored at the true storage node 426T, and a voltage representing a logic value of ‘0’ is stored at the complement storage node 426C, the true PFET PT in the true inverter circuit 418T maintains the charge from the first supply voltage VDL(1) on the true storage node 426T that is passed to the bit line BL. If, however, a voltage representing a logic value of ‘0’ is stored at the true storage node 426T, a voltage representing a logic value of ‘1’ is stored at the complement storage node 426C and causes the true PFET PT to be turned off. The true NFET NT is turned on to pull the true storage node 426T and the bit line BL to the first negative supply voltage rail 420N(1). The access circuit 440 passes the charge of the true storage node 426T to the bit line BL for the read operation. The voltage level on the bit line BL can be sensed to determine the logic of the data value stored in the true storage node 426T of the 5T SRAM bit cell circuit 404.

[0060] To reduce dynamic power of the CIM circuit 400, the bit line driver circuit 438 in this example is configured to pre-charge the bit line BL in the 5T SRAM bit cell circuit 404 to a reduced pre-charge voltage VPRE. For example, the CIM system 401 may
be included in a processor-based system that includes other memory arrays that are powered by a memory domain supply voltage (e.g., VDD) in a memory domain. The bit line driver circuit 438 can be configured to pre-charge the bit line BL to a reduced pre-charge voltage VPRE of a lower voltage level than the memory domain supply voltage VDD for a write operation to the CIM circuit 400. The pre-charge voltage VPRE may be half the memory domain supply voltage VDD (e.g., VDD/2) as an example. By reducing the pre-charge voltage VPRE on the bit line BL, dynamic power is reduced in the CIM circuit 400 for write operations. Also, the pre-charge time to pre-charge the bit line BL to the pre-charge voltage VPRE in the CIM circuit 400 is reduced as compared to pre-charging the bit line BL based on the voltage of the memory domain supply voltage VDD. This increases the speed of the pre-charge to provide for faster write operations and thus reduced dynamic power in write operations. The voltage swings in pre-charging the bit line BL for read operations are also reduced, thereby further reducing dynamic power dissipated in the CIM circuit 400 for read operations.

[0061] Providing the SRAM bit cell circuit in the CIM circuit 400 as the 5T SRAM bit cell circuit 404 can allow the pre-charge voltage VPRE asserted on the bit line BL to be reduced for a read operation without causing a read disturbance issue between the storage circuit 416 and the access circuit 440 of the 5T SRAM bit cell circuit 404. This is because a complement access circuit, like the complement access circuit 340C present in the 6T SRAM bit cell circuit 304 in Figure 3 for example, is not present in the 5T SRAM bit cell circuit 404. If a complement access circuit coupled to the complement storage node 426C were included in the 5T SRAM bit cell circuit 404, a reduced pre-charge voltage VPRE asserted by the bit line driver circuit 438 on the bit line BL for a read operation could weaken the pull-down complement NFET NC in the complement inverter circuit 418C. In this case, the pull-down complement NFET NC may not discharge the pre-charge voltage passed by a complement access circuit from a complement bit line fast enough to prevent a charge build-up from occurring on the complement storage node 426C from the reduced pre-charge voltage. This could then cause the true PFET PT to be turned off and cause a voltage flip on the true storage node 426T in a read operation. This could thus cause a voltage flip on the complement storage node 426C in the read operation.

[0062] Thus, by providing the SRAM bit cell circuit in the CIM circuit 400 as the 5T SRAM bit cell circuit 404, the pre-charge voltage VPRE asserted on the bit line BL for a
read operation can be reduced without increasing the likelihood of a read disturbance issue. Also, eliminating the complement bit line and complement access circuit in the 5T SRAM bit cell circuit 404 can also reduce standby and dynamic power of the CIM circuit 400. The bit line BL and complement bit line BLB are provided in the 6T SRAM bit cell 304 in Figure 3 to provide differential voltages between the bit line BL and complement bit line BLB to accomplish a high read sensitivity for a read operation that may not be required in the CIM circuit 400 in Figure 4. Highly-accurate read sensing may not be as important in applications that employ the CIM circuit 400 as in memory applications employing the 6T SRAM bit cell circuit 304 in Figure 3, for example.

[0063] Also, reducing the pre-charge voltage VPRE asserted by the bit line driver circuit 438 on the bit line BL coupled to the 5T SRAM bit cell circuit 404 of the CIM circuit 400 in a read operation increases the voltage margin between the true storage node 426T and the supply voltage VDL(1) powering the SRAM bit cell circuit 404. The pre-charge voltage VPRE can be less than the positive supply voltage VDL(1). Thus, by employing a reduced bit line pre-charge voltage VPRE for read operations to the CIM circuit 400, there is a voltage margin available. This voltage margin allows the positive supply voltages VDL(1), VDL(2) supplied to the first and second positive supply voltage rails 420P(1), 420P(2) of the 5T SRAM bit cell circuit 404 of the CIM circuit 400 to be reduced without increasing the likelihood of a read disturbance in its 5T SRAM bit cell circuit 404. For example, the supply voltages VDL(1), VDL(2) may be half (VDD/2) of the memory domain supply voltage VDD. Reducing the supply voltage of the 5T SRAM bit cell circuit 404 also reduces the standby (leakage) and dynamic power of the 5T SRAM bit cell circuit 404, and thus the CIM circuit 400. The reduced positive supply voltages VDL(1), VDL(2) can still allow the true storage node 426T in the 5T SRAM bit cell 404 storing a logic value ‘0’ to be discharged fast enough to the first negative supply voltage rail 420N(1) in a read operation without a charge build-up on the true storage node 426T to avoid or reduce a read disturbance condition.

[0064] However, with reduced positive supply voltages VDL(1), VDL(2) supplied to the 5T SRAM bit cell circuit 404 of the CIM circuit 400, where the 5T SRAM bit cell circuit 404 has no complement access transistor, writing data into the 5T SRAM bit cell circuit 404 may be difficult. A write contention issue may be present in the storage circuit 416 of the 5T SRAM bit cell circuit 404 for a write operation because of contention between
a weaker NFET of the access circuit 440 and a stronger pull-down NFET NT of the true inverter circuit 418T in this example. Thus, in further exemplary aspects, the second supply voltages VDL(2), VSL(2) supplied to the complement inverter circuit 418C of the 5T SRAM bit cell circuit 404 can be boosted in a write operation to provide write assist to avoid or reduce the risk of write contention in the 5T SRAM bit cell circuit 404 of the CIM circuit 400. Boosting the second supply voltages VDL(2), VSL(2) supplied to the complement inverter circuit 418C of the 5T SRAM bit cell circuit 404 can provide a write assist by assisting in flipping the state of the complement inverter circuit 418C in the 5T SRAM bit cell circuit 404 in a write operation. The word line WL coupled to the 5T SRAM bit cell circuit 404 can also be boosted in a write operation.

[0065] If the CIM circuit 400 is employed in a machine-learning application, the 5T SRAM bit cell circuit 404 of the CIM circuit 400 may perform more read operations than write operations. Thus, boosting the second supply voltages VDL(2), VSL(2) of the complement inverter circuit 418C in a write operation to the CIM circuit 400 may not have a significant impact on overall dynamic power consumption of the CIM circuit 400. Boosting the second supply voltage VDL(2) can involve increasing the voltage of the second supply voltage VDL(2) at the second positive supply voltage rail 420P(2). Boosting the second supply voltage VSL(2) can involve decreasing or lowering the voltage at the second negative supply voltage rail 420N(2). Also, if desired, the supply voltage supplied to the 5T SRAM bit cell circuit 404 in the CIM circuit 400 can also optionally be boosted in a read operation to provide read assist to the 5T SRAM bit cell circuit 404. Providing read assist can make a read operation to the 5T SRAM bit cell circuit 404 faster, thus expending less dynamic power in a read operation.

[0066] Data can also be written to the storage circuit 416 of the 5T SRAM bit cell circuit 404 of the CIM circuit 400 that is used for the multiplication operation. In a write operation, the bit line driver circuit 438 asserts a write voltage on the bit line BL to represent the logic value of data to be written to the true storage node 426T. A word line (WL) coupled to the gate G of the single access circuit 440 is asserted to activate the access circuit 440 to pass the write data from the bit line BL to the true storage node 426T. The write voltage on the true storage node 426T causes the complement inverter circuit 418C to store a complement voltage to the write voltage on the complement storage node 426C. However, with lower positive supply voltages VDL(1), VDL(2) supplied to the 5T
SRAM bit cell circuit 404 of the CIM circuit 400 in Figure 4, a write contention issue may occur in the complement inverter circuit 418C in the storage circuit 416 for a write operation. For example, in a write operation, if a logic ‘1’ is stored in the true storage node 426T and the write data asserted on the bit line BL to be written to the true storage node 426T is a logic ‘0’, the access circuit 440 discharges the true storage node 426T to the bit line BL to write a logic ‘0’ to the true storage node 426T. The access circuit 440 is capable of passing a strong logic ‘0’ as an NFET in this example. However, the logic ‘0’ stored in the complement storage node 426C at the start of the write operation can cause the true PFET PT, which remains strongly turned on, to overcome the drive strength of the access circuit 440 to charge the true storage node 426T to the supply voltage VDL(1) (i.e., a logic ‘1’), thus causing a write contention on the true storage node 426T. This in turn can cause a write contention issue on the complement storage node 426C.

[0067] Thus, in this example, the positive and negative supply voltages VDL(2), VSL(2) supplied to the complement inverter circuit 418C in the 5T SRAM bit cell circuit 404 can be boosted in a write operation to provide write assist to avoid or reduce the risk of write contention in the 5T SRAM bit cell circuit 404. This is shown by example in the write timing diagram 500 in Figure 5. As shown therein, for a write operation, the bit line BL is pre-charged to a write voltage VPRE to achieve a logic ‘0’ or ‘1’ write operation. The word line WL is asserted to activate the access circuit 440. The supply voltages on the respective second positive and negative supply voltage rails 420P(2), 420N(2) powering the complement inverter circuit 418C are initially the lower supply voltages VDL(2), VSL(2) that are coupled to the first positive supply voltage rail 420P(1) and first negative supply voltage rail 420N(1) powering the true inverter circuit 418T. However, as shown in Figure 5, the supply voltages VDL(2), VSL(2) on the respective second positive and negative supply voltage rails 420P(2), 420N(2) can be positively and negatively boosted, respectively, to positive (i.e., greater in voltage) and negative (i.e., lesser in voltage) boosted supply voltages VDH(2), VSH(2), respectively, in response to the falling edge of the word line WL. The supply voltages on the respective second positive and negative supply voltage rails 420P(2), 420N(2) are boosted (increased and reduced in voltage, respectively) over the respective lower supply voltages VDL(2), VSL(2) in a write operation to provide a write assist to the complement inverter circuit 418C to avoid a write contention issue on the true storage node 426T.
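The contention argument can be illustrated with a simple square-law drive-strength comparison. The sketch below is a rough, assumption-laden proxy (the threshold voltage, strength ratio, and supply values are invented placeholders); it models the word-line boost mentioned above, and the rail boost of Figure 5 improves the write margin analogously by helping flip the complement inverter circuit 418C.

# Hedged square-law sketch of the write-contention argument: at a reduced
# cell supply the access NFET may not overpower the true PFET PT, and a
# boost restores the margin. All device parameters are assumed.

def drive(v_gs: float, v_t: float = 0.3, k: float = 1.0) -> float:
    """Saturation drive current ~ k * (Vgs - Vt)^2 (simple square-law)."""
    return k * max(v_gs - v_t, 0.0) ** 2

VDL1 = 0.45                      # reduced supply powering the true inverter
PT_DRIVE = drive(VDL1, k=2.0)    # PT assumed stronger, holding the node high

for wl in (VDL1, 0.6):           # word line at the low supply, then boosted
    access = drive(wl)           # access NFET pulling node 426T low
    verdict = "write succeeds" if access > PT_DRIVE else "write contention"
    print(f"WL={wl:.2f} V: access drive={access:.3f}, "
          f"PT drive={PT_DRIVE:.3f} -> {verdict}")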
[0068] Additional dynamic power is expended in the voltage boost, but otherwise, during standby times and when multiplication operations are performed, the lower supply voltages VDL(2), VSL(2) can be used to power the CIM circuit 400. A machine-learning application may involve many more read operations than write operations to the CIM circuit 400. Thus, boosting the supply voltages VDL(2), VSL(2) to the boosted supply voltages VDH(2), VSH(2) to the complement inverter circuit 418C in the 5T SRAM bit cell circuit 404 of the CIM circuit 400 for a write operation may not have a significant impact on the overall power consumption in the CIM circuit 400.

[0069] If desired, the supply voltages VDL(2), VSL(2) supplied to the complement inverter circuit 418C in the 5T SRAM bit cell circuit 404 in the CIM circuit 400 can also optionally be boosted in a read (multiplication) operation to provide read assist to the 5T SRAM bit cell circuit 404. Providing read assist can make a read operation to the 5T SRAM bit cell circuit 404 faster, thus expending less dynamic power in a multiplication operation in the CIM circuit 400. This is shown by example in the read timing diagram 502 in Figure 6. As shown therein, for a read (i.e., multiplication) operation, the word line WL is asserted to activate the access circuit 440 of the desired CIM circuit 400. The supply voltages on the second positive and negative supply voltage rails 420P(2), 420N(2) powering the complement inverter circuit 418C are initially the lower supply voltages VDL(2), VSL(2) that are coupled to the respective first positive supply voltage rail 420P(1) and first negative supply voltage rail 420N(1) powering the true inverter circuit 418T in this example. However, as shown in Figure 6, the supply voltages on the second positive and negative supply voltage rails 420P(2), 420N(2) can be positively and negatively boosted to boosted supply voltages VDH(2), VSH(2), respectively, in response to the rising edge of the assertion of the word line WL. In this regard, the supply voltage on the second positive supply voltage rail 420P(2) can be positively boosted (i.e., increased in voltage) to the boosted supply voltage VDH(2) to exceed (i.e., be greater than) the supply voltage VDL(1) in this example. The supply voltage on the second negative supply voltage rail 420N(2) can be negatively boosted (i.e., reduced in voltage) to the boosted supply voltage VSH(2) to negatively exceed (i.e., be lower than) the supply voltage VSL(1) in this example. The supply voltages on the second positive supply voltage rail 420P(2) and second negative supply voltage rail 420N(2) can be positively boosted (i.e., increased in voltage) and
negatively boosted (i.e., reduced in voltage) to the boosted supply voltages VDH(2), VSH(2), respectively, in response to the rising edge of the word line WL to provide a read assist.

[0070] As discussed above, the CIM system 401 can include a plurality of the CIM circuits 400 to provide a memory array. In this regard, Figure 7 illustrates the CIM system 401 that includes a CIM array 700 that includes a plurality of CIM column array circuits 702(1)-702(X). Each respective CIM column array circuit 702(1)-702(X) includes a plurality of CIM circuits 400. For example, the CIM array 700 includes CIM circuits 400(1)(1)-400(Y)(X), where ‘X’ is the number of CIM column array circuits 702(1)-702(X), and ‘Y’ is the row of the CIM circuit 400 in a given CIM column array circuit 702(1)-702(X). As shown in Figure 7, each CIM column array circuit 702(1)-702(X) has its own dedicated bit line driver circuit 438(1)-438(X) configured to drive its respective bit line BL1-BLX.

[0071] As shown in Figure 7, in another example, to reduce the line capacitance of the bit lines BL1-BLX coupled to the CIM circuits 400(1)()-400(Y)() in a given CIM column array circuit 702(1)-702(X), the bit line driver circuits 438(1)-438(X) that pre-charge the respective bit line BL1-BLX for a write operation can be physically located between the end CIM circuits 400(1)(), 400(Y)() in their respective CIM column array circuit 702(1)-702(X). For example, the bit line driver circuits 438(1)-438(X) can be physically located in the middle of each respective CIM column array circuit 702(1)-702(X) to reduce the distance between the bit line driver circuit 438(1)-438(X) and the farthest away, end CIM circuits 400(1)(), 400(Y)() in their respective CIM column array circuit 702(1)-702(X). As an example, if ‘Y’ is equal to 512, a bit line BL can be provided to a first half of the CIM circuits 400(1)()-400(Y/2)() in a CIM column array circuit 702(1)-702(X) and driven by the respective bit line driver circuit 438(1)-438(X), and a second bit line BLB can be driven by the respective bit line driver circuit 438(1)-438(X) to a second half of the CIM circuits 400(Y/2+1)()-400(Y)(). In this manner, the two bit lines BL and BLB each have a length that is reduced by approximately half versus a single bit line coupled to all of the CIM circuits 400(1)(), 400(Y)() in the given CIM column array circuit 702(1)-702(X). This allows the length of the bit lines BL, BLB driven by the bit line driver circuits 438(1)-438(X) to be reduced, thus reducing the line capacitance of the respective bit lines BL, BLB. Reducing the line capacitance in the bit lines BL, BLB reduces the time to pre-charge the bit lines BL, BLB to the pre-charge
voltage, thus reducing dynamic power expended to pre-charge the bit lines BL, BLB for the CIM circuits 400(1)(1), 400(Y)(X).

[0072] In another exemplary aspect, as also shown in Figure 7, a global bit line (GBL) driver circuit 704(1)-704(X) is provided and used to pre-charge the GBLs 410(1)-410(X) for the respective CIM column array circuit 702(1)-702(X). The GBL driver circuits 704(1)-704(X) are each configured to drive a pre-charge voltage on respective YL lines in each CIM column array circuit 702(1)-702(X), wherein each YL line is coupled to the transmission gates 436 of the CIM circuits 400 in a respective CIM column array circuit 702(1)-702(X). The GBLs 410(1)-410(X) are pre-charged to activate the transmission gates 436 of the CIM circuits 400 in a respective CIM column array circuit 702(1)-702(X) to perform a MAC operation in the respective CIM column array circuit 702(1)-702(X). In this manner, the CIM circuits 400(1)(), 400(Y)() for a given CIM column array circuit 702(1)-702(X) can be activated to perform their multiplications and assert their respective multiplication outputs 412 on their respective GBL 410(1)-410(X) to be accumulated. The GBL driver circuits 704(1)-704(X) can be configured to pre-charge their respective GBLs 410(1)-410(X) at a reduced supply voltage (e.g., VDD/2) to further reduce dynamic power if desired. Column select lines Y1-YX are also provided for each CIM column array circuit 702(1)-702(X) and coupled to the transmission gates 436(1)()-436(Y)() in a given CIM circuit 400(1)(), 400(Y)() to activate the CIM column array circuit 702(1)-702(X) for a MAC operation.

[0073] Also, as shown in Figure 8, by the CIM circuits 400(1)(1), 400(Y)(X) not having a complement bit line for their respective SRAM bit cell circuits 404, there is more room in the X-axis direction to provide room for the column select lines Y1-YX in each CIM column array circuit 702(1)-702(X). This is as compared to the line layout 800 if the 6T SRAM bit cell circuit 304 in Figure 3 were employed in the CIM column array circuit 702(1)-702(X).
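To connect the column organization above back to the accumulate step of the MAC operation, the sketch below models a global bit line as an accumulator summing the XNOR outputs of one column of cells. This is a behavioral abstraction with invented names; on an actual GBL the accumulation is an analog charge, not an integer sum.

# Behavioral model of a MAC on one CIM column: each cell contributes the
# XNOR of its stored weight bit and the input bit, and the global bit line
# accumulates the contributions. The bipolar view yields the dot product.

def xnor(a: int, b: int) -> int:
    return 1 - (a ^ b)

def column_mac(weights: list, inputs: list) -> int:
    """Count of matching (weight, input) pairs accumulated on the GBL."""
    return sum(xnor(w, x) for w, x in zip(weights, inputs))

weights = [1, 0, 1, 1, 0, 0, 1, 0]   # bits stored in the column's cells
inputs  = [1, 1, 1, 0, 0, 1, 1, 0]   # input data X applied to each cell

matches = column_mac(weights, inputs)
n = len(weights)
dot = 2 * matches - n   # bipolar dot product: matches minus mismatches
print(f"accumulated matches on GBL: {matches} of {n}; bipolar dot = {dot}")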
[0074] Figure 9 is a diagram of another exemplary CIM circuit 900 like the CIM circuit 400 in Figure 4. The CIM circuit 900 can be provided in a CIM system 901 like that described in Figures 4 and 7. In the CIM circuit 900 in Figure 9, a multiplication circuit 906 is provided similar to the multiplication circuit 406 in Figure 4. Common components between the CIM circuit 900 in Figure 9 and the CIM circuit 400 in Figure 4 are shown with the same element numbers, and will not be re-described. However, in Figure 9, the gates G of the true and complement PFETs 430T, 430C are coupled to the respective complement and true storage nodes 426C, 426T. The drains D of the true and complement PFETs 430T, 430C are coupled to the true and complement multiplication input nodes 432T, 432C.

[0075] To perform a multiplication operation of storage data in the storage circuit 416 in the 5T SRAM bit cell circuit 404 in the CIM circuit 900 in Figure 9, an XOR-based circuit 908 in the form of an XNOR circuit 928 in this example is configured to perform an XNOR operation. The true and complement multiplication input nodes 432T, 432C of the true and complement PFETs 430T, 430C are configured to receive respective input signals represented as input data X and XB. The gates G of the true and complement PFETs 430T, 430C are configured to receive the true and complement storage data on the true and complement storage nodes 426T, 426C. The storage data at only one of the true and complement storage nodes 426T, 426C is passed by the respective PFETs 430T, 430C of the XNOR circuit 928 to a multiplication output node 914 at a time, because only one of the PFETs 430T, 430C will be active at a time based on the complementary voltages of the true and complement storage nodes 426T, 426C. The XNOR circuit 928 is configured to perform an XNOR operation between the input signals X and XB and the storage data on the true and complement storage nodes 426T, 426C to generate the multiplication output 912 on the multiplication output node 914.

[0076] Note that when a first coupling is referenced to a source/drain of a FET and a second coupling is referenced to a drain/source of the same FET, this means that either a source is involved in the first coupling, and the drain is involved in the second coupling, or the drain is involved in the first coupling, and the source is involved in the second coupling.

[0077] A CIM system that includes one or more CIM circuits that each include a 5T SRAM bit cell circuit configured to be coupled to a single bit line for a read and write operation and configured to be operated at a reduced supply voltage to reduce standby and dynamic power, and further include an XOR-based circuit for generating a multiplication output representing a multiplication operation of a read stored data value as storage data in the SRAM bit cell circuit with an input data value, including, but not limited to, the CIM circuits in Figures 4 and 7-9 and according to aspects disclosed herein, may be provided in or integrated into any processor-based device. Examples,
without limitation, include a set top box, an entertainment unit, a navigation device, a communications device, a fixed location data unit, a mobile location data unit, a global positioning system (GPS) device, a mobile phone, a cellular phone, a smart phone, a session initiation protocol (SIP) phone, a tablet, a phablet, a server, a computer, a portable computer, a mobile computing device, a wearable computing device (e.g., a smart watch, a health or fitness tracker, eyewear, etc.), a desktop computer, a personal digital assistant (PDA), a monitor, a computer monitor, a television, a tuner, a radio, a satellite radio, a music player, a digital music player, a portable music player, a digital video player, a video player, a digital video disc (DVD) player, a portable digital video player, an automobile, a vehicle component, avionics systems, a drone, and a multicopter.

[0078] In this regard, Figure 10 illustrates an example of a processor-based system 1000 that includes a CIM system that includes one or more CIM circuits that each include a 5T SRAM bit cell circuit configured to be coupled to a single bit line for a read and write operation and configured to be operated at a reduced supply voltage to reduce standby and dynamic power, and further include an XOR-based circuit for generating a multiplication output representing a multiplication operation of a read stored data value as storage data in the SRAM bit cell circuit with an input data value, including, but not limited to, the CIM circuits in Figures 4 and 7-9 and according to any aspects disclosed herein. In this example, the processor-based system 1000 may be formed as an integrated circuit (IC) 1004 as a system-on-a-chip (SoC) 1006. The processor-based system 1000 includes a central processing unit (CPU) 1008 that includes one or more processors 1010, which may also be referred to as CPU cores or processor cores. The CPU 1008 may have a cache memory 1012 coupled to the CPU 1008 for rapid access to temporarily stored data. The cache memory 1012 may include a CIM system that includes one or more CIM circuits that each include a 5T SRAM bit cell circuit configured to be coupled to a single bit line for a read and write operation and configured to be operated at a reduced supply voltage to reduce standby and dynamic power, and further include an XOR-based circuit for generating a multiplication output representing a multiplication operation of a read stored data value as storage data in the SRAM bit cell circuit with an input data value, including, but not limited to, the CIM circuits in Figures 4 and 7-9, and according to any aspects disclosed herein.
[0079] The CPU 1008 is coupled to a system bus 1014 that can intercouple master and slave devices included in the processor-based system 1000. As is well known, the CPU 1008 communicates with these other devices by exchanging address, control, and data information over the system bus 1014. For example, the CPU 1008 can communicate bus transaction requests to a memory controller 1016 as an example of a slave device. Although not illustrated in Figure 10, multiple system buses 1014 could be provided, wherein each system bus 1014 constitutes a different fabric.

[0080] Other master and slave devices can be connected to the system bus 1014. As illustrated in Figure 10, these devices can include a memory system 1020 that includes the memory controller 1016 and a memory array(s) 1018, one or more input devices 1022, one or more output devices 1024, one or more network interface devices 1026, and one or more display controllers 1028, as examples. Each of the memory system 1020, the one or more input devices 1022, the one or more output devices 1024, the one or more network interface devices 1026, and the one or more display controllers 1028 may include a CIM system that includes one or more CIM circuits that each include a 5T SRAM bit cell circuit configured to be coupled to a single bit line for a read and write operation and configured to be operated at a reduced supply voltage to reduce standby and dynamic power, and further include an XOR-based circuit for generating a multiplication output representing a multiplication operation of a read stored data value as storage data in the SRAM bit cell circuit with an input data value, including, but not limited to, the CIM circuits in Figures 4 and 7-9 and according to any aspects disclosed herein.

[0081] The input device(s) 1022 can include any type of input device, including, but not limited to, input keys, switches, voice processors, etc. The output device(s) 1024 can include any type of output device, including, but not limited to, audio, video, other visual indicators, etc. The network interface device(s) 1026 can be any device configured to allow exchange of data to and from a network 1030. The network 1030 can be any type of network, including, but not limited to, a wired or wireless network, a private or public network, a local area network (LAN), a wireless local area network (WLAN), a wide area network (WAN), a BLUETOOTH™ network, and the Internet. The network interface device(s) 1026 can be configured to support any type of communications protocol desired.

[0082] The CPU 1008 may also be configured to access the display controller(s) 1028 over the system bus 1014 to control information sent to one or more displays 1032.
The display controller(s) 1028 sends information to the display(s) 1032 to be displayed via one or more video processors 1034, which process the information to be displayed into a format suitable for the display(s) 1032. The display(s) 1032 can include any type of display, including, but not limited to, a cathode ray tube (CRT), a liquid crystal display (LCD), a plasma display, a light emitting diode (LED) display, etc. The display controller(s) 1028, video processor(s) 1034, and display(s) 1032 can include a CIM system that includes one or more CIM circuits that each include a 5T SRAM bit cell circuit configured to be coupled to a single bit line for a read and write operation and configured to be operated at a reduced supply voltage to reduce standby and dynamic power, and further include an XOR-based circuit for generating a multiplication output representing a multiplication operation of a read stored data value as storage data in the SRAM bit cell circuit with an input data value, including, but not limited to, the CIM circuits in Figures 4 and 7-9 and according to any aspects disclosed herein.

[0083] Figure 11 illustrates an exemplary wireless communications device 1100 that includes radio frequency (RF) components formed from one or more ICs 1102, wherein any of the ICs 1102 can include a CIM system that includes one or more CIM circuits that each include a 5T SRAM bit cell circuit configured to be coupled to a single bit line for a read and write operation and configured to be operated at a reduced supply voltage to reduce standby and dynamic power, and further include an XOR-based circuit for generating a multiplication output representing a multiplication operation of a read stored data value as storage data in the SRAM bit cell circuit with an input data value, including, but not limited to, the CIM circuits in Figures 4 and 7-9, and according to any aspects disclosed herein.

[0084] As shown in Figure 11, the wireless communications device 1100 includes a transceiver 1104 and a data processor 1106. The data processor 1106 may include a memory to store data and program codes. The transceiver 1104 includes a transmitter 1108 and a receiver 1110 that support bi-directional communications. In general, the wireless communications device 1100 may include any number of transmitters 1108 and/or receivers 1110 for any number of communication systems and frequency bands. All or a portion of the transceiver 1104 may be implemented on one or more analog ICs, RF ICs (RFICs), mixed-signal ICs, etc.
[0085] The transmitter 1108 or the receiver 1110 may be implemented with a super-heterodyne architecture or a direct-conversion architecture. In the super-heterodyne architecture, a signal is frequency-converted between RF and baseband in multiple stages, e.g., from RF to an intermediate frequency (IF) in one stage, and then from IF to baseband in another stage. In the direct-conversion architecture, a signal is frequency-converted between RF and baseband in one stage. The super-heterodyne and direct-conversion architectures may use different circuit blocks and/or have different requirements. In the wireless communications device 1100 in Figure 11, the transmitter 1108 and the receiver 1110 are implemented with the direct-conversion architecture.

[0086] In the transmit path, the data processor 1106 processes data to be transmitted and provides I and Q analog output signals to the transmitter 1108. In the exemplary wireless communications device 1100, the data processor 1106 includes digital-to-analog converters (DACs) 1112(1), 1112(2) for converting digital signals generated by the data processor 1106 into I and Q analog output signals, e.g., I and Q output currents, for further processing.

[0087] Within the transmitter 1108, lowpass filters 1114(1), 1114(2) filter the I and Q analog output signals, respectively, to remove undesired signals caused by the prior digital-to-analog conversion. Amplifiers (AMPs) 1116(1), 1116(2) amplify the signals from the lowpass filters 1114(1), 1114(2), respectively, and provide I and Q baseband signals. An upconverter 1118 upconverts the I and Q baseband signals with I and Q transmit (TX) local oscillator (LO) signals from a TX LO signal generator 1122 through mixers 1120(1), 1120(2) to provide an upconverted signal 1124. A filter 1126 filters the upconverted signal 1124 to remove undesired signals caused by the frequency upconversion as well as noise in a receive frequency band. A power amplifier (PA) 1128 amplifies the upconverted signal 1124 from the filter 1126 to obtain the desired output power level and provides a transmit RF signal. The transmit RF signal is routed through a duplexer or switch 1130 and transmitted via an antenna 1132.

[0088] In the receive path, the antenna 1132 receives signals transmitted by base stations and provides a received RF signal, which is routed through the duplexer or switch 1130 and provided to a low noise amplifier (LNA) 1134. The duplexer or switch 1130 is designed to operate with a specific receive (RX)-to-TX duplexer frequency separation, such that RX signals are isolated from TX signals. The received RF signal is amplified by the LNA 1134 and filtered by a filter 1136 to obtain a desired RF input signal. Downconversion mixers 1138(1), 1138(2) mix the output of the filter 1136 with I and Q RX LO signals (i.e., LO_I and LO_Q) from an RX LO signal generator 1140 to generate I and Q baseband signals. The I and Q baseband signals are amplified by AMPs 1142(1), 1142(2) and further filtered by lowpass filters 1144(1), 1144(2) to obtain I and Q analog input signals, which are provided to the data processor 1106. In this example, the data processor 1106 includes analog-to-digital converters (ADCs) 1146(1), 1146(2) for converting the analog input signals into digital signals to be further processed by the data processor 1106.
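As a numerical aside, the I/Q mixing in the transmit and receive paths described above can be sanity-checked with a few lines of math. The sketch below is illustrative only, with ideal mixers and no filters or noise: it upconverts an I/Q baseband pair with quadrature LO signals and recovers the pair by downconversion followed by the lowpass step that filters such as the lowpass filters 1144(1), 1144(2) would perform.

# Ideal direct-conversion round trip: upconvert I/Q with quadrature LOs,
# then downconvert and lowpass (here, averaging removes the 2*f_lo terms).
# Purely a math illustration of the mixing described in the text.

import math

F_LO = 1.0e6          # assumed LO frequency (Hz)
FS = 64e6             # sample rate, comfortably above 2 * F_LO
N = int(FS / F_LO)    # samples per LO period (integer for clean averaging)

I_BB, Q_BB = 0.7, -0.3   # constant baseband I/Q values for the test

t = [n / FS for n in range(N)]
rf = [I_BB * math.cos(2 * math.pi * F_LO * tt)
      - Q_BB * math.sin(2 * math.pi * F_LO * tt) for tt in t]  # upconverted

# Downconversion mixers followed by an ideal lowpass (mean over one period):
i_rec = 2 * sum(r * math.cos(2 * math.pi * F_LO * tt)
                for r, tt in zip(rf, t)) / N
q_rec = -2 * sum(r * math.sin(2 * math.pi * F_LO * tt)
                 for r, tt in zip(rf, t)) / N

print(f"recovered I={i_rec:.3f} (sent {I_BB}), Q={q_rec:.3f} (sent {Q_BB})")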
[0089] In the wireless communications device 1100 of Figure 11, the TX LO signal generator 1122 generates the I and Q TX LO signals used for frequency upconversion, while the RX LO signal generator 1140 generates the I and Q RX LO signals used for frequency downconversion. Each LO signal is a periodic signal with a particular fundamental frequency. A TX phase-locked loop (PLL) circuit 1148 receives timing information from the data processor 1106 and generates a control signal used to adjust the frequency and/or phase of the TX LO signals from the TX LO signal generator 1122. Similarly, an RX PLL circuit 1150 receives timing information from the data processor 1106 and generates a control signal used to adjust the frequency and/or phase of the RX LO signals from the RX LO signal generator 1140.

[0090] Those of skill in the art will further appreciate that the various illustrative logical blocks, modules, circuits, and algorithms described in connection with the aspects disclosed herein may be implemented as electronic hardware, instructions stored in memory or in another computer readable medium and executed by a processor or other processing device, or combinations of both. The master and slave devices described herein may be employed in any circuit, hardware component, IC, or IC chip, as examples. Memory disclosed herein may be any type and size of memory and may be configured to store any type of information desired. To clearly illustrate this interchangeability, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. How such functionality is implemented depends upon the particular application, design choices, and/or design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying
ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.[0091] The various illustrative logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented or performed with a processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration).[0092] The aspects disclosed herein may be embodied in hardware and in instructions that are stored in hardware, and may reside, for example, in Random Access Memory (RAM), flash memory, Read Only Memory (ROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), registers, a hard disk, a removable disk, a CD-ROM, or any other form of computer readable medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a remote station. In the alternative, the processor and the storage medium may reside as discrete components in a remote station, base station, or server.[0093] It is also noted that the operational steps described in any of the exemplary aspects herein are described to provide examples and discussion. The operations described may be performed in numerous different sequences other than the illustrated sequences. Furthermore, operations described in a single operational step may actually be performed in a number of different steps. Additionally, one or more operational steps discussed in the exemplary aspects may be combined. It is to be understood that the operational steps illustrated in the flowchart diagrams may be subject to numerous different modifications as will be readily apparent to one of skill in the art. Those of skill
in the art will also understand that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.[0094] The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations. Thus, the disclosure is not intended to be limited to the examples and designs described herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein. |
The invention relates to technologies for adjusting a perspective of a captured image for display on a mobile computing device. The technologies include capturing a first image of a user by a first camera and a second image of a real-world environment by a second camera. The mobile computing device determines a position of an eye of the user relative to the mobile computing device based on the first captured image and a distance of an object in the real-world environment from the mobile computing device based on the second captured image. The mobile computing device generates a back projection of the real-world environment captured by the second camera to the display based on the determined distance of the object in the real-world environment relative to the mobile computing device, the determined position of the user's eye relative to the mobile computing device, and at least one device parameter of the mobile computing device. |
1. A mobile computing device for adjusting a viewing angle of a captured image for display, the mobile computing device comprising: a display; a camera system comprising a first camera and a second camera, the camera system to (i) capture a first image of a user of the mobile computing device using the first camera and (ii) capture a second image of a real-world environment of the mobile computing device using the second camera; an eye tracking module to determine a positioning of the user's eye relative to the mobile computing device based on the captured first image; an object distance determination module to determine a distance of an object in the real-world environment relative to the mobile computing device based on the captured second image; and an image projection module to generate a back projection of the real-world environment captured by the second camera to the display based on parameters comprising: the determined distance of the object in the real-world environment relative to the mobile computing device, the determined positioning of the user's eye relative to the mobile computing device, and at least one device parameter of the mobile computing device.

2. The mobile computing device of claim 1, wherein generating the back projection comprises: determining, for each display pixel of the display, a ray from the user's eye through the corresponding display pixel to the object in the real-world environment; identifying, for each determined ray, an image pixel of the captured second image of the real-world environment corresponding to the location on the object in the real-world environment toward which the corresponding ray points; and constructing a back projection image based on the identified image pixels for display on the display of the mobile computing device.

3. The mobile computing device of claim 1, wherein generating the back projection comprises: determining an angular size of the mobile computing device from a viewing angle of the user; determining a distance of the object in the real-world environment relative to the user; determining a region of the object that is obscured by the mobile computing device from the viewing angle of the user; determining a corrected zoom size of the second camera based on the determined region of the object obscured by the mobile computing device and the distance of the object relative to the mobile computing device; and generating a back projection image for display on the display of the mobile computing device based on the corrected zoom size.

4. The mobile computing device of claim 3, wherein determining the corrected zoom size comprises determining a size, from a viewing angle of the second camera, of a region of the object corresponding to the region of the object that is obscured by the mobile computing device from the viewing angle of the user.

5. The mobile computing device of claim 3, wherein the corrected zoom size is a zoom size required for the second camera to capture an image corresponding to the region of the object that is obscured by the mobile computing device from the viewing angle of the user.

6. The mobile computing device of claim 5, wherein the corrected zoom size is a size at which the image captured by the second camera includes only image pixels corresponding to features of the region of the object that is obscured by the mobile computing device from the viewing angle of the user.

7. The mobile computing device of claim 3, wherein: determining the angular size of the mobile computing device from the viewing angle of the user comprises determining the angular size of the mobile computing device from the viewing angle of the user based on a distance of the user's eye relative to the mobile computing device and a size of the mobile computing device; determining the distance of the object relative to the user comprises determining the distance of the object relative to the user based on the distance of the user's eye relative to the mobile computing device and the distance of the object relative to the mobile computing device; and determining the region of the object obscured by the mobile computing device from the viewing angle of the user comprises determining an angular size of the region of the object obscured by the mobile computing device based on the angular size of the mobile computing device from the viewing angle of the user and the distance of the object relative to the user.

8. The mobile computing device of claim 3, wherein an angular size, δ, is determined according to δ = 2 arctan(d/(2D)), where d is an actual size of the corresponding object and D is a distance between the corresponding object and the point from which the angular size is determined.

9. The mobile computing device of any one of claims 1 to 8, wherein: determining the positioning of the user's eye relative to the mobile computing device comprises determining the positioning of the user's eye relative to the first camera; and determining the distance of the object in the real-world environment relative to the mobile computing device comprises determining the distance of the object relative to the second camera.

10. The mobile computing device of any one of claims 1 to 8, wherein a field of view of the first camera is opposite a field of view of the second camera relative to the display.

11. The mobile computing device of claim 1, wherein determining the distance of the object in the real-world environment relative to the mobile computing device comprises setting the distance of the object relative to the mobile computing device to a predefined distance.

12. The mobile computing device of claim 1, further comprising a display module to display an image on the display based on the generated back projection of the real-world environment captured by the second camera.

13. The mobile computing device of claim 12, wherein displaying the image based on the generated back projection comprises displaying an image corresponding to the back projection that is modified to include augmented reality features.

14. The mobile computing device of any one of claims 1 to 8, wherein the at least one device parameter comprises at least one of: (i) a focal length of the second camera, (ii) a size of the display, (iii) a size of the mobile computing device, or (iv) a location of a component of the mobile computing device relative to a reference point.

15. A method for adjusting a viewing angle of a captured image for display on a mobile computing device, the method comprising: capturing, by a first camera of the mobile computing device, a first image of a user of the mobile computing device; determining, by the mobile computing device, a positioning of the user's eye relative to the mobile computing device based on the captured first image; capturing, by a second camera of the mobile computing device different from the first camera, a second image of a real-world environment of the mobile computing device; determining, by the mobile computing device, a distance of an object in the real-world environment relative to the mobile computing device based on the captured second image; and generating, by the mobile computing device, a back projection of the real-world environment captured by the second camera to a display of the mobile computing device based on parameters comprising: the determined distance of the object in the real-world environment relative to the mobile computing device, the determined positioning of the user's eye relative to the mobile computing device, and at least one device parameter of the mobile computing device.

16. The method of claim 15, wherein generating the back projection comprises: determining, for each display pixel of the display, a ray from the user's eye through the corresponding display pixel to the object in the real-world environment; identifying, for each determined ray, an image pixel of the captured second image of the real-world environment corresponding to the location on the object in the real-world environment toward which the corresponding ray points; and constructing a back projection image based on the identified image pixels for display on the display of the mobile computing device.

17. The method of claim 15, wherein generating the back projection comprises: determining an angular size of the mobile computing device from a viewing angle of the user based on a distance of the user's eye relative to the mobile computing device and a size of the mobile computing device; determining a distance of the object in the real-world environment relative to the user based on the distance of the user's eye relative to the mobile computing device and the distance of the object relative to the mobile computing device; determining a region of the object obscured by the mobile computing device from the viewing angle of the user based on the angular size of the mobile computing device from the viewing angle of the user and the distance of the object relative to the user; determining a corrected zoom size of the second camera based on the determined region of the object obscured by the mobile computing device and the distance of the object relative to the mobile computing device; and generating a back projection image for display on the display of the mobile computing device based on the corrected zoom size.

18. The method of claim 17, wherein the corrected zoom size is a zoom size required for the second camera to capture an image corresponding to the region of the object obscured by the mobile computing device from the viewing angle of the user.

19. The method of claim 15, wherein: capturing the first image of the user comprises capturing an image of a face of the user; and determining the positioning of the user's eye relative to the mobile computing device comprises identifying a location of the user's eye in the image of the user's face.

20. The method of claim 15, wherein determining the positioning of the user's eye relative to the mobile computing device comprises determining a distance of the user's eye to the mobile computing device.

21. The method of claim 15, wherein: determining the positioning of the user's eye relative to the mobile computing device comprises determining the positioning of the user's eye relative to the first camera; and determining the distance of the object in the real-world environment relative to the mobile computing device comprises determining the distance of the object relative to the second camera.

22. The method of claim 15, wherein determining the distance of the object in the real-world environment relative to the mobile computing device comprises setting the distance of the object relative to the mobile computing device to a predefined distance.

23. The method of claim 15, further comprising displaying, by the mobile computing device, an image on the display based on the generated back projection of the real-world environment captured by the second camera.

24. The method of claim 15, wherein the at least one device parameter comprises at least one of: (i) a focal length of the second camera, (ii) a size of the display, (iii) a size of the mobile computing device, or (iv) a location of a component of the mobile computing device relative to a reference point.

25. One or more machine-readable storage media comprising a plurality of instructions stored thereon that, in response to being executed, cause a mobile computing device to perform the method of any one of claims 15-24.
Techniques for Adjusting the Perspective of a Captured Image for Display

Cross-Reference

This application claims priority to U.S. Provisional Application Serial No. 14/488,516, entitled "TECHNOLOGIES FOR ADJUSTING PERSPECTIVE OF A CAPTURED IMAGE FOR DISPLAY," filed on September 17, 2014.

Background

Augmented reality systems fuse the real and virtual worlds by placing virtual characters and objects into physical locations, allowing for immersive experiences and novel interaction models. In particular, in some augmented reality systems, virtual characters and objects may be inserted into captured images of a real-world environment (e.g., by overlaying a two- or three-dimensional rendering of a virtual character over the captured image or video stream of the real-world environment). In some systems, a physical object identified in the captured image may be replaced by a virtual object associated with that physical object. For example, an identified vehicle may be recognized in the captured image and replaced with an animated or cartoon-like version of the vehicle.

Augmented reality systems have been implemented on both stationary computing devices and mobile computing devices. In some mobile augmented reality systems, a camera of the mobile computing device (e.g., a smartphone camera located on the side opposite the display) captures images of the real-world environment. The augmented reality system then applies the augmented reality modifications to the captured image and displays the augmented image (e.g., in real time) on the display of the mobile computing device. In this way, the user can see a virtual world that conforms to his or her real-world environment. However, because the user and the camera of the mobile computing device view the real-world environment from different perspectives, the immersive experience is compromised by the disrupted visual flow. For example, from the user's perspective, some real-world objects (e.g., those near the edges of the mobile computing device) appear duplicated in the augmented reality rendering.

Brief Description of the Drawings

The concepts described herein are illustrated by way of example and not by way of limitation. For simplicity and clarity of illustration, the elements shown in the figures are not necessarily drawn to scale.
Where considered appropriate, reference labels may be repeated among the figures to indicate corresponding or analogous elements.

Figure 1 is a simplified block diagram of at least one embodiment of a mobile computing device for adjusting the perspective of a captured image for display;

Figure 2 is a simplified block diagram of at least one embodiment of an environment established by the mobile computing device of Figure 1;

Figure 3 is a simplified flow diagram of at least one embodiment of a method for adjusting the perspective of captured images for display that may be executed by the mobile computing device of Figure 1;

Figure 4 is a simplified flow diagram of at least one embodiment of a method for generating a back projection of the real-world environment of the mobile computing device of Figure 1;

Figure 5 is a simplified diagram of a user holding the mobile computing device of Figure 1 during execution of the method of Figure 4;

Figure 6 is a simplified flow diagram of at least one other embodiment of a method for generating a back projection of the real-world environment of the mobile computing device of Figure 1;

Figures 7-8 are simplified diagrams illustrating various angular relationships of a user holding the mobile computing device of Figure 1;

Figure 9 is a simplified diagram of the real-world environment of the mobile computing device of Figure 1;

Figure 10 is a diagram of a user holding the mobile computing device of Figure 1 on which a captured image is displayed without the adjusted perspective; and

Figure 11 is a diagram of a user holding the mobile computing device of Figure 1 on which a captured image with the perspective adjusted by the method of Figure 3 is displayed.

Detailed Description

While the concepts of the present disclosure are susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will be described herein in detail. It should be understood, however, that there is no intent to limit the concepts of the present disclosure to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives consistent with the present disclosure and the appended claims.

References in the specification to "one embodiment," "an embodiment," "an illustrative embodiment," and the like indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may or may not necessarily include that particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such a feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. Additionally, it should be appreciated that items included in a list in the form of "at least one of A, B, and C" can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C). Similarly, items listed in the form of "at least one of A, B, or C" can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C).

In some cases, the disclosed embodiments may be implemented in hardware, firmware, software, or any combination thereof.
The disclosed embodiments may also be implemented as instructions carried by or stored on one or more transitory or non-transitory machine-readable (e.g., computer-readable) storage media, which may be read and executed by one or more processors. A machine-readable storage medium may be embodied as any storage device, mechanism, or other physical structure for storing or transmitting information in a form readable by a machine (e.g., a volatile or non-volatile memory, a media disc, or other media device).

In the drawings, some structural or method features may be shown in specific arrangements and/or orderings. It should be appreciated, however, that such specific arrangements and/or orderings may not be required. Rather, in some embodiments, such features may be arranged in a different manner and/or order than shown in the illustrative figures. Additionally, the inclusion of a structural or method feature in a particular figure is not meant to imply that such a feature is required in all embodiments and, in some embodiments, such a feature may not be included or may be combined with other features.

Referring now to FIG. 1, a mobile computing device 100 for adjusting the perspective of a captured image for display is shown. In use, as described in more detail below, the mobile computing device 100 is configured to capture an image of a user of the mobile computing device 100 and an image of the real-world environment of the mobile computing device 100. The mobile computing device 100 also analyzes the captured image of the user to determine the positioning of the user's eye relative to the mobile computing device 100. As discussed below, in doing so, the mobile computing device 100 may determine the distance of the user from the mobile computing device 100 and identify/detect the location of the user's eye in the captured image. Additionally, the mobile computing device 100 determines the distance of one or more objects (e.g., a primary object and/or other objects in the captured scene) in the captured real-world environment relative to the mobile computing device 100. For example, as described below, depending on the particular embodiment, the mobile computing device 100 may analyze the image captured of the real-world environment, use depth- or distance-sensing data, or otherwise determine the relative distance of the object. The mobile computing device 100 generates a back projection of the real-world environment to the display 120 of the mobile computing device 100 based on the distance of the real-world object relative to the mobile computing device 100, the positioning of the user's eye relative to the mobile computing device 100, and one or more device parameters. As discussed below, the back projection may be embodied as a back projection image, a set of data (e.g., pixel values) usable to generate a back projection image, and/or other data indicative of a corresponding back projection image. As discussed below, the device parameters may include, for example, parameters of the cameras of the mobile computing device 100, the size of the display 120 or of the mobile computing device 100 itself, the locations of components of the mobile computing device 100 relative to one another or to a reference point, and/or other relevant information associated with the mobile computing device 100.
The mobile computing device 100 displays an image based on the determined back projection and, in doing so, may apply virtual objects, characters, and/or scenery, or otherwise modify the image for augmented reality. It should be appreciated that, with the techniques described herein, the back projected image rendered on the display 120 maps directly, or nearly directly, to the real world, thereby making the user feel as though she is looking at the real-world environment through a window. That is, in the illustrative embodiment, the displayed image includes the same content as the content obscured by the mobile computing device 100 when viewed from the user's perspective.

The mobile computing device 100 may be embodied as any type of computing device capable of performing the functions described herein. For example, the mobile computing device 100 may be embodied as a smartphone, a cellular phone, a wearable computing device, a personal digital assistant, a mobile Internet device, a tablet computer, a netbook, a notebook, an ultrabook, a laptop computer, and/or any other mobile computing/communication device. As shown in FIG. 1, the illustrative mobile computing device 100 includes a processor 110, an input/output ("I/O") subsystem 112, a memory 114, a data store 116, a camera system 118, a display 120, one or more sensors 122, and a communication circuit 124. Of course, in other embodiments, the mobile computing device 100 may include other or additional components, such as those commonly found in a typical computing device (e.g., various input/output devices and/or other components). Additionally, in some embodiments, one or more of the illustrative components may be incorporated in, or otherwise form a portion of, another component. For example, in some embodiments, the memory 114, or portions thereof, may be incorporated in the processor 110.

The processor 110 may be embodied as any type of processor capable of performing the functions described herein. For example, the processor 110 may be embodied as a single- or multi-core processor, a digital signal processor, a microcontroller, or other processor or processing/controlling circuit. Similarly, the memory 114 may be embodied as any type of volatile or non-volatile memory or data storage capable of performing the functions described herein. In operation, the memory 114 may store various data and software used during operation of the mobile computing device 100, such as operating systems, applications, programs, libraries, and drivers. The memory 114 is communicatively coupled to the processor 110 via the I/O subsystem 112, which may be embodied as circuitry and/or components to facilitate input/output operations with the processor 110, the memory 114, and other components of the mobile computing device 100. For example, the I/O subsystem 112 may be embodied as, or otherwise include, memory controller hubs, input/output control hubs, firmware devices, communication links (i.e., point-to-point links, bus links, wires, cables, light guides, printed circuit board traces, etc.), and/or other components and subsystems to facilitate the input/output operations.
In some embodiments, the I/O subsystem 112 may form a portion of a system-on-a-chip (SoC) and be incorporated, along with the processor 110, the memory 114, and other components of the mobile computing device 100, on a single integrated circuit chip.

The data store 116 may be embodied as any type of device or devices configured for short-term or long-term storage of data such as, for example, memory devices and circuits, memory cards, hard disk drives, solid-state drives, or other data storage devices. In the illustrative embodiment, the data store 116 may store the device parameters 130 of the mobile computing device 100. It should be appreciated that the particular device parameters 130 may vary depending on the particular embodiment. The device parameters 130 may include, for example, information or data associated with the size/shape of the display 120, the mobile computing device 100 itself, or another component of the mobile computing device 100, parameters associated with one or more cameras of the mobile computing device 100, the locations of components of the mobile computing device 100 relative to a reference point (e.g., a coordinate system identifying the relative positions of the components of the mobile computing device 100), and/or other information associated with the mobile computing device 100. Additionally, in some embodiments, the data store 116 and/or the memory 114 may store various other data useful during the operation of the mobile computing device 100.

The camera system 118 includes a plurality of cameras configured to capture images or video (i.e., collections of images or frames) and capable of performing the functions described herein. It should be appreciated that each of the cameras of the camera system 118 may be embodied as any peripheral or integrated device suitable for capturing images, such as a still camera, a video camera, or another device capable of capturing video and/or images. In the illustrative embodiment, the camera system 118 includes a user-facing camera 126 and an environment-facing camera 128. As described below, each of the user-facing camera 126, the environment-facing camera 128, and/or other cameras of the camera system 118 may be embodied as a two-dimensional (2D) camera (e.g., an RGB camera) or a three-dimensional (3D) camera. Such 3D cameras include, for example, depth cameras, bifocal cameras, and/or cameras otherwise capable of generating a depth image, channel, or stream. For example, one or more of the cameras may include an infrared (IR) projector and an IR sensor such that the IR sensor estimates depth values of objects in the scene by analyzing the IR light pattern projected on the scene by the IR projector. In another embodiment, one or more of the cameras of the camera system 118 includes at least two lenses and corresponding sensors configured to capture images from at least two different viewpoints of a scene (e.g., a stereo camera).

As described in more detail below, the user-facing camera 126 is configured to capture images of the user of the mobile computing device 100.
In particular, the user-facing camera 126 captures images of the user's face, which may be analyzed to determine the location of the user's eye relative to the mobile computing device 100 (e.g., relative to the user-facing camera 126 or another reference point of the mobile computing device 100). The environment-facing camera 128 captures images of the real-world environment of the mobile computing device 100. In the illustrative embodiment, the user-facing camera 126 and the environment-facing camera 128 are located on opposite sides of the mobile computing device 100 and therefore have fields of view in opposite directions. In particular, the user-facing camera 126 is on the same side of the mobile computing device 100 as the display 120 so that the user-facing camera 126 can capture images of the user as she views the display 120.

The display 120 of the mobile computing device 100 may be embodied as any type of display on which information may be displayed to the user of the mobile computing device 100. Further, the display 120 may be embodied as, or otherwise use, any suitable display technology including, for example, liquid crystal display (LCD), light emitting diode (LED), cathode ray tube (CRT), plasma, and/or touchscreen display technology. Although only one display 120 is shown in the illustrative embodiment of FIG. 1, in other embodiments, the mobile computing device 100 may include multiple displays 120.

As shown in FIG. 1, the mobile computing device 100 may include one or more sensors 122 configured to collect data useful in performing the functions described herein. For example, the sensors 122 may include a depth sensor that may be used to determine the distance of objects from the mobile computing device 100. Additionally, in some embodiments, the sensors 122 may include an accelerometer, a gyroscope, and/or a magnetometer to determine the relative orientation of the mobile computing device 100. In various embodiments, the sensors 122 may be embodied as, or otherwise include, for example, proximity sensors, optical sensors, light sensors, audio sensors, temperature sensors, motion sensors, piezoelectric sensors, and/or other types of sensors. Of course, the mobile computing device 100 may also include components and/or devices configured to facilitate the use of the sensors 122.

The communication circuit 124 may be embodied as any communication circuit, device, or collection thereof, capable of enabling communications between the mobile computing device 100 and other remote devices over a network (not shown). For example, in some embodiments, the mobile computing device 100 may offload one or more of the functions described herein (e.g., determination of the back projection) to a remote computing device. The communication circuit 124 may be configured to use any one or more communication technologies (e.g., wireless or wired communications) and associated protocols (e.g., Ethernet, WiMAX, etc.) to effect such communication.

Referring now to FIG. 2, in use, the mobile computing device 100 establishes an environment 200 for adjusting the perspective of a captured image for display on the display 120 of the mobile computing device 100.
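Purely for orientation, the following minimal Python sketch shows one way the data flow through the environment 200 described below could be wired together. The function signatures and the process_frame() sequence are assumptions of this summary, not part of the disclosure.

    # Hypothetical structural sketch of the environment 200 of FIG. 2.
    # Module names follow this description; signatures are illustrative only.
    from dataclasses import dataclass
    from typing import Any, Callable

    @dataclass
    class Environment200:
        capture_user_image: Callable[[], Any]              # image capture module 202 (camera 126)
        capture_environment_image: Callable[[], Any]       # image capture module 202 (camera 128)
        locate_eye: Callable[[Any], Any]                   # eye tracking module 204
        estimate_object_distance: Callable[[Any], float]   # object distance determination module 206
        back_project: Callable[[Any, Any, float], Any]     # image projection module 208
        render: Callable[[Any], None]                      # display module 210

        def process_frame(self) -> None:
            user_img = self.capture_user_image()
            env_img = self.capture_environment_image()
            eye_pos = self.locate_eye(user_img)                     # eye position vs. device
            obj_dist = self.estimate_object_distance(env_img)       # object distance vs. device
            self.render(self.back_project(env_img, eye_pos, obj_dist))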
As discussed below, the mobile computing device 100 captures an image of the user with the user-facing camera 126 and an image of the real-world environment of the mobile computing device 100 with the environment-facing camera 128. Further, the mobile computing device 100 determines the positioning of the user's eye relative to the mobile computing device 100 based on the image captured by the user-facing camera 126 and the distance of an object in the real-world environment relative to the mobile computing device 100 based on the image captured by the environment-facing camera 128. The mobile computing device 100 then generates a back projection of the real-world object to the display 120 and displays a corresponding image (e.g., including augmented reality modifications) on the display 120 based on the generated back projection.

The illustrative environment 200 of the mobile computing device 100 includes an image capture module 202, an eye tracking module 204, an object distance determination module 206, an image projection module 208, and a display module 210. Each of the modules of the environment 200 may be embodied as hardware, software, firmware, or a combination thereof. For example, in an embodiment, each of the modules of the environment 200 may be embodied as circuitry (e.g., an image capture circuit, an eye tracking circuit, an object distance determination circuit, an image projection circuit, and a display circuit). Additionally, in some embodiments, one or more of the illustrative modules may form a portion of another module. For example, in some embodiments, the image projection module 208 may form a portion of the display module 210.

The image capture module 202 controls the camera system 118 (e.g., the user-facing camera 126 and the environment-facing camera 128) to capture images within the fields of view of the respective cameras 126, 128. For example, as described herein, the user-facing camera 126 is configured to capture images of the user's face (e.g., for eye detection/tracking). It should be appreciated that the mobile computing device 100 may detect and/or track one or both of the user's eyes and, therefore, in the illustrative embodiment, the image captured by the user-facing camera 126 and analyzed by the mobile computing device 100 includes at least one of the user's eyes. Although eye tracking and analysis are at times discussed herein in reference to a single eye of the user for simplicity and clarity of the description, the techniques described apply equally to the detection/tracking of both of the user's eyes. Additionally, as discussed herein, the environment-facing camera 128 is configured to capture images of the real-world environment of the mobile computing device 100. Although such captured images are, for simplicity, often described herein as including a single primary object, it should be appreciated that a captured scene may include any number of primary objects (i.e., unique or otherwise important objects).

The eye tracking module 204 determines the position/location of the user's eye relative to the mobile computing device 100 (e.g., relative to the user-facing camera 126 or another reference point).
In doing so, the eye tracking module 204 detects the presence of one or more human eyes in the images captured by the user-facing camera 126 and determines the location of the eye to be tracked within the captured image (i.e., the portion of the image associated with the eye). To do so, the eye tracking module 204 may use any suitable techniques, algorithms, and/or image filters (e.g., edge detection and image segmentation). In some embodiments, the eye tracking module 204 determines the location of the user's face in the captured image and uses the location of the user's face, for example, to reduce the region of the captured image that is analyzed to locate the user's eye. Additionally, in some embodiments, the eye tracking module 204 analyzes the user's eye to determine various characteristics/features of the user's eye (e.g., glint location, iris location, pupil location, iris-pupil contrast, eye size/shape, and/or other characteristics) with which to determine the direction of the user's gaze. The gaze direction may be used, for example, to determine whether the user is looking at the display 120, to identify the object (e.g., the primary object) in the scene captured by the environment-facing camera 128 toward which the user's gaze is directed, to determine the relative position or location of the user's eye (e.g., in three-dimensional space), and/or for other purposes. Additionally, in some embodiments, the eye tracking module 204 may also determine the orientation of the user's head or otherwise determine the user's head pose.

As described below, in determining the positioning of the user's eye relative to the mobile computing device 100, the eye tracking module 204 determines the distance of the user's eye relative to the mobile computing device 100 (e.g., relative to the user-facing camera 126 or another reference point). It should be appreciated that the eye tracking module 204 may do so using any suitable algorithms and/or techniques. For example, in some embodiments, the user-facing camera 126 may be embodied as a depth camera or other 3D camera capable of generating data (e.g., a depth stream or depth images) corresponding to the distances of objects in the captured scene. In another embodiment, the eye tracking module 204 may use face detection in conjunction with a known approximate size of the user's face to estimate the distance of the user's face from the mobile computing device 100. In yet another embodiment, the eye tracking module 204 may analyze the region of the captured image corresponding to the user's eye to locate reflections of light (i.e., glints) from the user's cornea and/or pupil. Based on those reflections, the eye tracking module 204 may determine the location or positioning of the user's eye relative to the mobile computing device 100 (e.g., in three-dimensional space).
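As a concrete, purely illustrative example of the face-size approach mentioned above, the sketch below estimates the user-to-camera distance under a simple pinhole model. The focal length and assumed real face width are invented example values, not parameters taken from this disclosure.

    import math

    def estimate_face_distance(face_width_px: float,
                               focal_length_px: float = 1400.0,   # assumed camera intrinsic
                               real_face_width_m: float = 0.15):  # assumed average face width
        """Estimate the user-to-camera distance from the apparent face width.

        Under a pinhole model, apparent size scales inversely with distance:
            face_width_px = focal_length_px * real_face_width_m / distance_m
        """
        return focal_length_px * real_face_width_m / face_width_px

    # Example: a detected face spanning 420 pixels is estimated at 0.5 m.
    print(estimate_face_distance(420.0))  # -> 0.5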
Further, in some embodiments, the eye tracking module 204 may combine data generated by the sensors 122 (e.g., depth/distance information) with the location of the user's eye in the captured image to determine the location of the user's eye relative to the mobile computing device 100.

The object distance determination module 206 determines the distance of one or more objects in the real-world environment captured by the environment-facing camera 128 relative to the mobile computing device 100 (e.g., relative to the environment-facing camera 128 or another reference point of the mobile computing device 100). As indicated above, the real-world environment within the field of view of the environment-facing camera 128, and therefore captured by the environment-facing camera 128, may include any number of objects. Accordingly, depending on the particular embodiment, the object distance determination module 206 may determine the distance of each of those objects from the mobile computing device 100 or of only a subset of the objects (e.g., a single object). For example, in some embodiments, the object distance determination module 206 identifies a primary object among the captured objects for which to determine the distance. Such a primary object may be, for example, the object toward which the user's gaze is directed or the main object in the scene. In some embodiments, for simplicity, the object distance determination module 206 assumes that each of the objects in the scene is approximately the same distance from the mobile computing device 100. Further, in some embodiments, the object distance determination module 206 assumes or otherwise sets the distance of the object from the mobile computing device 100 to be a predefined distance. For example, the predefined distance may be a value significantly greater than the focal length of the environment-facing camera 128, an approximation of infinity (e.g., the maximum number available in the relevant number space), or another predefined distance value. For ease of discussion, an approximation of infinity may be referred to herein simply as "infinity."

It should be appreciated that the object distance determination module 206 may use any suitable techniques and/or algorithms to determine the distance of an object in the real-world environment relative to the mobile computing device 100. For example, in some embodiments, the object distance determination module 206 may use one or more of the techniques and algorithms described above in regard to determining (i.e., by the eye tracking module 204) the distance of the user relative to the mobile computing device 100. In particular, the environment-facing camera 128 may be embodied as a depth camera or other 3D camera that generates depth data with which to determine the distances of objects in the captured image. Additionally or alternatively, in some embodiments, the object distance determination module 206 may estimate the distance of an object from the mobile computing device 100 by reference to stored data regarding the sizes of certain objects. In another embodiment, the object distance determination module 206 may determine the distance and/or position of an object relative to the mobile computing device 100 using data generated by the sensors 122 (e.g., depth/distance information).
Of course, in some embodiments, the object distance determination module 206 may assign a predefined value as the distance of a particular object. For example, the object distance determination module 206 may treat an object as being at infinity in response to determining that the distance of the object exceeds a predefined threshold. That is, in some embodiments, objects that are at least a threshold distance (e.g., four meters) from the mobile computing device 100 may be treated, for example, as though they are infinitely far from the mobile computing device 100. It should be appreciated that, in such embodiments, the resulting error may be negligible (for example, a distance of ten meters and a distance of twenty meters may yield approximately the same result). As described below, the distance of the object relative to the mobile computing device 100 (e.g., relative to the environment-facing camera 128) may be used in conjunction with the device parameters 130 to determine the location of the object relative to the mobile computing device 100 and to generate the back projection of the real-world environment.

The image projection module 208 generates a back projection of the real-world environment captured by the environment-facing camera 128 to the display 120. In the illustrative embodiment, the image projection module 208 generates the back projection based on parameters including the distance of the object in the real-world environment relative to the mobile computing device 100 (e.g., infinity, a predefined distance, or a determined distance), the position/location of the user's eye relative to the mobile computing device 100, and/or the device parameters 130 of the mobile computing device 100 (e.g., intrinsic parameters of the cameras 126, 128, the size of the mobile computing device 100 or the display 120, etc.). As indicated above, by back projecting the real-world environment to the display 120 (i.e., toward the user's eye), the visual content obscured by the mobile computing device 100 is shown on the display 120 such that the user perceives it as though she were looking through a window. In other words, visual continuity is maintained because objects around the edges of the mobile computing device 100 are not duplicated in the displayed image. It should be appreciated that the image projection module 208 may use any suitable techniques and/or algorithms to generate the back projection image for display on the display 120 of the mobile computing device 100. As described below, FIGS. 4-8 illustrate illustrative embodiments for doing so.

The display module 210 renders images on the display 120 for the user of the mobile computing device 100 to view. For example, the display module 210 may render an image on the display 120 based on the back projection generated by the image projection module 208. Of course, in some embodiments, the back projection may not be "projected" onto the display 120 in the traditional sense; rather, a corresponding image may be generated for rendering on the display 120. Further, as discussed above, in some embodiments, the display module 210 may modify the back projection image to include virtual objects, characters, and/or surroundings for augmented reality, and render the modified image.

The communication module 212 handles communications between the mobile computing device 100 and remote devices over a corresponding network.
For example, in some embodiments, the mobile computing device 100 may communicate with a remote computing device to offload one or more of the functions of the mobile computing device 100 described herein (e.g., determining the back projection image or modifying the image for augmented reality) to the remote computing device. In such embodiments, data associated with such analyses may be transmitted by the remote computing device and received by the communication module 212 of the mobile computing device 100.

Referring now to FIG. 3, in use, the mobile computing device 100 may execute a method 300 for adjusting the perspective of a captured image for display on the mobile computing device 100. The illustrative method 300 begins with blocks 302 and 310. In block 302, the mobile computing device 100 captures an image of the user's face with the user-facing camera 126. Depending on the particular embodiment, the user-facing camera 126 may capture images continuously (e.g., as a video stream) for analysis or in response to user input (e.g., a button press). In block 304, the mobile computing device 100 identifies the user's eye in the captured image. As discussed above, the mobile computing device 100 may do so using any suitable techniques and/or algorithms (e.g., edge detection and/or image segmentation). Further, depending on the particular embodiment, the mobile computing device 100 may determine and utilize the location of one or both of the user's eyes.

In block 306, the mobile computing device 100 determines the location of the user's eye relative to the user-facing camera 126 or another reference point of the mobile computing device 100. In doing so, the mobile computing device 100 determines the distance of the user or, more specifically, of the user's eye relative to the user-facing camera 126. As discussed above, the mobile computing device 100 may make such a determination based on, for example, depth images or other depth information generated by the user-facing camera 126 (i.e., if the user-facing camera 126 is a depth camera or other 3D camera), the user's gaze information, distance information generated by the sensors 122, the device parameters 130, and/or other relevant data. The distance of the user relative to the user-facing camera 126 may be used in conjunction with the location of the user's eye in the captured image to determine the location of the user's eye relative to the user-facing camera 126 or another reference point of the mobile computing device 100. It should be appreciated that the device parameters 130 may include information regarding the locations of components of the mobile computing device 100 relative to one another, thereby establishing a coordinate system with a reference point as its origin. The reference point selected as the origin may vary depending on the particular embodiment and may be, for example, the location of the user-facing camera 126, the location of the environment-facing camera 128, the center of the display 120, or another suitable location.
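One hedged sketch of how the eye's pixel location and estimated distance might be combined into a location in such a device coordinate system follows. The pinhole intrinsics (fx, fy, cx, cy) and the camera origin are assumptions introduced here for illustration, not values from the disclosure.

    import numpy as np

    def eye_location_in_device_frame(eye_px, eye_py, depth_m,
                                     fx, fy, cx, cy,
                                     camera_origin=np.zeros(3)):
        """Back-project the eye's pixel location (from the user-facing camera)
        and its estimated depth into 3D coordinates in the device frame.

        (fx, fy, cx, cy) are assumed pinhole intrinsics of the user-facing
        camera; camera_origin is that camera's position relative to the
        chosen reference point (e.g., the environment-facing camera).
        """
        x = (eye_px - cx) / fx * depth_m   # lateral offset at the given depth
        y = (eye_py - cy) / fy * depth_m   # vertical offset at the given depth
        return camera_origin + np.array([x, y, depth_m])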
In block 310, the mobile computing device 100 captures the image of the real world environment of the mobile computing device 100 using the environment-facing camera 128. [ Similar to the user-facing camera 126, the environment-facing camera 128 may continuously capture an image (e.g., as a video stream) for analysis or to respond to user input, such as a push button, depending on the particular embodiment The For example, in some embodiments, the user may provide some input to begin execution of the method 300 in which the mobile computing device 100 executes each of the blocks 302 and 310. In some embodiments, As indicated above, in an illustrative embodiment, the environment-facing camera 128 is located on the opposite side of the camera 126 facing the user such that the environment-facing camera 128 has a field of view similar to the user (i.e., The general direction).In block 312, the mobile computing device 100 determines the distance of one or more objects in the corresponding real world environment relative to the environment facing the camera 128 or another reference point of the mobile computing device. As discussed above, the mobile computing device 100 may make such a determination based on the following data, for example, by the environment facing the camera 122 (i.e., if the user & apos; s camera 126 is a deep camera or other 3D camera) Generated depth information, distance information generated by the sensor 122, device parameters 130, and / or other related data. In addition, the object for which the relative distance is determined may vary depending on the particular embodiment. For example, as discussed above, in some embodiments, the mobile computing device 100 may determine the relative distance of each object or each of the primary objects in the captured image, whereas in other embodiments the mobile computing device 100 The relative distance of the primary object in the captured image (the object pointed to by the user's line of sight, or otherwise determined to be the primary object) can be determined. In addition, as indicated in the above, in block 314, the mobile computing device 100 may set the distance of the object to a predefined distance.In block 316, the mobile computing device 100 generates a post-projection of the real-world environment to the display 120 based on the following parameters, including the distance of the real-world object relative to the mobile computing device 100 (e.g., determined or pre-defined Distance, the positioning of the user & apos; s eye relative to the mobile computing device 100, and / or one or more device parameters 130 (e.g., the inherent parameters of the cameras 126, 128, the size of the mobile computing device 100 or the display 120, etc.). As indicated above, the mobile computing device 100 may use any suitable algorithm and / or technique for doing so to generate a post-projected image. For example, in some embodiments, the mobile computing device 100 may generate post-projection by performing the method 400 as shown in FIG. 4, while in other embodiments the mobile computing device 100 may perform, as in FIG. 6 The illustrated method 600 generates a post-projection. Of course, it should be understood that the embodiments of Figures 4 and 9 are provided as illustrative embodiments and are not limited to the concepts described herein.In block 318, after the post-projection has been determined, the mobile computing device 100 displays an image on the display 120 based on the generated post-projection. 
In doing so, in block 320, the mobile computing device 100 may modify the back projection, or the corresponding image, for augmented reality purposes as discussed above. For example, the mobile computing device 100 may incorporate virtual characters, objects, and/or other virtual features into the constructed/generated back projection image for rendering on the display 120. Of course, in some embodiments, the mobile computing device 100 may not modify the back projection for augmented reality or other purposes, such that the viewer truly feels as though the display 120 is a window through which she can see the real-world environment otherwise obscured by the mobile computing device 100.

Referring now to FIG. 4, the illustrative method 400 begins with block 402, in which the mobile computing device 100 determines whether to generate the back projection. If so, in block 404, the mobile computing device 100 determines a ray 502 from the user's eye 504 through the next display pixel 506 of the display 120 to the real-world object 508, as shown in FIG. 5. It should be appreciated that which display pixel 506 constitutes the "next" display pixel 506 may vary depending on the particular embodiment. In the illustrative embodiment, the mobile computing device 100 does not select, as the "next" display pixel 506, a display pixel 506 for which the ray 502 has already been determined during execution of the method 400. It should further be appreciated that, in other embodiments, the mobile computing device 100 may determine rays 502 that pass through other subregions of the display 120 (i.e., subregions that differ from display pixels, for example, at a different level of granularity).

As discussed above, the device parameters 130 of the mobile computing device 100 may include data regarding the relative positions of the various components of the mobile computing device 100 and may establish, for example, a three-dimensional coordinate system with a reference point as its origin. For example, in some embodiments, the environment-facing camera 128 may be the origin. It should be appreciated that each pixel/point of the display is located at some point relative to the environment-facing camera 128. Accordingly, in some embodiments, the mobile computing device 100 determines the corresponding three-dimensional coordinates of the user's eye 504 and the object 508 based on the analyses described above. It should be appreciated that, in the illustrative embodiment, having the coordinates or relative positions of the user's eye 504, the display pixels 506, and the object 508, the mobile computing device 100 determines the ray 502 from the user's eye 504 through each of the display pixels 506 to the object 508.

In block 406, the mobile computing device 100 identifies the image pixel of the image of the real-world environment captured by the environment-facing camera 128 corresponding to the location 510 on the real-world object toward which the corresponding ray 502 points. For example, based on the device parameters 130, such as the intrinsic parameters (e.g., focal length) of the environment-facing camera 128, and the real-world coordinates or relative position of the object 508, the mobile computing device 100 may determine how the image captured by the environment-facing camera 128 is projected from the real-world environment to the coordinates of the captured image.
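A minimal sketch of blocks 404-406 (together with the image construction of block 410, described next) follows. It assumes known 3D positions of the display pixels and the user's eye in the device frame, an object plane at a fixed depth, and a pinhole model for the environment-facing camera; these are simplifying assumptions of this sketch, not requirements of the method.

    import numpy as np

    def back_project_display(display_pixels_3d, eye_3d, object_plane_z,
                             env_cam_K, env_image):
        """For each display pixel (3D position in the device frame), cast a ray
        from the user's eye through the pixel to an object plane (block 404),
        then sample the environment camera's image at the corresponding image
        pixel (block 406). Returns the constructed image (block 410)."""
        h, w = display_pixels_3d.shape[:2]
        out = np.zeros((h, w, 3), dtype=env_image.dtype)
        for r in range(h):
            for c in range(w):
                p = display_pixels_3d[r, c]
                d = p - eye_3d                       # ray 502 direction (eye -> pixel)
                t = (object_plane_z - eye_3d[2]) / d[2]
                hit = eye_3d + t * d                 # location 510 on the object plane
                u, v, s = env_cam_K @ hit            # pinhole projection to image coords
                u, v = int(round(u / s)), int(round(v / s))
                if 0 <= v < env_image.shape[0] and 0 <= u < env_image.shape[1]:
                    out[r, c] = env_image[v, u]
        return out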
In such an embodiment, the mobile computing device 100 may thereby identify the image pixel associated with the real world coordinates (i.e., location 510) to which the ray 502 points.

In block 408, the mobile computing device 100 determines whether any display pixels 506 remain. If so, the method 400 returns to block 404, in which the mobile computing device 100 determines the ray 502 from the user's eye 504 through the next display pixel 506 to the real world object 508. In other words, for each display pixel 506 (or other sub-region) of the display 120, the mobile computing device 100 determines a ray from the user's eye 504 through the corresponding display pixel 506 to the object 508 in the real world environment, and for each identified ray 502 it identifies the image pixel of the image of the real world environment captured by the environment-facing camera 128 that corresponds to the location of the object in the real world environment pointed to by that ray 502. In block 410, the mobile computing device 100 constructs an image from the identified image pixels for display on the mobile computing device 100. In the illustrative embodiment, the mobile computing device 100 generates the image by placing each identified image pixel at the appropriate image coordinates of the generated image. In other words, the mobile computing device 100 may project the visual content from the location pointed to by each corresponding ray 502 to the corresponding point on the display 120 through which that ray 502 passes, as illustrated in the sketch following this discussion.

Referring now to FIG. 6, in use, the mobile computing device 100 may execute a method 600 for generating a post-projection of the real world environment of the mobile computing device 100, as indicated above. The illustrative method 600 begins at block 602, in which the mobile computing device 100 determines whether a post-projection is to be generated. If so, in block 604 the mobile computing device 100 determines the angular size of the mobile computing device 100 from the user's perspective based on the distance 704 of the user 706 relative to the user-facing camera 126 (or another reference point of the mobile computing device 100) and the device parameters 130, as shown with reference to FIGS. 7-8. As indicated above, the device parameters 130 may include, for example, the size, shape, and other characteristics of the mobile computing device 100 and/or its components. It should be understood that the angular size of an object indicates the viewing angle, from a reference point (e.g., an observer or a camera), subtended by the object at a known distance. In the illustrative embodiment, the angular size of an object (e.g., the mobile computing device) from a viewing point is determined according to δ = 2 arctan(d / (2D)), where δ is the angular size of the object, d is the actual size of the corresponding object, and D is the distance between the corresponding object and the viewing point (i.e., the point from which the angular size is determined). However, in other embodiments, the angular size of the object may be determined in other ways.
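Returning to the method 400, the per-pixel loop of blocks 404-410 can be expressed compactly. The following Python sketch assumes a planar object at a known distance and a pinhole model for the environment-facing camera; the function and parameter names are illustrative assumptions, not taken from the source:

    import numpy as np

    def post_projection_method_400(env_image, eye_pos, display_pixel_pos,
                                   object_distance, cam_K):
        """Blocks 404-410: for each display pixel, cast a ray from the eye
        through the pixel to the object plane and sample the camera image.

        eye_pos           -- (3,) eye position, origin at the environment camera
        display_pixel_pos -- (H, W, 3) 3D position of each display pixel
        object_distance   -- assumed distance of the (planar) object, along z
        cam_K             -- 3x3 intrinsic matrix of the environment camera
        """
        H, W, _ = display_pixel_pos.shape
        out = np.zeros_like(env_image)
        for r in range(H):
            for c in range(W):
                # Block 404: ray 502 from the eye through this display pixel.
                direction = display_pixel_pos[r, c] - eye_pos
                t = (object_distance - eye_pos[2]) / direction[2]
                point = eye_pos + t * direction       # location 510 on the object
                # Block 406: project the 3D point into the captured image.
                u, v, w = cam_K @ point
                x, y = int(u / w), int(v / w)
                if 0 <= y < env_image.shape[0] and 0 <= x < env_image.shape[1]:
                    out[r, c] = env_image[y, x]       # block 410: place the pixel
        return out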
It should be understood that, although the angular size may be discussed herein in the context of two dimensions, the techniques described herein may also be applied in three dimensions (e.g., by applying them to both the horizontal angle and the vertical angle, by determining the angular size of a diagonal across the object, by projecting the three-dimensional size into two dimensions, or by applying a three-dimensional equivalent of the angular size formula provided above).

In block 606, the mobile computing device 100 determines the distance 708 of the real world object 710 relative to the user 706. In the illustrative embodiment, the mobile computing device 100 does so based on the distance 704 of the user 706 to the user-facing camera 126 (see block 308 of FIG. 3) and the distance 712 of the real world object 710 to the environment-facing camera 128 or another reference point of the mobile computing device 100 (see block 312 of FIG. 3). In doing so, in some embodiments, the mobile computing device 100 may assume that the user 706, the mobile computing device 100, and the object 710 are collinear, and may sum the two previously calculated distances to determine the distance 708 between the user 706 and the real world object 710 (e.g., if the object is far from the user). In other embodiments, the mobile computing device 100 may employ a more complex algorithm to determine the distance between the user 706 and the real world object 710. For example, the mobile computing device 100 may make the determination based on parameters including the angles of the mobile computing device 100, the user 706 (or, more particularly, the user's eye), and the object 710 relative to one another or to a particular reference point (e.g., a defined origin), and the known distances 704, 712 between the mobile computing device 100 and the user 706 and between the mobile computing device 100 and the object 710 (e.g., based on triangle properties).

In block 608, the mobile computing device 100 determines the region 714 of the real world object 710 that is obscured by the mobile computing device 100 from the user's perspective. In the illustrative embodiment, the mobile computing device 100 makes this determination based on the angular size of the mobile computing device 100 from the user's perspective and the distance 708 of the real world object 710 relative to the user 706.

In block 610, the mobile computing device 100 determines a corrected zoom size of the environment-facing camera 128 based on the region 714 of the real world object that is obscured from the user's perspective and the distance 712 of the real world object from the environment-facing camera 128. In other words, the mobile computing device 100 determines the zoom size required for the environment-facing camera 128 to capture an image corresponding to the region 714 of the object 710 that is obscured by the mobile computing device 100 from the user's perspective. As discussed above, the device parameters 130 may include the intrinsic parameters (e.g., focal length, image projection parameters, etc.) of the camera 128. It will be appreciated that, in some embodiments, such device parameters 130 may be used to identify the zoom size corresponding to a particular region of the environment at a given distance from the camera 128.
In some embodiments, the zoom size is determined such that the environment-facing camera 128 captures an image having only the visual content (e.g., features of the object 710) of the region 714 of the object 710 that is obscured by the mobile computing device 100 from the user's perspective.

In block 612 of the illustrative embodiment, in order to determine the corrected zoom size, the mobile computing device 100 determines the angular size 716, from the perspective of the environment-facing camera 128, of a region 718 of the real world object 710 corresponding to the region 714 of the real world object 710 that is obscured from the user's perspective. The mobile computing device 100 may make this determination based on, for example, the device parameters 130 and/or the corresponding geometry. That is, in some embodiments, the mobile computing device 100 may determine the angular size 716 based on the size of the region 714, the distance 712, and the angular size formula provided above. It should be understood that in some embodiments the region 718 and the region 714 are the same region, while in other embodiments they may differ to some extent. Similarly, the corrected zoom size may deviate from the precise zoom required to capture the region 718 (e.g., based on technical, hardware, and/or spatial limitations). In block 614, the mobile computing device 100 generates an image with the corrected zoom size for display on the mobile computing device 100, as illustrated in the sketch below. For example, in some embodiments, the mobile computing device 100 may capture a new image using the environment-facing camera 128 from the same viewing angle but with a different zoom size. In other embodiments, the mobile computing device 100 may, for example, modify the original image captured by the environment-facing camera 128 to generate an image with the desired zoom size and other characteristics.

Referring now to FIGS. 9-11, simplified views are shown of the real world environment 900 of the mobile computing device 100 (see, e.g., FIG. 9) and of a user holding the mobile computing device 100 (see FIGS. 10-11). As discussed above, the real world environment 900 may be captured by the environment-facing camera 128 and rendered on the display 120. In addition, where an augmented reality system is utilized, the captured image may be modified to incorporate, for example, a virtual character, object, or other feature into the captured image for display on the mobile computing device 100. In an embodiment that does not utilize the method 300 of FIG. 3 (i.e., if the captured image, or an augmented reality version of it, is simply displayed on the display 120 of the mobile computing device 100), the image 902 displayed on the display 120 includes real world objects 904 that are also visible in the real world environment 900 around the edges of the mobile computing device 100 (see, e.g., FIG. 10). In other words, some of the real world objects 904 visible to the user are duplicated in the displayed image 902, thereby disrupting the visual flow. In an embodiment utilizing the method 300 of FIG. 3, the image 906 displayed on the display 120 includes the same visual content as the content obscured by the mobile computing device 100 as viewed from the user's perspective. Because visual continuity between the displayed image 906 and the background real world environment 900 is maintained, the user feels as if she is looking at the real world environment through a window.
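Before the enumerated examples, the geometry of the method 600 (blocks 604-614) can be sketched numerically. This is a minimal sketch assuming the collinear approximation of block 606; treating the "corrected zoom size" as the camera view angle needed to cover the obscured region, and the digital-crop fallback of block 614, are illustrative choices rather than requirements of the source:

    import math

    def angular_size(d, D):
        # delta = 2 * arctan(d / (2 * D)), per the angular size formula above.
        return 2.0 * math.atan(d / (2.0 * D))

    def corrected_zoom(device_width, user_to_device, device_to_object):
        # Block 604: angular size of the device from the user's perspective.
        delta_device = angular_size(device_width, user_to_device)
        # Block 606: collinear approximation -- the two distances simply add.
        user_to_object = user_to_device + device_to_object
        # Block 608: width of the object region 714 obscured by the device.
        obscured_width = 2.0 * user_to_object * math.tan(delta_device / 2.0)
        # Blocks 610-612: view angle 716 the camera needs to cover region 718.
        return angular_size(obscured_width, device_to_object)

    def crop_fraction(required_view_angle, camera_fov):
        # Block 614 (one option): emulate the zoom by cropping the original
        # image to the fraction of the full field of view that is required.
        return math.tan(required_view_angle / 2.0) / math.tan(camera_fov / 2.0)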
EXAMPLES

Illustrative examples of the techniques disclosed herein are provided below. An embodiment of the techniques may include any one or more, and any combination, of the examples described below.

Example 1 includes a mobile computing device for adjusting the viewing angle of a captured image for display, the mobile computing device comprising: a display; a camera system comprising a first camera and a second camera, the camera system to (i) capture a first image of a user of the mobile computing device using the first camera and (ii) capture a second image of a real world environment of the mobile computing device using the second camera; an eye tracking module to determine the position of the user's eye relative to the mobile computing device based on the captured first image; an object distance determination module to determine the distance of an object in the real world environment relative to the mobile computing device based on the captured second image; and an image projection module to generate a post-projection of the real world environment captured by the second camera to the display based on the following parameters: the determined distance of the object in the real world environment relative to the mobile computing device, the determined position of the user's eye relative to the mobile computing device, and at least one device parameter of the mobile computing device.

Example 2 includes the subject matter of Example 1, and wherein generating the post-projection comprises: determining, for each display pixel of the display, a ray from the user's eye through the corresponding display pixel to the object in the real world environment; identifying, for each determined ray, an image pixel of the captured second image of the real world environment corresponding to the location of the object in the real world environment pointed to by the corresponding ray; and constructing a post-projected image based on the identified image pixels for display on the display of the mobile computing device.

Example 3 includes the subject matter of any one of Examples 1 and 2, and wherein generating the post-projection comprises: determining the angular size of the mobile computing device from the user's perspective; determining the distance of the object in the real world environment relative to the user; determining a region of the object that is obscured by the mobile computing device from the user's perspective; determining a corrected zoom size of the second camera based on the determined region of the object obscured by the mobile computing device and the distance of the object relative to the mobile computing device; and generating a post-projected image based on the corrected zoom size for display on the display of the mobile computing device.

Example 4 includes the subject matter of any one of Examples 1-3, and wherein determining the corrected zoom size comprises determining the size of a region of the object, from the perspective of the second camera, corresponding to the region of the object that is obscured by the mobile computing device from the user's perspective.

Example 5 includes the subject matter of any one of Examples 1-4, and wherein the corrected zoom size is the zoom size required for the second camera to capture an image corresponding to the region of the object obscured by the mobile computing device from the user's perspective.

Example 6 includes the subject matter of any one of Examples 1-5, and wherein the corrected zoom size is the zoom size at which the second camera captures an image having only image pixels corresponding to features of the object within the region of the object obscured by the mobile computing device from the user's perspective.

Example 7 includes the subject matter of any one of Examples 1-6, and wherein determining the angular size of the mobile computing device from the user's perspective comprises determining the angular size based on the distance of the user's eye relative to the mobile computing device and the size of the mobile computing device; determining the distance of the object relative to the user comprises determining the distance of the object relative to the user based on the distance of the user's eye relative to the mobile computing device and the distance of the object relative to the mobile computing device; and determining the region of the object obscured by the mobile computing device from the user's perspective comprises determining the region of the object obscured by the mobile computing device based on the angular size of the mobile computing device from the user's perspective and the distance of the object relative to the user.

Example 8 includes the subject matter of any one of Examples 1-7, and wherein the angular size, δ, is determined according to δ = 2 arctan(d/(2D)), where d is the actual size of the corresponding object and D is the distance between the corresponding object and the point from which the angular size is determined.

Example 9 includes the subject matter of any one of Examples 1-8, and wherein capturing the first image of the user comprises capturing an image of the user's face; and determining the position of the user's eye relative to the mobile computing device comprises identifying the position of the user's eye in the image of the user's face.

Example 10 includes the subject matter of any one of Examples 1-9, and wherein determining the position of the user's eye relative to the mobile computing device comprises determining the distance of the user's eye from the mobile computing device.

Example 11 includes the subject matter of any one of Examples 1-10, and wherein determining the position of the user's eye relative to the mobile computing device comprises determining the position of the user's eye relative to the first camera; and determining the distance of the object in the real world environment relative to the mobile computing device comprises determining the distance of the object relative to the second camera.

Example 12 includes the subject matter of any one of Examples 1-11, and wherein the first camera has a field of view opposite the field of view of the second camera relative to the direction of the display.

Example 13 includes the subject matter of any one of Examples 1-12, and wherein determining the distance of the object in the real world environment relative to the mobile computing device comprises setting the distance of the object relative to the mobile computing device to a predefined distance.

Example 14 includes the subject matter of any one of Examples 1-13, and wherein the predefined distance is greater than the focal length of the second camera.

Example 15 includes the subject matter of any one of Examples 1-14, and further includes a display module to display an image on the display based on the generated post-projection of the real world environment captured by the second camera.

Example 16 includes the subject matter of any one of Examples 1-15, and wherein displaying the image based on the generated post-projection comprises displaying an image corresponding to the post-projection modified to include an augmented reality feature.

Example 17 includes the subject matter of any one of Examples 1-16, and wherein the at least one device parameter comprises at least one of: (i) a focal length of the second camera, (ii) a size of the display, (iii) a size of the mobile computing device, or (iv) a location of a component of the mobile computing device relative to a reference point.

Example 18 includes a method for adjusting the viewing angle of a captured image for display on a mobile computing device, the method comprising: capturing, by a first camera of the mobile computing device, a first image of a user of the mobile computing device; determining, by the mobile computing device, the position of the user's eye relative to the mobile computing device based on the captured first image; capturing, by a second camera of the mobile computing device different from the first camera, a second image of a real world environment of the mobile computing device; determining, by the mobile computing device, the distance of an object in the real world environment relative to the mobile computing device based on the captured second image; and generating, by the mobile computing device, a post-projection of the real world environment captured by the second camera to the display of the mobile computing device based on the following parameters: the determined distance of the object in the real world environment relative to the mobile computing device, the determined position of the user's eye relative to the mobile computing device, and at least one device parameter of the mobile computing device.

Example 19 includes the subject matter of Example 18, and wherein generating the post-projection comprises: determining, for each display pixel of the display, a ray from the user's eye through the corresponding display pixel to the object in the real world environment; identifying, for each determined ray, an image pixel of the captured second image of the real world environment corresponding to the location of the object in the real world environment pointed to by the corresponding ray; and constructing a post-projected image based on the identified image pixels for display on the display of the mobile computing device.

Example 20 includes the subject matter of any one of Examples 18 and 19, and wherein generating the post-projection comprises: determining the angular size of the mobile computing device from the user's perspective; determining the distance of the object in the real world environment relative to the user; determining a region of the object that is obscured by the mobile computing device from the user's perspective; determining a corrected zoom size of the second camera based on the determined region of the object obscured by the mobile computing device and the distance of the object relative to the mobile computing device; and generating a post-projected image based on the corrected zoom size for display on the display of the mobile computing device.

Example 21 includes the subject matter of any one of Examples 18-20, and wherein determining the corrected zoom size comprises determining the size of a region of the object, from the perspective of the second camera, corresponding to the region of the object that is obscured by the mobile computing device from the user's perspective.

Example 22 includes the subject matter of any one of Examples 18-21, and wherein the corrected zoom size is the zoom size required for the second camera to capture an image corresponding to the region of the object obscured by the mobile computing device from the user's perspective.

Example 23 includes the subject matter of any one of Examples 18-22, and wherein the corrected zoom size is the zoom size at which the second camera captures an image having only image pixels corresponding to features of the object within the region of the object obscured by the mobile computing device from the user's perspective.

Example 24 includes the subject matter of any one of Examples 18-23, and wherein determining the angular size of the mobile computing device from the user's perspective comprises determining the angular size based on the distance of the user's eye relative to the mobile computing device and the size of the mobile computing device; determining the distance of the object relative to the user comprises determining the distance of the object relative to the user based on the distance of the user's eye relative to the mobile computing device and the distance of the object relative to the mobile computing device; and determining the region of the object obscured by the mobile computing device from the user's perspective comprises determining the region of the object obscured by the mobile computing device based on the angular size of the mobile computing device from the user's perspective and the distance of the object relative to the user.

Example 25 includes the subject matter of any one of Examples 18-24, and wherein the angular size, δ, is determined according to δ = 2 arctan(d/(2D)), where d is the actual size of the corresponding object and D is the distance between the corresponding object and the point from which the angular size is determined.

Example 26 includes the subject matter of any one of Examples 18-25, and wherein capturing the first image of the user comprises capturing an image of the user's face; and determining the position of the user's eye relative to the mobile computing device comprises identifying the position of the user's eye in the image of the user's face.

Example 27 includes the subject matter of any one of Examples 18-26, and wherein determining the position of the user's eye relative to the mobile computing device comprises determining the distance of the user's eye from the mobile computing device.

Example 28 includes the subject matter of any one of Examples 18-27, and wherein determining the position of the user's eye relative to the mobile computing device comprises determining the position of the user's eye relative to the first camera; and determining the distance of the object in the real world environment relative to the mobile computing device comprises determining the distance of the object relative to the second camera.

Example 29 includes the subject matter of any one of Examples 18-28, and wherein the first camera has a field of view opposite the field of view of the second camera relative to the direction of the display.

Example 30 includes the subject matter of any one of Examples 18-29, and wherein determining the distance of the object in the real world environment relative to the mobile computing device comprises setting the distance of the object relative to the mobile computing device to a predefined distance.

Example 31 includes the subject matter of any one of Examples 18-30, and wherein the predefined distance is greater than the focal length of the second camera.

Example 32 includes the subject matter of any one of Examples 18-31, and further includes displaying an image on the display of the mobile computing device based on the generated post-projection of the real world environment captured by the second camera.

Example 33 includes the subject matter of any one of Examples 18-32, and wherein displaying the image based on the generated post-projection comprises displaying an image corresponding to the post-projection modified to include an augmented reality feature.

Example 34 includes the subject matter of any one of Examples 18-33, and wherein the at least one device parameter comprises at least one of: (i) a focal length of the second camera, (ii) a size of the display, (iii) a size of the mobile computing device, or (iv) a location of a component of the mobile computing device relative to a reference point.

Example 35 includes a mobile computing device comprising: a processor; and a memory having stored therein a plurality of instructions that, when executed by the processor, cause the mobile computing device to perform the method of any one of Examples 18-34.

Example 36 includes one or more machine-readable storage media comprising a plurality of instructions stored thereon that, in response to being executed, cause a mobile computing device to perform the method of any one of Examples 18-34.

Example 37 includes a mobile computing device for adjusting the viewing angle of a captured image for display, the mobile computing device comprising: means for capturing, by a first camera of the mobile computing device, a first image of a user of the mobile computing device; means for determining the position of the user's eye relative to the mobile computing device based on the captured first image; means for capturing, by a second camera of the mobile computing device different from the first camera, a second image of a real world environment of the mobile computing device; means for determining the distance of an object in the real world environment relative to the mobile computing device based on the captured second image; and means for generating a post-projection of the real world environment captured by the second camera to the display based on the following parameters: the determined distance of the object in the real world environment relative to the mobile computing device, the determined position of the user's eye relative to the mobile computing device, and at least one device parameter of the mobile computing device.

Example 38 includes the subject matter of Example 37, and wherein the means for generating the post-projection comprises: means for determining, for each display pixel of the display, a ray from the user's eye through the corresponding display pixel to the object in the real world environment; means for identifying, for each determined ray, an image pixel of the captured second image of the real world environment corresponding to the location of the object in the real world environment pointed to by the corresponding ray; and means for constructing a post-projected image based on the identified image pixels for display on the display of the mobile computing device.

Example 39 includes the subject matter of any one of Examples 37 and 38, and wherein the means for generating the post-projection comprises: means for determining the angular size of the mobile computing device from the user's perspective; means for determining the distance of the object in the real world environment relative to the user; means for determining a region of the object that is obscured by the mobile computing device from the user's perspective; means for determining a corrected zoom size of the second camera based on the determined region of the object obscured by the mobile computing device and the distance of the object relative to the mobile computing device; and means for generating a post-projected image based on the corrected zoom size for display on the display of the mobile computing device.

Example 40 includes the subject matter of any one of Examples 37-39, and wherein the means for determining the corrected zoom size comprises means for determining the size of a region of the object, from the perspective of the second camera, corresponding to the region of the object that is obscured by the mobile computing device from the user's perspective.

Example 41 includes the subject matter of any one of Examples 37-40, and wherein the corrected zoom size is the zoom size required for the second camera to capture an image corresponding to the region of the object obscured by the mobile computing device from the user's perspective.

Example 42 includes the subject matter of any one of Examples 37-41, and wherein the corrected zoom size is the zoom size at which the second camera captures an image having only image pixels corresponding to features of the object within the region of the object obscured by the mobile computing device from the user's perspective.

Example 43 includes the subject matter of any one of Examples 37-42, and wherein the means for determining the angular size of the mobile computing device from the user's perspective comprises means for determining the angular size based on the distance of the user's eye relative to the mobile computing device and the size of the mobile computing device; the means for determining the distance of the object relative to the user comprises means for determining the distance of the object relative to the user based on the distance of the user's eye relative to the mobile computing device and the distance of the object relative to the mobile computing device; and the means for determining the region of the object obscured by the mobile computing device from the user's perspective comprises means for determining the region of the object obscured by the mobile computing device based on the angular size of the mobile computing device from the user's perspective and the distance of the object relative to the user.

Example 44 includes the subject matter of any one of Examples 37-43, and wherein the angular size, δ, is determined according to δ = 2 arctan(d/(2D)), where d is the actual size of the corresponding object and D is the distance between the corresponding object and the point from which the angular size is determined.

Example 45 includes the subject matter of any one of Examples 37-44, and wherein the means for capturing the first image of the user comprises means for capturing an image of the user's face; and the means for determining the position of the user's eye relative to the mobile computing device comprises means for identifying the position of the user's eye in the image of the user's face.

Example 46 includes the subject matter of any one of Examples 37-45, and wherein the means for determining the position of the user's eye relative to the mobile computing device comprises means for determining the distance of the user's eye from the mobile computing device.

Example 47 includes the subject matter of any one of Examples 37-46, and wherein the means for determining the position of the user's eye relative to the mobile computing device comprises means for determining the position of the user's eye relative to the first camera; and the means for determining the distance of the object in the real world environment relative to the mobile computing device comprises means for determining the distance of the object relative to the second camera.

Example 48 includes the subject matter of any one of Examples 37-47, and wherein the first camera has a field of view opposite the field of view of the second camera relative to the direction of the display.

Example 49 includes the subject matter of any one of Examples 37-48, and wherein the means for determining the distance of the object in the real world environment relative to the mobile computing device comprises means for setting the distance of the object relative to the mobile computing device to a predefined distance.

Example 50 includes the subject matter of any one of Examples 37-49, and wherein the predefined distance is greater than the focal length of the second camera.

Example 51 includes the subject matter of any one of Examples 37-50, and further includes means for displaying an image on the display based on the generated post-projection of the real world environment captured by the second camera.

Example 52 includes the subject matter of any one of Examples 37-51, and wherein the means for displaying the image based on the generated post-projection comprises means for displaying an image corresponding to the post-projection modified to include an augmented reality feature.

Example 53 includes the subject matter of any one of Examples 37-52, and wherein the at least one device parameter comprises at least one of: (i) a focal length of the second camera, (ii) a size of the display, (iii) a size of the mobile computing device, or (iv) a location of a component of the mobile computing device relative to a reference point. |
Methods and apparatus to accelerate boot time zeroing of memory based on Non-Volatile Memory (NVM) technology are described. In an embodiment, a storage device stores a boot version number corresponding to a portion of a non-volatile memory. A memory controller logic causes an update of the stored boot version number in response to each subsequent boot event. The memory controller logic returns a zero in response to a read operation directed at the portion of the non-volatile memory and a mismatch between the stored boot version number and a current boot version number. Other embodiments are also disclosed and claimed. |
1. An apparatus comprising: a storage device to store a boot version number corresponding to a portion of a non-volatile memory; and memory controller logic, coupled to the non-volatile memory, to cause an update of the stored boot version number in response to each subsequent boot event, wherein the memory controller logic is to return a zero in response to a read operation directed at the portion of the non-volatile memory and a mismatch between the stored boot version number and a current boot version number.

2. The apparatus of claim 1, wherein the memory controller logic is to return the zero in response to the read operation directed at the portion of the non-volatile memory and the mismatch between the stored boot version number and the current boot version number without accessing the portion of the non-volatile memory.

3. The apparatus of claim 1, wherein the memory controller logic is to cause the update of the boot version number in response to a write operation to the portion of the non-volatile memory.

4. The apparatus of claim 1, wherein the current boot version number corresponds to an in-process boot event.

5. The apparatus of claim 1, wherein the memory controller logic is to cause the update of the boot version number by incrementing the boot version number.

6. The apparatus of claim 1, comprising refresh engine logic to continuously scrub each line of the non-volatile memory based on a reference address counter, wherein the reference address counter is to point to a line of the non-volatile memory to be scrubbed.

7. The apparatus of claim 6, wherein the reference address counter is to be updated in response to scrubbing of a corresponding line of the non-volatile memory.

8. The apparatus of claim 6, wherein the refresh engine logic is to perform a refresh cycle based on a pre-determined interval.

9. The apparatus of claim 6, wherein the refresh engine logic is to perform a refresh cycle based on a refresh version number.

10. The apparatus of claim 1, wherein the memory controller logic is to refrain from using the stored boot version number prior to a completion of a refresh cycle based on data stored in a current version refresh state table.

11. The apparatus of claim 1, wherein the portion of the non-volatile memory is to store metadata, wherein the metadata is to comprise a saved version number and a zero bit per sub-portion of the portion of the non-volatile memory.

12. The apparatus of claim 11, wherein the portion of the non-volatile memory is to comprise a memory line and the sub-portion is to comprise a sub-line.

13. The apparatus of claim 12, wherein the memory line is to comprise four sub-lines.

14. The apparatus of claim 1, wherein the non-volatile memory is to comprise three-dimensional cross point memory.

15. The apparatus of claim 1, wherein the non-volatile memory is to comprise the storage device.

16. The apparatus of claim 1, wherein one or more processor cores are coupled to the memory controller logic to access data stored in the non-volatile memory.

17. The apparatus of claim 1, wherein one or more of the memory controller logic, one or more processor cores, the storage device, and the non-volatile memory are on a same integrated circuit die.

18. The apparatus of claim 1, wherein the portion of the non-volatile memory is to comprise 256 bytes.

19. 
A method comprising: storing, in a storage device, a boot version number corresponding to a portion of a non-volatile memory; and causing an update of the stored boot version number in response to each subsequent boot event, wherein a zero is returned in response to a read operation directed at the portion of the non-volatile memory and a mismatch between the stored boot version number and a current boot version number.

20. The method of claim 19, wherein the zero is returned in response to the read operation directed at the portion of the non-volatile memory and the mismatch between the stored boot version number and the current boot version number without accessing the portion of the non-volatile memory.

21. The method of claim 19, further comprising causing the update of the boot version number in response to a write operation to the portion of the non-volatile memory.

22. A computer-readable medium comprising one or more instructions that when executed on a processor configure the processor to perform one or more operations of any one of claims 19 to 21.

23. An apparatus comprising means to perform a method as set forth in any one of claims 19 to 21. |
ACCELERATING BOOT TIME ZEROING OF MEMORY BASED ON NON-VOLATILE MEMORY (NVM) TECHNOLOGY

FIELD

The present disclosure generally relates to the field of electronics. More particularly, some embodiments generally relate to accelerating boot time zeroing of memory based on Non-Volatile Memory (NVM) technology.

BACKGROUND

Generally, memory used to store data in a computing system can be volatile (to store volatile information) or non-volatile (to store persistent information). Volatile data structures stored in volatile memory are generally used for temporary or intermediate information that is required to support the functionality of a program during its run-time. On the other hand, persistent data structures stored in non-volatile memory are available beyond the run-time of a program and can be reused. Moreover, new data is typically generated as volatile data first, before the user or programmer decides to make it persistent. For example, programmers or users may cause mapping (i.e., instantiating) of volatile structures in volatile main memory that is directly accessible by a processor. Persistent data structures, on the other hand, are instantiated on non-volatile storage devices like rotating disks attached to Input/Output (I/O or IO) buses or non-volatile memory based devices like flash memory.

As computing capabilities are enhanced in processors, one concern is the speed at which memory may be accessed by a processor. For example, to process data, a processor may need to first fetch the data from memory. After completion of the data processing, the results may need to be stored in the memory. Therefore, the memory access speed can have a direct effect on overall system performance.

Another important consideration is power consumption. For example, in mobile computing devices that rely on battery power, it is very important to reduce power consumption to allow the device to operate while mobile. Power consumption is also important for non-mobile computing devices, as excess power consumption may increase costs (e.g., due to additional power usage, increased cooling requirements, etc.), shorten component life, limit the locations at which a device may be used, etc. Hard disk drives provide a relatively low-cost storage solution and are used in many computing devices to provide non-volatile storage. Disk drives, however, use a lot of power when compared to solid state drives (including non-volatile memory such as flash memory), since a disk drive needs to spin its disks at a relatively high speed and move disk heads relative to the spinning disks to read/write data. This physical movement generates heat and increases power consumption. Also, flash drives are much faster at performing read and write operations than hard drives. To this end, many computing segments are migrating towards flash memory devices that are non-volatile.

BRIEF DESCRIPTION OF THE DRAWINGS

The detailed description is provided with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The use of the same reference numbers in different figures indicates similar or identical items. Figs. 1, 2, 5, 6, and 7 illustrate block diagrams of embodiments of computing systems, which may be utilized to implement various embodiments discussed herein. Fig. 3 illustrates a block diagram of a refresh state table, according to an embodiment. Fig. 4 illustrates a flow diagram of a method to perform boot-up and power-down sequences, in accordance with an embodiment.

DETAILED DESCRIPTION

In the following description, numerous specific details are set forth in order to provide a thorough understanding of various embodiments. However, various embodiments may be practiced without the specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to obscure the particular embodiments. Further, various aspects of embodiments may be performed using various means, such as integrated semiconductor circuits ("hardware"), computer-readable instructions organized into one or more programs ("software"), or some combination of hardware and software. For the purposes of this disclosure, reference to "logic" shall mean either hardware, software, or some combination thereof.

As discussed above, many computing segments are replacing volatile memory with Non-Volatile Memory (NVM). Generally, software applications require volatile memory to be zeroed prior to allocation by the OS (Operating System). As the memory footprint of applications grows, the time to initialize the memory grows as well. Typically, the OS initializes memory pages in the background and keeps a pool of these available for allocation. However, this is not the case following a boot (e.g., applying power to the system), as all of memory is in an uninitialized state. The time to zero out sufficient memory to start applications following a boot is growing rapidly as memory capacity and application demand increase. This problem becomes especially acute when using certain high-speed NVM technology such as PCM (Phase Change Memory) instead of DRAM (Dynamic Random Access Memory) as main memory. The capacities using this technology can be very large, but the write bandwidth is significantly slower than DRAM. Given this, boot time initialization could take on the order of several minutes and become a big issue in large systems.

To this end, some embodiments provide instant initialization of NVM based system memory. This in turn allows very large memory capacities to be supported in computing systems. Various types of NVM may be utilized in various embodiments, including NAND flash memory, NOR flash memory, etc. Such embodiments are aimed at avoiding the costly write transactions to the NVM during boot time zeroing. Instead of actually zeroing out memory, an embodiment uses a boot "Version Number" for all of NVM memory, stored along with each portion of this memory that is the size of any read or write operation performed by the processor (e.g., a 256B line in NVM). On every boot (e.g., in response to each subsequent boot event), this Boot Version Number is updated (e.g., incremented), and any write transaction/operation to a given portion/line in the NVM updates that line's Version Number. On a read operation, a mismatch between the Version Number stored in the line and the current Boot Version Number causes the memory controller logic to return a zero for the data. Hence, actual zeroing operations may be avoided, which significantly improves performance. This technique is sometimes referred to herein as the "fast zero" technique or functionality.

By contrast, some implementations of boot time memory zeroing may use brute force writing of zeroes to a large address range before applications are launched. Another approach may perform zeroing at a page level granularity on a demand basis.
Both these approaches, however, depend on actually writing to the NVM before handing the zeroed memory to the application. Both approaches will have a significantly long application launch time following a boot as memory capacity increases with NVM.

Moreover, the techniques discussed herein may be provided in various computing systems (e.g., including a solid state drive and/or a mobile device such as a smartphone, tablet, UMPC (Ultra-Mobile Personal Computer), laptop computer, Ultrabook™ computing device, smart watch, smart glasses, etc.), such as those discussed with reference to Figs. 1-7. More particularly, Fig. 1 illustrates a block diagram of a computing system 100, according to an embodiment. The system 100 includes one or more processors 102-1 through 102-N (generally referred to herein as "processors 102" or "processor 102"). The processors 102 may communicate via an interconnection or bus 104. Each processor may include various components, some of which are only discussed with reference to processor 102-1 for clarity. Accordingly, each of the remaining processors 102-2 through 102-N may include the same or similar components discussed with reference to the processor 102-1.

In an embodiment, the processor 102-1 may include one or more processor cores 106-1 through 106-M (referred to herein as "cores 106," or more generally as "core 106"), a cache 108 (which may be a shared cache or a private cache in various embodiments), and/or a router 110. The processor cores 106 may be implemented on a single integrated circuit (IC) chip. Moreover, the chip may include one or more shared and/or private caches (such as cache 108), buses or interconnections (such as a bus or interconnection 112), logic 120, logic 150, memory controllers (such as those discussed with reference to Figs. 5-7), NVM (Non-Volatile Memory) 152 (e.g., including flash memory, an SSD (with NAND memory cells)), etc., or other components.

In one embodiment, the router 110 may be used to communicate between various components of the processor 102-1 and/or system 100. Moreover, the processor 102-1 may include more than one router 110. Furthermore, the multitude of routers 110 may be in communication to enable data routing between various components inside or outside of the processor 102-1.

The cache 108 may store data (e.g., including instructions) that is utilized by one or more components of the processor 102-1, such as the cores 106. For example, the cache 108 may locally cache data stored in a volatile memory 114 for faster access by the components of the processor 102. As shown in Fig. 1, the memory 114 may be in communication with the processors 102 via the interconnection 104. In an embodiment, the cache 108 (which may be shared) may have various levels; for example, the cache 108 may be a mid-level cache and/or a last-level cache (LLC). Also, each of the cores 106 may include a level 1 (L1) cache (116-1) (generally referred to herein as "L1 cache 116"). Various components of the processor 102-1 may communicate with the cache 108 directly, through a bus (e.g., the bus 112), and/or a memory controller or hub.

As shown in Fig. 1, memory 114 may be coupled to other components of system 100 through a volatile memory controller 120. System 100 also includes NVM memory controller logic 150 to couple NVM memory 152 to various components of the system 100.
Memory 152 includes non-volatile memory such as nanowire memory, Ferro-electric Transistor Random Access Memory (FeTRAM), Magnetoresistive Random Access Memory (MRAM), flash memory, Spin Torque Transfer Random Access Memory (STTRAM), Resistive Random Access Memory, 3D Cross Point Memory such as PCM (Phase Change Memory), an SSD with NAND/NOR memory, etc., in some embodiments. Even though the memory controller 150 is shown to be coupled between the interconnection 104 and the memory 152, the logic 150 may be located elsewhere in system 100. For example, logic 150 (or portions of it) may be provided within one of the processors 102, controller 120, etc., in various embodiments. In an embodiment, logic 150 and NVM 152 are included in an SSD. Moreover, logic 150 controls access to one or more NVM devices 152 (e.g., where the one or more NVM devices are provided on the same integrated circuit die in some embodiments), as discussed herein with respect to various embodiments.

As discussed above, some implementations of boot time memory zeroing may use brute force writing of zeroes to a large address range before applications are launched. Another approach may perform zeroing at a page level granularity on a demand basis. Both these approaches, however, depend on actually writing to the NVM before handing the zeroed memory to the application. Both approaches will have a significantly long application launch time following a boot as memory capacity increases with NVM.

To this end, some embodiments provide instant initialization of NVM based system memory, thus allowing very large memory capacities to be supported in computing systems. Such embodiments are aimed at avoiding the costly write transactions to the NVM during boot time zeroing. Instead, an embodiment uses a boot "Version Number" for each portion of the NVM that is the size of any read or write operation performed by the processor (e.g., a 256B line in NVM). On every boot (e.g., in response to each subsequent boot event), this Boot Version Number is updated (e.g., incremented), and any write transaction/operation to a given portion/line in the NVM updates that line's Version Number. On a read operation, a mismatch between the Version Number stored in the line and the current Boot Version Number causes the memory controller logic to return a zero for the data. Hence, actual zeroing transactions may be avoided, which significantly improves performance.

Accordingly, very large memory capacities may be utilized without the penalty of slow boot times. One issue with incrementing or otherwise updating a Version Number on every boot is that eventually a Version Number will be re-cycled. This condition is referred to as "roll-over". To avoid incorrect zeroing, a roll-over can only happen when it can be guaranteed that the Version Number being re-cycled is not currently stored in any of the NVM lines. One way to ensure this is to write the entire NVM with an unused Version Number every time a roll-over occurs. This is a very costly operation. A very large Version Number would make the roll-over condition unlikely but would also add to the cost of storing the number in the NVM. A small Version Number may be used in such a way that a roll-over condition is a (e.g., exceptionally) rare event, as further discussed below.

Fig. 2 illustrates a block diagram of a portion of a computing system, according to an embodiment. Central Processing Unit (CPU) 202 may be the same or similar to the processors discussed with reference to Figs. 1 and 5-7.
CPU 202 is coupled to an NVM Dual In-line Memory Module (DIMM) 204. The DIMM 204 in turn includes the NVM memory controller logic 150 and NVM 152 (illustrated as a memory address space in Fig. 2). Logic 150 and NVM 152 are coupled by an NVM channel 206.

As illustrated in Fig. 2, the NVM controller logic 150 includes a memory interface (I/F) to communicate read/write transactions with the CPU 202. The read/write commands and data are transmitted to read/write (R/W) control logic that communicates with the NVM 152 via the NVM channel 206 and based on information from refresh engine logic, as further discussed below. The controller logic 150 also includes a micro controller (uCTL) and memory (uCTL memory) to store data, as further discussed herein.

Moreover, in the block diagram of Fig. 2, the NVM 152 is used as memory implemented on a DIMM connected to the CPU via a Memory Interface (I/F), e.g., similar to DDR. The DIMM has the NVM Controller logic 150 that in turn interfaces to the NVM 152, which provides the system memory address range to OS and application software.

The NVM controller logic 150 maintains a Boot Version Number labeled CV (for "Current Version"). The CV is incremented following every reboot/boot from the last CV saved in the NVM (e.g., in uCTL memory or another memory in the NVM DIMM 204) on a power-down. In an embodiment, the CV size is 8 bits. As shown in Fig. 2, the NVM address space is subdivided into "lines". In an embodiment, the data size of a line is 256B. The data portion may be further subdivided into "sub-lines" depending on the CPU read or write access granularity. In an embodiment, there are four 64B sub-lines per line.

In addition to the data, each line of the NVM address space may also store some metadata. A portion of the metadata is used for the Fast Zero functionality. These elements include: (1) a Saved Version number, or SV (e.g., 8 bits); and (2) one zero bit per sub-line (four bits in an embodiment).

The NVM Controller logic 150 also has Refresh Engine logic that continuously scrubs each line in the NVM address space. The Refresh Engine uses a Ref-Addr-Cnt (reference address counter) to point to a given line to be scrubbed. Once the line is scrubbed, the counter is updated (e.g., incremented). When the entire address space has been scrubbed, a "Refresh Cycle" has completed, and the Refresh Engine then starts the next Refresh Cycle. A Refresh Cycle typically completes in a pre-determined interval, assuming it is not interrupted by a power-down event. In one embodiment, the uninterrupted Refresh Cycle duration is two days; the entire NVM address range is scrubbed in this timeframe.

Generally, a "scrub" by the Refresh Engine involves reading a line, making any error corrections, and then writing it back. The Refresh Engine also has a Refresh Version number, or "RV", that may be dedicated to its use alone. In an embodiment, the CV can never equal RV. If a Refresh Cycle is interrupted by a power-down event, the state of the Ref-Addr-Cnt is saved in the NVM (e.g., in uCTL memory or another memory in the NVM DIMM 204). On power-up, the state is restored in the NVM Controller, and the Refresh Engine resumes the Refresh Cycle where it left off. This is important in order to ensure that no portion of the NVM address space is skipped due to the interruption.

The CPU 202 issues read or write operations directed at the sub-lines to the NVM Controller logic 150.
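As a concrete illustration of the state just described, the per-line metadata and controller state might be modeled as follows (a minimal Python sketch; the field widths follow the embodiment above, and all names are illustrative, not from the source):

    from dataclasses import dataclass, field

    SUB_LINES_PER_LINE = 4      # four 64B sub-lines per 256B line (one embodiment)

    @dataclass
    class LineMetadata:
        sv: int = 0             # Saved Version number, SV (8 bits)
        # One zero bit per sub-line; its use on partial-line writes is an
        # assumption, spelled out in the dispatch sketch after Table 1.
        zero_bits: list = field(default_factory=lambda: [1] * SUB_LINES_PER_LINE)
        data: list = field(default_factory=lambda: [bytes(64)] * SUB_LINES_PER_LINE)

    @dataclass
    class ControllerState:
        cv: int = 1             # Current (boot) Version number; never equals rv
        rv: int = 0             # Refresh Version number, reserved for the Refresh Engine
        ref_addr_cnt: int = 0   # next line to be scrubbed by the Refresh Engine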
The NVM Controller 150 in turn uses the CV and the line metadata to determine the action to take. Table 1 below summarizes the various scenarios in accordance with some embodiments.

Table 1

    Operation | SV == CV                       | SV != CV (including SV == RV)
    ----------|--------------------------------|----------------------------------------------
    Read      | Return the stored line data    | Return zero data (no access to the NVM line)
    Write     | Write the data to the sub-line | Write the data and update the line's SV = CV

Following a reboot, and before any CPU write operation updates the NVM lines, any CPU read will encounter the case where SV != CV (the exception to this is the Roll-Over case, discussed further below). Given this, the CPU reads only "zeroes" from the entire NVM address space. The incrementing of CV on a reboot is thus equivalent to instantaneously zeroing the entire memory space.

As the CPU starts to write to the NVM address space, the SVs of the updated lines are set to CV. Any read following this will return the new data and not the emulated zero. Additionally, for any lines that are not written following a boot, the Refresh Engine will update SV = RV, thus ensuring that all stale version numbers are removed once per Refresh Cycle.

Referring to the Roll-Over case mentioned above, a CV cannot be re-used prior to a complete Refresh Cycle since the last time that CV was used. In order to track this, a table called the CV Refresh State Table, or CVRT 300, illustrated in Fig. 3, is maintained according to an embodiment. The CVRT 300 may be stored in the NVM (e.g., in uCTL memory or another memory in the NVM DIMM 204). The size of the table is dictated by the size of CV; an 8-bit CV will have a 256-entry table, and so on. One entry (= RV) is never used in an embodiment. Each entry in the table has the CV Refresh State for a given CV.

Referring to Fig. 3, the V bit is set for a CV that has been used at least once. The Last Refresh Address is the Ref-Addr-Cnt value at the time a power-down happened the last time the CV was used. The Ref State Flags are additional bits that allow the determination of whether at least one Refresh Cycle has completed since the last time the CV was used. On every boot-up sequence, when a new CV is generated, the NVM Controller logic 150 checks the CVRT 300 to see whether a complete Refresh Cycle has occurred since the last time this CV was used. If it has not, a "Roll-Over" condition is said to exist, and the Fast Zero mechanism cannot be used to generate zeroes in response to read operations.

There are at least two options for handling the Roll-Over state. One is to have the system software (e.g., including the BIOS (Basic Input Output System)) or the NVM Controller logic 150 write the entire NVM address space with zero data and update SV = CV at the same time. The other option is to inform the system software (e.g., the OS) that the Fast Zero functionality is not to be used for this boot cycle, and that the OS should fall back to its normal approach to zeroing out pages.

Furthermore, the Roll-Over condition should be a rare exception as long as the CV size is reasonable. For instance, for an 8-bit CV, there would need to be 254 reboots within the normal uninterrupted Refresh Cycle period in order for a Roll-Over condition to occur. Since a complete Refresh Cycle is likely to complete within a day or two, such a scenario is likely limited to specific situations like system validation and is unlikely to happen during normal operation. Thus, for all practical purposes, this fast zero mechanism should work every time.

Fig. 4 illustrates a flow diagram of a method 400 to perform boot-up and power-down sequences, in accordance with an embodiment. In one embodiment, various components discussed with reference to Figs. 1-3 and 5-7 may be utilized to perform one or more of the operations discussed with reference to Fig. 4.
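Using the structures sketched above, the read/write dispatch of Table 1 and the boot-time sequence that Fig. 4 walks through might look as follows. This is a hedged sketch: the handling of the per-sub-line zero bits on partial writes is an assumption not spelled out in the source, and refresh_cycle_completed_since, start_refresh_engine, record_boot, and persist are hypothetical helpers:

    def handle_read(state, line, sub_line):
        # Table 1: a version mismatch means the line has not been written since
        # this boot, so zeroed memory is emulated without accessing the media.
        if line.sv != state.cv:
            return bytes(64)
        if line.zero_bits[sub_line]:   # assumption: sub-line still "virtually zero"
            return bytes(64)
        return line.data[sub_line]

    def handle_write(state, line, sub_line, data):
        if line.sv != state.cv:
            # First write to this line since boot: claim it for the current boot
            # version; marking untouched sub-lines virtually zero is an assumption.
            line.sv = state.cv
            line.zero_bits = [1] * SUB_LINES_PER_LINE
        line.zero_bits[sub_line] = 0
        line.data[sub_line] = data

    def boot_up(state, cvrt):
        # Operation 406: advance CV, skipping RV (CV can never equal RV).
        state.cv = (state.cv + 1) % 256
        if state.cv == state.rv:
            state.cv = (state.cv + 1) % 256
        # Operations 408-414: roll-over check against the CVRT.
        fast_zero_enabled = refresh_cycle_completed_since(cvrt, state.cv)
        start_refresh_engine(state.ref_addr_cnt + 1)   # operation 416
        record_boot(cvrt, state.cv)                    # operation 418
        return fast_zero_enabled                       # operation 420

    def power_down(state, cvrt):
        # Operations 450-452: save CV, Ref-Addr-Cnt, and refresh state in NVM.
        persist(state.cv, state.ref_addr_cnt, cvrt)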
Fig. 4 illustrates a flow diagram of a method 400 to perform boot-up and power-down sequences, in accordance with an embodiment. In one embodiment, various components discussed with reference to Figs. 1-3 and 5-7 may be utilized to perform one or more of the operations discussed with reference to Fig. 4. In an embodiment, one or more operations of method 400 are implemented in logic (e.g., firmware), such as logic 150 of Fig. 1.

Referring to Figs. 1-4, at an operation 402, the fast zero functionality boot-up sequence is initiated. At an operation 404, the saved state (the last CV, the ref-addr-cnt, and the CVRT 300) is read from the NVM 152. At an operation 406, the CV value is updated (e.g., incremented). At an operation 408, the CV state is read from the CVRT 300 and the saved refresh state is compared against the saved ref-addr-cnt. An operation 410 determines (e.g., based on the comparison of operation 408) whether a CV roll-over condition exists. If no roll-over condition exists, an operation 412 clears the roll-over flag to indicate that fast zero operations are to proceed; otherwise, an operation 414 sets the roll-over flag to indicate no fast zero functionality. After operation 412 or 414, the refresh engine is started at an updated/incremented ref-addr-cnt (or ref-addr-cnt+1) at operation 416. The CVRT is updated and stored at operation 418. At an operation 420, the fast zero initialization is complete.

At operation 450, the power-down fast zero sequence is initiated. At an operation 452, the CV is saved as the last CV, along with the ref-addr-cnt and the refresh state.
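A compact sketch of the boot-up portion of method 400, assuming a CVRT entry layout like Fig. 3 and a stand-in for the Ref State Flags test (the real flag logic is not spelled out in the text):

```c
#include <stdint.h>

#define RV 0xFF  /* assumed reserved Refresh Version value */

typedef struct {
    uint8_t  valid;          /* V bit: CV used at least once         */
    uint32_t last_ref_addr;  /* Ref-Addr-Cnt at last power-down      */
    uint8_t  ref_state;      /* flags: full Refresh Cycle since use? */
} cvrt_entry_t;

static cvrt_entry_t cvrt[256];  /* 8-bit CV -> 256 entries */

/* Stand-in for the Ref State Flags test of Fig. 3. */
static int refresh_cycle_completed_since(const cvrt_entry_t *e)
{
    return e->ref_state != 0;
}

/* Boot-up operations 402-420: returns nonzero if the Fast Zero
 * mechanism may be used for this boot cycle. */
int fast_zero_boot(uint8_t *cv)
{
    *cv += 1;                            /* operation 406            */
    if (*cv == RV)
        *cv += 1;                        /* CV must never equal RV   */

    cvrt_entry_t *e = &cvrt[*cv];        /* operation 408            */
    int rollover = e->valid && !refresh_cycle_completed_since(e);

    e->valid = 1;                        /* operation 418: update CVRT */
    return !rollover;                    /* operations 410-414       */
}
```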
Fig. 5 illustrates a block diagram of a computing system 500 in accordance with an embodiment of the invention. The computing system 500 may include one or more central processing unit(s) (CPUs) 502 or processors that communicate via an interconnection network (or bus) 504. The processors 502 may include a general purpose processor, a network processor (that processes data communicated over a computer network 503), an application processor (such as those used in cell phones, smart phones, etc.), or other types of processors (including a reduced instruction set computer (RISC) processor or a complex instruction set computer (CISC) processor). Various types of computer networks 503 may be utilized, including wired (e.g., Ethernet, Gigabit, Fiber, etc.) or wireless networks (such as cellular, 3G (Third-Generation Cell-Phone Technology or 3rd Generation Wireless Format (UWCC)), 5G, Low Power Embedded (LPE), etc.). Moreover, the processors 502 may have a single or multiple core design. The processors 502 with a multiple core design may integrate different types of processor cores on the same integrated circuit (IC) die. Also, the processors 502 with a multiple core design may be implemented as symmetrical or asymmetrical multiprocessors.

In an embodiment, one or more of the processors 502 may be the same as or similar to the processors 102 of Fig. 1. For example, one or more of the processors 502 may include one or more of the cores 106 and/or cache 108. Also, the operations discussed with reference to Figs. 1-4 may be performed by one or more components of the system 500.

A chipset 506 may also communicate with the interconnection network 504. The chipset 506 may include a graphics and memory control hub (GMCH) 508. The GMCH 508 may include a memory controller 510 (which may be the same as or similar to the memory controller 120 of Fig. 1 in an embodiment) that communicates with the memory 114. System 500 may also include logic 150 (e.g., coupled to NVM 152) in various locations (such as those shown in Fig. 5, but it can be in other locations within system 500 (not shown)). The memory 114 may store data, including sequences of instructions that are executed by the CPU 502, or any other device included in the computing system 500.

In one embodiment of the invention, the memory 114 may include one or more volatile storage (or memory) devices such as random access memory (RAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), static RAM (SRAM), or other types of storage devices. Nonvolatile memory may also be utilized, such as a hard disk, flash, 3D Cross Point Memory (such as PCM), Resistive Random Access Memory, NAND memory, NOR memory, and STTRAM. Additional devices may communicate via the interconnection network 504, such as multiple CPUs and/or multiple system memories.

The GMCH 508 may also include a graphics interface 514 that communicates with a graphics accelerator 516. In one embodiment of the invention, the graphics interface 514 may communicate with the graphics accelerator 516 via an accelerated graphics port (AGP) or Peripheral Component Interconnect (PCI) (or PCI express (PCIe)) interface. In an embodiment of the invention, a display 517 (such as a flat panel display, touch screen, etc.) may communicate with the graphics interface 514 through, for example, a signal converter that translates a digital representation of an image stored in a storage device such as video memory or system memory into display signals that are interpreted and displayed by the display. The display signals produced by the display device may pass through various control devices before being interpreted by and subsequently displayed on the display 517.

A hub interface 518 may allow the GMCH 508 and an input/output control hub (ICH) 520 to communicate. The ICH 520 may provide an interface to I/O devices that communicate with the computing system 500. The ICH 520 may communicate with a bus 522 through a peripheral bridge (or controller) 524, such as a peripheral component interconnect (PCI) bridge, a universal serial bus (USB) controller, or other types of peripheral bridges or controllers. The bridge 524 may provide a data path between the CPU 502 and peripheral devices. Other types of topologies may be utilized. Also, multiple buses may communicate with the ICH 520, e.g., through multiple bridges or controllers. Moreover, other peripherals in communication with the ICH 520 may include, in various embodiments, integrated drive electronics (IDE) or small computer system interface (SCSI) hard drive(s), USB port(s), a keyboard, a mouse, parallel port(s), serial port(s), floppy disk drive(s), digital output support (e.g., digital video interface (DVI)), or other devices.

The bus 522 may communicate with an audio device 526, one or more disk drive(s) 528, and a network interface device 530 (which is in communication with the computer network 503, e.g., via a wired or wireless interface). As shown, the network interface device 530 may be coupled to an antenna 531 to wirelessly (e.g., via an Institute of Electrical and Electronics Engineers (IEEE) 802.11 interface (including IEEE 802.11a/b/g/n, etc.), cellular interface, 3G, 5G, LPE, etc.) communicate with the network 503. Other devices may communicate via the bus 522. Also, various components (such as the network interface device 530) may communicate with the GMCH 508 in some embodiments. In addition, the processor 502 and the GMCH 508 may be combined to form a single chip. Furthermore, the graphics accelerator 516 may be included within the GMCH 508 in other embodiments.

Furthermore, the computing system 500 may include volatile and/or nonvolatile memory (or storage).
For example, nonvolatile memory may include one or more of the following: read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), electrically EPROM (EEPROM), a disk drive (e.g., 528), a floppy disk, a compact disk ROM (CD-ROM), a digital versatile disk (DVD), flash memory, a magneto-optical disk, or other types of nonvolatile machine-readable media that are capable of storing electronic data (e.g., including instructions).

Fig. 6 illustrates a computing system 600 that is arranged in a point-to-point (PtP) configuration, according to an embodiment. In particular, Fig. 6 shows a system where processors, memory, and input/output devices are interconnected by a number of point-to-point interfaces. The operations discussed with reference to Figs. 1-5 may be performed by one or more components of the system 600.

As illustrated in Fig. 6, the system 600 may include several processors, of which only two, processors 602 and 604, are shown for clarity. The processors 602 and 604 may each include a local memory controller hub (MCH) 606 and 608 to enable communication with memories 610 and 612. The memories 610 and/or 612 may store various data such as those discussed with reference to the memory 114 or NVM 152 of Figs. 1 and/or 5. Also, MCH 606 and 608 may include the memory controller 120 and/or logic 150 of Fig. 1 in some embodiments.

In an embodiment, the processors 602 and 604 may be one of the processors 502 discussed with reference to Fig. 5. The processors 602 and 604 may exchange data via a point-to-point (PtP) interface 614 using PtP interface circuits 616 and 618, respectively. Also, the processors 602 and 604 may each exchange data with a chipset 620 via individual PtP interfaces 622 and 624 using point-to-point interface circuits 626, 628, 630, and 632. The chipset 620 may further exchange data with a high-performance graphics circuit 634 via a high-performance graphics interface 636, e.g., using a PtP interface circuit 637. As discussed with reference to Fig. 5, the graphics interface 636 may be coupled to a display device (e.g., display 517) in some embodiments.

As shown in Fig. 6, one or more of the cores 106 and/or cache 108 of Fig. 1 may be located within the processors 602 and 604. Other embodiments, however, may place them in other circuits, logic units, or devices within the system 600 of Fig. 6. Furthermore, other embodiments may distribute them throughout several circuits, logic units, or devices illustrated in Fig. 6.

The chipset 620 may communicate with a bus 640 using a PtP interface circuit 641. The bus 640 may have one or more devices that communicate with it, such as a bus bridge 642 and I/O devices 643. Via a bus 644, the bus bridge 642 may communicate with other devices such as a keyboard/mouse 645, communication devices 646 (such as modems, network interface devices, or other communication devices that may communicate with the computer network 503, as discussed with reference to network interface device 530, for example, including via antenna 531), an audio I/O device, and/or a data storage device 648. The data storage device 648 may store code 649 that may be executed by the processors 602 and/or 604.

In some embodiments, one or more of the components discussed herein can be embodied on a System On Chip (SOC) device. Fig. 7 illustrates a block diagram of an SOC package in accordance with an embodiment. As illustrated in Fig.
7, SOC 702 includes one or more Central Processing Unit (CPU) cores 720, one or more Graphics Processor Unit (GPU) cores 730, an Input/Output (I/O) interface 740, and a memory controller 742. Various components of the SOC package 702 may be coupled to an interconnect or bus such as discussed herein with reference to the other figures. Also, the SOC package 702 may include more or fewer components, such as those discussed herein with reference to the other figures. Further, each component of the SOC package 702 may include one or more other components, e.g., as discussed with reference to the other figures herein. In one embodiment, SOC package 702 (and its components) is provided on one or more Integrated Circuit (IC) die, e.g., which are packaged onto a single semiconductor device.

As illustrated in Fig. 7, SOC package 702 is coupled to a memory 760 (which may be similar to or the same as memory discussed herein with reference to the other figures) via the memory controller 742. In an embodiment, the memory 760 (or a portion of it) can be integrated on the SOC package 702.

The I/O interface 740 may be coupled to one or more I/O devices 770, e.g., via an interconnect and/or bus such as discussed herein with reference to other figures. I/O device(s) 770 may include one or more of a keyboard, a mouse, a touchpad, a display, an image/video capture device (such as a camera or camcorder/video recorder), a touch screen, a speaker, or the like. Furthermore, SOC package 702 may include/integrate the logic 150 in an embodiment. Alternatively, the logic 150 may be provided outside of the SOC package 702 (i.e., as discrete logic).

The following examples pertain to further embodiments. Example 1 includes an apparatus comprising: a storage device to store a boot version number corresponding to a portion of a nonvolatile memory; and memory controller logic, coupled to the non-volatile memory, to cause an update of the stored boot version number in response to each subsequent boot event, wherein the memory controller logic is to return a zero in response to a read operation directed at the portion of the non-volatile memory and a mismatch between the stored boot version number and a current boot version number. Example 2 includes the apparatus of example 1, wherein the memory controller logic is to return the zero in response to the read operation directed at the portion of the non-volatile memory and the mismatch between the stored boot version number and the current boot version number without accessing the portion of the non-volatile memory. Example 3 includes the apparatus of example 1, wherein the memory controller logic is to cause the update of the boot version number in response to a write operation to the portion of the nonvolatile memory. Example 4 includes the apparatus of example 1, wherein the current boot version number corresponds to an in-process boot event. Example 5 includes the apparatus of example 1, wherein the memory controller logic is to cause the update of the boot version number by incrementing the boot version number. Example 6 includes the apparatus of example 1, comprising refresh engine logic to continuously scrub each line of the non-volatile memory based on a reference address counter, wherein the reference address counter is to point to a line of the nonvolatile memory to be scrubbed. Example 7 includes the apparatus of example 6, wherein the reference address counter is to be updated in response to scrubbing of a corresponding line of the non-volatile memory.
Example 8 includes the apparatus of example 6, wherein the refresh engine logic is to perform a refresh cycle based on a pre-determined interval. Example 9 includes the apparatus of example 6, wherein the refresh engine logic is to perform a refresh cycle based on a refresh version number. Example 10 includes the apparatus of example 1, wherein the memory controller logic is to refrain from using the stored boot version number prior to a completion of a refresh cycle based on data stored in a current version refresh state table. Example 11 includes the apparatus of example 1, wherein the portion of the non-volatile memory is to store metadata, wherein the metadata is to comprise a saved version number and a zero bit per sub-portion of the portion of the non-volatile memory. Example 12 includes the apparatus of example 11, wherein the portion of the non-volatile memory is to comprise a memory line and the sub-portion is to comprise a sub-line. Example 13 includes the apparatus of example 12, wherein the memory line is to comprise four sub-lines. Example 14 includes the apparatus of example 1, wherein the nonvolatile memory is to comprise three dimensional cross point memory. Example 15 includes the apparatus of example 14, wherein the non-volatile memory is to comprise a NAND flash memory or a NOR flash memory. Example 16 includes the apparatus of example 1, wherein the non-volatile memory is to comprise the storage device. Example 17 includes the apparatus of example 1, wherein one or more processor cores are coupled to the memory controller logic to access data stored in the non-volatile memory. Example 18 includes the apparatus of example 1, wherein one or more of the memory controller logic, one or more processor cores, the storage device, and the non-volatile memory are on a same integrated circuit die. Example 19 includes the apparatus of example 1, wherein the portion of the non-volatile memory is to comprise 256 bytes.

Example 20 includes a method comprising: storing, in a storage device, a boot version number corresponding to a portion of a non-volatile memory; and causing an update of the stored boot version number in response to each subsequent boot event, wherein a zero is returned in response to a read operation directed at the portion of the non-volatile memory and a mismatch between the stored boot version number and a current boot version number. Example 21 includes the method of example 20, wherein the zero is returned in response to the read operation directed at the portion of the non-volatile memory and the mismatch between the stored boot version number and the current boot version number without accessing the portion of the non-volatile memory. Example 22 includes the method of example 20, further comprising causing the update of the boot version number in response to a write operation to the portion of the non-volatile memory.

Example 23 includes a computer-readable medium comprising one or more instructions that when executed on a processor configure the processor to perform one or more operations to: store, in a storage device, a boot version number corresponding to a portion of a non-volatile memory; and cause an update of the stored boot version number in response to each subsequent boot event, wherein a zero is returned in response to a read operation directed at the portion of the non-volatile memory and a mismatch between the stored boot version number and a current boot version number.
Example 24 includes the computer-readable medium of example 23, further comprising one or more instructions that when executed on the processor configure the processor to perform one or more operations to cause the zero to be returned in response to the read operation directed at the portion of the non-volatile memory and the mismatch between the stored boot version number and the current boot version number without accessing the portion of the non-volatile memory. Example 25 includes the computer-readable medium of example 23, further comprising one or more instructions that when executed on the processor configure the processor to perform one or more operations to cause the update of the boot version number in response to a write operation to the portion of the non-volatile memory.

Example 26 includes a system comprising: a processor; a storage device, coupled to the processor, to store a boot version number corresponding to a portion of a non-volatile memory; and memory controller logic, coupled to the non-volatile memory, to cause an update of the stored boot version number in response to each subsequent boot event, wherein the memory controller logic is to return a zero in response to a read operation directed at the portion of the non-volatile memory and a mismatch between the stored boot version number and a current boot version number. Example 27 includes the system of example 26, wherein the memory controller logic is to return the zero in response to the read operation directed at the portion of the non-volatile memory and the mismatch between the stored boot version number and the current boot version number without accessing the portion of the non-volatile memory. Example 28 includes the system of example 26, wherein the memory controller logic is to cause the update of the boot version number in response to a write operation to the portion of the non-volatile memory. Example 29 includes the system of example 26, wherein the current boot version number corresponds to an in-process boot event. Example 30 includes the system of example 26, wherein the memory controller logic is to cause the update of the boot version number by incrementing the boot version number. Example 31 includes the system of example 26, comprising refresh engine logic to continuously scrub each line of the non-volatile memory based on a reference address counter, wherein the reference address counter is to point to a line of the non-volatile memory to be scrubbed. Example 32 includes the system of example 31, wherein the reference address counter is to be updated in response to scrubbing of a corresponding line of the non-volatile memory. Example 33 includes the system of example 31, wherein the refresh engine logic is to perform a refresh cycle based on a pre-determined interval. Example 34 includes the system of example 31, wherein the refresh engine logic is to perform a refresh cycle based on a refresh version number. Example 35 includes the system of example 26, wherein the memory controller logic is to refrain from using the stored boot version number prior to a completion of a refresh cycle based on data stored in a current version refresh state table. Example 36 includes the system of example 26, wherein the portion of the non-volatile memory is to store metadata, wherein the metadata is to comprise a saved version number and a zero bit per sub-portion of the portion of the non-volatile memory.
Example 37 includes the system of example 36, wherein the portion of the non-volatile memory is to comprise a memory line and the sub-portion is to comprise a sub-line. Example 38 includes the system of example 37, wherein the memory line is to comprise four sub-lines. Example 39 includes the system of example 26, wherein the non-volatile memory is to comprise flash memory. Example 40 includes the system of example 39, wherein the flash memory is to comprise a NAND flash memory or a NOR flash memory. Example 41 includes the system of example 26, wherein the non-volatile memory is to comprise the storage device. Example 42 includes the system of example 26, wherein one or more processor cores of the processor are coupled to the memory controller logic to access data stored in the non-volatile memory. Example 43 includes the system of example 26, wherein one or more of the memory controller logic, one or more processor cores of the processor, the storage device, and the non-volatile memory are on a same integrated circuit die. Example 44 includes the system of example 26, wherein the portion of the non-volatile memory is to comprise 256 bytes. Example 45 includes an apparatus comprising means to perform a method as set forth in any preceding example. Example 46 includes machine-readable storage including machine-readable instructions that, when executed, implement a method or realize an apparatus as set forth in any preceding example.

In various embodiments, the operations discussed herein, e.g., with reference to Figs. 1-7, may be implemented as hardware (e.g., circuitry), software, firmware, microcode, or combinations thereof, which may be provided as a computer program product, e.g., including a tangible (e.g., non-transitory) machine-readable or computer-readable medium having stored thereon instructions (or software procedures) used to program a computer to perform a process discussed herein. Also, the term "logic" may include, by way of example, software, hardware, or combinations of software and hardware. The machine-readable medium may include a storage device such as those discussed with respect to Figs. 1-7.

Additionally, such tangible computer-readable media may be downloaded as a computer program product, wherein the program may be transferred from a remote computer (e.g., a server) to a requesting computer (e.g., a client) by way of data signals (such as in a carrier wave or other propagation medium) via a communication link (e.g., a bus, a modem, or a network connection).

Reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least an implementation. The appearances of the phrase "in one embodiment" in various places in the specification may or may not be all referring to the same embodiment.

Also, in the description and claims, the terms "coupled" and "connected," along with their derivatives, may be used. In some embodiments, "connected" may be used to indicate that two or more elements are in direct physical or electrical contact with each other. "Coupled" may mean that two or more elements are in direct physical or electrical contact.
However, "coupled" may also mean that two or more elements may not be in direct contact with each other, but may still cooperate or interact with each other.Thus, although embodiments have been described in language specific to structural features and/or methodological acts, it is to be understood that claimed subject matter may not be limited to the specific features or acts described. Rather, the specific features and acts are disclosed as sample forms of implementing the claimed subject matter. |
A cache memory system is provided that uses multi-bit Error Correcting Code (ECC) with a low storage and complexity overhead. The cache memory system can be operated at very low idle power, without dramatically increasing transition latency to and from an idle power state due to loss of state. |
1. An apparatus comprising: a cache memory; and an error correction logic to receive data stored in a cache line in the cache memory, the error correction logic comprising: a first error correction logic to generate a syndrome for the received cache line read from the cache memory to determine a number of errors in the cache line; and a second error correction logic to receive the cache line from the first error correction logic only if the cache line has greater than one error, the second error correction logic to perform multi-bit error correction for the received cache line.

2. The apparatus of claim 1, further comprising: repair logic, the repair logic to fix known errors in the cache line prior to forwarding the cache line to the error correction logic.

3. The apparatus of claim 1, wherein the cache memory is volatile memory.

4. The apparatus of claim 1, wherein the first error correction logic includes a decoder to perform correction in a shorter period of time than the multi-bit error correction.

5. The apparatus of claim 1, wherein the first error correction logic includes syndrome generation logic and error correction logic for cache lines with zero or one failures.

6. The apparatus of claim 1, further comprising: a Recently Accessed Line Table (RALT) coupled to the address bus of the cache memory, the RALT to determine if the addressed cache line has been accessed within a current refresh time period of the volatile cache memory.

7. The apparatus of claim 6, wherein the RALT is used to track recently accessed lines.

8. The apparatus of claim 1, wherein the first error correction logic is to perform error correction for the single error in the received cache line.

9. A method comprising: storing data in a cache memory; receiving, by an error correction logic, data stored in a cache line in the cache memory, the error correction logic comprising a first error correction logic and a second error correction logic; generating, by the first error correction logic, a syndrome for the received cache line read from the cache memory to determine a number of errors in the cache line; forwarding the cache line, by the first error correction logic, to a second error correction logic only if the cache line has greater than one error; and performing, by the second error correction logic, multi-bit error correction for the received cache line.

10. The method of claim 9, further comprising: repairing, by repair logic, known errors in the cache line prior to forwarding the cache line to the error correction logic.

11. The method of claim 9, further comprising: performing, by the first error correction logic, correction in a shorter period of time than the multi-bit error correction.

12. The method of claim 11, further comprising: determining, by a Recently Accessed Line Table (RALT) coupled to the address bus of the cache memory, if the addressed cache line has been accessed within a current refresh time period of the cache memory.

13. The method of claim 12, further comprising: tracking, by the RALT, recently accessed cache lines.

14. The method of claim 9, further comprising: performing, by the first error correction logic, error correction for the single error in the received cache line.

15. An article including a machine-accessible medium having associated information, wherein the information, when accessed, results in a machine performing: storing data in a cache memory; receiving, by an error correction logic, data stored in a cache line in the cache memory, the error correction logic comprising a first error correction logic and a second error correction logic; generating, by the first error correction logic, a syndrome for the received cache line read from the cache memory to determine a number of errors in the cache line; forwarding the cache line, by the first error correction logic, to a second error correction logic only if the cache line has greater than one error; and performing, by the second error correction logic, multi-bit error correction for the received cache line.

16. The article of claim 15, further comprising: repairing, by repair logic, known errors in the cache line prior to forwarding the cache line to the error correction logic.

17. The article of claim 15, further comprising: performing, by the first error correction logic, correction in a shorter period of time than the multi-bit error correction.

18. The article of claim 15, further comprising: determining, by a Recently Accessed Line Table (RALT) coupled to the address bus of the cache memory, if the addressed cache line has been accessed within a current refresh time period of the cache memory.

19. The article of claim 15, further comprising: performing, by the first error correction logic, error correction for the single error in the received cache line.

20. A system comprising: an external memory; and a processor, the processor comprising: a cache memory to store data read from the external memory; and an error correction logic to receive data stored in a cache line in the cache memory, the error correction logic comprising: a first error correction logic to generate a syndrome for the received cache line read from the cache memory to determine a number of errors in the cache line; and a second error correction logic to receive the cache line from the first error correction logic only if the cache line has greater than one error, the second error correction logic to perform multi-bit error correction for the received cache line.

21. The system of claim 20, further comprising: repair logic, the repair logic to fix known errors in the cache line prior to forwarding the cache line to the error correction logic.

22. The system of claim 20, wherein the first error correction logic includes a decoder to perform correction in a shorter period of time than the multi-bit error correction, and syndrome generation logic and error correction logic for cache lines with zero or one failures.
METHOD AND APPARATUS FOR USING CACHE MEMORY IN A SYSTEM THAT SUPPORTS A LOW POWER STATE

FIELD

The present invention relates generally to memory, and more particularly to reducing the power consumption of cache memory while a system is in a low power state.

BACKGROUND

Technology advancements have enabled the integration of large on-die embedded Dynamic Random Access Memory (eDRAM) caches with a Central Processing Unit (CPU). Embedded DRAM is significantly denser than traditional Static Random Access Memories (SRAMs), but must be periodically refreshed to retain data. Like SRAM, embedded DRAM is susceptible to device variations, which play a role in determining a refresh period for embedded DRAM cells. Power consumed to refresh eDRAM represents a large portion of overall system power, particularly during low-power states when the CPU is idle.

BRIEF DESCRIPTION OF THE DRAWINGS

Features of embodiments of the claimed subject matter will become apparent as the following detailed description proceeds, and upon reference to the drawings, in which like numerals depict like parts, and in which:

Fig. 1 is an embodiment of a processor that includes a cache memory and error code correction logic (ECC) according to the principles of the present invention;

Fig. 2 is a block diagram of a system including an embodiment of a Recently Accessed Lines Table (RALT) and the cache memory and ECC logic shown in Fig. 1, illustrating a fast access to a cache line in the cache memory;

Fig. 3 is a block diagram of the system shown in Fig. 2 illustrating a subsequent read of a cache line within the refresh period;

Fig. 4A is a block diagram illustrating an embodiment of an ECC encoder included in the quick ECC logic shown in Fig. 1;

Fig. 4B is a block diagram illustrating an embodiment of an ECC decoder (decoding logic) included in the quick ECC logic shown in Fig. 1;

Fig. 5 is a flow graph illustrating an embodiment of a method for using the system shown in Fig. 1 according to the principles of the present invention; and

Fig. 6 is a block diagram of a system that includes an embodiment of the processor shown in Fig. 1.

Although the following Detailed Description will proceed with reference being made to illustrative embodiments of the claimed subject matter, many alternatives, modifications, and variations thereof will be apparent to those skilled in the art. Accordingly, it is intended that the claimed subject matter be viewed broadly, and be defined only as set forth in the accompanying claims.

DETAILED DESCRIPTION

Error-correcting codes (ECC) have traditionally been used to allow memory and storage devices to tolerate both soft and hard errors. On-chip caches and memory devices (chips, integrated circuits) typically use simple and fast ECC such as Single Error Correction and Double Error Detection (SECDED) Hamming codes. Slower devices such as flash memories use multi-bit ECCs with strong error correcting capabilities, for example, Reed-Solomon codes. The higher decoding latencies of the strong ECC mechanisms do not pose a problem for mass storage devices, for example, disk drives, because the encoding/decoding latency is insignificant compared to the intrinsic device access time. However, as a result of technology scaling, on-chip memory arrays (caches) are more susceptible to multi-bit errors. Thus, strong ECC codes are also desirable for on-chip caches.
In addition to the latency overhead, the storage overhead of the additional ECC bits is an obstacle to using multi-bit ECC for on-chip cache memories.

In pursuit of improved power and energy efficiency, microprocessors implement a number of idle states to support lower power modes (states). Reducing the power consumed during idle states is particularly important because the typical Central Processing Unit (CPU) spends a lot of time in the idle state. Embedded DRAM technology enables smaller memory cells as compared to SRAM cells, resulting in a large increase in memory density. Thus, DRAM may be used to replace SRAM as the last-level on-chip cache in high performance processors.

However, a problem with embedded DRAM (eDRAM) cells is that the cells lose charge over time due to leakage currents. The retention time of an eDRAM cell is defined as the length of time for which the cell can retain its state (charge). Cell retention time is dependent on the leakage current, which, in turn, is dependent on the device leakage. To preserve the state of stored data, eDRAM cells need to be refreshed on a periodic basis. In order to prevent loss of state in the cache, the refresh period needs to be less than the cell retention time. Since eDRAM is DRAM integrated on a conventional logic process, it uses fast logic transistors with a higher leakage current than the transistors used in conventional DRAM. Therefore, the refresh time for eDRAM is about a thousand times shorter than that of conventional DRAM. The shorter refresh period increases power consumed during the idle state and also leads to reduced availability.

In SRAM caches, intrinsic variations force operation at high voltages due to a few weak cells that fail (lose charge (state)) at lower voltages, and impede efforts to reduce power consumption during idle states. Likewise, in embedded DRAM caches, device variations affect the retention time between refreshes of individual DRAM cells, with a few particularly weak cells (bits) determining the refresh period of the entire cache. Variations in threshold voltage cause retention times of different DRAM cells to vary significantly. These variations are caused predominantly by random dopant fluctuations and manifest themselves as a random distribution of retention times amongst eDRAM cells. However, increasing the refresh rate significantly increases cache power.

A method to reduce cache power is to use power gates. Power gates are switches on the power supply that allow power to be completely shut off to a block of transistors. Since memory technologies such as eDRAM and SRAM are unable to retain state when deprived of power, power gating is performed at the cost of losing memory state. As cache density increases, the performance and power costs of power gating also increase. As the size of the embedded DRAM cache increases, there is a tradeoff between idle exit latency (the time to restore the state of the cache by retrieving cache lines from main memory) and power consumption during the idle state.

The DRAM refresh period may be increased through the use of error-correcting codes (ECC) to dynamically identify and repair cells that lose their state. The refresh rate is then set irrespective of the weakest eDRAM cells, using ECC to compensate for lost state. A stronger error-correcting code, with the ability to correct multi-bit errors, implies an increased refresh period (a reduced refresh rate) and reduced power consumption.
However, multi-bit ECC codes have a high storage and complexity overhead which limits their applicability.

An embodiment of the present invention provides a flexible memory structure that uses multi-bit ECC codes with a low storage and complexity overhead and can operate at very low idle power, without dramatically increasing transition latency to and from the idle power state due to loss of state of cells (bits).

Fig. 1 is an embodiment of a processor 100 that includes a cache memory and error code correction (ECC) logic 122 according to the principles of the present invention. The ECC logic 122 is low-latency, low-cost, multi-bit error-correcting logic that compensates for high failure rates in volatile memory such as the memory cache 110 shown in Fig. 1. In the embodiment shown, the memory cache 110 is embedded DRAM (eDRAM). In other embodiments, the memory cache 110 may be Static Random Access Memory (SRAM) or any other type of volatile memory.

Correcting more errors requires higher redundancy, which leads to a high check bit overhead. For example, to correct t-bit errors in k-bit input data, a BCH code typically requires r = t * ceil(log2(k)) + 1 check bits. Due to the logarithmic relationship between r and k, the number of check bits increases much more slowly than the size of the input data. Thus, the ECC check bit overhead is reduced by increasing k.

For example, a Single Error Correcting, Double Error Detecting (SECDED) code for a 64 Byte (512-bit) cache line requires 11 bits, which is an overhead of about 2%. The number of bits in an ECC code relative to the number of bits in the data word diminishes as the number of bits in the data word increases. For example, a SECDED code for a 64 Byte cache line has an 11-bit overhead (2%), and a SECDED code for a 1024 Byte (1KB) cache line has a 15-bit overhead (0.18%).

However, when a large cache line is used, writes to sub-blocks within the cache line may require the entire cache line to be read every time in order to regenerate the ECC bits. As a linear code, BCH inherits the additive property of linear systems, which ensures that ECC check bits can be updated using only the information of the modified sub-block (chunk of data). The data word d (representing a cache line) is divided into multiple chunks (sub-blocks) [d_(i-1), d_(i-2), ..., d_0]. The G matrix used in ECC encoding can be divided into two parts as G = [I, P], where P is the generator for the ECC check word C, i.e., C = d × P. If the j-th chunk of data d_j is written with a new value d_j_new, then the new ECC is:

C_new = d_new × P = (d + [0, ..., (d_j_old + d_j_new), ..., 0]) × P = C + [0, ..., (d_j_old + d_j_new), ..., 0] × P    (1)

Equation (1) shows that the generation of the new check bits requires only the old value of the check bits and the old and new values of the sub-block being modified (addition here is over GF(2), i.e., bitwise XOR).
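A sketch of the incremental check-bit update implied by equation (1): since the code is linear over GF(2), the new check word is the old check word XORed with the contribution of the changed bits of the modified chunk. The row-table representation of P and all names are assumptions for illustration.

```c
#include <stdint.h>

#define CHECK_WORDS 2    /* 71 check bits fit in two 64-bit words */
#define CHUNK_BITS  512  /* one 64B sub-block                     */

/* p_rows[b] holds the contribution of chunk bit b to the check word,
 * i.e., one row of the generator sub-matrix P positioned for chunk j. */
void ecc_update(uint64_t check[CHECK_WORDS],
                const uint64_t p_rows[CHUNK_BITS][CHECK_WORDS],
                const uint64_t old_chunk[CHUNK_BITS / 64],
                const uint64_t new_chunk[CHUNK_BITS / 64])
{
    for (int b = 0; b < CHUNK_BITS; b++) {
        /* delta bit = 1 iff bit b of the chunk changed */
        uint64_t delta = (old_chunk[b / 64] ^ new_chunk[b / 64]) >> (b % 64);
        if (delta & 1)
            for (int w = 0; w < CHECK_WORDS; w++)
                check[w] ^= p_rows[b][w];   /* GF(2) addition is XOR */
    }
}
```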
Returning to Fig. 1, the ECC logic 122 is low-latency, low-cost, multi-bit error-correction logic that can compensate for high failure rates in the eDRAM cache 110. The ECC logic 122 implements a strong BCH code with the ability to correct five errors (5EC) and to detect six errors (6ED) (hereafter referred to as a 5EC6ED code). A traditional approach using multi-bit ECC suffers from two prohibitive overheads that limit its applicability. First, building a low-latency decoder for multi-bit ECC codes is extremely costly. Second, the storage overhead of the ECC bits is high (around 10% for a 5EC6ED ECC code for a cache line having 64 bytes).

The ECC logic 122 implements a multi-bit error-correcting code with very small area, latency, and power overhead. The ECC logic 122 minimizes embedded DRAM power consumption in low-power operating modes (idle states) without penalizing performance in the normal operating mode. The ECC logic 122 includes a quick ECC logic 104 that is optimized for the cache lines that require little or no correction. The ECC logic 122 also includes a high latency ECC logic 106 for cache lines that require complex multi-bit correction. In an embodiment, to minimize the performance impact of processing high latency multi-bit correction, the ECC logic 122 disables lines with multi-bit failures. In another embodiment, the ECC logic 122 leverages the natural spatial locality of the data to reduce the cost of storing the ECC bits.

In one embodiment, the embedded DRAM 110 is a 128 Mega Byte (MB) last level (Level 3 (L3)) cache included in the processor 100. In a baseline configuration with no error correction capability, the time between refreshes for the embedded DRAM cache 110 is 30 microseconds (us). This results in a significant amount of power consumed even when the Central Processing Unit (CPU) 102 is idle. Power consumed during refresh (refresh power) may be reduced by flushing and power gating the embedded DRAM cache 110 during low-power operating modes, for example, idle states. This, however, causes a significant performance penalty when the CPU 102 wakes up from the idle mode (state), because the CPU 102 needs to reload data from external memory (main memory (not shown)) into the embedded DRAM cache 110, thereby incurring a large number of cold start cache misses. Alternatively, refresh power consumption may be reduced by decreasing the refresh frequency, that is, increasing the refresh period (time between refreshes) of the data stored in cache lines in the embedded DRAM cache 110. However, a higher number of failures (loss of state of individual bits (cells)) occurs per cache line if the refresh frequency is decreased.

The ECC logic 122 implements a code on each 1KB cache line (5EC6ED), requiring an additional 71 bits (0.87% overhead) for each cache line to store the 5EC6ED code. In an embodiment in which the refresh period is chosen such that no more than 1E-03 (that is, 1/1000) of the cache lines will fail, the baseline configuration with no failure mitigation operates at the baseline refresh time of 30 microseconds. The error correction code logic 122 allows an increase in the refresh period to 440 microseconds, almost a 15-fold increase in the refresh period compared to the baseline configuration.

Logic to support a 5EC6ED code is very complex and imposes a long decoding latency penalty, proportional to both the number of error bits corrected and the number of data bits. If full encoding/decoding is required for every access to the cache memory, this can significantly increase cache access latency.
In an embodiment of the present invention, error-prone portions of the cache can be disabled, avoiding the high latency of decode during operation.

The error code correction logic 122 includes a quick error correction code (ECC) logic (first error correction logic) 104 and a high-latency error correction code (ECC) logic (second error correction logic) 106.

The Quick-ECC logic (unit) 104 includes syndrome generation logic and error correction logic for cache lines in the eDRAM 110 with zero or one failures. The Quick-ECC logic 104 also classifies cache lines into two groups based on the syndrome: cache lines that require complex multi-bit error correction, and cache lines that have fewer than two, that is, zero or one, errors. Cache lines that require multi-bit error correction are forwarded to the high latency ECC processing logic (unit) 106 that performs multi-bit error correction. Cache lines that are corrected by the Quick-ECC logic 104 are forwarded to the CPU 102 via the L1/L2 cache 124.

In one embodiment, the high latency ECC processing logic 106 performs error correction using software. In another embodiment, the high latency multi-bit ECC processing logic 106 performs multi-bit error correction using a state machine. The combination of the quick ECC logic 104 and the high-latency ECC processing logic 106 allows cache lines in the eDRAM 110 that require one or fewer error corrections to be immediately corrected and forwarded with low latency to the CPU 102 via the L1/L2 cache 124. Latency increases only for the forwarding of cache lines in the eDRAM 110 with two or more failures to the CPU 102.

The quick ECC logic 104 in the ECC logic 122 performs a one cycle ECC to correct a single bit error in a cache line in the embedded DRAM 110. The high latency correction logic 106 in the ECC logic 122 performs un-pipelined, high-latency ECC processing to correct multiple bit errors in a cache line.

When a cache line is read from the embedded DRAM 110, it is passed through data buffer 114 to the quick error correction logic 104 together with the tag and ECC associated with the cache line read from the tag/ECC array 108. The tag and ECC are passed through data buffer 116 to the Quick-ECC logic 104. A decoder (not shown) in the quick ECC logic 104 generates the syndrome for the received cache line. The generated syndrome includes information on whether the cache line has zero, one, or a higher number of errors. If the cache line has zero or one bit failures, the decoder in the quick ECC logic 104 performs the correction of the one bit failure in a short period of time. In one embodiment, the short period of time can be a single cycle (500 picoseconds (ps)). In other embodiments, the short period of time can be more than one cycle. In either case, the period of time is shorter than the time needed to perform multi-bit error correction in the high-latency ECC processing logic 106.

The high latency associated with handling multi-bit failures may significantly reduce performance. To avoid incurring this high latency, in an embodiment, disabling of problematic lines or a mechanism such as bit-fix may be integrated in repair logic 120.
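Structurally, the split between the quick and high-latency paths is a classify-then-dispatch on the syndrome, as in the following sketch (the classifier and correction helpers are assumed names, not actual interfaces of the logic blocks):

```c
#include <stdint.h>

typedef enum { NO_ERROR, ONE_ERROR, MULTI_ERROR } err_class_t;

extern err_class_t classify_syndrome(const uint64_t *syndrome);
extern void quick_single_bit_fix(uint64_t *line, const uint64_t *syndrome);
extern void high_latency_multibit_fix(uint64_t *line, const uint64_t *syndrome);

/* Decode one cache line read from the eDRAM: the zero/one-bit cases are
 * handled in the fast pipeline; only multi-bit cases pay the long,
 * un-pipelined BCH correction latency (or cause the line to be disabled). */
void ecc_decode_line(uint64_t *line, const uint64_t *syndrome)
{
    switch (classify_syndrome(syndrome)) {
    case NO_ERROR:
        break;                                     /* forward as-is   */
    case ONE_ERROR:
        quick_single_bit_fix(line, syndrome);      /* ~1-cycle path   */
        break;
    case MULTI_ERROR:
        high_latency_multibit_fix(line, syndrome); /* slow path       */
        break;
    }
}
```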
The frequency of errors plays a role in the disable strategy. If there is a low multi-bit error rate, an approach such as disabling cache lines containing multi-bit errors reduces the performance penalty. However, cache line disable results in unacceptable cache capacity loss if multi-bit error rates are high. If there is a high multi-bit error rate, a more complex mechanism such as bit-fix may be used to minimize the capacity lost to disabling cache lines.

In an embodiment, repair logic 120 is coupled between the data buffers 114, 116 and the ECC logic 122. With the additional repair logic 120, the performance penalty of multi-bit decoding is incurred only once, that is, the first time an error due to a weak cell in the eDRAM 110 is identified. The repair logic 120 allows the number of errors to be reduced prior to forwarding the cache line to the ECC logic 122. Thus, overall latency is reduced by first using a repair mechanism to fix known errors in a cache line prior to applying ECC to the cache line.

In one embodiment, the repair logic 120 includes bit-fix logic. Bit-fix logic identifies "broken" bit-pairs and maintains patches to repair the "broken" bit-pairs in the cache line. In an embodiment, the bit-fix logic uses a quarter of the ways in a cache set to store the positions of and fixing bits for failing bits (that is, the correct state (value) for the failing bits in other ways of the set). In low-voltage mode, in an embodiment for a cache memory 110 implemented as an 8-way cache, two of the eight ways are reserved to store the defect-correction information used to correct the "broken" bit pairs.

The bit-fix logic allows defective pairs, that is, groups of 2 bits in the cache line in which at least one bit is defective (due to a logic state retention failure), to be disabled. The bit-fix logic maintains a 2-bit "patch" (correct bit state) that can be used to correct the defective 2-bit pair. Repair patterns are stored in selected cache lines in the cache memory (eDRAM) 110. During low-voltage operation, the repair patterns (repair pointers and patches) are stored in the cache memory 110. A read or write operation on a cache line first fetches the repair patterns for the cache line. When reading, the repair patterns allow reads to avoid reading data from "broken" (defective) bits. Using patches from the repair patterns, the cache line is reconstructed before being forwarded to the CPU 102, another cache, or written back to memory. When writing, the repair patterns allow writes to avoid writing to failed bits. New patches are written to the repair patterns to reflect new data written to the cache. An embodiment of a repair mechanism (repair logic 120) that uses bit-fix logic has been described. In other embodiments, repair mechanisms other than bit-fix can be used to fix known errors prior to applying ECC.

In an embodiment, the cache memory 110 is a 32K 8-way cache having 64B cache lines. Each access to data stored in the cache memory 110 requires an additional access to retrieve the appropriate repair patterns. To access the repair patterns without increasing the number of ports, the bit-fix scheme organizes the cache memory 110 into two banks. Two fix-lines are maintained, one in each bank, and each is used for repairing cache lines in the opposite bank. The repair patterns for three cache lines fit in a single cache line. Thus, a single fix-line (a cache line storing repair patterns) is maintained for every three cache lines. A fix-line is assigned to the bank opposite the three cache lines that use its repair patterns. This allows a cache line to be fetched in parallel with its repair patterns without increasing the number of cache ports.

On a cache hit, the data line is read from one bank in the cache memory 110 and a fix-line is read from the other bank in the cache memory 110. The data line passes through 'n' bit shift stages, where 'n' represents the number of defective bit pairs. Each stage removes a defective pair, replacing it with the fixed pair. As the fix-line may also contain broken bits, SECDED ECC is applied to correct the repair patterns in the fix-line before they are used. After the repair patterns have been fixed, they are used to correct the data line. Repairing a single defective pair consists of three parts. First, SECDED ECC repairs any defective bits in the repair pattern. Second, a defect pointer identifies the defective pair. Third, after the defective pair has been removed, a patch reintroduces the missing correct bits into the cache line.
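A software view of the repair step, assuming a repair pattern of (defect pointer, 2-bit patch) per defective pair; the hardware's shift stages are modeled here as in-place patching, and SECDED correction of the repair patterns themselves is assumed to have happened already:

```c
#include <stddef.h>
#include <stdint.h>

typedef struct {
    uint16_t defect_ptr;  /* index of the defective 2-bit pair */
    uint8_t  patch;       /* correct 2-bit value for that pair */
} repair_t;

/* Reconstruct a line read in low-voltage mode: overwrite each broken
 * 2-bit pair with its patch before the line is forwarded onward. */
void bit_fix_apply(uint8_t *line, const repair_t *fixes, int n)
{
    for (int i = 0; i < n; i++) {
        size_t   bit   = (size_t)fixes[i].defect_ptr * 2;  /* pair start */
        size_t   byte  = bit / 8;
        unsigned shift = (unsigned)(bit % 8);  /* 0, 2, 4, or 6 */

        line[byte] = (uint8_t)((line[byte] & ~(0x3u << shift)) |
                               ((unsigned)(fixes[i].patch & 0x3u) << shift));
    }
}
```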
Fig. 2 is a block diagram of a system 200 including an embodiment of a Recently Accessed Line Table (RALT) 112 and the embedded DRAM cache 110 and ECC logic 122 shown in Fig. 1, illustrating a fast access to a cache line in the eDRAM cache 110.

A cache line size greater than 64 bytes is used to reduce the memory storage required to store multi-bit ECC codes. In an embodiment, the eDRAM cache 110 is a Level 3 (L3) cache which is a 128MB embedded DRAM, and the size of a cache line 202 is 1024 Bytes (1 Kilobyte (KB)). A Level 2 (L2) cache/Level 1 (L1) cache 124 has a 64 Byte (B) cache line (referred to as a sub-block of the L3 cache line). Most writes to the L3 eDRAM cache 110 are in the form of smaller 64 Byte sub-blocks generated at the lower-level (L1 or L2) cache memories 124 or fetched from non-cache memory (main memory/external memory (not shown)).

To modify a 64B sub-block 204 in a 1KB cache line 202, a read-modify-write operation is performed by the CPU 102 in order to compute the ECC code. First, the 64B sub-block 204 that is being overwritten is read from the eDRAM cache 110 together with the ECC code 208 for the entire 1KB cache line 202. The old data, the old ECC code, and the new data are used to compute the new ECC 208 for the entire 1KB cache line 202. The new 64B sub-block 204 and the new ECC code 208 are then written back to the L3 eDRAM cache 110. However, the entire 1KB cache line 202 does not need to be read in order to compute the new ECC 208 (see equation (1) above).

Most reads from the L3 cache 110 are performed to provide cache lines for allocation in the lower-level (L1 and L2) caches 124. Processing any sub-block 204 of a cache line 202 requires the ECC 208 to be processed with the entire data word (a 1KB cache line 202) that it protects. As each 64B sub-block 204 in the 1KB cache line 202 needs to be checked, each reference to a 64B sub-block 204 is accompanied by a reference to the surrounding 64B sub-blocks 204. Thus, any read of the L3 embedded DRAM cache 110 accesses all 16 64B sub-blocks 204 in the 1KB cache line 202, in addition to the ECC 208 (per cache line) that all of the sub-blocks 204 share in the cache line 202. For example, in order to read only eight of the 16 sub-blocks 204 in one 1KB cache line 202, all 16 sub-blocks 204 are read eight times, for a total of 128 separate sub-block reads. This large number of additional sub-block reads results in a substantial increase in dynamic power consumption and a reduction in the useful cache bandwidth provided by the eDRAM cache 110.
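The sub-block write path then reduces to the read-modify-write described above, reusing the incremental ecc_update() sketch from earlier; names and widths remain illustrative assumptions.

```c
/* Write a 64B sub-block of a 1KB line: only the old sub-block and the
 * line's old 71-bit ECC are read; the other 15 sub-blocks are untouched.
 * Reuses ecc_update() and the constants from the earlier sketch. */
void l3_write_subblock(uint64_t line_ecc[CHECK_WORDS],
                       const uint64_t p_rows[CHUNK_BITS][CHECK_WORDS],
                       uint64_t old_sub[CHUNK_BITS / 64],
                       const uint64_t new_sub[CHUNK_BITS / 64])
{
    ecc_update(line_ecc, p_rows, old_sub, new_sub);  /* new check bits */
    for (int i = 0; i < CHUNK_BITS / 64; i++)
        old_sub[i] = new_sub[i];                     /* store new data */
    /* line_ecc is written back alongside the sub-block */
}
```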
The majority of eDRAM failures are due to retention failures because, as already discussed, the eDRAM cache 110 needs to be periodically refreshed to maintain the current state of each memory cell. In an embodiment, the retention time is 30 microseconds (us), and each read of a particular cache line automatically implies a refresh of that cache line. Thus, retention failures should not occur in a particular cache line for 30us after that cache line has been read. This observation allows the number of superfluous reads to be minimized. The RALT 112 is used to track cache lines that have been referenced (read) within the last 30us.

The first read of a cache line 202 in the eDRAM cache 110 results in all of the sub-blocks 204 in the cache line 202 being read and checked for errors. The address of the cache line 202 that is read is stored in a RALT entry 206 in the RALT 112. The stored address indicates that the cache line 202 has recently been read and checked and thus should remain free from retention errors for the next 30us. While the address of the read cache line is stored in the RALT 112, any subsequent read of a sub-block from that cache line 202 can forgo ECC processing and thus avoid reading the ECC 208 associated with the cache line 202 and the other sub-blocks 204 in the cache line 202. The RALT 112 ensures that none of its entries 206 have been stored for more than 30us by dividing each 30us time period into a plurality of equal "cache line read" periods. Entries 206 that are allocated in the RALT 112 during each period are marked with a period identifier 214 identifying the sub-refresh period. A transition between sub-refresh periods results in all RALT entries previously allocated in the corresponding earlier "cache line read" period being invalidated (as indicated by the state of the "valid" field associated with the entry 206 in the RALT).

Each entry 206 in the RALT 112 includes the following fields: a line address field 209 to identify the cache line that the entry is associated with; a valid field 212; a period identifier field 214 to indicate in which period the line was allocated; and a parity field 211 that includes one parity bit for each sub-block in the cache line. In an embodiment, the period identifier field 214 has two bits to indicate in which of four periods (P0, P1, P2, P3) the cache line was allocated, and the parity field 211 has 16 bits, one per 64B sub-block in the cache line. The RALT 112 is direct mapped, but supports a CAM (Content Addressable Memory) invalidate on the period field 214 to allow bulk invalidates of entries 206 in the RALT 112 during period transitions.

The first time a sub-block 204 is read, the entire ECC 208 is read along with each sub-block in the 1KB cache line 202 to allow ECC processing for a single 64B sub-block 204. The entry 206 associated with the cache line 202 in the RALT 112 is updated with the line address of the referenced cache line 202, a period identifier, and a single parity bit for each sub-block 204 in the cache line 202. After the first hit to a cache line 202, future accesses to the same cache line 202 within the refresh period do not require ECC processing.
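The RALT entry and its two characteristic operations (direct-mapped lookup and bulk invalidate on the period field) can be sketched as follows; sizes other than the stated fields are assumptions.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef struct {
    uint64_t line_addr;  /* field 209: cache line address            */
    bool     valid;      /* field 212                                */
    uint8_t  period;     /* field 214: sub-period P0..P3 (2 bits)    */
    uint16_t parity;     /* field 211: one bit per 64B sub-block     */
} ralt_entry_t;

#define RALT_ENTRIES 128  /* illustrative size; direct mapped */
static ralt_entry_t ralt[RALT_ENTRIES];

/* Direct-mapped lookup: a hit means the line was read and checked
 * within the current retention window, so ECC processing is skipped. */
ralt_entry_t *ralt_lookup(uint64_t line_addr)
{
    ralt_entry_t *e = &ralt[line_addr % RALT_ENTRIES];
    return (e->valid && e->line_addr == line_addr) ? e : NULL;
}

/* Bulk invalidate on a sub-period transition (the CAM invalidate). */
void ralt_invalidate_period(uint8_t period)
{
    for (size_t i = 0; i < RALT_ENTRIES; i++)
        if (ralt[i].period == period)
            ralt[i].valid = false;
}
```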
To guarantee that no RALT entry is more than 30us old, a counter 216 is used to measure the passage of each 30us period. Each 30us period is divided into four equal sub-periods (P0, P1, P2, P3). Entries allocated in the RALT 112 during each period are marked with a 2-bit identifier to specify the allocation sub-period, which can be determined by checking the current value of the counter. For example, the passage of 30us in a 2GHz processor 100 can be measured using a counter 216 that increments every cycle, counting to 60000. Counter values between 0-15000, for example, correspond to sub-period P0, 15001-30000 correspond to sub-period P1, 30001-45000 correspond to sub-period P2, and 45001-60000 correspond to sub-period P3. When the counter 216 reaches 60000 it resets to 0, resulting in a transition from P3 to P0. Each sub-period transition can cause the invalidation of some or all of the RALT entries allocated during the previous instance of that sub-period. For example, a transition from sub-period P0 to sub-period P1 will result in all RALT entries previously allocated in sub-period P1 being invalidated.

Fig. 3 is a block diagram of the system shown in Fig. 2 illustrating a subsequent read of a cache line within the refresh period. In most cases, only the requested 64B sub-block 204 is read. Parity for the 64B sub-block 204 is computed and compared to the parity 211 for that 64B sub-block 204 of the cache line 202 stored in the RALT 112. If there is a match, the inference is that the 64B sub-block 204 is valid, and the 64B sub-block 204 is forwarded to the requesting cache 124 or processor 102. A parity mismatch is treated as a RALT miss, and the entire 1KB cache line 202 is read. The RALT 112 is used to track recently accessed cache lines 202 to avoid reading the entire 1KB cache line 202 on every cache read, thus minimizing dynamic power.

Fig. 4A is a block diagram illustrating an embodiment of an ECC encoder 400 included in the quick ECC logic 104 shown in Fig. 1. BCH codes are a large class of multi-bit error-correcting codes which can correct both highly concentrated and widely scattered errors. In general, each BCH code is a linear block code defined over a finite Galois field GF(2^m) with a generator polynomial, where 2^m represents the maximum number of code word bits.

The ECC encoder (encoding logic) 400 takes the k-bit input data word d and uses a pre-defined encoder matrix G to generate the corresponding code word u (u = d x G). As BCH is a systematic code, the original k-bit data is retained in the code word u(x), and is followed by r check bits.

Fig. 4B is a block diagram illustrating an embodiment of an ECC decoder (decoding logic) 402 included in the quick ECC logic shown in Fig. 1. The decoding logic 402 detects and corrects any errors in the received code word u(x) to recover the original value of data. The decoding logic 402 includes syndrome generation logic 404, error classification logic 406, and error correction logic 408.

The syndrome generation logic 404 first computes a syndrome S by multiplying v (a code word with error e, such that v = u + e) with the transpose of a pre-defined H-matrix (S = v x H^T). The G and H matrices are constructed in such a way that G x H^T = 0. The general form of the H-matrix is as follows:

$$H = \begin{bmatrix} \text{Parity} \\ H_1 \\ H_3 \\ \vdots \\ H_{2t-1} \end{bmatrix} = \begin{bmatrix} 1 & 1 & 1 & \cdots & 1 \\ 1 & \alpha & \alpha^2 & \cdots & \alpha^{n-1} \\ 1 & \alpha^3 & \alpha^6 & \cdots & \alpha^{3(n-1)} \\ \vdots & & & & \vdots \\ 1 & \alpha^{2t-1} & \alpha^{2(2t-1)} & \cdots & \alpha^{(2t-1)(n-1)} \end{bmatrix} \qquad (1)$$

In the finite Galois field GF(2^m), each element can be represented as a polynomial of α with a degree less than m, or simply a vector with m binary coefficients of the polynomial.
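To make that representation concrete, the sketch below shows how GF(2^m) elements reduce to m-bit vectors, with XOR as addition and carry-less multiplication reduced modulo a primitive polynomial. The parameters (m = 4, polynomial x^4 + x + 1) are illustrative only; the text does not fix the field actually used.

```c
#include <stdint.h>

/* GF(2^4) with primitive polynomial x^4 + x + 1 (binary 10011).
 * Each element is a 4-bit vector of polynomial coefficients. */
#define GF_M    4u
#define GF_POLY 0x13u

/* Addition in GF(2^m) is bitwise XOR of the coefficient vectors. */
uint8_t gf_add(uint8_t a, uint8_t b) { return a ^ b; }

/* Carry-less "Russian peasant" multiplication, reducing modulo the
 * primitive polynomial whenever the degree reaches m. */
uint8_t gf_mul(uint8_t a, uint8_t b)
{
    uint8_t r = 0;
    while (b) {
        if (b & 1u) r ^= a;          /* add shifted copy of a        */
        b >>= 1;
        a <<= 1;                     /* multiply a by alpha          */
        if (a & (1u << GF_M))
            a ^= GF_POLY;            /* reduce degree-m term         */
    }
    return r;
}
```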
Since each field element occupies m bits, the H matrix can be expanded into a binary matrix with (t*m+1) rows, where t is the maximum number of errors that the code can correct. Since S = v x H^T, S also has t*m+1 bits, which can be divided into multiple components [Parity, S1, S3, ..., S2t-1].

The error classification logic uses the syndrome S to detect whether the code word has any errors. Since

$$S = v \times H^T = (u + e) \times H^T = (d \times G + e) \times H^T = d \times (G \times H^T) + e \times H^T = e \times H^T \qquad (3)$$

zero errors imply S = 0, and the following equation holds true:

$$\text{Parity} = S_1 = S_3 = \cdots = S_{2t-1} = 0 \qquad (4)$$

No errors in the code word lead to a syndrome value of zero, as shown in equation (3). This case can be detected by performing a logical OR of all the syndrome bits, which requires ceil(log2(t*m)) 2-input gate delays.

If equation (4) is not satisfied, the error correction logic uses the syndrome value to pinpoint the locations of the corrupted bits. Let the error locations in e be denoted as [j1, j2, ..., jt]; then each syndrome component Si can be specified as:

$$S_i = \alpha^{j_1 \cdot i} + \alpha^{j_2 \cdot i} + \cdots + \alpha^{j_t \cdot i} \qquad (5)$$

The correction logic implements the following three steps:

Step 1: Determine the coefficients of the error location polynomial σ(x), where σ(x) is defined such that its roots are given by the inverses of the error elements α^j1, α^j2, ..., α^jt, respectively:

$$\sigma(x) = 1 + \sigma_1 x + \cdots + \sigma_t x^t = (1 - \alpha^{j_1} x)(1 - \alpha^{j_2} x) \cdots (1 - \alpha^{j_t} x) \qquad (6)$$

Step 1 of error correction is based on a t-step iterative algorithm, where each iteration involves a Galois field inversion, which alone takes 2m operations.

Step 2: Solve for the roots of σ(x), which are the error locations. When the polynomial σ(x) is determined, each field element is substituted into the polynomial; those elements which make the polynomial equal to zero are the roots. The implementation of Step 2 can either take n cycles with one circuit, or a single cycle with n parallel circuits. Either way, the base circuit is O(t*m^2).

Step 3: Calculate the correct value of the data bits. This is done by simply flipping the bits at the error locations.

In the case of a single-bit error, the syndrome exactly matches the H-matrix column that corresponds to the error bit. Therefore, a single-bit error can be detected by comparing each column of the H-matrix with the syndrome. This correction is significantly faster than the general case of t-bit correction (with t > 1) because it does not require Step 1 and most of Step 2 of the error correction logic. Not all of the syndrome components need to be matched with entire H-matrix columns; all that is needed is to compare S1 to each column in H1 (defined in equation (1)) and verify that the following equation is satisfied:

$$[(\text{Parity} = 1) \;\&\; (S_1^3 = S_3) \;\&\; (S_1^5 = S_5) \;\&\; \cdots \;\&\; (S_1^{2t-1} = S_{2t-1})] = 1 \qquad (7)$$

To minimize latency, the comparison of S1 with H1 and all the comparisons in equation (7) can proceed in parallel.
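Building on the gf_mul sketch above, the single-error fast path of equation (7) amounts to a few field exponentiations and comparisons, all of which can run in parallel in hardware. The C version below is sequential and purely illustrative, shown for t = 2 (so only the S1^3 = S3 term appears).

```c
#include <stdbool.h>
#include <stdint.h>

/* gf_mul from the previous sketch (GF(2^4), x^4 + x + 1). */
uint8_t gf_mul(uint8_t a, uint8_t b);

/* Repeated-multiplication exponentiation in GF(2^m). */
static uint8_t gf_pow(uint8_t a, unsigned e)
{
    uint8_t r = 1;
    while (e--) r = gf_mul(r, a);
    return r;
}

/* Equation (7) for t = 2: a single-bit error is indicated when the
 * overall parity is odd and S3 is consistent with S1 (S1^3 == S3).
 * S1 then directly encodes the error location, so the iterative
 * Step 1 and most of Step 2 of full correction are skipped. */
bool is_single_bit_error(unsigned parity, uint8_t s1, uint8_t s3)
{
    return (parity == 1u) && (s1 != 0u) && (gf_pow(s1, 3) == s3);
}
```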
Fig. 5 is a flow graph illustrating an embodiment of a method for using the system 100 shown in Fig. 1 according to the principles of the present invention.

At block 500, if the cache line address (addr) is associated with a RALT entry 206 in the Recently Accessed Line Table 112, the cache line has been recently accessed and no error checking is required; the cache line data is forwarded to the CPU 102 via the L1/L2 cache 124. If not, processing continues with block 502.

At block 502, the cache line address (addr) is stored in a line address field 209 in a RALT entry 206 in the RALT 112, as discussed earlier in conjunction with Fig. 2. Data stored in the cache line in cache memory 110 and the Tag/ECC stored in the Tag/ECC array 118 corresponding to the address are read and forwarded through data buffers 114, 116. Processing continues with block 504.

At block 504, in an embodiment that includes repair logic 120, processing continues with block 512 to repair the cache line. In an embodiment that does not include repair logic, processing continues with block 506.

At block 506, quick ECC is performed by the quick ECC logic 104 to determine if there are errors in the cache line. Processing continues with block 508.

At block 508, if there are two or more errors in the cache line to be corrected, processing continues with block 514. If there is at most one error, the error is corrected by the quick ECC logic 104 and processing continues with block 510.

At block 510, the corrected cache line data is forwarded to the CPU 102 via the L1/L2 cache 124.

At block 512, the cache line data read from cache memory 110 and forwarded through data buffers 114, 116 is repaired as discussed earlier. Processing continues with block 506.

At block 514, the Hi-ECC logic corrects the multi-bit error in the cache line and processing continues with block 510.
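The decision flow of blocks 500-514 can be condensed into a short routine. The sketch below is a software paraphrase with stub helpers standing in for the hardware blocks (RALT, repair logic 120, quick ECC logic 104, Hi-ECC logic); the names and structure follow the description above, not any actual implementation.

```c
#include <stdint.h>
#include <stdbool.h>

typedef struct { uint8_t bytes[1024]; } cacheline_t;   /* one 1KB line */

/* Stubs standing in for the hardware blocks named in the text. */
static bool        ralt_lookup(uint64_t addr)      { (void)addr; return false; }
static void        ralt_allocate(uint64_t addr)    { (void)addr; }
static cacheline_t edram_read(uint64_t addr)       { (void)addr; return (cacheline_t){{0}}; }
static cacheline_t repair(cacheline_t l)           { return l; }
static int         quick_ecc_check(cacheline_t *l) { (void)l; return 0; }  /* error count */
static void        quick_ecc_fix(cacheline_t *l)   { (void)l; }
static void        hi_ecc_correct(cacheline_t *l)  { (void)l; }

#define HAVE_REPAIR_LOGIC 1   /* the repair stage is optional in the text */

cacheline_t read_line(uint64_t addr)
{
    if (ralt_lookup(addr))               /* block 500: RALT hit, skip ECC */
        return edram_read(addr);         /* forward directly (block 510)  */

    ralt_allocate(addr);                 /* block 502                     */
    cacheline_t line = edram_read(addr);
    if (HAVE_REPAIR_LOGIC)
        line = repair(line);             /* block 512                     */
    int errors = quick_ecc_check(&line); /* block 506                     */
    if (errors > 1)
        hi_ecc_correct(&line);           /* block 508 -> block 514        */
    else if (errors == 1)
        quick_ecc_fix(&line);            /* block 508: quick ECC corrects */
    return line;                         /* block 510: forward to CPU     */
}
```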
Fig. 6 is a block diagram of a system 600 that includes an embodiment of the processor 100 shown in Fig. 1. The system 600 includes a processor 100 with embedded cache memory, a Memory Controller Hub (MCH) 602, and an Input/Output (I/O) Controller Hub (ICH) 604. The MCH 602 includes a memory controller 606 that controls communication between the processor 100 and external memory (main memory) 610. The processor 100 and MCH 602 communicate over a system bus 616.

The CPU 102 may be any one of a plurality of processors such as a single core Intel® Pentium IV® processor, a single core Intel Celeron processor, an Intel® XScale processor, or a multi-core processor such as an Intel® Pentium D, Intel® Xeon® processor, or Intel® Core® Duo processor, or any other type of processor.

The memory 610 may be Dynamic Random Access Memory (DRAM), Static Random Access Memory (SRAM), Synchronized Dynamic Random Access Memory (SDRAM), Double Data Rate 2 (DDR2) RAM, Rambus Dynamic Random Access Memory (RDRAM), or any other type of memory.

The ICH 604 may be coupled to the MCH 602 using a high speed chip-to-chip interconnect 614 such as Direct Media Interface (DMI). DMI supports 2 Gigabit/second concurrent transfer rates via two unidirectional lanes.

The ICH 604 may include a storage Input/Output (I/O) controller for controlling communication with at least one storage device 612 coupled to the ICH 604. The storage device may be, for example, a disk drive, Digital Video Disk (DVD) drive, Compact Disk (CD) drive, Redundant Array of Independent Disks (RAID), tape drive, or other storage device. The ICH 604 may communicate with the storage device 612 over a storage protocol interconnect 618 using a serial storage protocol such as Serial Attached Small Computer System Interface (SAS) or Serial Advanced Technology Attachment (SATA).

It will be apparent to those of ordinary skill in the art that methods involved in embodiments of the present invention may be embodied in a computer program product that includes a computer usable medium. For example, such a computer usable medium may consist of a read only memory device, such as a Compact Disk Read Only Memory (CD-ROM) disk or conventional ROM devices, or a computer diskette, having a computer readable program code stored thereon.

While embodiments of the invention have been particularly shown and described with reference to embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of embodiments of the invention encompassed by the appended claims. |
Various embodiments of light emitting dies and solid state lighting ("SSL") devices with light emitting dies, assemblies, and methods of manufacturing are described herein. In one embodiment, a light emitting die includes an SSL structure configured to emit light in response to an applied electrical voltage, a first electrode carried by the SSL structure, and a second electrode spaced apart from the first electrode of the SSL structure. The first and second electrodes are configured to receive the applied electrical voltage. Both the first and second electrodes are accessible from the same side of the SSL structure via wirebonding. |
CLAIMS We claim: 1. A light emitting die, comprising: a first semiconductor material having a first surface; a second semiconductor material having a second surface facing away from the first surface of the first semiconductor material; an active region between the first and second semiconductor materials, wherein the first semiconductor material, the active region, and the second semiconductor material together have a stack thickness equal to a distance between the first and second surfaces; a first electrode in contact with the first surface of the first semiconductor material; a second electrode in contact with the second surface of the second semiconductor material, the second electrode being spaced apart from the first electrode by at least the stack thickness; and wherein a portion of the second electrode is exposed through the first semiconductor material, the active region, and the second semiconductor material. 2. The light emitting die of claim 1 wherein both the first electrode and the second electrode are accessible from the same side of the light emitting die. 3. The light emitting die of claim 1 wherein: the second electrode includes a first portion and a second portion extending from the first portion; the first portion of the second electrode is covered by the first semiconductor material, the active region, and the second semiconductor material; and the second portion of the second electrode is exposed through the first semiconductor material, the active region, and the second semiconductor material. 4. The light emitting die of claim 1 wherein: the second electrode includes a first portion and a second portion extending from the first portion; the first portion of the second electrode is covered by the first semiconductor material, the active region, and the second semiconductor material; and the second portion of the second electrode laterally extends beyond the second semiconductor material, the active region, and the first semiconductor material. 5. The light emitting die of claim 1 wherein: the second electrode includes a first portion and a second portion extending from the first portion; the first portion of the second electrode is covered by the first semiconductor material, the active region, and the second semiconductor material; the second portion of the second electrode includes a plurality of individual sections spaced apart from one another; and the individual sections of the second portion laterally extend beyond the first semiconductor material, the active region, and the second semiconductor material. 6. A light emitting device, comprising: a first semiconductor material; a second semiconductor material spaced apart from the first semiconductor material; an active region between the first and second semiconductor materials; an opening extending completely through the second semiconductor material and the active region and only a portion of the first semiconductor material; a passivation material having a first passivation portion lining a sidewall of the opening and a second passivation portion extending laterally relative to the first passivation portion external to the opening; a first electrode in the opening and adjacent the first passivation portion; and a second electrode external to the opening between the second semiconductor material and the second passivation portion of the passivation material, a portion of the second electrode being exposed through the first semiconductor material, the active region, and the second semiconductor material. 7. 
The light emitting device of claim 6 wherein: the light emitting device further includes a conductive substrate proximate the second semiconductor material; and the first electrode is in direct contact with the conductive substrate. 8. The light emitting device of claim 6 wherein: the light emitting device further includes a conductive substrate proximate the second semiconductor material; the first electrode is in direct contact with the conductive substrate; and the second passivation portion electrically isolates the second electrode from the conductive substrate. 9. The light emitting device of claim 6 wherein: the light emitting device further includes a conductive substrate proximate the second semiconductor material; the first electrode has a first end proximate the first semiconductor material and a second end proximate the second semiconductor material; the second passivation portion has a first surface and a second surface opposite the first surface, the first surface being in contact with the second electrode; and the second surface of the second passivation portion is generally co-planar with the second end of the first electrode. 10. The light emitting device of claim 6 wherein: the light emitting device further includes a conductive substrate proximate the second semiconductor material; the first electrode has a first end proximate the first semiconductor material and a second end proximate the second semiconductor material; the second passivation portion of the passivation material has a first surface and a second surface opposite the first surface, the first surface being in contact with the second electrode; the second surface of the second passivation portion is generally co-planar with the second end of the first electrode; and both the second surface of the second passivation portion and the second end of the first electrode are in contact with the conductive substrate. 11. The light emitting device of claim 6 wherein: the first electrode has a first conductive portion in the opening and a second conductive portion external to the opening; and the second conductive portion extends laterally beyond the second electrode. 12. The light emitting device of claim 6 wherein: the first electrode has a first conductive portion in the opening and a second conductive portion external to the opening; the first passivation portion of the passivation material electrically isolates the first conductive portion from the active region and the second semiconductor material; and the second conductive portion extends laterally beyond the second electrode. 13. The light emitting device of claim 6 wherein: the first electrode has a first conductive portion in the opening and a second conductive portion external to the opening; the first passivation portion of the passivation material electrically isolates the first conductive portion from the active region and the second semiconductor material; the second conductive portion is in contact with the second passivation portion; and the second conductive portion extends laterally beyond the second passivation portion and the second electrode. 14. 
The light emitting device of claim 6 wherein: the light emitting device further includes a substrate and an insulating material on the substrate; the first electrode has a first conductive portion in the opening and a second conductive portion external to the opening; the first passivation portion of the passivation material electrically isolates the first conductive portion from the active region and the second semiconductor material; the second conductive portion is between the insulating material and the second passivation portion; and the second conductive portion extends laterally beyond the second passivation portion and the second electrode. 15. A light emitting diode ("LED") die, comprising: a first semiconductor material; a second semiconductor material spaced apart from the first semiconductor material; an active region between the first and second semiconductor materials; a first electrode extending through the second semiconductor material, the active region and a portion of the first semiconductor material, wherein the first electrode is electrically coupled to the first semiconductor material and electrically isolated from the second semiconductor material and the active region; and a second electrode in contact with the second semiconductor material and surrounding at least a portion of the first electrode, wherein the second electrode is electrically isolated from the first electrode and a portion of the second electrode is exposed through the first semiconductor material, the active region, and the second semiconductor material. 16. The LED die of claim 15 wherein: the second electrode includes a first portion and a second portion extending from the first portion; the first portion of the second electrode is covered by the first semiconductor material, the active region, and the second semiconductor material; and the second portion of the second electrode is exposed through the first semiconductor material, the active region, and the second semiconductor material. 17. The LED die of claim 15 wherein: the second electrode includes a first portion and a second portion extending from the first portion; the first portion of the second electrode is covered by the first semiconductor material, the active region, and the second semiconductor material; the LED die has an opening extending through the first portion of the second electrode; the second portion of the second electrode is exposed through the first semiconductor material, the active region, and the second semiconductor material; the first electrode has a first conductive portion in the opening and a second conductive portion external to the opening; and the second conductive portion extends laterally beyond the second portion of the second electrode. 18. 
The LED die of claim 15 wherein: the second electrode includes a first portion and a second portion extending from the first portion; the first portion of the second electrode is covered by the first semiconductor material, the active region, and the second semiconductor material; the LED die has an opening extending through the first portion of the second electrode; the second portion of the second electrode is exposed through the first semiconductor material, the active region, and the second semiconductor material; the first electrode has a first conductive portion in the opening and a second conductive portion external to the opening; the second conductive portion extends laterally beyond the second portion of the second electrode; and the second conductive portion and the second portion of the second electrode are accessible from the same side of the LED die. 19. The LED die of claim 15 wherein: the second electrode includes a first portion and a second portion extending from the first portion; the first portion of the second electrode is covered by the first semiconductor material, the active region, and the second semiconductor material; the LED die has an opening extending through the first portion of the second electrode; the second portion of the second electrode is exposed through the first semiconductor material, the active region, and the second semiconductor material; the first electrode has a first conductive portion in the opening and a second conductive portion external to the opening; the second conductive portion extends laterally beyond the second portion of the second electrode; and the LED die further includes a passivation material between the second conductive portion of the conductive material and the second electrode. 20. A solid state lighting ("SSL") die, comprising: a first semiconductor material; a second semiconductor material spaced apart from the first semiconductor material; an active region between the first and second semiconductor materials; an opening extending from the second semiconductor material into the first semiconductor material through the active region; a first electrode having a first portion in the opening and a second portion external to the opening; a second electrode having a first part covered by the second semiconductor material and a second part extending laterally from the first part; and wherein the second portion of the first electrode and the second part of the second electrode are both at least partially exposed through the first semiconductor material, the active region, and the second semiconductor material. 21. The SSL die of claim 20 wherein the second portion of the first electrode extends laterally beyond the second part of the second electrode. 22. The SSL die of claim 20 wherein: the second part of the second electrode extends laterally beyond the first semiconductor material, the active region, and the second semiconductor material; and the second portion of the first electrode extends laterally beyond the second part of the second electrode. 23. The SSL die of claim 20 wherein: the SSL die further includes a passivation material having a first passivation portion in the opening and a second passivation portion external to the opening; the first passivation portion generally conforms to a sidewall of the opening; and the second passivation portion is between the second portion of the first electrode and the second part of the second electrode. 24. 
The SSL die of claim 20 wherein: the SSL die further includes a passivation material having a first passivation portion in the opening and a second passivation portion external to the opening; the second passivation portion is between the second portion of the first electrode and the second part of the second electrode; and the second portion of the first electrode extends laterally beyond both the second part of the second electrode and the second passivation portion. 25. The SSL die of claim 20, further comprising: a substrate proximate the second portion of the first electrode; and an insulation material between the substrate and the second portion of the first electrode. 26. A method of forming a light emitting device having a first semiconductor material, a second semiconductor material spaced apart from the first semiconductor material, and an active region between the first and second semiconductor materials, the method comprising: forming an opening extending completely through the second semiconductor material and the active region and only a portion of the first semiconductor material; depositing a passivation material onto the light emitting device, the passivation material having a first passivation portion lining a sidewall of the opening and a second passivation portion extending laterally relative to the first passivation portion external to the opening; forming a first electrode in the opening and adjacent the first passivation portion; forming a second electrode external to the opening on the second semiconductor material; and exposing a portion of the second electrode through the first semiconductor material, the active region, and the second semiconductor material. 27. The method of claim 26, further comprising: attaching a conductive substrate to the second passivation portion on the second semiconductor material; and contacting the conductive substrate with the first electrode. 28. The method of claim 26 wherein exposing a portion of the second electrode includes removing a portion of the first semiconductor material, the active region, and the second semiconductor material. |
SOLID STATE LIGHTING DEVICES WITH ACCESSIBLE ELECTRODES AND METHODS OF MANUFACTURING

TECHNICAL FIELD

[0001] The present disclosure is related to light emitting dies (e.g., light emitting diodes ("LEDs")) and solid state lighting ("SSL") devices with light emitting dies having accessible electrodes and methods of manufacturing.

BACKGROUND

[0002] SSL devices can have light emitting dies with different electrode configurations. For example, Figure 1A is a cross-sectional view of a light emitting die 10 with lateral electrodes. As shown in Figure 1A, the light emitting die 10 includes a substrate 12 carrying an LED structure 11 composed of N-type gallium nitride (GaN) 14, GaN/indium gallium nitride (InGaN) multiple quantum wells ("MQWs") 16, and P-type GaN 18. The light emitting die 10 also includes a first electrode 20 on the N-type GaN 14 and a second electrode 22 on the P-type GaN 18. As shown in Figure 1A, the first and second electrodes 20 and 22 are both on the front side of the LED structure 11 and readily accessible.

[0003] Figure 1B shows a light emitting die 10' with vertical electrodes. The light emitting die 10' includes a first electrode 20 on the N-type GaN 14 and a second electrode 22 under the P-type GaN 18. The light emitting die 10' can have higher degrees of current spreading between the first and second electrodes 20 and 22 than the light emitting die 10 of Figure 1A. However, the second electrode 22 is not readily accessible because it is buried between the P-type GaN 18 and the substrate 12. In addition, the first electrode 20 partially blocks the generated light (as indicated by the arrow 15a), and thus only allows a portion of the generated light to be extracted (as indicated by the arrow 15b). Thus, the light extraction efficiency of the light emitting die 10' may be limited.

[0004] One approach for improving the light extraction efficiency of light emitting dies with vertical electrodes is to incorporate a "buried" electrode. As shown in Figure 1C, a light emitting die 10" includes an opening 21 extending into the N-type GaN 14 from the substrate 12. An insulating material 25 lines the sidewalls 23 of the opening 21. A conductive material is disposed in the opening 21 to form the first electrode 20. The light emitting die 10" with the buried first electrode 20 can have improved light extraction efficiencies because the first electrode 20 does not cover any portion of the N-type GaN 14. However, neither of the first and second electrodes 20 and 22 is readily accessible in this design, and they require precise alignment with external conductors to avoid electrode mismatch. Accordingly, several improvements in the electrode configuration of light emitting dies may be desirable.

BRIEF DESCRIPTION OF THE DRAWINGS

[0005] Figure 1A is a schematic cross-sectional diagram of a light emitting die with lateral electrodes in accordance with the prior art.

[0006] Figure 1B is a schematic cross-sectional diagram of a light emitting die with vertical electrodes in accordance with the prior art.

[0007] Figure 1C is a schematic cross-sectional diagram of a light emitting die with a buried electrode in accordance with the prior art.

[0008] Figure 2A is a schematic cross-sectional diagram of a light emitting die with vertical electrodes in accordance with embodiments of the present technology.

[0009] Figure 2B is a schematic top plan view of the light emitting die shown in Figure 2A.
[0010] Figure 3A is a schematic cross-sectional diagram of a light emitting die with a buried electrode in accordance with embodiments of the present technology.

[0011] Figure 3B is a schematic top plan view of the light emitting die shown in Figure 3A.

[0012] Figure 4 is a schematic illustration of an SSL device incorporating the light emitting dies of Figures 2A-3B in accordance with embodiments of the present technology.

[0013] Figure 5A is a schematic cross-sectional diagram of a light emitting die with a buried electrode in accordance with embodiments of the present technology.

[0014] Figure 5B is a schematic top plan view of the light emitting die shown in Figure 5A.

[0015] Figure 5C is a schematic cross-sectional diagram of a light emitting die with a buried electrode in accordance with embodiments of the present technology.

[0016] Figure 6A is a schematic cross-sectional diagram of a light emitting die with a buried electrode in accordance with additional embodiments of the present technology.

[0017] Figure 6B is a schematic top plan view of the light emitting die shown in Figure 6A.

DETAILED DESCRIPTION

[0018] Various embodiments of light emitting dies, SSL devices with light emitting dies, and methods of manufacturing are described below. As used hereinafter, the term "SSL device" generally refers to devices with one or more solid state light emitting dies, such as LEDs, laser diodes ("LDs"), and/or other suitable sources of illumination other than electrical filaments, a plasma, or a gas. A person skilled in the relevant art will also understand that the technology may have additional embodiments, and that the technology may be practiced without several of the details of the embodiments described below with reference to Figures 2A-6B.

[0019] Figure 2A is a schematic cross-sectional diagram of a light emitting die 100, and Figure 2B is a top plan view of the light emitting die 100 shown in Figure 2A. As shown in Figure 2A, the light emitting die 100 can include an SSL structure 111, a first electrode 120, a second electrode 122, and a substrate 102 carrying the SSL structure 111 with an insulating material 103 therebetween. Only certain components of the light emitting die 100 are shown in Figures 2A and 2B, and it will be appreciated that the light emitting die 100 can also include a lens, a mirror, and/or other suitable optical and/or electrical components in other embodiments.

[0020] In one embodiment, the substrate 102 can include a metal, a metal alloy, a doped silicon, and/or other electrically conductive substrate materials. For example, in one embodiment, the substrate 102 can include copper, aluminum, and/or other suitable metals. In other embodiments, the substrate 102 can also include a ceramic material, a silicon, a polysilicon, and/or other generally non-conductive substrate materials. For example, the substrate 102 can include intrinsic silicon and/or polysilicon materials. Even though only one SSL structure 111 is shown on the substrate 102, two, three, or any other desired number of SSL structures 111 may be formed on the substrate 102 in practice.

[0021] In certain embodiments, the insulating material 103 can include silicon oxide (SiO2), silicon nitride (Si3N4), and/or other suitable non-conductive materials formed on the substrate 102 via thermal oxidation, chemical vapor deposition ("CVD"), atomic layer deposition ("ALD"), and/or other suitable techniques.
In other embodiments, the insulating material 103 can include a polymer (e.g., polytetrafluoroethylene and/or other fluoropolymers of tetrafluoroethylene), an epoxy, and/or other polymeric materials. In one example, the polymeric materials may be configured as a preformed sheet or tape that can be attached to the substrate 102 via solid-solid bonding, adhesives, and/or other suitable techniques. In another example, the polymeric materials may be configured as a paste or a liquid that may be applied to the substrate 102 and subsequently cured. In further embodiments, the insulating material 103 may be omitted if the substrate 102 is electrically insulative.

[0022] The SSL structure 111 is configured to emit light and/or other types of electromagnetic radiation in response to an applied electrical voltage. In the illustrated embodiment, the SSL structure 111 includes a first semiconductor material 104 having a first surface 113a proximate a first side 111a of the light emitting die 100, an active region 106, and a second semiconductor material 108 having a second surface 113b proximate a second side 111b of the light emitting die 100. The SSL structure 111 has a stack thickness equal to the sum of the thicknesses of the first semiconductor material 104, the active region 106, and the second semiconductor material 108. The stack thickness of the SSL structure 111 shown in Figure 2A, for example, is the distance between the first surface 113a and the second surface 113b. In other embodiments, the SSL structure 111 can also include silicon nitride, aluminum nitride (AlN), and/or other suitable intermediate materials.

[0023] In certain embodiments, the first semiconductor material 104 can include N-type GaN (e.g., doped with silicon (Si)), and the second semiconductor material 108 can include P-type GaN (e.g., doped with magnesium (Mg)). In other embodiments, the first semiconductor material 104 can include P-type GaN, and the second semiconductor material 108 can include N-type GaN. In further embodiments, the first and second semiconductor materials 104 and 108 can individually include at least one of gallium arsenide (GaAs), aluminum gallium arsenide (AlGaAs), gallium arsenide phosphide (GaAsP), gallium(III) phosphide (GaP), zinc selenide (ZnSe), boron nitride (BN), AlGaN, and/or other suitable semiconductor materials.

[0024] The active region 106 can include a single quantum well ("SQW"), MQWs, and/or a bulk semiconductor material. As used hereinafter, a "bulk semiconductor material" generally refers to a single grain semiconductor material (e.g., InGaN) with a thickness greater than about 10 nanometers and up to about 500 nanometers. In certain embodiments, the active region 106 can include an InGaN SQW, GaN/InGaN MQWs, and/or an InGaN bulk material. In other embodiments, the active region 106 can include aluminum gallium indium phosphide (AlGaInP), aluminum gallium indium nitride (AlGaInN), and/or other suitable materials or configurations.

[0025] In certain embodiments, at least one of the first semiconductor material 104, the active region 106, and the second semiconductor material 108 can be formed on the substrate material 102 via metal organic chemical vapor deposition ("MOCVD"), molecular beam epitaxy ("MBE"), liquid phase epitaxy ("LPE"), and/or hydride vapor phase epitaxy ("HVPE"). In other embodiments, at least one of the foregoing components and/or other suitable components (not shown) of the SSL structure 111 may be formed via other suitable epitaxial growth techniques.
[0026] As shown in Figures 2A and 2B, the first electrode 120 is spaced apart from the second electrode 122 by the vertical thickness of the entire SSL structure 111. The shortest distance between the first and second electrodes in this embodiment, therefore, is the distance from the first surface 113a to the second surface 113b. In the illustrated embodiment, the first electrode 120 includes a plurality of electrode fingers 121 (three are shown for illustration purposes) coupled to one another by a cross member 123. The electrode fingers 121 extend generally parallel to an axis 105 (Figure 2B) of the SSL structure 111, and the cross member 123 is generally perpendicular to the electrode fingers 121. In certain embodiments, the electrode fingers 121 and/or the cross member 123 can include indium tin oxide ("ITO"), aluminum zinc oxide ("AZO"), fluorine-doped tin oxide ("FTO"), and/or other suitable transparent conductive oxides ("TCOs"). In other embodiments, the electrode fingers 121 and/or the cross member 123 can include copper (Cu), aluminum (Al), silver (Ag), gold (Au), platinum (Pt), and/or other suitable metals. In further embodiments, the electrode fingers 121 and/or the cross member 123 can include a combination of TCOs and one or more metals. Techniques for forming the electrode fingers 121 and/or the cross member 123 can include MOCVD, MBE, spray pyrolysis, pulsed laser deposition, sputtering, electroplating, and/or other suitable deposition techniques. [0027] The second electrode 122 can include a reflective and conductive material (e.g., silver or aluminum), at least a portion of which can be exposed through the SSL structure 111. For example, as shown in Figures 2A and 2B, the second electrode 122 includes a covered first portion 122a and an exposed second portion 122b laterally extending beyond the SSL structure 111. As a result, the exposed second portion 122b can form a connection site 126 for interconnecting with external components (not shown) via a wirebond and/or other suitable couplers. [0028] During manufacturing, in certain embodiments, the substrate 102 may be selected to have a first lateral dimension Ls greater than a second lateral dimension LD of the SSL structure 111. The insulating material 103 and the second electrode 122 (e.g., aluminum, silver, or other reflective and conductive materials) can then be formed on the substrate 102 in sequence. In one embodiment, the SSL structure 1 11 may be attached to the second electrode 122 on the substrate 102 via solid-solid bonding (e.g., copper-copper bonding, nickel-tin bonding, and gold-tin bonding) between the second electrode 122 and the second semiconductor material 108. In another embodiment, a bonding material (e.g., gold-tin, not shown) may be formed on the second semiconductor material 108. In yet another embodiment, a reflective material (e.g., silver, not shown) may be formed on the second semiconductor material 108 before forming the bonding material. The SSL structure 111 can then be bonded to the substrate 102 via solid-solid bonding between the second electrode 122 and the bonding material. In further embodiments, the SSL structure 111 may be attached to the substrate 102 via other suitable mechanisms. [0029] In other embodiments, the substrate 102 may be selected to have a first lateral dimension Ls that is generally the same as the lateral dimension LD of the SSL structure 1 11. 
After attaching the SSL structure 111 to the substrate 102, a portion of the SSL structure 111 may be removed to form the exposed second portion 122b of the second electrode 122. Techniques for removing a portion of the SSL structure 111 can include partial dicing (e.g., with a die saw), laser ablation, wet etching, dry etching, and/or other suitable techniques. In further embodiments, the partially exposed second electrode 122 may be formed via other suitable techniques.

[0030] Several embodiments of the light emitting die 100 can have the connection accessibility of the light emitting die 10 of Figure 1A with current spreading characteristics generally similar to those of the light emitting die 10' of Figure 1B. As shown in Figures 2A and 2B, the exposed second portion 122b of the second electrode 122 provides ready access for external connection. As a result, both the first electrode 120 and the second electrode 122 can be accessed from the same side (i.e., the first side 111a) of the SSL structure 111. Meanwhile, the covered first portion 122a of the second electrode 122 is arranged vertically across the SSL structure 111 with respect to the first electrode 120, thereby providing better current distribution through the SSL structure 111 compared to the lateral device in Figure 1A. As a result, several embodiments of the light emitting die 100 can operate with high efficiency while providing the connection accessibility of the light emitting die 10 of Figure 1A.

[0031] Even though the exposed second portion 122b of the second electrode 122 is shown in Figure 2B as extending substantially the entire depth D (Figure 2B) of the SSL structure 111 along the axis 105, in other embodiments the second portion 122b may extend only partially along the axis 105 of the SSL structure 111. For example, as shown in Figures 3A and 3B, the second portion 122b may be exposed through a notch 128 in the SSL structure 111 formed on the substrate 102 with the insulating material 103. The notch 128 has a depth d (Figure 3B) that is less than the depth D (Figure 2B) of the SSL structure 111. In other embodiments, the second portion 122b may also include a plurality of individual sections spaced apart from one another. For example, three sections (identified individually as first, second, and third sections 122b, 122b', and 122b") are shown in Figure 3B for illustration purposes. Each of the three sections 122b, 122b', and 122b" may form a connection site 126 for connecting to an external component (not shown). As a result, the light emitting die 100 can provide a plurality of connection sites 126 to receive/transmit signals and/or power to/from more than one component. In further embodiments, the insulating material 103 may be omitted from the light emitting die 100.

[0032] Several embodiments of the light emitting die 100 can be packaged in an SSL device with improved thermal dissipation characteristics over conventional devices. For example, Figure 4 is a schematic illustration of an SSL device 150 incorporating the light emitting dies 100 of Figures 2A-3B in accordance with embodiments of the present technology. As shown in Figure 4, the SSL device 150 can include a carrier 152 carrying a plurality of light emitting dies 100. Four light emitting dies 100 are shown in Figure 4 for illustration purposes. In other embodiments, the SSL device 150 can include any other desired number of light emitting dies 100.
[0033] The carrier 152 can include a metal, a metal alloy, and/or other types of thermally conductive structure. The SSL device 150 can also include a first terminal 154 laterally spaced apart from a second terminal 156 on the carrier 152. The first and second terminals 154 and 156 are formed on insulative pads 155 and 157, respectively. The insulative pads 155 and 157 can include silicon oxide, silicon nitride, and/or other suitable types of electrically insulative materials.

[0034] As shown in Figure 4, the first terminal 154, the plurality of light emitting dies 100, and the second terminal 156 are electrically coupled with wirebonds 158 in series because the first and second electrodes 120 and 122 are both on the front side of the individual light emitting dies 100. As a result, the back side of the light emitting dies 100 can directly contact the surface 152a of the carrier 152. In operation, such direct contact allows the light emitting dies 100 to readily transfer heat to the thermally conductive carrier 152, and thus efficiently dissipate heat away from the light emitting dies 100.

[0035] Figure 5A is a schematic cross-sectional diagram of a light emitting die 200 with a buried electrode in accordance with another embodiment of the technology, and Figure 5B is a top plan view of the light emitting die 200 in Figure 5A. The light emitting die 200 can include components that are generally similar in structure and function to those of the light emitting die 100 in Figures 2A-3B. For example, the light emitting die 200 can include the substrate 102 carrying the SSL structure 111 and the exposed second electrode 122 that are generally similar to those discussed above with reference to Figures 2A-3B. As such, common acts and structures are identified by the same reference numbers, and only significant differences in operation and structure are described below.

[0036] As shown in Figure 5A, the SSL structure 111 includes a plurality of openings 130 (only one is shown in Figure 5A after it has been filled, for clarity) extending from the second electrode 122 into the first semiconductor material 104 of the SSL structure 111. A passivation material 125 (e.g., silicon oxide or silicon nitride) has a first portion 125a in the opening 130 and a second portion 125b external to the opening 130. The first portion 125a generally conforms to the sidewall 131 of the opening 130 and forms a dielectric liner. The second portion 125b has a first surface 127a in contact with the second electrode 122 and a second surface 127b in contact with the substrate 102.

[0037] The first electrode 120 can include a conductive material 132 adjacent the passivation material 125 in the opening 130. In the illustrated embodiment, the conductive material 132 has a first end 132a that is generally co-planar with the passivation material 125 such that the first end 132a of the conductive material 132 is in direct contact with the substrate 102. The conductive material 132 also includes a second end 132b in contact with the first semiconductor material 104. As a result, the conductive material 132 electrically couples the first semiconductor material 104 to the substrate 102.

[0038] Several embodiments of the light emitting die 200 can have more accessible electrical connections than conventional buried electrode devices. For example, as shown in Figure 5A, the first electrode 120 is electrically coupled to the substrate 102.
As a result, in certain embodiments, the substrate 102 may be electrically conductive and used as a connection site/path to electrically couple external components (not shown). Thus, precise alignment with external conductors may be avoided to reduce production complexity and costs.

[0039] In other embodiments, the substrate 102 may be electrically insulative and may include signal routing components (e.g., metal routing layers 134) that route the individual first electrodes 120 to respective electrical couplers 136 (e.g., solder bumps, solder balls, and/or pillar bumps), as shown in Figure 5C. In further embodiments, the substrate 102 may be partially electrically conductive and partially electrically insulative. In yet further embodiments, the light emitting die 200 may include other suitable configurations, as discussed in more detail below with reference to Figures 6A and 6B.

[0040] Figure 6A is a schematic cross-sectional diagram of a light emitting die 300 with a buried electrode, and Figure 6B is a schematic top plan view of the light emitting die 300 shown in Figure 6A. As shown in Figure 6A, the light emitting die 300 includes the substrate 102, the insulating material 103 on the substrate 102, and the SSL structure 111 with exposed first and second electrodes 120 and 122. The second electrode 122 can be generally similar to that discussed above with reference to Figure 5A. In other embodiments, the insulating material 103 may be omitted.

[0041] The first electrode 120 includes the conductive material 132. A first part 133a of the conductive material 132 is adjacent the passivation material 125 in the opening 130. A second part 133b of the conductive material 132 is external to the opening 130. In the illustrated embodiment, a portion of the second part 133b laterally extends beyond the second portion 125b of the passivation material 125 and the second portion 122b of the second electrode 122. As a result, the second part 133b of the conductive material 132 (generally designated as connection area 135) is at least partially exposed through the SSL structure 111. In other embodiments, the second portion 122b of the second electrode 122 may be laterally opposite and/or have other arrangements relative to the connection area 135. In further embodiments, the conductive material 132 may include a stack of a plurality of conductive materials (not shown). As shown in Figure 6B, both the first and second electrodes 120 and 122 are accessible from the same side of the SSL structure 111.

[0042] Even though the light emitting dies 200 and 300 shown in Figures 5B and 6B include first and/or second electrodes 120 and 122 extending the entire depth D of the substrate 102, in other embodiments the first and/or second electrodes 120 and 122 may extend only a partial depth D of the substrate 102, generally similar to the light emitting die 100 discussed above with reference to Figure 3B. In further embodiments, the first and/or second electrodes 120 and 122 may include a plurality of electrode elements (not shown).

[0043] From the foregoing, it will be appreciated that specific embodiments of the technology have been described herein for purposes of illustration, but that various modifications may be made without deviating from the disclosure. In addition, many of the elements of one embodiment may be combined with other embodiments in addition to or in lieu of the elements of the other embodiments. Accordingly, the disclosure is not limited except as by the appended claims. |
An enhanced heat dissipation system and a method to extract heat from an integrated circuit device include a thermally conductive core having upper and lower outer surface areas. The system further includes a first conductive ring having a first array of radially extending fins. The first conductive ring is thermally coupled to the upper surface area. The first array and the lower outer surface area are of sufficient size to allow components on a motherboard to encroach onto the integrated circuit device when the heat dissipation device is mounted onto the integrated circuit device. |
WHAT IS CLAIMED IS: 1. An enhanced heat dissipation device, comprising: a thermally conductive core, wherein the thermally conductive core has upper and lower outer surface areas; and a first conductive ring having a first array of radially extending fins, the array being thermally coupled to the upper outer surface area of the thermally conductive core. 2. The device of claim 1, wherein the thermally conductive core further has an axis, wherein the upper and lower outer surface areas are parallel to the axis, wherein the thermally conductive core further has a base, wherein the base is disposed such that it is perpendicular to the axis and in close proximity to the lower outer surface area. 3. The device of claim 2, wherein the upper and lower outer surface areas are concentric to the axis. 4. The device of claim 2, wherein the first conductive ring is thermally coupled to the upper outer surface area such that components can be mounted around and in close proximity to the lower outer surface area and below the first conductive ring when the device is mounted on an integrated circuit device. 5. The device of claim 4, wherein the components can encroach on the integrated circuit device without mechanically interfering with the device. 6. The device of claim 4, wherein the thermally conductive core is a solid body having a shape selected from the group consisting of cylindrical, conical, square, and rectangular. 7. The device of claim 4, wherein the thermally conductive core includes a heat transport medium such as one or more heat pipes, a liquid, a thermo-siphon, or other similar heat transport mediums. 8. The device of claim 7, wherein the thermally conductive core and the first array of radially extending fins are made from materials selected from the group consisting of aluminum, copper, and other such materials capable of extracting heat away from the integrated circuit device. 9. The device of claim 1, wherein the first array comprises a first plurality of folded fins. 10. The device of claim 9, wherein the first plurality of folded fins comprises: a plurality of alternating deep and shallow folds in a continuous ribbon such that the alternating deep and shallow folds wrap around the upper outer surface area. 11. The device of claim 10, wherein the shallow folds have a first depth and the deep folds have a second depth, wherein the first depth is less than the second depth. 12. The device of claim 10, wherein the thermally conductive core has a plurality of slots parallel to the axis and around the upper outer surface area, wherein the first plurality of folded fins are attached to the plurality of slots. 13. The device of claim 1, further comprising: a second conductive ring, thermally coupled to the lower outer surface area, wherein the first conductive ring has a first outer diameter and the second conductive ring has a second outer diameter, the second outer diameter being less than the first outer diameter. 14. The device of claim 13, wherein the second outer diameter has a size sufficient to allow components to be mounted around and in close proximity to the second conductive ring and below the first conductive ring when the device is mounted on an integrated circuit device. 15. The device of claim 14, wherein the second conductive ring has a second array of radially extending fins, wherein the second array is coupled to the lower outer surface area of the thermally conductive core. 16. The device of claim 15, wherein the second array comprises a second plurality of folded fins.
17. The device of claim 16, wherein the second plurality of folded fins comprises: a plurality of alternating deep and shallow folds in a continuous ribbon around the lower outer surface area.18. A heat dissipation system, comprising: an integrated circuit device, having a front side and a back side, wherein the front side is disposed across from the back side, wherein the front side is attached to a circuit board having components; an enhanced heat dissipation device comprising: a thermally conductive core, attached to the back side of the integrated circuit device, the thermally conductive core having upper and lower core surface areas, wherein the upper and lower core surface areas having a first and second length ; and a first conductive ring having a first plurality of folded fins, the first plurality of folded fins thermally coupled to the upper core surface area, the first plurality of folded fins surrounding the upper core surface area, the first length of the first conductive ring being sufficient to permit components to be mounted on the circuit board and below the first conductive ring.19. The system of claim 18, wherein the thermally conductive core further comprises a base, wherein the base is in close proximity to the lower core surface area and the base and the back side of the integrated circuit device have coinciding footprint sizes so that temperatures of the integrated circuit device, the base, the first plurality of folded fins, and the thermally conductive core are close to each other during operation to enhance heat transfer from the integrated circuit device.20. The heat system of claim 19, further comprising: a heat transport medium, wherein the thermally conductive core further has a top surface disposed across from the base and in close proximity to the upper core surface area, wherein the heat transport medium is attached to the top surface such that a direction of flow of a cooling medium introduced by the heat transfer medium over the first plurality of folded fins enhances the heat extraction from the integrated circuit device. 21. The system of claim 20, further comprising: a second conductive ring having a second plurality of folded fins, the second plurality of folded fins thermally coupled to the lower core surface area, the second conductive ring having a second diameter, the first conductive ring having a first diameter, wherein the second diameter is less than the first diameter, and sufficient to permit components to be mounted on the circuit board and below the first conductive ring.22. The system of claim 18, wherein the integrated circuit device is a microprocessor.23. A method of forming an enhanced heat dissipation device to extract heat from an integrated circuit device mounted on an assembled printed circuit board, comprising : forming a thermally conductive core having upper and lower core surface areas; forming a first array of radially extending fins; forming a first conductive ring from the formed first array, wherein the first conductive ring has a first diameter ; and attaching the first conductive ring to the upper core surface area such that the lower core surface area has sufficient space below the first conductive ring to allow components to encroach around the integrated circuit device when mounted on to the integrated circuit device.24. 
The method of claim 23, wherein forming the first array of radially extending fins comprises: forming a first conductive ribbon; forming a first alternating series of deep and shallow folds from the first conductive ribbon; and forming a first conductive ring from the formed first alternating series of deep and shallow folds.25. The method of claim 23, further comprising: forming a second array of radially extending fins; forming a second conductive ring from the formed second array, wherein the second conductive ring has a second diameter, wherein the second diameter is less than about half the first diameter; and attaching the second conductive ring to the lower core surface area such that the second diameter is of sufficient size to allow the components to encroach around the integrated circuit device and below the first conductive ring.26. The method of claim 25, wherein forming the second array of radially extending fins comprises: forming a second conductive ribbon; forming a second alternating series of deep and shallow folds from the second conductive ribbon; and forming a second conductive ring from the formed second alternating series of deep and shallow folds.27. The method of claim 26, further comprising: attaching an integrated circuit device to the thermally conductive core.28. The method of claim 27, wherein the integrated circuit device comprises a microprocessor.29. The method of claim 27, wherein the thermally conductive core, the first conductive ring and the second conductive ring are made of a thermally conductive material.30. The method of claim 29, wherein the thermally conductive core, the first conductive ring, and the second conductive ring are made of materials selected from the group consisting of aluminum, copper, and other such materials capable of extracting heat away from the integrated circuit device. |
A HIGH-PERFORMANCE FIN CONFIGURATION FOR AIR-COOLED HEAT DISSIPATION DEVICE

Technical Field
This invention relates generally to a heat dissipation system and method for an integrated circuit assembly, and more particularly to a system and method of dissipating heat from an integrated circuit device.

Background
Integrated circuit devices, microprocessors and other related computer components are becoming more and more powerful with increasing capabilities, resulting in increasing amounts of heat generated from these components. Packaged units and integrated circuit device sizes of these components are decreasing or remaining the same, but the amount of heat energy given off by these components per unit volume, mass, surface area or any other such metric is increasing. In current packaging techniques, heat sinks typically consist of a flat base plate, which is mounted onto the integrated circuit device on one side. The heat sinks further include an array of fins running perpendicular to the flat base plate on the other side. Generally, the integrated circuit devices (which are the heat sources) have a significantly smaller footprint size than the flat base plate of the heat sink. The flat base plate of the heat sink has a large footprint; that is, it requires more motherboard real estate than the integrated circuit device in contact therewith. The larger size of the base plate causes the outermost part of the base plate that is not directly in contact with the integrated circuit device to have a significantly lower temperature than the part of the base plate that is directly in contact with the integrated circuit device. Furthermore, as computer-related equipment becomes more powerful, more components are being placed inside the equipment and on the motherboard, which further requires more motherboard real estate. In addition, the base plate of prior art heat sink designs is at the same level as the integrated circuit device to which it is attached. Consequently, the flat base plate configuration of the heat sink generally ends up consuming more motherboard real estate than the integrated circuit device on which it is mounted. As a result, the larger footprint size of the base plate prevents other motherboard components, such as low-cost capacitors, from encroaching around or on the microprocessor. Thus, the large amounts of heat produced by many such integrated circuits and the increasing demand for motherboard real estate need to be taken into consideration when designing the integrated circuit mounting and packaging devices. For the reasons stated above, and for other reasons stated below which will become apparent to those skilled in the art upon reading and understanding the present specification, there is a need in the art for an enhanced heat dissipation device and method that conserve motherboard real estate and allow electronic components to encroach on and around the microprocessor.

Brief Description of the Drawings
Figure 1 is an isometric view of a prior art heat sink attached to a microprocessor on an assembled motherboard. Figure 2 is an isometric view of one embodiment of an enhanced heat dissipation device according to the present invention. Figure 3 is an isometric view showing the enhanced heat dissipation device of Figure 2 attached to a microprocessor on an assembled motherboard. Figure 4 is a flow diagram of one exemplary method of forming the heat dissipation device of Figure 2.
Detailed Description
In the following detailed description of the embodiments, reference is made to the accompanying drawings that illustrate the present invention and its practice. In the drawings, like numerals describe substantially similar components throughout the several views. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention. Other embodiments may be utilized and structural, logical, and electrical changes may be made without departing from the scope of the present invention. Moreover, it is to be understood that the various embodiments of the invention, although different, are not necessarily mutually exclusive. For example, a particular feature, structure, or characteristic described in one embodiment may be included in other embodiments. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined only by the appended claims, along with the full scope of equivalents to which such claims are entitled. This document describes, among other things, an enhanced heat dissipation device that allows electronic components to encroach on a microprocessor while maintaining high performance and cost effectiveness by leveraging currently enabled high-volume manufacturing techniques. Figure 1 shows an isometric view 100 of a prior art heat sink 110 mounted on a microprocessor 120 of an assembled motherboard 130. Also shown in Figure 1 are low-cost capacitors 140 mounted around the heat sink 110 and on the motherboard 130. The prior art heat sink 110 has a flat base plate 150 including an array of fins 160 extending perpendicularly away from the flat base plate 150. This configuration of the heat sink 110 dictates the use of the flat base plate 150, with the array of fins 160, for dissipating heat from the microprocessor 120. Increasing the heat dissipation using the prior art heat sink 110 shown in Figure 1 generally requires enlarging the surface area of the flat base plate 150 and/or the array of fins 160. This in turn results in consuming more motherboard real estate. Generally, the microprocessor 120 (which is the heat source) has a smaller footprint size than the flat base plate 150 configuration of the heat sink 110 shown in Figure 1. A larger footprint size of the flat base plate 150 can cause the outermost part of the flat base plate 150 (the portion that is not directly in contact with the integrated circuit device) to have a significantly lower temperature than the part of the flat base plate 150 that is directly in contact with the integrated circuit device. Consequently, the prior art heat sink 110 with the larger flat base plate 150 is not effective in dissipating heat from the integrated circuit device. Furthermore, the packaged units and integrated circuit device sizes are decreasing, while the amount of heat generated by these components is increasing. The prior art heat sink 110 configuration dictates that the array of fins 160 extend to the edge of the flat base plate 150 to extract heat from the integrated circuit device. Also, the prior art heat sink 110 requires increasing the size of the array of fins 160 to increase the heat dissipation. In order to enlarge the fins 160 laterally, the flat base plate 150 has to increase in size. Enlarging the flat base plate 150 consumes more motherboard real estate.
Consuming more motherboard real estate is generally not a viable option in an environment where system packaging densities are increasing with each successive, higher performance, integrated circuit device generation. Also, the prior art heat sink 110 is at the same level as the integrated circuit device on which it is mounted. It can be seen in Figure 1 that the flat base plate 150 configuration of the prior art heat sink 110 mounted on the microprocessor 120 generally prevents other motherboard components, such as low-cost capacitors 140, from encroaching around the microprocessor 120. Figure 2 is an isometric view of one embodiment of the enhanced heat dissipation device 200 according to the present invention. Shown in Figure 2 is the enhanced heat dissipation device 200 including a thermally conductive core 210 and a first conductive ring 220. Also shown in Figure 2 is the thermally conductive core 210 having upper and lower outer surface areas 230 and 240. The first conductive ring 220 includes a first array of radially extending fins 250. The first conductive ring 220, including the first array of radially extending fins 250, is thermally coupled to the upper outer surface area 230 of the thermally conductive core 210. Figure 2 further shows an optional second conductive ring 290 thermally coupled to the lower outer surface area 240 of the thermally conductive core 210. The thermally conductive core has an axis 260. In some embodiments, the upper and lower outer surface areas 230 and 240 are parallel to the axis 260. The thermally conductive core 210 further has a base 270. In some embodiments, the base 270 is disposed in such a way that it is in close proximity to the lower outer surface area 240 and perpendicular to the axis 260. The upper and lower outer surface areas 230 and 240 can be concentric to the axis 260. The first conductive ring 220 is thermally coupled to the upper outer surface area such that components can be mounted around and in close proximity to the lower outer surface area and below the first conductive ring when the device 200 is mounted onto an integrated circuit device. In some embodiments, the components can encroach onto the integrated circuit device without mechanically interfering with the device 200. The thermally conductive core 210 can be a solid body. The solid body can be cylindrical, conical, square, rectangular, or any other similar shape that facilitates mounting onto the integrated circuit device and attaching the first conductive ring 220 to the upper outer surface area 230. The thermally conductive core 210 can include heat transport mediums such as one or more heat pipes, a liquid, a thermo-siphon, or other such heat transport mediums that enhance heat dissipation from the integrated circuit device. The device 200, including the thermally conductive core 210 and the first conductive ring 220, can be made from materials such as aluminum, copper, or any other materials that are capable of dissipating heat away from the integrated circuit device. The first array of radially extending fins 250 can be made of a first plurality of folded fins. The first plurality of folded fins can be made of alternating deep and shallow folds 280 and 285 from a continuous ribbon such that the alternating deep and shallow folds 280 and 285 wrap around the upper outer surface area 230. The shallow folds have a first depth, the deep folds have a second depth, and the first depth is less than the second depth.
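The heat-dissipating capacity of such a folded-fin ring is set largely by the total exposed fin surface, which follows directly from the fold geometry described above. The following is a minimal, illustrative sketch of that relationship; every dimension is an assumed example value, not a figure taken from this disclosure:

import math

# All dimensions are assumed example values for illustration only.
core_diameter_mm = 30.0       # outer diameter of the thermally conductive core
ribbon_width_mm = 25.0        # axial width of the continuous ribbon (fin height)
deep_fold_depth_mm = 20.0     # radial depth of a deep fold
shallow_fold_depth_mm = 10.0  # radial depth of a shallow fold
fold_pitch_mm = 2.0           # circumferential spacing between adjacent folds

# The ribbon wraps once around the upper outer surface area, so the fold
# count is set by the core circumference and the fold pitch.
circumference_mm = math.pi * core_diameter_mm
fold_count = int(circumference_mm / fold_pitch_mm)

# Each fold exposes two faces; deep and shallow folds alternate, so the
# average radial depth is the mean of the two depths.
average_depth_mm = (deep_fold_depth_mm + shallow_fold_depth_mm) / 2.0
fin_area_mm2 = fold_count * 2 * average_depth_mm * ribbon_width_mm
print(f"~{fold_count} folds, ~{fin_area_mm2 / 100.0:.0f} cm^2 of exposed fin area")

Halving the fold pitch roughly doubles the exposed area without enlarging the core footprint, which is the same lever the radially extending fins use to add dissipation capacity without consuming motherboard real estate.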
The thermally conductive core 210 can have a plurality of slots 287 parallel to the axis 260 and around the upper outer surface area 230. The first plurality of folded fins can be attached to the plurality of slots 287. The first conductive ring 220 has a first outer diameter and the second conductive ring 290 has a second outer diameter. The second outer diameter is less than the first outer diameter. The first conductive ring 220 has a first depth and the second conductive ring 290 has a second depth. The first and second outer diameters, including the first and second depths, are of sufficient size to allow components to be mounted around and in close proximity to the integrated circuit device when the device is mounted on the integrated circuit device. The second conductive ring 290 can have a second array of radially extending fins 292. The second array of radially extending fins is thermally coupled to the lower outer surface area 240 of the thermally conductive core 210. The second array can include a second plurality of folded fins. The second plurality of folded fins can be made from a plurality of alternating deep and shallow folds in a continuous ribbon, similar to the first plurality of folded fins shown in Figure 2. Figure 3 is an isometric view 300 showing the enhanced heat dissipation device 200 shown in Figure 2 attached to the microprocessor 120 on an assembled motherboard 130. In the example embodiment shown in Figure 3, the microprocessor 120 has a front side 340 and a back side 330. The front side 340 is disposed across from the back side 330. The front side 340 is attached to the assembled motherboard 130 having components such as the low-cost capacitors 140 and other such electrical components. The base 270 of the enhanced heat dissipation device 200, shown in Figure 2, is attached to the back side 330 of the microprocessor 120. It can be seen from Figure 3 that the first and second conductive rings 220 and 290, including the first and second pluralities of folded fins 250 and 292, are of sufficient size to allow low-cost capacitors 140 mounted on the assembled board 130 to encroach around the microprocessor 120. It can also be seen that the low-cost capacitors 140 are below the first conductive ring 220 and around the second conductive ring 290. Also, it can be seen in Figure 3 that the first conductive ring 220 is larger than the second conductive ring 290, thereby increasing the heat dissipation rate without increasing the footprint size of the base 270 of the heat dissipation device 200 any more than the back side 330 of the microprocessor 120. The coinciding footprint sizes of the base 270 of the heat dissipation device 200 and the back side 330 of the microprocessor 120 enable the base 270 and the back side 330 of the microprocessor 120 to have the same heat transfer rates. This in turn increases the efficiency of heat transfer between the base 270 and the back side 330 of the microprocessor 120. The thermally conductive core 210 further has a top surface 275 disposed across from the base 270. In some embodiments, the top surface 275 is perpendicular to the axis 260 and is in close proximity to the first conductive ring 220. A heat transport medium can be attached to the top surface 275 to introduce a heat transfer medium 297, such as air, in the direction shown in Figure 2, to enhance the heat dissipation by the heat dissipation device 200.
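The encroachment condition described above is purely geometric: a board-mounted component clears the device if it stands shorter than the underside of the first conductive ring and sits radially outside the second conductive ring. A minimal sketch of that check follows; the function and the dimensions are hypothetical and chosen only for illustration:

# Hypothetical dimensions (mm) chosen only to illustrate the clearance check.
FIRST_RING_UNDERSIDE_MM = 12.0      # height of the first ring's underside above the board
SECOND_RING_OUTER_RADIUS_MM = 18.0  # outer radius of the lower, smaller ring

def component_clears(height_mm: float, radial_position_mm: float) -> bool:
    """True if a component of the given height, at the given radial distance
    from the core axis, fits below the first conductive ring and outside the
    second conductive ring."""
    return (height_mm < FIRST_RING_UNDERSIDE_MM
            and radial_position_mm > SECOND_RING_OUTER_RADIUS_MM)

# A low-cost capacitor 8 mm tall placed 22 mm from the axis encroaches safely.
print(component_clears(height_mm=8.0, radial_position_mm=22.0))  # True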
A heat transport medium 295, such as a heat pipe or other such medium, can be included in the thermally conductive core 210 to further enhance the heat transfer from the heat dissipation device 200. Figure 4 is a flow diagram illustrating generally a method 400 of forming an enhanced heat dissipation device to extract heat from an integrated circuit device mounted on an assembled printed circuit board. Method 400, as shown in Figure 4, begins with action 410 of forming a thermally conductive core having upper and lower core surface areas. The next action 420 requires forming a first array of radially extending fins. The next action 430 is to form a first conductive ring having a first diameter from the formed first array of radially extending fins. The next action 440 requires attaching the first conductive ring to the upper core surface area such that the lower core surface area has sufficient space below the first conductive ring to allow components to be mounted in close proximity to and around the lower core surface area. In some embodiments, forming the first array of radially extending fins further includes forming a first conductive ribbon, forming a first alternating series of deep and shallow folds from the first conductive ribbon, and further forming a first conductive ring from the formed first alternating series of deep and shallow folds. In some embodiments, the method 400 further includes forming a second array of radially extending fins, and forming a second conductive ring having a second diameter from the formed second array. Further, the second conductive ring is attached to the lower core surface area of the thermally conductive core such that the second diameter is sufficient to allow the components to encroach around the integrated circuit device. In some embodiments, forming the second array of radially extending fins further includes forming a second conductive ribbon, forming a second alternating series of deep and shallow folds from the second conductive ribbon, and further forming a second conductive ring from the formed second alternating series of deep and shallow folds. The second diameter of the second conductive ring is less than the first diameter of the first conductive ring. In some embodiments, the enhanced heat dissipation device is made of thermally conductive materials such as copper, aluminum, or any other such material capable of extracting heat away from the integrated circuit device. In some embodiments, the thermally conductive core can include heat transport mediums such as one or more heat pipes, a liquid, a thermo-siphon, or other similar heat transport mediums suitable for enhancing the extraction of heat from the integrated circuit device.

Conclusion
The above-described device and method provide, among other things, enhanced heat dissipation using an array of radially extending fins that allows electronic components to encroach around the integrated circuit device on which the device is mounted, while maintaining high performance and cost effectiveness by leveraging currently enabled high-volume manufacturing techniques. |
Disclosed are devices, fabrication methods and design rules for flip-chip devices. Aspects include an apparatus including a flip-chip device. The flip-chip device includes a die having a plurality of under bump metallizations (UBMs). A package substrate having a plurality of bond pads is also included. A plurality of solder joints couple the die to the package substrate. The plurality of solder joints are formed from a plurality of solder bumps plated on the plurality of UBMs, where the plurality of solder bumps are directly connected to the plurality of bond pads. |
CLAIMS

WHAT IS CLAIMED IS:
1. An apparatus including a flip-chip device, the flip-chip device comprising: a die having a plurality of under bump metallizations (UBMs); a package substrate having a plurality of bond pads; and a plurality of solder joints coupling the die to the package substrate, wherein the plurality of solder joints are formed from a plurality of solder bumps plated on the plurality of UBMs, the plurality of solder bumps being directly connected to the plurality of bond pads.2. The flip-chip device of claim 1, wherein the flip-chip device has a bond line thickness to solder joint diameter ratio of approximately 0.64, where the bond line thickness is a distance between the die and the package substrate.3. The flip-chip device of claim 2, wherein the bond line thickness is approximately 35 um.4. The flip-chip device of claim 2, wherein a solder joint diameter for each of the plurality of solder joints is approximately 95 um.5. The flip-chip device of claim 1, further comprising: a solder resist layer, of the package substrate, having a solder resist opening (SRO) over each bond pad of the plurality of bond pads, wherein a ratio of SRO to solder joint diameter is approximately 0.95.6. The flip-chip device of claim 5, wherein the SRO over each bond pad is approximately 35 um.7. The flip-chip device of claim 5, wherein the solder joint diameter for each of the plurality of solder joints is approximately 95 um.8. The flip-chip device of claim 1, wherein the flip-chip device has a bond line thickness to solder joint diameter ratio in a range of approximately 0.3 to 0.7, where the bond line thickness is a distance between the die and the package substrate.9. The flip-chip device of claim 8, wherein the bond line thickness is in a range of approximately 30um to 60um.10. The flip-chip device of claim 8, wherein a solder joint diameter for each of the plurality of solder joints is in a range of approximately 70um to 180um.11. The flip-chip device of claim 1, wherein each of the plurality of solder joints has a generally cylindrical or columnar shape.12. The flip-chip device of claim 1, wherein the plurality of bond pads are formed of copper.13. The flip-chip device of claim 1, wherein each UBM, of the plurality of UBMs, has a minimum metal density and a minimum via density in an area under each UBM.14. The flip-chip device of claim 13, wherein the minimum metal density is 20 percent.15. The flip-chip device of claim 13, wherein the minimum via density is 0.1 percent.16. The flip-chip device of claim 13, wherein the area under the UBM is divided into a plurality of checking windows to check the minimum metal density and the minimum via density.17. The flip-chip device of claim 16, wherein each checking window is in a range of 5um by 5um to 20um by 20um.18. The apparatus of claim 1, wherein the apparatus is selected from the group consisting of a music player, a video player, an entertainment unit, a navigation device, a communications device, a mobile device, a mobile phone, a smartphone, a personal
digital assistant, a fixed location terminal, a tablet computer, a computer, a wearable device, an Internet of things (IoT) device, a laptop computer, a server, and a device in an automotive vehicle.19. A method for manufacturing a flip-chip device, the method comprising: providing a die having a plurality of solder bumps plated on a plurality of under bump metallizations (UBMs); providing a package substrate having a plurality of bond pads; and forming a plurality of solder joints coupling the die to the package substrate, wherein the plurality of solder joints are formed from the plurality of solder bumps being directly connected to the plurality of bond pads during a reflow process.20. The method of claim 19, wherein the flip-chip device has a bond line thickness to solder joint diameter ratio of approximately 0.64, where the bond line thickness is a distance between the die and the package substrate.21. The method of claim 20, wherein the bond line thickness is approximately 35 um.22. The method of claim 20, wherein a solder joint diameter for each of the plurality of solder joints is approximately 95 um.23. The method of claim 19, wherein the package substrate includes a solder resist layer having a solder resist opening (SRO) over each bond pad of the plurality of bond pads and wherein a ratio of SRO to solder joint diameter is approximately 0.95.24. The method of claim 23, wherein the SRO over each bond pad is approximately 35 um.25. The method of claim 23, wherein the solder joint diameter for each of the plurality of solder joints is approximately 95 um.26. The method of claim 19, further comprising:
checking a minimum metal density and a minimum via density in an area under each UBM of the plurality of UBMs.27. The method of claim 26, wherein the minimum metal density is 20 percent.28. The method of claim 26, wherein the minimum via density is 0.1 percent.29. The method of claim 26, wherein the area under the UBM is divided into a plurality of checking windows to check the minimum metal density and the minimum via density.30. The method of claim 29, wherein each checking window is in a range of 5um by 5um to 20um by 20um. |
FLIP-CHIP DEVICE

CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] The present Application for Patent claims the benefit of Provisional Application No. 62/923,237 entitled “FLIP-CHIP DEVICE” filed October 18, 2019, and U.S. Non-Provisional Application No. 17/071,432 entitled “FLIP-CHIP DEVICE” filed October 15, 2020, both of which are assigned to the assignee hereof and expressly incorporated herein by reference in their entirety.

FIELD OF DISCLOSURE
[0002] This disclosure relates generally to package devices, and more specifically, but not exclusively, to flip-chip devices and fabrication techniques for flip-chip devices.

BACKGROUND
[0003] Integrated circuit (IC) technology has achieved great strides in advancing computing power through miniaturization of active components. Flip-chip devices can be found in many electronic devices, including processors, servers, radio frequency (RF) integrated circuits, etc. Flip-chip packaging technology has become cost-effective in high pin count devices. Flip-chip bonding conventionally uses solder-on-pad (SOP) technology for flip-chip substrates. There are many solutions for the SOP technology and each has its advantages and disadvantages.
[0004] For example, conventional substrate designs use SOP to fill in the opening of the solder resist on the top of the bond pad. Also, the SOP can help form the solder joint interconnection to the die. In conventional designs and fabrication processing, removing the SOP process could result in solder joint voids and failure issues. Accordingly, conventional baseline processes use SOP for lead-free (LF) bump assembly. However, the SOP process adds additional costs and additional complexity to substrate fabrication processes.
[0005] Accordingly, there is a need for systems, apparatus, and methods that overcome the deficiencies of conventional SOP processes, including the methods, systems and apparatus provided herein.
SUMMARY
[0006] The following presents a simplified summary relating to one or more aspects and/or examples associated with the apparatus and methods disclosed herein. As such, the following summary should not be considered an extensive overview relating to all contemplated aspects and/or examples, nor should the following summary be regarded to identify key or critical elements relating to all contemplated aspects and/or examples or to delineate the scope associated with any particular aspect and/or example. Accordingly, the following summary has the sole purpose of presenting certain concepts relating to one or more aspects and/or examples relating to the apparatus and methods disclosed herein in a simplified form to precede the detailed description presented below.
[0007] At least one aspect includes a flip-chip device including a die having a plurality of under bump metallizations (UBMs). A package substrate having a plurality of bond pads is also included. A plurality of solder joints, coupling the die to the package substrate, are formed from a plurality of solder bumps plated on the plurality of UBMs and directly connected to metal bond pads.
[0008] At least one additional aspect includes a method for fabricating a flip-chip device. The method includes providing a die having a plurality of solder bumps plated on a plurality of under bump metallizations (UBMs); providing a package substrate having a plurality of bond pads; and forming a plurality of solder joints coupling the die to the package substrate, wherein the plurality of solder joints are formed from the plurality of solder bumps being directly connected to metal bond pads during a reflow process.
[0009] Other features and advantages associated with the apparatus and methods disclosed herein will be apparent to those skilled in the art based on the accompanying drawings and detailed description.

BRIEF DESCRIPTION OF THE DRAWINGS
[0010] A more complete appreciation of aspects of the disclosure and many of the attendant advantages thereof will be readily obtained as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings which are presented solely for illustration and not limitation of the disclosure.
[0011] FIG. 1 illustrates a partial cross-sectional view of a conventional interconnection of a flip-chip device.
[0012] FIG. 2A illustrates a partial cross-sectional view of an interconnection of a flip-chip device in accordance with at least one aspect of the disclosure.
[0013] FIG. 2B illustrates checking windows for a portion of a flip-chip device in accordance with at least one aspect of the disclosure.
[0014] FIG. 3 illustrates an image of a partial cross-sectional view of an interconnection of a flip-chip device in accordance with at least one aspect of the disclosure.
[0015] FIG. 4 illustrates an integrated device including a flip-chip device in accordance with at least one aspect of the disclosure.
[0016] FIG. 5 illustrates a mobile device in accordance with at least one aspect of the disclosure.
[0017] FIG. 6 illustrates various electronic devices that may be integrated with any of the aforementioned flip-chip devices in accordance with at least one aspect of the disclosure.
[0018] FIG. 7 illustrates a flowchart of a method for manufacturing a flip-chip device in accordance with at least one aspect of the disclosure.
[0019] In accordance with common practice, the features depicted by the drawings may not be drawn to scale. Accordingly, the dimensions of the depicted features may be arbitrarily expanded or reduced for clarity. In accordance with common practice, some of the drawings are simplified for clarity. Thus, the drawings may not depict all components of a particular apparatus or method. Further, like reference numerals denote like features throughout the specification and figures.

DETAILED DESCRIPTION
[0020] Aspects of the present disclosure are illustrated in the following description and related drawings directed to specific aspects. Alternate aspects may be devised without departing from the scope of the teachings herein. Additionally, well-known elements of the illustrative aspects herein may not be described in detail or may be omitted so as not to obscure the relevant details of the teachings in the present disclosure.
[0021] In certain described example implementations, instances are identified where various component structures and portions of operations can be taken from known, conventional techniques, and then arranged in accordance with one or more exemplary aspects. In
such instances, internal details of the known, conventional component structures and/or portions of operations may be omitted to help avoid potential obfuscation of the concepts illustrated in the illustrative aspects disclosed herein.
[0022] FIG. 1 illustrates a partial side view of a conventional flip-chip device 100. As shown in FIG. 1, a flip-chip device 100 includes a package substrate 110 having a plurality of insulating and metal layers. The various metal layers can be interconnected using vias, such as via 116. On a backside of the package substrate, a ball grid array (BGA) 130 can be used to connect the package substrate and flip-chip package (formed of die 120, also referred to herein as “chip”, and package substrate 110) to external devices, circuitry, etc. On the front side of the package substrate 110 is a bond pad 114, illustrated as a copper bond pad 114. A solder resist layer 112 is formed over the bond pad 114. The solder resist layer 112 can be a photosensitive polymer material having a narrow opening to allow for connection to the bond pad 114. A solder-on-pad (SOP) 115 is provided to fill the opening to facilitate connection to the bond pad 114 in later operations. The SOP 115 can be formed by a solder drop or can be printed with a solder paste and reflow process to fill the opening. As discussed above, the SOP is used to prevent voids in the interconnection of the package substrate 110 to the die 120. The under bump metallization (UBM) 122 of the die 120 is used for connecting the die 120 to the package substrate 110 with solder bump 125 for flip-chip packages. The UBM 122 of the die 120 may be formed of aluminum or copper. A detailed image 150 illustrates a cross-section of the interconnection after the die 120 is attached to the package substrate 110. As illustrated, the solder joint 135 has a specific geometry (which will be discussed further below) and is formed from the solder bump 125 being fused with the SOP 115 to form an alloy. The solder joint 135 forms the electrical connection between the UBM 122 and bond pad 114 through the opening in the solder resist layer 112, which provides the electrical connection between die 120 and package substrate 110 for the flip-chip device 100. It will be appreciated that although only one interconnection between the die 120 and package substrate 110 is illustrated, a plurality of interconnections are used for the flip-chip device 100.
[0023] FIG. 2A illustrates a partial cross-sectional view of a flip-chip device 200 in accordance with one or more aspects of the disclosure. As shown in FIG. 2A, a flip-chip device 200 includes a package substrate 210 having a plurality of insulating layers 217 and conductive layers 218. The insulating layers 217 may be interlayer dielectric (ILD)
layers and may be formed of materials such as doped silicon dioxide (SiO2), or its fluorine-doped and carbon-doped forms, as well as spin-on organic polymeric dielectrics such as polyimide (PI), polynorbornenes, benzocyclobutene (BCB), polytetrafluoroethylene (PTFE) and/or silicone-based polymeric dielectrics. The conductive layers 218 can be formed of any conductive material with high conductivity such as copper (Cu), silver (Ag), gold (Au), aluminum (Al) and other like materials, alloys or combinations of materials. The various conductive layers can be interconnected using vias 216. On a backside of the package substrate, a connection structure 230 (e.g., a ball grid array (BGA)) can be used to connect the package substrate 210 / flip-chip device 200 (formed of die 220, also referred to herein as “chip”, and package substrate 210) to external devices, circuitry, etc. On the front side of the package substrate 210 is a bond pad 214, illustrated as a copper bond pad 214; however, it can be formed of any conductive material, such as the aforementioned materials. A solder resist layer 212 is formed over the bond pad 214. The solder resist layer 212 can be a photosensitive polymer material with a thickness of about 15 um having an enlarged solder resist opening (SRO) 215 to allow for connection to the bond pad 214. In the various aspects disclosed herein, it will be appreciated that, in contrast to conventional designs and processes, SOP material is not provided to fill the SRO 215. As in the conventional art, the under bump metallization (UBM) 222 of the die 220 is used for connecting the die 220 to the package substrate 210 using solder bump 225 for the flip-chip device 200. The UBM 222 of the die 220 may be formed of aluminum or copper or any suitable conductive material.
[0024] A detailed image 250 illustrates a cross-section of the interconnection after the die 220 is attached to the package substrate 210. As illustrated in 250, a solder joint 235 is formed from the solder bump 225 being attached directly to the bond pad 214 (without SOP) to form an electrical connection between the UBM 222 and bond pad 214 through the SRO 215 in the solder resist layer 212. The solder joint 235 has a geometry that is distinct from the conventional solder joint (e.g., solder joint 135 discussed above), which will be described in greater detail below. The solder joint 235 forms the electrical connection between die 220 and package substrate 210 for the flip-chip device 200. It will be appreciated that although only one interconnection between the die 220 and package substrate 210 is illustrated, a plurality of interconnections are used for the flip-chip device 200.
[0025] According to various aspects disclosed herein, a new package substrate and die design rule was defined and tested to remove the SOP from the package substrate for the plated LF solder bump 225 attachment. The die-level design rules, in some aspects, include increased metal and via density under the solder bump 225. In conventional design rules, there is no clear or only limited specification for metal and via density under the UBM inside the back-end-of-line (BEOL) metal. Some foundries only have a general metal density rule for chemical mechanical polishing (CMP), such as a 100um by 100um checking window, which does not focus on the UBM, with a minimum metal density of 10% or 20% and no special via density rule. The new die-level design rules require, under the UBM area, a minimum 20% metal density and 0.1% via density at a 10um by 10um checking window, which provides a high-resolution checking window for extreme low-k (ELK) dielectric and upper metal and via layers under the UBM area. It will be appreciated that the resolution is significantly increased over the checking window of conventional designs. Further, according to one or more aspects, a new solder control rule was defined to reduce the solder diameter and height of the solder bump 225. For example, in some aspects, a new bump specification for an LF bump on a non-SOP pad includes, for a UBM of 80um, a bump height reduced from 75um to 69um and a solder diameter reduced from 102um to 90um. According to one or more aspects, a new SRO rule was defined to form the SRO 215 to allow for good solder joint formation without voids. In some aspects, the SRO without SOP will be sized similarly to the solder diameter. For example, for an 80um UBM (solder diameter 90um), the SRO will also be 90um. It will be appreciated that the various aspects disclosed allow for cost savings, such as removing the SOP process on the package substrate 210, in contrast to the SOP process performed in conventional processes.
[0026] FIG. 2B illustrates high-resolution checking windows for a portion of the flip-chip device 200 in accordance with one or more aspects of the disclosure. As part of the new design rules, in die 220, a UBM area 265 contains a plurality of checking windows 262 at a higher resolution (i.e., smaller size) than conventional designs. Conventional larger windows can be used in areas on the die with no UBMs. In contrast, in conventional design rules, there is no clear or only limited specification for metal density under the UBM, as noted above. Conventional design rules use large windows that broadly check for metal without any concern for the metal in the die layers under the UBM. Accordingly, there may be low to zero metal and via density in the die layers under the
UBM. In contrast, in the various aspects disclosed, the plurality of checking windows 262 cover the UBM area 265 and allow for determination of the metal and via density under the UBM area 265. The metal and vias under the UBM may be referred to and illustrated herein as metal filling 264. Checking windows 262 allow for further verification of a distribution of the metal fillings 264 under the UBM as determined by the UBM area 265. In the illustrated example, the checking windows 262 are in a 5 by 5 array covering the UBM area 265. In some aspects, the checking windows 262 may be in the range of 5um by 5um to 20um by 20um and in some aspects may be selected based on the UBM size. For example, a larger UBM may have a larger checking window, but still have multiple checking windows covering the UBM area to evaluate the metal and via density under the UBM. In a specific example, the checking window is 10um by 10um, as discussed above, to confirm a minimum metal density and via density under the UBM area 265. However, it will be appreciated that the checking windows 262 are not limited to the illustrated configurations or sizes, which are provided solely for discussion and illustration of the various aspects disclosed herein. Further, in some aspects, the checking windows 262 can be used to confirm that a majority of checking windows 262 within the UBM area 265 have at least some metal filling 264 to determine that metal fillings 264 are not concentrated in a particular portion of the UBM area 265. In further aspects, the checking windows 262 can be used to confirm that at least a given number of checking windows 262 within the UBM area 265 have a minimum density of metal fillings 264 (e.g., metal and via density). The various aspects disclosed herein establish and can determine (e.g., using the smaller checking windows 262) a minimum metal density and via density under the UBM area 265. Designers can use the determination from the checking windows 262 to redesign the metal layer routing and vias, if needed, to ensure the established minimums are met. In some aspects, increased mechanical stress may occur during the fabrication of the flip-chip device (e.g., during the coupling of the die to the package substrate). By ensuring a minimum metal density and via density under the UBM area 265, failures at the UBM can be reduced, as the metal fillings 264 (e.g., metal and vias) under the UBM area 265 can provide increased support for the UBM compared to conventional designs. As discussed above, conventional designs do not check metal density or via density under the UBM area, which may allow for low density or no metal under the UBM area, which would provide less support to the UBM.
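To make the checking-window rule concrete, the following is a minimal, illustrative sketch of such a density check, assuming the layout under the UBM has been rasterized into boolean metal and via masks; the function name and the mask input format are hypothetical, while the thresholds (20% metal, 0.1% via, 10um windows) mirror the example values discussed above:

import numpy as np

MIN_METAL_DENSITY = 0.20  # minimum 20% metal under the UBM area
MIN_VIA_DENSITY = 0.001   # minimum 0.1% via under the UBM area
WINDOW_UM = 10.0          # 10um by 10um checking window

def check_ubm_density(metal_mask, via_mask, um_per_pixel=1.0):
    """Tile non-overlapping checking windows over the UBM area and return
    (row_um, col_um, metal_density, via_density) for each violating window."""
    win = int(round(WINDOW_UM / um_per_pixel))
    violations = []
    rows, cols = metal_mask.shape
    for r in range(0, rows - win + 1, win):
        for c in range(0, cols - win + 1, win):
            metal = metal_mask[r:r + win, c:c + win].mean()
            via = via_mask[r:r + win, c:c + win].mean()
            if metal < MIN_METAL_DENSITY or via < MIN_VIA_DENSITY:
                violations.append((r * um_per_pixel, c * um_per_pixel, metal, via))
    return violations

# Example: a 50um by 50um UBM area (a 5 by 5 array of 10um windows) at 1um/pixel.
rng = np.random.default_rng(0)
metal = rng.random((50, 50)) < 0.25  # ~25% metal fill
vias = rng.random((50, 50)) < 0.002  # ~0.2% via fill
for row_um, col_um, m, v in check_ubm_density(metal, vias):
    print(f"window at ({row_um:.0f}um, {col_um:.0f}um): metal {m:.0%}, via {v:.2%}")

A window flagged by such a check would prompt the designer to add metal fill or vias in that region, matching the redesign guidance above.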
[0027] FIG. 3 illustrates a cross-section of an interconnection in accordance with one or more examples of the disclosure. A second detailed image 320 illustrates the cross-section of an interconnection between a die 220 and package substrate 210 without SOP according to various aspects of the disclosure. As illustrated in 320, the solder bump 225 has been attached directly to the bond pad 214 to form an electrical connection between the UBM 222 and bond pad 214 through the SRO 215 in the solder resist layer 212. This interconnection forms the electrical connection between die 220 and package substrate 210 for the flip-chip device 200. It will be appreciated that although only one interconnection between the die 220 and package substrate 210 is illustrated, a plurality of interconnections are used for the flip-chip device 200.
[0028] In addition to illustrating the various elements in image 320, various measurement references are provided. Some example measurements and example ranges are provided in Table 1 below. As can be seen from Table 1, the dimensions of the interconnections of flip-chip device 200 are generally smaller than in the conventional design, with only the UBM 222 being generally the same size and the SRO 215 being slightly larger. Additionally, the solder bump 225 of the various aspects of the disclosure has both a smaller diameter and a smaller height than in the conventional design. After the die 220 attachment to the package substrate 210 (e.g., via reflow), the solder joint 235 diameter (“A” in 320) is still smaller than in the conventional designs. The bond line thickness (“B” in 320), however, is on the order of half that of the conventional designs. This results in a reduced bond line thickness to solder joint 235 diameter (B/A) ratio. For example, the B/A ratio is 0.37 for solder joint 235. Additionally, the bond line thickness impacts the overall height of the flip-chip device 200, as it represents the distance between the die 220 and the package substrate 210 at the solder joint 235. Accordingly, reducing the B/A ratio allows for a reduced overall height of the flip-chip device 200 and potentially increased RF performance, since the connections between the die 220 and package substrate 210 will be shorter. Further, it will be appreciated that, unlike the conventional design with SOP, the solder bump 225 diameter is substantially equal to the SRO 215. This results in a solder joint 235 diameter that is only slightly larger than the SRO 215 (e.g., a C/A ratio of 0.95), which results in the solder joint 235 having a generally cylindrical or column-like shape, as can be seen in image 320. Table 1 provides some values for advanced technology of 28nm or later. It will be appreciated that the specific example values provided above and in the table below are merely for
illustration. In addition to some specific example values, Table 1 also provides example ranges for each reference.

Table 1

[0029] FIG. 4 illustrates components of an integrated device 400 according to one or more aspects of the disclosure. Regardless of the various configurations of the flip-chip packages (e.g., die 410 and package substrate 420) discussed above, it will be appreciated that the package substrate 420 may be configured to couple the die 410 to a printed circuit board (PCB) 490. The PCB 490 may also be coupled to a power IC 480 (e.g., a power management integrated circuit (PMIC)), which regulates power to the integrated device 400 and allows the package substrate 420 and the die 410 to be electrically coupled to the power IC 480. Specifically, one or more power supply (VDD) lines 491 and one or more ground (GND) lines 492 may be coupled to the power IC 480 to distribute power to the PCB 490, to the package substrate 420 via VDD BGA pin 425 and GND BGA pin 427, and to the die 410 via die bumps 412 directly connected to the bond pads (not illustrated) of the package substrate 420, as discussed above. The VDD line 491 and GND line 492 each may be formed from traces, shapes or patterns in one or more metal layers of the PCB 490 (e.g., layers 1-6) coupled by one or more vias through insulating layers separating the metal layers 1-6 in the PCB 490. The PCB 490 may have one or more PCB capacitors (PCB cap) 495 that can be used to condition the power supply signals, as is known to those skilled in the art. Additional connections and devices may be coupled to and/or pass through the PCB 490 to the package via one or more additional BGA pins (not illustrated) on the package substrate 420. Some or all of these signals may be coupled to the die 410 via die bumps 412 directly connected to the bond pads of the package substrate 420, as disclosed herein. It will be appreciated that the
illustrated configuration and descriptions are provided merely to aid in the explanation of the various aspects disclosed herein. For example, the PCB 490 may have more or fewer metal and insulating layers, there may be multiple lines providing power, digital and/or analog signals to the various components, and additional dies and/or package substrates may be coupled to the PCB 490, etc. Accordingly, the foregoing illustrative examples and associated figures should not be construed to limit the various aspects disclosed and claimed herein.
[0030] FIG. 5 illustrates a mobile device in accordance with some examples of the disclosure. Referring now to FIG. 5, a block diagram of a mobile device that is configured according to exemplary aspects is depicted and generally designated mobile device 500. In some aspects, mobile device 500 may be configured as a wireless communication device. As shown, mobile device 500 includes processor 501. Processor 501 is shown to comprise instruction pipeline 512, buffer processing unit (BPU) 508, branch instruction queue (BIQ) 511, and throttler 510 as is well known in the art. Other well-known details (e.g., counters, entries, confidence fields, weighted sum, comparator, etc.) of these blocks have been omitted from this view of processor 501 for the sake of clarity. Processor 501 may be communicatively coupled to memory 532 over a link, which may be a die-to-die or chip-to-chip link. Mobile device 500 also includes display 528 and display controller 526, with display controller 526 coupled to processor 501 and to display 528.
[0031] In some aspects, FIG. 5 may include coder/decoder (CODEC) 534 (e.g., an audio and/or voice CODEC) coupled to processor 501; speaker 536 and microphone 538 coupled to CODEC 534; and wireless circuits 540 (which may include a modem, RF circuitry, filters, etc., which may be implemented using one or more flip-chip devices, as disclosed herein) coupled to wireless antenna 542 and to processor 501.
[0032] In a particular aspect, where one or more of the above-mentioned blocks are present, processor 501, display controller 526, memory 532, CODEC 534, and wireless circuits 540 can be included in a system-in-package or system-on-chip device 522, which may be implemented in whole or in part using the flip-chip techniques disclosed herein. Input device 530 (e.g., physical or virtual keyboard), power supply 544 (e.g., battery), display 528, speaker 536, microphone 538, and wireless antenna 542 may be external to system-on-chip device 522 and may be coupled to a component of system-on-chip device 522, such as an interface or a controller.
[0033] It should be noted that although FIG. 5 depicts a mobile device, processor 501, memory 532 and other components may also be integrated into a set top box, a music player, a video player, an entertainment unit, a navigation device, a personal digital assistant (PDA), a fixed location data unit, a computer, a laptop, a tablet, a communications device, a mobile phone, or other similar devices.
[0034] FIG. 6 illustrates various electronic devices that may be integrated with any of the aforementioned integrated devices or semiconductor devices in accordance with various examples of the disclosure. For example, a mobile phone device 602, a laptop computer device 604, and a fixed location terminal device 606 may each be considered generally user equipment (UE) and may include a flip-chip device 600 as described herein. The flip-chip device 600 may be, for example, any of the integrated circuits, dies, integrated devices, integrated device packages, integrated circuit devices, device packages, integrated circuit (IC) packages, or package-on-package devices described herein. The devices 602, 604, 606 illustrated in FIG. 6 are merely exemplary. Other electronic devices may also feature the flip-chip device 600 including, but not limited to, a group of devices (e.g., electronic devices) that includes mobile devices, hand-held personal communication systems (PCS) units, portable data units such as personal digital assistants, global positioning system (GPS) enabled devices, navigation devices, set top boxes, music players, video players, entertainment units, fixed location data units such as meter reading equipment, communications devices, smartphones, tablet computers, computers, wearable devices, servers, routers, electronic devices implemented in automotive vehicles (e.g., autonomous vehicles), an Internet of things (IoT) device or any other device that stores or retrieves data or computer instructions or any combination thereof.
[0035] In accordance with the various aspects disclosed herein, at least one aspect includes a flip-chip device (e.g., 200) including a die (e.g., 220) having a plurality of under bump metallizations (UBMs) (e.g., 222). The flip-chip device further includes a package substrate (e.g., 210) having a plurality of bond pads (e.g., 214). A plurality of solder joints (e.g., 235) couple the die to the package substrate. The plurality of solder joints are formed from a plurality of solder bumps (e.g., 225) plated on the plurality of UBMs and being directly connected to the plurality of bond pads. Among the various technical advantages provided by the disclosed aspects, in at least some aspects, having solder joints (e.g., 235) formed by directly coupling the solder bumps (e.g., 225) plated on the
plurality of UBMs (e.g., 222) to the plurality of bond pads (e.g., 214) eliminates the need for SOP processing of the package substrate (e.g., 210), as discussed above. Additionally, the spacing between the die (e.g., 220) and the package substrate (e.g., 210) and the overall height of the flip-chip device (e.g., 200) are reduced. Further, the solder bump plating time can be reduced since the solder diameter and height are reduced relative to conventional designs.
[0036] In order to fully illustrate the various aspects of the present disclosure, methods of fabrication are presented. It will be appreciated that the illustrated configurations, materials and descriptions are provided merely to aid in the explanation of the various aspects disclosed herein. Additionally, details related to the fabrication are not provided, as they are not necessary for an understanding of the aspects disclosed and would be easily recognized by one skilled in the art. Further, various methods of fabrication are possible, and the discussed fabrication methods are presented only to aid in the understanding of the concepts disclosed herein.
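For readers tracking the geometry discussed with image 320, the following worked example reproduces the two headline ratios; the dimensions are the sample values quoted in the description (B = 35um, A = 95um, C = 90um) and are illustrative, not normative:

# Sample dimensions (um) from the description's discussion of image 320.
bond_line_thickness = 35.0    # "B": die-to-substrate gap at the solder joint
solder_joint_diameter = 95.0  # "A": diameter of the reflowed solder joint
sro_diameter = 90.0           # "C": solder resist opening diameter

b_over_a = bond_line_thickness / solder_joint_diameter
c_over_a = sro_diameter / solder_joint_diameter

print(f"B/A = {b_over_a:.2f}")  # ~0.37, within the 0.3 to 0.7 range recited in the claims
print(f"C/A = {c_over_a:.2f}")  # ~0.95: SRO nearly equal to the joint diameter,
                                # yielding the columnar joint shape seen in image 320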
[0038] The foregoing disclosed devices, design rules and functionalities may be designed and configured into computer files (e.g., register-transfer level (RTL), Geometric Data Stream (GDS) Gerber, and the like) stored on computer-readable media. Some or all such files may be provided to fabrication handlers who fabricate devices based on such files. Resulting products may include semiconductor wafers that are then cut into semiconductor die and packaged into a flip-chip package. The flip-chip packages may then be employed in devices described herein.[0039] It will be appreciated that various aspects disclosed herein can be described as functional equivalents to the structures, materials and/or devices described and/or recognized by those skilled in the art. For example, in one aspect, an apparatus may comprise a means for performing the various functionalities discussed above. It will be appreciated that the aforementioned aspects are merely provided as examples and the various aspects claimed are not limited to the specific references and/or illustrations cited as examples.[0040] One or more of the components, processes, features, and/or functions illustrated in FIGs. 1-7 may be rearranged and/or combined into a single component, process, feature or function or incorporated in several components, processes, or functions. Additional elements, components, processes, and/or functions may also be added without departing from the disclosure. It should also be noted that FIGs. 1-7 and corresponding description in the present disclosure are not limited to dies and/or ICs. In some implementations, FIGs. 1-7 and its corresponding description may be used to manufacture, create, provide, and/or produce integrated devices. In some implementations, a device may include a die, an integrated device, a die package, an integrated circuit (IC), a device package, an integrated circuit (IC) package, a wafer, a semiconductor device, a package on package (PoP) device, and/or an interposer.[0041] As used herein, the terms “user equipment” (or “UE”), “user device,” “user terminal,” “client device,” “communication device,” “wireless device,” “wireless communications device,” “handheld device,” “mobile device,” “mobile terminal,” “mobile station,” “handset,” “access terminal,” “subscriber device,” “subscriber terminal,” “subscriber station,” “terminal,” and variants thereof may interchangeably refer to any suitable mobile or stationary device that can receive wireless communication and/or navigation signals. These terms include, but are not limited to, a music player, a video player, an entertainment unit, a navigation device, a communications device, a smartphone, a
personal digital assistant, a fixed location terminal, a tablet computer, a computer, a wearable device, a laptop computer, a server, an automotive device in an automotive vehicle, and/or other types of portable electronic devices typically carried by a person and/or having communication capabilities (e.g., wireless, cellular, infrared, short-range radio, etc.). These terms are also intended to include devices which communicate with another device that can receive wireless communication and/or navigation signals such as by short-range wireless, infrared, wireline connection, or other connection, regardless of whether satellite signal reception, assistance data reception, and/or position-related processing occurs at the device or at the other device. In addition, these terms are intended to include all devices, including wireless and wireline communication devices, that are able to communicate with a core network via a radio access network (RAN), and through the core network the UEs can be connected with external networks such as the Internet and with other UEs. Of course, other mechanisms of connecting to the core network and/or the Internet are also possible for the UEs, such as over a wired access network, a wireless local area network (WLAN) (e.g., based on Institute of Electrical and Electronics Engineers (IEEE) 802.11, etc.) and so on. UEs can be embodied by any of a number of types of devices including but not limited to printed circuit (PC) cards, compact flash devices, external or internal modems, wireless or wireline phones, smartphones, tablets, tracking devices, asset tags, and so on.[0042] The wireless communication between electronic devices can be based on different technologies, such as code division multiple access (CDMA), W-CDMA, time division multiple access (TDMA), frequency division multiple access (FDMA), Orthogonal Frequency Division Multiplexing (OFDM), Global System for Mobile Communications (GSM), 3GPP Long Term Evolution (LTE), 5G New Radio, Bluetooth (BT), Bluetooth Low Energy (BLE), IEEE 802.11 (WiFi), and IEEE 802.15.4 (Zigbee/Thread) or other protocols that may be used in a wireless communications network or a data communications network.[0043] The word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any detail described herein as “exemplary” is not to be construed as advantageous over other examples. Likewise, the term “examples” does not mean that all examples include the discussed feature, advantage or mode of operation. Furthermore, a particular feature and/or structure can be combined with one or more
other features and/or structures. Moreover, at least a portion of the apparatus described hereby can be configured to perform at least a portion of a method described hereby.

[0044] Nothing stated or illustrated in this application is intended to dedicate any component, action, feature, benefit, advantage, or equivalent to the public, regardless of whether the component, action, feature, benefit, advantage, or the equivalent is recited in the claims.

[0045] Although some aspects have been described in connection with a device, it goes without saying that these aspects also constitute a description of the corresponding method, and so a block or a component of a device should also be understood as a corresponding method action or as a feature of a method action. Analogously thereto, aspects described in connection with or as a method action also constitute a description of a corresponding block or detail or feature of a corresponding device. Some or all of the method actions can be performed by a hardware apparatus (or using a hardware apparatus), such as, for example, a microprocessor, a programmable computer or an electronic circuit. In some examples, some or all of the most important method actions can be performed by such an apparatus.

[0046] In the detailed description above it can be seen that different features are grouped together in examples. This manner of disclosure should not be understood as an intention that the claimed examples have more features than are explicitly mentioned in each claim. Rather, the various aspects of the disclosure may include fewer than all features of an individual example disclosed. Therefore, the following claims should hereby be deemed to be incorporated in the description, wherein each claim by itself can stand as a separate example. Although each dependent claim can refer in the claims to a specific combination with one of the other claims, the aspect(s) of that dependent claim are not limited to the specific combination. It will be appreciated that other aspects disclosed can also include a combination of the dependent claim aspect(s) with the subject matter of any other dependent claim or independent claim or a combination of any feature with other dependent and independent claims. The various aspects disclosed herein expressly include these combinations, unless it is explicitly expressed or can be readily inferred that a specific combination is not intended (such as contradictory aspects, where the combination would define an element as two alternative components, materials, etc.). Furthermore, it is also intended that aspects of a claim can be included in any other independent claim(s), even if the claim is not directly dependent on the
independent claim(s). For example, further aspects may include one or more of the following features discussed in the various example aspects.

[0047] Example aspect 1 includes an apparatus including a flip-chip device, the flip-chip device comprising: a die having a plurality of under bump metallizations (UBMs); a package substrate having a plurality of bond pads; and a plurality of solder joints coupling the die to the package substrate, wherein the plurality of solder joints are formed from a plurality of solder bumps plated on the plurality of UBMs, the plurality of solder bumps being directly connected to the plurality of bond pads.

[0048] Example aspect 2, which may be combined with the foregoing example aspect 1, includes wherein the flip-chip device has a bond line thickness to solder joint diameter ratio of approximately 0.64, where the bond line thickness is a distance between the die and the package substrate.

[0049] Example aspect 3, which may be combined with the foregoing example aspects 1 and 2, includes wherein the bond line thickness is approximately 35 um.

[0050] Example aspect 4, which may be combined with the foregoing example aspects 1 to 3, includes wherein a solder joint diameter for each of the plurality of solder joints is approximately 95 um.

[0051] Example aspect 5, which may be combined with the foregoing example aspects 1 to 3, further includes a solder resist layer, of the package substrate, having a solder resist opening (SRO) over each bond pad of the plurality of bond pads, wherein a ratio of SRO to solder joint diameter is approximately 0.95.

[0052] Example aspect 6, which may be combined with the foregoing example aspect 5, includes wherein the SRO over each bond pad is approximately 35 um.

[0053] Example aspect 7, which may be combined with the foregoing example aspects 5 and 6, includes wherein the solder joint diameter for each of the plurality of solder joints is approximately 95 um.

[0054] Example aspect 8, which may be combined with the foregoing example aspects 1 to 3, includes wherein the flip-chip device has a bond line thickness to solder joint diameter ratio in a range of approximately 0.3 to 0.7, where the bond line thickness is a distance between the die and the package substrate.

[0055] Example aspect 9, which may be combined with the foregoing example aspect 8, includes wherein the bond line thickness is in a range of approximately 30 um to 60 um.
[0056] Example aspect 10, which may be combined with the foregoing example aspect 9, includes wherein a solder joint diameter for each of the plurality of solder joints is in a range of approximately 70 um to 180 um.

[0057] Example aspect 11, which may be combined with the foregoing example aspects 1 to 10, includes wherein each of the plurality of solder joints has a generally cylindrical or columnar shape.

[0058] Example aspect 12, which may be combined with the foregoing example aspects 1 to 11, includes wherein the plurality of bond pads are formed of copper.

[0059] Example aspect 13, which may be combined with the foregoing example aspects 1 to 12, includes wherein each UBM, of the plurality of UBMs, has a minimum metal density and a minimum via density in an area under each UBM.

[0060] Example aspect 14, which may be combined with the foregoing example aspect 13, includes wherein the minimum metal density is 20 percent.

[0061] Example aspect 15, which may be combined with the foregoing example aspects 13 and 14, includes wherein the minimum via density is 0.1 percent.

[0062] Example aspect 16, which may be combined with the foregoing example aspects 13 to 15, includes wherein the area under the UBM is divided into a plurality of checking windows to check the minimum metal density and the minimum via density (illustrated in the sketch following these example aspects).

[0063] Example aspect 17, which may be combined with the foregoing example aspect 16, includes wherein each checking window is in the range of 5 um by 5 um to 20 um by 20 um.

[0064] Example aspect 18, which may be combined with the foregoing example aspects 1 to 17, includes wherein the apparatus is selected from the group consisting of a music player, a video player, an entertainment unit, a navigation device, a communications device, a mobile device, a mobile phone, a smartphone, a personal digital assistant, a fixed location terminal, a tablet computer, a computer, a wearable device, an Internet of things (IoT) device, a laptop computer, a server, and a device in an automotive vehicle.

[0065] Example aspect 19 includes a method for manufacturing a flip-chip device, the method comprising: providing a die having a plurality of solder bumps plated on a plurality of under bump metallizations (UBMs); providing a package substrate having a plurality of bond pads; and forming a plurality of solder joints coupling the die to the package substrate, wherein the plurality of solder joints are formed from the plurality of solder bumps being directly connected to the plurality of bond pads during a reflow process.
[0066] Example aspect 20, which may be combined with the foregoing example aspect 19, includes wherein the flip-chip device has a bond line thickness to solder joint diameter ratio of approximately 0.64, where the bond line thickness is a distance between the die and the package substrate.

[0067] Example aspect 21, which may be combined with the foregoing example aspect 20, includes wherein the bond line thickness is approximately 35 um.

[0068] Example aspect 22, which may be combined with the foregoing example aspects 19 to 21, includes wherein a solder joint diameter for each of the plurality of solder joints is approximately 95 um.

[0069] Example aspect 23, which may be combined with the foregoing example aspect 22, includes wherein the package substrate includes a solder resist layer having a solder resist opening (SRO) over each bond pad of the plurality of bond pads and wherein a ratio of SRO to solder joint diameter is approximately 0.95.

[0070] Example aspect 24, which may be combined with the foregoing example aspects 22 to 23, includes wherein the SRO over each bond pad is approximately 35 um.

[0071] Example aspect 25, which may be combined with the foregoing example aspects 22 to 24, includes wherein the solder joint diameter for each of the plurality of solder joints is approximately 95 um.

[0072] Example aspect 26, which may be combined with the foregoing example aspects 19 to 23, further includes checking a minimum metal density and a minimum via density in an area under each UBM of the plurality of UBMs.

[0073] Example aspect 27, which may be combined with the foregoing example aspect 26, includes wherein the minimum metal density is 20 percent.

[0074] Example aspect 28, which may be combined with the foregoing example aspects 26 and 27, includes wherein the minimum via density is 0.1 percent.

[0075] Example aspect 29, which may be combined with the foregoing example aspects 26 to 28, includes wherein the area under the UBM is divided into a plurality of checking windows to check the minimum metal density and the minimum via density.

[0076] Example aspect 30, which may be combined with the foregoing example aspect 29, includes wherein each checking window is in the range of 5 um by 5 um to 20 um by 20 um.

[0077] It should furthermore be noted that methods, systems, and apparatus disclosed in the description or in the claims can be implemented by a device comprising means for performing the respective actions and/or functionalities of the methods disclosed.
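The minimum metal density and minimum via density checks of example aspects 13 to 17 (and 26 to 30) amount to a simple per-window test over the area under each UBM. The following C sketch illustrates one way such a rule check could be implemented; the type and function names are hypothetical, and it assumes the layout under the UBM has already been rasterized into per-window metal and via coverage fractions.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical per-window coverage data: the fraction of the checking
 * window occupied by metal and by vias, precomputed from the layout. */
typedef struct {
    double metal_fraction; /* 0.0 .. 1.0 */
    double via_fraction;   /* 0.0 .. 1.0 */
} CheckWindow;

/* Test the rule of example aspects 13-17: every checking window in the
 * area under a UBM must meet a minimum metal density (e.g., 20%) and a
 * minimum via density (e.g., 0.1%). Returns true only if all pass. */
static bool ubm_density_check(const CheckWindow *windows, size_t count,
                              double min_metal, double min_via)
{
    for (size_t i = 0; i < count; ++i) {
        if (windows[i].metal_fraction < min_metal ||
            windows[i].via_fraction < min_via)
            return false; /* rule violation in this window */
    }
    return true;
}

/* Example usage with 10 um x 10 um windows (within the 5 um to 20 um
 * range of example aspect 17):
 *     bool ok = ubm_density_check(windows, n, 0.20, 0.001);
 */
```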
[0078] Furthermore, in some examples, an individual action can be subdivided into a plurality of sub-actions or contain a plurality of sub-actions. Such sub-actions can be contained in the disclosure of the individual action and be part of the disclosure of the individual action.

[0079] While the foregoing disclosure shows illustrative examples of the disclosure, it should be noted that various changes and modifications could be made herein without departing from the scope of the disclosure as defined by the appended claims. The functions and/or actions of the method claims in accordance with the examples of the disclosure described herein need not be performed in any particular order. Additionally, well-known elements will not be described in detail or may be omitted so as to not obscure the relevant details of the aspects and examples disclosed herein. Furthermore, although elements of the disclosure may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated. |
PROBLEM TO BE SOLVED: To provide techniques for extending the architecture of a general purpose graphics processing unit (GPGPU) with parallel processing units to allow efficient processing of pipeline-based applications. SOLUTION: The techniques include configuring local memory buffers 44A to 44C connected to parallel processing units 42A to 42D operating as stages of a processing pipeline to hold data for transfer between the parallel processing units. The local memory buffers allow on-chip, low-power, direct data transfer between the parallel processing units. The local memory buffers include hardware-based data flow control mechanisms to enable transfer of data between the parallel processing units. The data is passed directly from one parallel processing unit to the next parallel processing unit in the processing pipeline via the local memory buffers, in effect, transforming the parallel processing units into a series of pipeline stages. |
1. A general-purpose graphics processing unit (GPGPU) comprising: two or more programmable parallel processing units of the GPGPU configured to operate selectively as stages of a processing pipeline; one or more local memory buffers of the GPGPU configured to hold data for transfer between the parallel processing units in the pipeline, each of the local memory buffers being connected directly between at least two of the parallel processing units in the processing pipeline such that the data is passed directly from one of the parallel processing units to another of the parallel processing units via the local memory buffers; and a controller configured to configure the one of the parallel processing units to transmit data to each of the local memory buffers to which it is directly connected, and to configure the other of the parallel processing units to receive data from each of the local memory buffers to which it is directly connected.

2. The GPGPU of claim 1, wherein the one or more local memory buffers include hardware-based data flow control mechanisms that enable the transfer of the data between the parallel processing units.

3. The GPGPU of claim 1, wherein the one or more local memory buffers comprise at least one of a hardware-based first-in-first-out (FIFO) buffer, a last-in-first-out (LIFO) buffer, or an indexed buffer.

4. The GPGPU of claim 1, wherein the controller is configured to execute one or more application programming interfaces (APIs) to configure the parallel processing units to transmit data to the local memory buffers and to receive data from the local memory buffers.

5. The GPGPU of claim 1, wherein the controller is further configured to determine a width required for each of the local memory buffers to hold data output from a previous processing unit in the processing pipeline, and to configure each of the local memory buffers to have the determined width.

6. The GPGPU of claim 5, wherein the controller executes one or more application programming interfaces (APIs) to determine the width for each of the local memory buffers, configure each of the local memory buffers with the determined width, and determine a depth of each of the local memory buffers.

7. The GPGPU of claim 5, wherein the controller is further configured to determine a depth of each of the local memory buffers, each of the local memory buffers being configurable to trade the width against the depth.

8. A method of processing data with a general-purpose graphics processing unit (GPGPU), the method comprising: configuring two or more programmable parallel processing units of the GPGPU to operate selectively as stages of a processing pipeline; and configuring one or more local memory buffers of the GPGPU to hold data for transfer between the parallel processing units in the pipeline, each of the local memory buffers being connected directly between at least two of the parallel processing units in the processing pipeline such that the data is passed directly from one of the parallel processing units to another of the parallel processing units via the local memory buffers, including configuring the one of the parallel processing units to transmit data to each of the local memory buffers to which it is directly connected, and configuring the other of the parallel processing units to receive data from each of the local memory buffers to which it is directly connected.

9. The method of claim 8, wherein the one or more local memory buffers include hardware-based data flow control mechanisms that enable the transfer of the data between the parallel processing units.

10. The method of claim 8, wherein the one or more local memory buffers comprise at least one of a hardware-based first-in-first-out (FIFO) buffer, a last-in-first-out (LIFO) buffer, or an indexed buffer.

11. The method of claim 8, wherein configuring the parallel processing units comprises executing one or more application programming interfaces (APIs) to configure the parallel processing units to transmit data to the local memory buffers and to receive data from the local memory buffers.

12. The method of claim 8, wherein configuring the one or more local memory buffers comprises determining a width required for each of the local memory buffers to hold data output from a previous processing unit in the processing pipeline, and configuring each of the local memory buffers to have the determined width.

13. The method of claim 12, wherein configuring the one or more local memory buffers comprises executing one or more application programming interfaces (APIs) to determine the width for each of the local memory buffers, configure each of the local memory buffers with the determined width, and determine a depth of each of the local memory buffers.

14. The method of claim 12, wherein configuring the one or more local memory buffers further comprises determining a depth of each of the local memory buffers, each of the local memory buffers being configurable to trade the width against the depth.

15. A general-purpose graphics processing unit (GPGPU) comprising: means for configuring two or more programmable parallel processing units of the GPGPU to operate selectively as stages of a processing pipeline; and means for configuring one or more local memory buffers of the GPGPU to hold data for transfer between the parallel processing units in the pipeline, each of the local memory buffers being connected directly between at least two of the parallel processing units in the processing pipeline such that the data is passed directly from one of the parallel processing units to another of the parallel processing units via the local memory buffers, including means for configuring the one of the parallel processing units to transmit data to each of the local memory buffers to which it is directly connected, and means for configuring the other of the parallel processing units to receive data from each of the local memory buffers to which it is directly connected.

16. The GPGPU of claim 15, wherein the one or more local memory buffers include hardware-based data flow control mechanisms that enable the transfer of the data between the parallel processing units.

17. The GPGPU of claim 15, further comprising means for executing one or more application programming interfaces (APIs) to configure the parallel processing units to transmit data to the local memory buffers and to receive data from the local memory buffers.

18. The GPGPU of claim 15, further comprising: means for determining a width required for each of the local memory buffers to hold data output from a previous processing unit in the processing pipeline; and means for configuring each of the local memory buffers to have the determined width.

19. The GPGPU of claim 18, further comprising means for executing one or more application programming interfaces (APIs) to determine the width for each of the local memory buffers, configure each of the local memory buffers with the determined width, and determine a depth of each of the local memory buffers.

20. The GPGPU of claim 18, further comprising means for determining a depth of each of the local memory buffers, each of the local memory buffers being configurable to trade the width against the depth.

21. A computer-readable medium comprising instructions for processing data with a general-purpose graphics processing unit (GPGPU) that, when executed, cause a programmable processor to: configure two or more programmable parallel processing units of the GPGPU to operate selectively as stages of a processing pipeline; configure one or more local memory buffers of the GPGPU to hold data for transfer between the parallel processing units in the pipeline, each of the local memory buffers being connected directly between at least two of the parallel processing units in the processing pipeline such that the data is passed directly from one of the parallel processing units to another of the parallel processing units via the local memory buffers; configure the one of the parallel processing units to transmit data to each of the local memory buffers to which it is directly connected; and configure the other of the parallel processing units to receive data from each of the local memory buffers to which it is directly connected.

22. The computer-readable medium of claim 21, wherein the one or more local memory buffers include hardware-based data flow control mechanisms that enable the transfer of the data between the parallel processing units.

23. The computer-readable medium of claim 21, further comprising instructions that cause the programmable processor to execute one or more application programming interfaces (APIs) to configure the parallel processing units to transmit data to the local memory buffers and to receive data from the local memory buffers.

24. The computer-readable medium of claim 21, further comprising instructions that cause the programmable processor to determine a width required for each of the local memory buffers to hold data output from a previous processing unit in the processing pipeline, and to configure each of the local memory buffers to have the determined width.

25. The computer-readable medium of claim 24, further comprising instructions that cause the programmable processor to execute one or more application programming interfaces (APIs) to determine the width for each of the local memory buffers, configure each of the local memory buffers with the determined width, and determine a depth of each of the local memory buffers.

26. The computer-readable medium of claim 24, further comprising instructions that cause the programmable processor to determine a depth of each of the local memory buffers, each of the local memory buffers being configurable to trade the width against the depth. |
Pipelining of computing resources in a general-purpose graphics processing unit

[0001] The present disclosure relates to processing data and, more specifically, to processing data using a general-purpose graphics processing unit.

[0002] A general-purpose graphics processing unit (GPGPU) is a generalized version of a graphics processing unit originally designed to process 2D and 3D graphics. A GPGPU extends the high-power parallel processing of a GPU beyond graphics processing to general-purpose data processing applications. As one example, a GPGPU may be configured to process data in accordance with the OpenCL specification, which provides certain applications access to a graphics processing unit for non-graphical computing. The "OpenCL Specification, Version 1.1" was released in June 2010 and is publicly available.

[0003] A GPGPU includes programmable processing units arranged in a highly parallel structure that does not allow synchronization or data sharing between the processing units. Instead, the individual processing units exchange data sets only with an external memory. This structure limits applications for a GPGPU to those that are inherently parallel. Because GPGPU architectures are so highly parallel, they prevent an efficient implementation of pipeline-based computing. This limitation extends to 2D and 3D graphics processing, which uses parallel processing at each processing stage but requires a pipeline of computing resources between stages.

[0004] This disclosure describes techniques for extending the architecture of a general-purpose graphics processing unit (GPGPU) with parallel processing units to allow efficient processing of pipeline-based applications. For example, the techniques may include configuring local memory buffers, connected to the parallel processing units operating as stages of a processing pipeline, to hold data for transfer between the parallel processing units. The local memory buffers enable on-chip, low-power, direct data transfer between the parallel processing units. The local memory buffers may include hardware-based data flow control mechanisms to enable data transfer between the parallel processing units. In this way, data is transferred directly from one parallel processing unit in the processing pipeline to the next parallel processing unit via the local memory buffers, in effect converting the parallel processing units into a series of pipeline stages. The local memory buffers can significantly reduce memory bandwidth usage by reducing or eliminating the need for each of the parallel processing units in the processing pipeline to perform calls to the system memory to retrieve and/or store data.

[0005] The techniques may, in some cases, include configuring each of the local memory buffers to have the width required to hold the data output from the previous parallel processing unit. For example, the local memory buffers may be hardware-based buffers configurable to trade width against depth. Further, in some cases, the techniques may include executing sequencing barriers to preserve the data sequence of the processing pipeline. For example, the data threads of a data set are recorded upon entry of the data set into a parallel processing unit and, after the data set has been processed, the data threads of the data set may be released from the parallel processing unit in the same sequence as recorded.
For example, data thread of a sequence of data sets are recorded on entry data set in the parallel processing apparatus, after the data set has been processed, the data thread data sets, concurrency in the same sequence as that recorded It may be released from the device.[0006] In one example, the disclosure provides a selectively operated two or more parallel processing apparatus configured as a stage in the processing pipeline, configured to hold the data for transfer between the parallel processing apparatus 1 a local memory buffer described above, each buffer is connected between at least two of the parallel processing device and the one or more local memory buffers, target GPGPU comprising.[0007] In another example, the present disclosure, it and constituting the two or more parallel processing device to operate selectively as a stage of the processing pipeline; to hold the data for transfer between the row processing unit 1 and configuring the local memory buffer or still, each buffer is connected between at least two of among the parallel processing device; directed to a method of processing a data by GPGPU comprising.[0008] In a further embodiment, the present disclosure is selectively operated to like means and for forming the two or more parallel processing apparatus as a stage of the processing pipeline; to hold the data for transfer between the parallel processing device It means for configuring one or more local memory buffer, noted, each buffer is connected between at least two of among the parallel processing device; target GPGPU comprising.[0009] In another example, the present disclosure provides a computer-readable medium comprising instructions for processing the data by GPGPU, at runtime, the programmable processor, to operate selectively as a stage of the processing pipeline 2 to configure the above parallel processing apparatus, thereby forming one or more local memory buffer to hold the data for transfer between the parallel processing apparatus still, between at least two of each of the buffers concurrent processing apparatus connected is directed to a computer-readable media.[0010] 1 above example details of which are described in the detailed description of the accompanying drawings and the following. Other features, objects, and advantages, the description and drawings, and will be apparent from the claims.Figure 1 is a block diagram illustrating a device including a general-purpose graphics processing unit (GPGPU) be implemented processing pipeline is configurable.Figure 2 is a block diagram illustrating a conventional GPGPU containing configured parallel processing apparatus to perform parallel processing.3 is a block diagram illustrating an example of GPGPU in FIG 1 in which the local memory buffer configured to implement the processing pipeline comprising a parallel processing unit.Figure 4 is a flowchart illustrating an exemplary operation of GPGPU, including local memory buffer connected to the parallel processing device for transferring data between a parallel processing device as a stage in the processing pipeline.5 is a flowchart illustrating an exemplary operation of preserve the data sequence processing pipeline implemented by the local memory buffer and parallel processing apparatus GPGPU.Detailed description[0016] This disclosure describes techniques for extending the architecture of the general purpose graphics processing unit (GPGPU) by parallel processing device to allow efficient processing of the pipeline based applications. 
In particular, the techniques involve configuring local memory buffers, connected to the parallel processing units operating as stages of a processing pipeline, to hold data for transfer between the parallel processing units. The local memory buffers enable on-chip, low-power, direct data transfer between the parallel processing units. The local memory buffers may include hardware-based data flow control mechanisms to enable data transfer between the parallel processing units. In this way, data is transferred directly from one parallel processing unit in the processing pipeline to the next parallel processing unit via the local memory buffers, in effect converting the parallel processing units into a series of pipeline stages. The local memory buffers can significantly reduce memory bandwidth usage by reducing or eliminating the need for each of the parallel processing units in the processing pipeline to perform calls to the system memory to retrieve and/or store data.

[0017] Figure 1 is a block diagram illustrating a device 2 that includes a general-purpose graphics processing unit (GPGPU) 6 configurable to implement a processing pipeline 10. As described in more detail below, the processing pipeline 10 of the GPGPU 6 includes two or more parallel processing units configured to operate as stages of the processing pipeline 10, and one or more local memory buffers configured to hold data for transfer between the parallel processing units in order to implement the processing pipeline 10.

[0018] Device 2 is capable of sending and receiving data, supporting a variety of data processing applications, and outputting processed data for presentation to a user. Examples of device 2 include, but are not limited to, mobile radiotelephones, personal digital assistants (PDAs), video gaming devices, video gaming consoles, video conferencing units, laptop computers, desktop computers, tablet computers, television set-top boxes, digital recording devices, digital media players, and the like.

[0019] In the example shown in Figure 1, device 2 includes a host processor 4, the GPGPU 6 with the processing pipeline 10, a display 8, a speaker 10, a device memory 12, a transceiver module 14, and a user input device 16. In other cases, for example when device 2 is a desktop computer, the display 8, speaker 10, and/or user input device 16 may be external to device 2. The host processor 4 and the GPGPU 6 may each comprise a digital signal processor (DSP), a general-purpose microprocessor, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), or other equivalent integrated or discrete logic circuitry.

[0020] The host processor 4 may execute one or more applications. Examples of such applications include web browsers, e-mail applications, spreadsheets, video games, audio and video editing applications, or other applications that generate visual and/or audio output for presentation to the user via the display 8 and/or the speaker 10. The GPGPU 6 may also execute one or more applications. The GPGPU 6 may execute applications in support of the applications executed by the host processor 4.
Specifically, the GPGPU 6 may execute applications to prepare data for presentation to the user via the display 8 and/or the speaker 10.

[0021] The GPGPU 6 is a generalized version of a graphics processing unit (GPU) that extends the high-power parallel processing of a GPU beyond graphics processing to general-purpose data processing applications. As one example, the GPGPU 6 may be configured to process data in accordance with the OpenCL specification, which provides certain applications access to the GPU for non-graphical computing. A conventional GPGPU, described in more detail below with reference to Figure 2, comprises programmable processing units arranged in a highly parallel structure that prevents an efficient implementation of pipeline-based applications. This limitation extends to 2D and 3D graphics processing applications, which use parallel processing at each processing stage but require a pipeline of computing resources between stages.

[0022] A pipeline-based application is configured such that a first stage processes an original data set, a second stage processes the output of the first stage, a third stage processes the output of the second stage, and so on for as many stages as the application requires. The most efficient implementation of a pipeline-based application passes a data set directly from one stage in the processing pipeline to the next stage. A less efficient implementation of a pipeline-based application has each stage in the processing pipeline retrieve the data processed by the previous stage from an off-chip memory and then store the processed data back to the off-chip memory for the next stage. The less efficient implementation still requires a sequencing mechanism to ensure that the data set is processed in the correct sequence by each stage in the processing pipeline. A conventional GPGPU cannot be configured to implement either the processing pipeline or the sequencing mechanisms necessary to execute pipeline-based applications.

[0023] In accordance with the techniques of this disclosure, and unlike a conventional GPGPU, the GPGPU 6 is configurable to implement the processing pipeline 10 to execute pipeline-based applications, including 2D and 3D graphics processing applications. As described in more detail below with reference to Figure 3, the processing pipeline 10 of the GPGPU 6 includes two or more parallel processing units configured to operate as stages of the processing pipeline 10, and at least one local memory buffer configured to hold data for transfer between the parallel processing units to implement the processing pipeline 10. The local memory buffers included in the processing pipeline 10 enable on-chip, low-power, direct data transfer between the parallel processing units. In this way, data is passed directly from one parallel processing unit in the processing pipeline 10 to the next parallel processing unit via the local memory buffers, in effect converting the parallel processing units into a series of pipeline stages.
Implementation of the processing pipeline 10 can significantly reduce memory bandwidth usage by reducing or eliminating the need for each of the parallel processing units in the processing pipeline 10 to perform calls to the device memory 12, which is located off-chip from the GPGPU 6, to retrieve and/or store data.

[0024] The techniques of this disclosure may include configuring each of the local memory buffers in the processing pipeline 10 to have the width required to hold the data output from the previous parallel processing unit. For example, the local memory buffers may be hardware-based buffers configurable to trade depth against width. Furthermore, the techniques may involve executing sequencing barriers to preserve the data sequence in the processing pipeline 10. For example, the sequence of the data threads of a data set is recorded when the data set enters a parallel processing unit in the processing pipeline 10, and after the data set has been processed, the data threads of the data set may be released from the parallel processing unit in the same sequence as recorded.

[0025] As one example, when the GPGPU 6 is configured to implement the processing pipeline 10, the GPGPU 6 may execute pipeline-based 2D and 3D graphics processing applications in support of the web browser, e-mail, video game, and video editing applications executed by the host processor 4. As another example, when the GPGPU 6 is not configured to implement the processing pipeline 10, the GPGPU 6 may execute applications that operate efficiently on highly parallel structures, such as image-based search applications, image descriptor generation/extraction, radiometric image adjustments, audio processing, and other operations commonly performed by the host processor 4.

[0026] In some cases, the GPGPU 6 may execute applications in support of pipeline-based graphics processing applications. The pipeline-based graphics processing applications may be executed by the GPGPU 6 itself using the processing pipeline 10, or by a separate GPU within device 2. For example, the GPGPU 6 may execute image special effects applications, vertex generation for a GPU pipeline, and graphics post-processing applications that use the color buffer from a GPU pipeline.

[0027] The display 8 and the speaker 10 are both output devices for device 2. In some cases, the display 8 and the speaker 10 may be used together to provide both visual and audio output to the user. In other cases, the display 8 and the speaker 10 may be used separately to present output to the user. As one example, the display 8 may comprise a liquid crystal display (LCD), a cathode ray tube (CRT) display, a plasma display, or another type of display device.

[0028] The user input device 16 comprises one or more user input devices for device 2. For example, the user input device 16 may include a trackball, a mouse, a keyboard, a microphone, and/or other types of input devices. In another example, the user input device 16 may comprise a touch screen incorporated as part of the display 8. The user may select one or more applications to be executed by the host processor 4 and/or the GPGPU 6 via the user input device 16.

[0029] The host processor 4 may download data to be processed by the host processor 4 and/or the GPGPU 6 via the transceiver module 14. The host processor 4 may also download one or more applications to be executed by the host processor 4 and/or the GPGPU 6 via the transceiver module 14.
The transceiver module 14 may include circuitry that enables wireless or wired communication between device 2 and other devices or a network. The transceiver module 14 may include modulators, demodulators, amplifiers, and other circuitry appropriate for wired or wireless communication.

[0030] The device memory 12 stores data to be processed by the host processor 4 and/or the GPGPU 6, and may also store processed data received from the host processor 4 and/or the GPGPU 6. In addition, the device memory 12 may store one or more applications to be executed by the host processor 4 and/or the GPGPU 6. The device memory 12 may include one or more computer-readable storage media. Examples of the device memory 12 include, but are not limited to, random access memory (RAM), read-only memory (ROM), electronically erasable programmable read-only memory (EEPROM (registered trademark)), CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer or a processor.

[0031] Figure 2 is a block diagram illustrating a conventional GPGPU 18 containing parallel processing units 22A-22D configured to perform parallel processing. In some cases, the GPGPU 18 may be included in a device substantially similar to device 2 described above with reference to Figure 1. The GPGPU 18 includes a data distribution unit 20, parallel processing units 22A-22D ("parallel processing units 22"), and a bus 24 that connects the parallel processing units 22 to a device memory 26 external to the GPGPU 18.

[0032] The conventional GPGPU 18 is a generalized version of a GPU that was originally designed to process 2D and 3D graphics. The GPGPU 18 can extend the high-power parallelism of a GPU beyond graphics processing to general-purpose processing applications. As one example, the GPGPU 18 may be configured to process data in accordance with the OpenCL specification. The OpenCL specification provides certain applications access to the GPU for non-graphical computing. In OpenCL terminology, a data thread is referred to as a work item, a data set is called a work group, a processing unit is called a compute unit, and a collection of processing units is called a compute group.

[0033] Common GPU tasks are highly parallel and do not require the exchange of information between the data threads of the data sets being processed in a given processing unit. For example, the value computed for one vertex is independent of the value computed for a different vertex, and the value computed for one pixel is independent of the value computed for a different pixel. To mimic the parallel nature of a GPU, the GPGPU 18 is designed to include parallel processing units 22 arranged in a highly parallel structure.
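The OpenCL terminology of paragraph [0032] maps directly onto the standard host API: the global work size sets the total number of work items, and the local work size partitions them into work groups, each of which is scheduled onto one compute unit. A minimal host-side illustration follows (error handling omitted; the command queue and kernel are assumed to have been created through the usual OpenCL setup calls):

```c
#include <CL/cl.h>

/* Launch a kernel over 4096 work items (data threads) partitioned
 * into work groups (data sets) of 64 work items each; each work group
 * executes on one compute unit (processing unit). */
void launch_example(cl_command_queue queue, cl_kernel kernel)
{
    size_t global_work_size = 4096; /* total work items */
    size_t local_work_size  = 64;   /* work items per work group */

    clEnqueueNDRangeKernel(queue, kernel,
                           1,                 /* one dimension */
                           NULL,              /* no global offset */
                           &global_work_size,
                           &local_work_size,
                           0, NULL, NULL);    /* no event dependencies */
}
```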
[0034] The architecture of the GPGPU 18 is so highly parallel that it does not allow data sharing or synchronization between the parallel processing units 22. In operation, the data distribution unit 20 assigns to each of the parallel processing units 22 a data set stored in the device memory 26. The data threads of an assigned data set being processed may be shared and synchronized within each of the parallel processing units 22. However, the data threads of different data sets cannot be shared or synchronized between the parallel processing units 22. Instead, each of the parallel processing units 22 exchanges its assigned data set only with the device memory 26 over the bus 24. More specifically, each of the parallel processing units 22 retrieves its assigned data set from the device memory 26 over the bus 24 for processing and, after processing the data set, stores the processed data set back to the device memory 26 over the bus 24.

[0035] The parallel architecture of the GPGPU 18 prevents an efficient implementation of pipeline-based applications across the parallel processing units 22. In a pipeline-based application, processing units are connected as stages of a pipeline so that data can move from one stage to another for different processing tasks. The limitation on pipeline-based applications in the GPGPU 18 extends to 2D and 3D graphics processing applications, which use parallel processing at each processing stage but require a pipeline between stages.

[0036] Applications for the GPGPU 18 are therefore limited to those that are inherently parallel. Each of the parallel processing units 22 may comprise a cluster of arithmetic logic units (ALUs) or other configurable logic elements. The parallel processing units 22 are thus programmable or configurable to perform different operations depending on the application executed by the GPGPU 18. Applications that operate efficiently on the highly parallel structure of the GPGPU 18 may include image-based search applications, image descriptor generation/extraction, radiometric image adjustments, audio processing, and other operations commonly performed by a digital signal processor (DSP). On the other hand, pipeline-based graphics processing applications, such as image special effects generation, vertex generation for a GPU pipeline, and graphics post-processing operations that use the color buffer from a GPU pipeline, would require pipelined interaction that the GPGPU 18 cannot provide.

[0037] Figure 3 is a block diagram illustrating the exemplary GPGPU 6 of Figure 1, which includes local memory buffers 44A-44C and parallel processing units 42A-42D configured to implement the processing pipeline 10. In other examples, the GPGPU 6 may include a greater or smaller number of parallel processing units and local memory buffers.

[0038] In the example of Figure 3, the GPGPU 6 includes a data distribution unit 40, parallel processing units 42A-42D ("parallel processing units 42"), and a bus 46 that connects the parallel processing units 42 to the device memory 12 (Figure 1) external to the GPGPU 6. Unlike a conventional GPGPU (e.g., the GPGPU 18 of Figure 2), the GPGPU 6 also includes local memory buffers 44A-44C ("local memory buffers 44") connected between the parallel processing units 42. The combination of the parallel processing units 42 and the local memory buffers 44 connected between the parallel processing units 42 may be referred to as the processing pipeline 10. The GPGPU 6 also includes a control unit 30 and a local memory 38. The local memory 38 may comprise buffers similar to the local memory buffers 44, registers, or a cache for temporarily storing data of the GPGPU 6. The control unit 30 includes an application programming interface (API) 32, a buffer manager 34, and a sequence manager 36.

[0039] The local memory buffers 44 may include hardware-based data flow control mechanisms that enable data transfer between the parallel processing units 42.
For example, the local memory buffers 44 may comprise hardware-based first-in-first-out (FIFO) buffers, or other types of hardware-based buffers such as last-in-first-out (LIFO) buffers or indexed buffers. If the local memory buffer 44A comprises a hardware-based FIFO, for example, the local memory buffer 44A includes a data flow control mechanism that allows the parallel processing unit 42A to send data to the local memory buffer 44A when there is space in the buffer to write the data, and otherwise stalls the write request. In that case, the local memory buffer 44A also includes a data flow control mechanism that allows the parallel processing unit 42B to receive data from the local memory buffer 44A when there is data available to read from the buffer, and otherwise stalls the read request. When the local memory buffers 44 comprise hardware-based data flow control mechanisms, no less efficient software-based data flow control is needed to enable the transfer of data between the parallel processing units 42.

[0040] The local memory buffers 44 enable on-chip, low-power, direct data transfer between the parallel processing units 42. The local memory buffers 44 are "local" because they reside within the GPGPU 6, i.e., on the same chip as the parallel processing units 42. In this way, data may be passed directly from one of the parallel processing units 42 to another of the parallel processing units 42 in the processing pipeline 10 via the local memory buffers 44. The parallel processing units 42 do not need to repeatedly store data to and retrieve data from the device memory 12, which is external to the GPGPU 6 and located off-chip. The local memory buffers 44 thus convert the parallel processing units 42 into a series of pipeline stages, implementing the processing pipeline 10 within the GPGPU 6.

[0041] In the illustrated example, each of the local memory buffers 44 is connected directly between two of the parallel processing units 42 in a continuous sequence, such that the processing pipeline 10 is a purely serial pipeline. The connections of the local memory buffers 44 are "direct" connections because each buffer is accessible only by its two connected parallel processing units 42 and is not on a bus addressable by any of the parallel processing units 42. For example, the local memory buffer 44A is connected directly between the parallel processing units 42A and 42B, the local memory buffer 44B is connected directly between the parallel processing units 42B and 42C, and the local memory buffer 44C is connected directly between the parallel processing units 42C and 42D.
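The write-stall and read-stall behavior described for the hardware FIFO in paragraph [0039] can be pictured with a small software model. The sketch below is only an illustrative analogue in C using POSIX threads; in the disclosed design the equivalent flow control is implemented in hardware, and all names here are hypothetical.

```c
#include <pthread.h>
#include <stddef.h>
#include <stdint.h>

#define FIFO_DEPTH 16

/* Illustrative software model of a hardware FIFO with stalling reads
 * and writes between two pipeline stages. Initialize with zeroed
 * indices and PTHREAD_MUTEX_INITIALIZER / PTHREAD_COND_INITIALIZER. */
typedef struct {
    uint32_t slots[FIFO_DEPTH];
    size_t head, tail, count;
    pthread_mutex_t lock;
    pthread_cond_t not_full, not_empty;
} LocalFifo;

/* Producer side: stalls (blocks) while the FIFO is full. */
void fifo_write(LocalFifo *f, uint32_t data)
{
    pthread_mutex_lock(&f->lock);
    while (f->count == FIFO_DEPTH)
        pthread_cond_wait(&f->not_full, &f->lock); /* write stall */
    f->slots[f->tail] = data;
    f->tail = (f->tail + 1) % FIFO_DEPTH;
    f->count++;
    pthread_cond_signal(&f->not_empty);
    pthread_mutex_unlock(&f->lock);
}

/* Consumer side: stalls (blocks) while the FIFO is empty. */
uint32_t fifo_read(LocalFifo *f)
{
    pthread_mutex_lock(&f->lock);
    while (f->count == 0)
        pthread_cond_wait(&f->not_empty, &f->lock); /* read stall */
    uint32_t data = f->slots[f->head];
    f->head = (f->head + 1) % FIFO_DEPTH;
    f->count--;
    pthread_cond_signal(&f->not_full);
    pthread_mutex_unlock(&f->lock);
    return data;
}
```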
[0042] In another example, each of the local memory buffers 44 may instead be connected directly to two or more of the parallel processing units 42 that are not in a continuous sequence. In that case, each of the local memory buffers 44 may be connected directly to the parallel processing units 42 via a crossbar connection. For example, the local memory buffer 44A may be connected directly to each of the parallel processing units 42 via a crossbar connection such that the parallel processing unit 42A may transfer data via the local memory buffer 44A to any one of the parallel processing units 42B-42D. The use of crossbar connections enables broader access to the local memory buffers 44 by the parallel processing units 42, enabling implementations of processing pipelines that are not purely serial.

[0043] In the illustrated example, in which the processing pipeline 10 comprises a purely serial pipeline, each of the parallel processing units 42 may have permission only to write data to the subsequent one of the local memory buffers 44 and permission only to read data from the previous one of the local memory buffers 44. For example, the parallel processing unit 42B may only be able to read data from the local memory buffer 44A and may only be able to write data to the local memory buffer 44B. If the processing pipeline instead includes crossbar connections, the parallel processing units 42 may have permission to write to and read from any of the local memory buffers 44. For example, the parallel processing unit 42B could then read and write data in both the local memory buffer 44A and the local memory buffer 44B.

[0044] As described above, the local memory buffers 44 may comprise at least one of FIFO buffers, LIFO buffers, or indexed buffers. The type of buffer used for the local memory buffers 44 may depend on the type of hardware-based data flow control mechanism needed in the processing pipeline 10. The type of buffer used for the local memory buffers 44 may also depend on whether the local memory buffers 44 are connected to the parallel processing units 42 via one-to-one connections or crossbar connections. In addition, when crossbar connections are used, the buffer manager 34 of the control unit 30 may need to perform some memory control to manage which parallel processing unit 42 has access to which local memory buffer 44 at a given time.

[0045] As described above, the local memory buffers 44 may be directly connected between at least two of the parallel processing units 42 via either one-to-one or crossbar connections. The local memory buffers 44, however, may not be addressable by the parallel processing units 42 over a bus. Thus, a dedicated memory controller for the local memory buffers 44 may not be necessary. Specifically, no memory controller is needed to process read and write commands to the local memory buffers 44 over a bus.

[0046] The local memory buffers 44 can significantly reduce memory bandwidth usage by reducing or eliminating the need for each of the parallel processing units 42 to perform calls to the device memory 12 over the bus 46 to retrieve and/or store data. In operation, the parallel processing unit 42A, as the first processing unit of the processing pipeline 10, retrieves an original data set from the device memory 12 via the bus 46. The data set may be assigned to the parallel processing unit 42A by the data distribution unit 40. In addition, the parallel processing unit 42D, as the final processing unit of the processing pipeline 10, stores the post-pipeline data set to the device memory 12 via the bus 46. The parallel processing units 42B and 42C, as the intermediate processing units of the processing pipeline 10, receive data sets from the previous one of the parallel processing units 42 via one of the local memory buffers 44 and send data sets to the subsequent one of the parallel processing units 42 via another of the local memory buffers 44. The intermediate processing units therefore do not need to interact with the device memory 12 to retrieve and/or store data. In some cases, an intermediate processing unit may retrieve auxiliary data from the device memory 12 in order to perform its particular stage of the processing pipeline 10. The main data set being processed, however, is passed directly along the processing pipeline 10 via the local memory buffers 44.
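Given a FIFO primitive with the stall semantics sketched earlier, the behavior of an intermediate processing unit described in paragraph [0046] reduces to a loop that reads from its upstream buffer, processes, and writes to its downstream buffer, with no device-memory traffic for the main data set. A hedged continuation of the earlier sketch; the stage's `process` kernel is a hypothetical placeholder:

```c
#include <stdint.h>

/* Uses the LocalFifo type and fifo_read()/fifo_write() functions from
 * the sketch above. An intermediate stage never touches device memory
 * for its main data set; it only moves data between local buffers. */
void intermediate_stage(LocalFifo *upstream, LocalFifo *downstream,
                        uint32_t (*process)(uint32_t))
{
    for (;;) {
        uint32_t in  = fifo_read(upstream);   /* stalls if empty */
        uint32_t out = process(in);           /* this stage's work */
        fifo_write(downstream, out);          /* stalls if full */
    }
}
```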
[0047] As described above, the GPGPU 6 is a generalized version of a GPU that extends the high-power parallel processing of a GPU beyond graphics processing to general-purpose data processing applications. As one example, the GPGPU 6 may be configured to process data in accordance with the OpenCL specification, which provides certain applications access to the graphics processor for non-graphical computing. In OpenCL terminology, a data thread is referred to as a work item, a data set is called a work group, a processing unit is called a compute unit, and a collection of processing units is called a compute group.

[0048] In accordance with the techniques of this disclosure, the GPGPU 6 is configurable to implement the processing pipeline 10 to execute pipeline-based applications, including 2D and 3D graphics processing applications. More specifically, the control unit 30 of the GPGPU 6 configures the parallel processing units 42 to operate as stages of the processing pipeline. The control unit 30 also configures the local memory buffers 44 connected between the parallel processing units 42 to hold data for transfer between the parallel processing units 42.

[0049] The parallel processing units 42 may be programmable or configurable to function differently depending on the application executed by the GPGPU 6. The control unit 30 may configure each of the parallel processing units 42 to operate in accordance with the application. For example, each of the parallel processing units 42 may comprise a cluster of arithmetic logic units (ALUs) or other configurable logic elements.

[0050] The local memory buffers 44 may also be programmable or configurable to hold different types of data output from the parallel processing units 42, depending on the application executed by the GPGPU 6. For example, the local memory buffers 44 may comprise hardware-based buffers with a set of configurable aspects. One of the configurable aspects may be the width of the local memory buffers 44, for accommodating different types of data output from the parallel processing units 42. For example, the local memory buffers 44 may be configurable to trade width against depth. The buffer manager 34 of the control unit 30 may determine the width required for each of the local memory buffers 44 to hold the data output of the previous one of the parallel processing units 42. Because the buffer manager 34 knows the type of data output from each of the parallel processing units 42, it can recognize the width required by each of the local memory buffers 44 to hold that data. The buffer manager 34 may then configure each of the local memory buffers 44 to have the determined width.
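The width/depth trade described in paragraph [0050] can be pictured as allocating a fixed pool of buffer storage: widening each entry to fit a larger output record proportionally reduces the number of entries that fit. A minimal C sketch follows, assuming a hypothetical fixed byte budget per buffer; the names are illustrative, not part of the disclosure.

```c
#include <stddef.h>

/* Hypothetical configuration record for one local memory buffer. */
typedef struct {
    size_t width_bytes; /* size of one entry (one stage's output record) */
    size_t depth;       /* number of entries the buffer can hold */
} BufferConfig;

/* Trade depth for width within a fixed storage budget: the width is
 * set to the previous stage's output record size, and the depth is
 * whatever fits in the remaining budget. */
BufferConfig configure_buffer(size_t output_record_bytes,
                              size_t budget_bytes)
{
    BufferConfig cfg;
    cfg.width_bytes = output_record_bytes;          /* assumed > 0 */
    cfg.depth = budget_bytes / output_record_bytes; /* wider => shallower */
    return cfg;
}
```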
[0052] The local memory buffers 44, which include hardware-based data flow control mechanisms, may be exposed through a GPGPU standard, such as the OpenCL standard, by introducing one or more new APIs 32. For example, the control unit 30 may execute one or more of the APIs 32 to determine the width required for each of the local memory buffers 44, to configure each of the local memory buffers 44 with the determined width, and to determine the depth of each of the local memory buffers 44. Furthermore, the control unit 30 may execute one or more of the APIs 32 to configure the parallel processing devices 42 to send data to, and receive data from, the local memory buffers 44. The hardware-based data flow control mechanisms included in the local memory buffers 44 make it possible for the parallel processing devices 42 to transmit data to, and receive data from, the local memory buffers 44 without further software-based data flow control.

[0053] Furthermore, the control unit 30 of GPGPU 6 can preserve the data sequence in the processing pipeline 10 by preserving the data sequence in one or more of the parallel processing devices 42. Pipeline-based applications executed by GPGPU 6, in particular 3D graphics applications, may require data to be processed in a certain sequence in the processing pipeline 10. When data is processed at each stage of the processing pipeline, the data may fall out of sequence due to execution issues such as cache hits or misses. The sequence manager 36 of the control unit 30 may execute sequencing barriers in order to preserve the data sequence among at least some of the parallel processing devices 42. Because sequencing barriers can slow the processing speed of the processing pipeline 10, the sequence manager 36 may execute sequencing barriers only in those of the parallel processing devices 42 that require data sequence preservation for accurate processing.

[0054] The sequencing barriers executed by the sequence manager 36 may include a sequence determination counter (SDC) and a sequence enforcing barrier (SEB). For example, the sequencing barriers may be exposed through a GPGPU standard, such as the OpenCL standard, by adding new function calls for the SDC and the SEB to the OpenCL C language.

[0055] The sequence manager 36 may execute the SDC when a data set enters any one of the parallel processing devices 42. The sequence manager 36 then performs the SDC operation by recording, in the local memory 38, the sequence of the data threads of the received data set. For example, the sequence manager 36 can record an index for each data thread of the data set in the order in which the data threads are received from the device memory 12.

[0056] The sequence manager 36 may execute the SEB when the data set leaves one of the parallel processing devices 42. The sequence manager 36 then performs the SEB operation by releasing the data threads of the data set from the one of the parallel processing devices 42 in the same sequence as that recorded by the SDC. For example, the sequence manager 36 accesses the recorded data thread indices in the local memory 38 and releases each data thread according to the recorded index order. In this way, the data threads of a data set will enter the subsequent one of the parallel processing devices 42 in the same order in which they entered the current one of the parallel processing devices 42.
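A minimal C sketch of the SDC/SEB bookkeeping described in paragraphs [0054]-[0056]: the SDC records the arrival order of data threads when a data set enters a stage, and the SEB releases the threads in that recorded order when the data set leaves. The function and type names (sdc_record, seb_release, etc.) are illustrative; the actual barriers would be exposed as new OpenCL C function calls and enforced by the sequence manager 36.

#include <stdio.h>
#include <stddef.h>

#define MAX_THREADS 1024

/* The SDC records arrival order in local memory 38; the SEB replays it. */
typedef struct {
    int order[MAX_THREADS]; /* thread indices in arrival order */
    size_t count;
} sdc_record_t;

/* SDC: run as each data thread of a data set enters a pipeline stage. */
static void sdc_record(sdc_record_t *r, int thread_index)
{
    if (r->count < MAX_THREADS)
        r->order[r->count++] = thread_index;
}

/* SEB: run as the data set leaves the stage; threads are released in the
 * recorded order, so the next stage sees the original sequence even if
 * the threads finished out of order (e.g., due to cache misses). */
static void seb_release(const sdc_record_t *r, void (*release)(int))
{
    for (size_t i = 0; i < r->count; i++)
        release(r->order[i]);
}

static void forward_to_next_stage(int thread_index)
{
    printf("release thread %d\n", thread_index);
}

int main(void)
{
    sdc_record_t rec = { .count = 0 };
    for (int i = 0; i < 4; i++)
        sdc_record(&rec, i);                  /* SDC on entry */
    seb_release(&rec, forward_to_next_stage); /* SEB on exit */
    return 0;
}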
[0057] In one example, the control unit 30 may configure GPGPU 6 to execute a pipeline-based 3D graphics processing application. In that case, the control unit 30 may configure the parallel processing devices 42 to operate as stages of a 3D graphics processing pipeline. For example, the control unit 30 may configure the parallel processing device 42A to operate as a vertex shader, the parallel processing device 42B to operate as a triangle rasterizer, the parallel processing device 42C to operate as a fragment shader, and the parallel processing device 42D to operate as a pixel blender.

[0058] The control unit 30 may also configure the local memory buffers 44, with their hardware-based data flow control mechanisms, to hold data for transfer between the parallel processing devices 42 implementing the 3D graphics processing pipeline 10. For example, the control unit 30 may configure the local memory buffer 44A to hold post-vertex shader vertex data for transfer between the parallel processing device 42A operating as the vertex shader and the parallel processing device 42B operating as the triangle rasterizer. The control unit 30 may configure the local memory buffer 44B to hold pre-fragment shader pixel data for transfer between the parallel processing device 42B operating as the triangle rasterizer and the parallel processing device 42C operating as the fragment shader. Finally, the control unit 30 may configure the local memory buffer 44C to hold post-fragment shader pixel values for transfer between the parallel processing device 42C operating as the fragment shader and the parallel processing device 42D operating as the pixel blender.

[0059] When executing the 3D graphics processing application, the data distribution device 40 may assign an original vertex data set to the parallel processing device 42A operating as the vertex shader. The parallel processing device 42A retrieves the assigned original vertex data set from the device memory 12 via the bus 46. When the data set enters, the sequence manager 36 executes the SDC in order to record the sequence of the vertex data. The parallel processing device 42A then executes the vertex shading operations and sends the post-vertex shader vertex data to the local memory buffer 44A. When the data set leaves the parallel processing device 42A, the sequence manager 36 executes the SEB in order to release the vertex data in the same sequence as that recorded by the SDC. In this way, the vertex data will reach the parallel processing device 42B operating as the triangle rasterizer in the same order in which the vertex data entered the parallel processing device 42A operating as the vertex shader.
[0060] The parallel processing device 42B operating as the triangle rasterizer receives the post-vertex shader vertex data from the local memory buffer 44A. In some cases, the parallel processing device 42B may also retrieve auxiliary data from the device memory 12 via the bus 46 in order to perform the triangle rasterization operations. The parallel processing device 42B then executes the triangle rasterization operations and sends the pre-fragment shader pixel data to the local memory buffer 44B. In some cases, the sequence manager 36 can preserve the data sequence by executing the SDC when the vertex data enters the parallel processing device 42B and executing the SEB when the pixel data leaves the parallel processing device 42B. In other cases, no sequencing barriers are executed for the parallel processing device 42B because they are not necessary.

[0061] The parallel processing device 42C operating as the fragment shader receives the pre-fragment shader pixel data from the local memory buffer 44B. When the data set enters, the sequence manager 36 executes the SDC in order to record the sequence of the pixel data. In some cases, the parallel processing device 42C can also retrieve auxiliary data from the device memory 12 via the bus 46 in order to perform the fragment shading operations. The parallel processing device 42C then executes the fragment shading operations and transmits the post-fragment shader pixel values to the local memory buffer 44C. When the data set leaves the parallel processing device 42C, the sequence manager 36 executes the SEB in order to release the pixel data in the same sequence as that recorded by the SDC. In this way, the pixel data will reach the parallel processing device 42D operating as the pixel blender in the same order in which the pixel data entered the parallel processing device 42C operating as the fragment shader.

[0062] The parallel processing device 42D operating as the pixel blender receives the post-fragment shader pixel values from the local memory buffer 44C. The parallel processing device 42D executes the pixel blending operations and stores the post-pipeline data set in the device memory 12 via the bus 46. In some cases, the sequence manager 36 can preserve the data sequence by executing the SDC when the pixel data enters the parallel processing device 42D and executing the SEB when the pixel data leaves the parallel processing device 42D. In other cases, no sequencing barriers are executed for the parallel processing device 42D because they are not necessary. The 3D graphics processing application described above is merely exemplary; the disclosed techniques may be used to execute a variety of pipeline-based applications in GPGPU 6.

[0063] FIG. 4 is a flowchart illustrating an exemplary operation of GPGPU 6, including the local memory buffers 44 connected between the parallel processing devices 42, to transfer data between the parallel processing devices operating as stages of the processing pipeline 10. The illustrated operation will be described with reference to GPGPU 6.

[0064] The control unit 30 of GPGPU 6 configures the parallel processing devices 42 to operate as stages of the processing pipeline 10 (50). For example, the control unit 30 may configure the parallel processing devices 42 to operate as stages of a 3D graphics processing pipeline. In that example, the control unit 30 may configure the parallel processing device 42A to operate as a vertex shader, the parallel processing device 42B to operate as a triangle rasterizer, the parallel processing device 42C to operate as a fragment shader, and the parallel processing device 42D to operate as a pixel blender.
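The following C sketch illustrates configuration steps (50) and (52) of FIG. 4 for the 3D graphics example: four stages are assigned roles, and three buffers are each connected directly between two adjacent stages. The structures and the connect() helper are stand-ins for whatever the APIs 32 would expose; they are not a real OpenCL interface.

#include <stddef.h>

typedef enum {
    STAGE_VERTEX_SHADER,
    STAGE_TRIANGLE_RASTERIZER,
    STAGE_FRAGMENT_SHADER,
    STAGE_PIXEL_BLENDER
} stage_role_t;

typedef struct stage  stage_t;
typedef struct buffer buffer_t;

struct buffer {
    stage_t *producer; /* stage writing into this hardware FIFO */
    stage_t *consumer; /* stage reading from it */
};

struct stage {
    stage_role_t role;
    buffer_t *in;  /* NULL for the first stage (reads device memory 12) */
    buffer_t *out; /* NULL for the last stage (writes device memory 12) */
};

/* Step (52): connect one buffer directly between two adjacent stages. */
static void connect(buffer_t *b, stage_t *from, stage_t *to)
{
    b->producer = from;
    b->consumer = to;
    from->out = b;
    to->in = b;
}

int main(void)
{
    /* Step (50): assign pipeline roles to the four processing devices. */
    static stage_t s42a = { STAGE_VERTEX_SHADER, NULL, NULL };
    static stage_t s42b = { STAGE_TRIANGLE_RASTERIZER, NULL, NULL };
    static stage_t s42c = { STAGE_FRAGMENT_SHADER, NULL, NULL };
    static stage_t s42d = { STAGE_PIXEL_BLENDER, NULL, NULL };
    static buffer_t b44a, b44b, b44c;

    connect(&b44a, &s42a, &s42b); /* post-vertex-shader vertex data */
    connect(&b44b, &s42b, &s42c); /* pre-fragment-shader pixel data */
    connect(&b44c, &s42c, &s42d); /* post-fragment-shader pixel values */
    return 0;
}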
[0065] The control unit 30 also configures the local memory buffers 44 to hold data for transfer between the parallel processing devices 42, thereby connecting the parallel processing devices 42 into the processing pipeline 10 (52). The local memory buffers 44 may include hardware-based data flow control mechanisms to enable data transfer between the parallel processing devices 42. For example, the local memory buffers 44 may comprise hardware-based FIFO, LIFO, or indexed buffers. The local memory buffers 44 may be connected directly between at least two of the parallel processing devices 42. For example, in the case of the 3D graphics processing pipeline, the local memory buffer 44A may be connected directly between the parallel processing device 42A operating as the vertex shader and the parallel processing device 42B operating as the triangle rasterizer, and may be configured to hold the post-vertex shader vertex data. The local memory buffer 44B may be connected directly between the parallel processing device 42B operating as the triangle rasterizer and the parallel processing device 42C operating as the fragment shader, and may be configured to hold the pre-fragment shader pixel data. Finally, the local memory buffer 44C may be connected directly between the parallel processing device 42C operating as the fragment shader and the parallel processing device 42D operating as the pixel blender, and may be configured to hold the post-fragment shader pixel values.

[0066] Further, the buffer manager 34 of the control unit 30 may determine the width required for each of the local memory buffers 44 to hold the data output from the previous one of the parallel processing devices 42 (54). Because the buffer manager 34 recognizes the type of data output from each of the parallel processing devices 42, it recognizes the width required by each of the local memory buffers 44 to hold that data. The buffer manager 34 may then configure each of the local memory buffers 44 to have the determined width (56). In some cases, the local memory buffers 44 may be hardware based and yet include a set of configurable aspects. For example, the local memory buffers 44 may be configured to trade width against depth.

[0067] For example, the buffer manager 34 recognizes that the parallel processing device 42A operating as the vertex shader outputs post-vertex shader vertex data, and may configure the local memory buffer 44A to have the width required to hold the post-vertex shader vertex data. The buffer manager 34 likewise recognizes that the parallel processing device 42B operating as the triangle rasterizer outputs pre-fragment shader pixel data, and may configure the local memory buffer 44B to have the width required to hold the pre-fragment shader pixel data.
Further, the buffer manager 34 recognizes that the parallel processing device 42C operating as the fragment shader outputs post-fragment shader pixel values, and may configure the local memory buffer 44C to have the width required to hold the post-fragment shader pixel values.

[0068] Once the parallel processing devices 42 and the local memory buffers 44 are configured to implement the processing pipeline 10 in GPGPU 6, the parallel processing devices 42 may transfer data between one another via the local memory buffers 44 (58). More specifically, the control unit 30 may configure one or more of the parallel processing devices 42 to transmit data to the local memory buffers 44, and may configure one or more of the parallel processing devices 42 to receive data from the local memory buffers 44. For example, the control unit 30 can configure the parallel processing devices 42A, 42B, and 42C to send data to the local memory buffers 44A, 44B, and 44C, respectively. The control unit 30 may also configure the parallel processing devices 42B, 42C, and 42D to receive data from the local memory buffers 44A, 44B, and 44C, respectively.

[0069] FIG. 5 is a flowchart illustrating an exemplary operation of preserving the data sequence in a processing pipeline implemented by the parallel processing devices 42 and the local memory buffers 44 of GPGPU 6. The control unit 30 of GPGPU 6 can preserve the data sequence in the processing pipeline by preserving the data sequence in one or more of the parallel processing devices 42. The illustrated operation will be described with reference to the parallel processing device 42A of GPGPU 6. Similar operations may be performed for any of the other parallel processing devices 42.

[0070] As an example, the parallel processing devices 42 and the local memory buffers 44 may be configured to implement the 3D graphics processing pipeline. In that example, the parallel processing device 42A is configured to operate as a vertex shader, the parallel processing device 42B is configured to operate as a triangle rasterizer, the parallel processing device 42C is configured to operate as a fragment shader, and the parallel processing device 42D is configured to operate as a pixel blender.

[0071] A stage of the processing pipeline 10, for example the parallel processing device 42A configured to operate as the vertex shader, receives a data set for processing (62). For example, the data distribution device 40 allocates a data set of vertex data to the parallel processing device 42A, and the parallel processing device 42A may receive the assigned data set from the device memory 12 via the bus 46. When the data set enters the parallel processing device 42A, the sequence manager 36 of the control unit 30 executes the sequence determination counter (SDC) (64). According to the SDC, the sequence manager 36 records the sequence of the data threads of the received data set in the local memory 38 (66). For example, the sequence manager 36 can record an index for each data thread of the data set in the order in which the data threads are received from the device memory 12.

[0072] The parallel processing device 42A configured to operate as the vertex shader then processes the data set to produce the post-vertex shader vertex data (68). As described above, the parallel processing device 42A can be configured to send the post-vertex shader vertex data to the local memory buffer 44A in order to transfer the data set to the parallel processing device 42B configured to operate as the triangle rasterizer.
When the data set leaves the parallel processing device 42A, the sequence manager 36 executes a sequence enforcing barrier (SEB) (70). According to the SEB, the sequence manager 36 releases the data threads of the data set from the parallel processing device 42A in the same sequence as that recorded by the SDC (72). For example, the sequence manager 36 accesses the recorded data thread indices in the local memory 38 and releases each data thread according to the recorded index order. In this way, the plurality of vertices will enter the parallel processing device 42B configured to operate as the triangle rasterizer in the same order in which the plurality of vertices entered the parallel processing device 42A configured to operate as the vertex shader.

[0073] In one or more examples, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions or operations may be stored as one or more instructions or code on a non-transitory computer-readable medium and executed by a hardware-based processing device. Computer-readable media may include computer-readable storage media, which corresponds to a tangible medium such as a data storage medium, or communication media, including any medium that facilitates transfer of a computer program from one place to another, for example according to a communication protocol. In this manner, computer-readable media generally may correspond to (1) tangible computer-readable storage media, which is non-transitory, or (2) a communication medium such as a signal or carrier wave. Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code, and/or data structures for implementation of the techniques described in this disclosure. A computer program product may include a computer-readable medium.

[0074] By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory, or any other non-transitory medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or the wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transient media, but are instead directed to non-transient, tangible storage media. Disk and disc, as used herein, include compact disc (CD), laser disc (registered trademark), optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray (registered trademark) disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers.
Combinations of the above should also be included within the scope of computer-readable media.

[0075] Instructions may be executed by one or more processors, such as one or more DSPs, general purpose microprocessors, ASICs, FPGAs, or other equivalent integrated or discrete logic circuitry. Accordingly, the term "processor," as used herein, refers to any of the foregoing structures or any other structure suitable for implementation of the techniques described herein. In addition, in some aspects, the functionality described herein may be provided within dedicated hardware and/or software modules configured for encoding and decoding, or incorporated into a combined codec. Also, the techniques could be fully implemented in one or more circuits or logic elements.

[0076] The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an integrated circuit (IC), or a set of ICs (e.g., a chip set). Various components, modules, or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but they do not necessarily require realization by different hardware units. Rather, as described above, various units may be combined in a codec hardware unit, or may be provided by a collection of interoperative hardware units, including one or more processors as described above, in conjunction with suitable software and/or firmware.

[0077] Various examples have been described. These and other examples are within the scope of the appended claims. The inventions described in the claims as originally filed are set forth below.

[C1] A general-purpose graphics processing unit (GPGPU), comprising: two or more parallel processing devices configured to operate selectively as stages of a processing pipeline; and one or more local memory buffers configured to hold data for transfer between the parallel processing devices, each of the one or more local memory buffers being connected between at least two of the parallel processing devices.

[C2] The GPGPU of C1, wherein each of the one or more local memory buffers is connected directly between at least two of the parallel processing devices in the processing pipeline.

[C3] The GPGPU of C1, wherein the one or more local memory buffers include hardware-based data flow control mechanisms that enable the transfer of the data between the parallel processing devices.

[C4] The GPGPU of C1, wherein the one or more local memory buffers comprise at least one of hardware-based first-in-first-out (FIFO) buffers, last-in-first-out (LIFO) buffers, or indexed buffers.

[C5] The GPGPU of C1, further comprising a control device configured to configure one or more of the parallel processing devices to transmit data to the one or more local memory buffers, and to configure one or more of the parallel processing devices to receive data from the one or more local memory buffers.

[C6] The GPGPU of C5, wherein the control device is configured to execute one or more application programming interfaces (APIs) for configuring the parallel processing devices to transmit data to the local memory buffers and to receive data from the local memory buffers.
[C7] The GPGPU of C1, further comprising a control device configured to determine the width required for each of the local memory buffers to hold data output from a previous processing device in the processing pipeline, and to configure each of the local memory buffers to have the determined width.

[C8] The GPGPU of C7, wherein the control device is configured to execute one or more application programming interfaces (APIs) to determine the width for each of the local memory buffers, configure each of the local memory buffers with the determined width, and determine the depth of each of the local memory buffers.

[C9] The GPGPU of C7, wherein each of the local memory buffers is configurable to trade width against depth.

[C10] The GPGPU of C1, further comprising a control device configured to preserve a data sequence in the processing pipeline.

[C11] The GPGPU of C10, wherein the control device is configured to: execute a sequence determination counter when a data set enters at least one of the parallel processing devices, in order to record a sequence of data threads of the data set; and execute a sequence enforcing barrier when the data set leaves the at least one of the parallel processing devices, in order to release the data threads of the data set from the parallel processing device in the same sequence as that recorded by the sequence determination counter.

[C12] The GPGPU of C1, wherein one of the parallel processing devices operates as a first stage of the processing pipeline and is configured to retrieve an original data set from a device memory.

[C13] The GPGPU of C1, wherein one of the parallel processing devices operates as a final stage of the processing pipeline and is configured to store a pipeline-processed data set in a device memory.

[C14] The GPGPU of C1, wherein at least one of the parallel processing devices operates as an intermediate stage of the processing pipeline and is configured to receive a data set from a previous one of the parallel processing devices in the processing pipeline via one of the local memory buffers, and to transmit the data set to a subsequent one of the parallel processing devices in the processing pipeline via another of the local memory buffers.

[C15] The GPGPU of C14, wherein the at least one of the parallel processing devices is to retrieve auxiliary data from a device memory to process the data set.

[C16] A method of processing data with a general-purpose graphics processing unit (GPGPU), the method comprising: configuring two or more parallel processing devices to operate selectively as stages of a processing pipeline; and configuring one or more local memory buffers to hold data for transfer between the parallel processing devices, wherein each of the one or more local memory buffers is connected between at least two of the parallel processing devices.

[C17] The method of C16, wherein each of the local memory buffers is connected directly between at least two of the parallel processing devices in the processing pipeline.

[C18] The method of C16, wherein the one or more local memory buffers include hardware-based data flow control mechanisms that enable the transfer of the data between the parallel processing devices.

[C19] The method of C16, wherein the one or more local memory buffers comprise at least one of hardware-based first-in-first-out (FIFO) buffers, last-in-first-out (LIFO) buffers, or indexed buffers.
[C20] The method of C16, further comprising configuring one or more of the parallel processing devices to transmit data to the one or more local memory buffers, and configuring one or more of the parallel processing devices to receive data from the one or more local memory buffers.

[C21] The method of C20, wherein configuring the one or more parallel processing devices comprises executing one or more application programming interfaces (APIs) for configuring the parallel processing devices to transmit data to the local memory buffers and to receive data from the local memory buffers.

[C22] The method of C16, wherein configuring the one or more local memory buffers comprises determining the width required for each of the local memory buffers to hold data output from a previous processing device in the processing pipeline, and configuring each of the local memory buffers to have the determined width.

[C23] The method of C22, wherein configuring the one or more local memory buffers comprises executing one or more application programming interfaces (APIs) to determine the width for each of the local memory buffers, configure each of the local memory buffers with the determined width, and determine the depth of each of the local memory buffers.

[C24] The method of C22, wherein each of the local memory buffers is configurable to trade width against depth.

[C25] The method of C16, further comprising preserving a data sequence in the processing pipeline.

[C26] The method of C25, wherein preserving the data sequence comprises: executing a sequence determination counter when a data set enters at least one of the parallel processing devices, in order to record a sequence of data threads of the data set; and executing a sequence enforcing barrier when the data set leaves the at least one of the parallel processing devices, in order to release the data threads of the data set from the parallel processing device in the same sequence as that recorded by the sequence determination counter.

[C27] The method of C16, wherein configuring the two or more parallel processing devices comprises configuring one of the parallel processing devices to operate as a first stage of the processing pipeline and to retrieve an original data set from a device memory.

[C28] The method of C16, wherein configuring the two or more parallel processing devices comprises configuring one of the parallel processing devices to operate as a final stage of the processing pipeline and to store a pipeline-processed data set in a device memory.

[C29] The method of C16, wherein configuring the two or more parallel processing devices comprises configuring at least one of the parallel processing devices to operate as an intermediate stage of the processing pipeline, to receive a data set from a previous one of the parallel processing devices in the processing pipeline via one of the local memory buffers, and to transmit the data set to a subsequent one of the parallel processing devices in the processing pipeline via another of the local memory buffers.

[C30] The method of C29, further comprising configuring the at least one of the parallel processing devices to retrieve auxiliary data from a device memory to process the data set.
[C31] A general-purpose graphics processing unit (GPGPU), comprising: means for configuring two or more parallel processing devices to operate selectively as stages of a processing pipeline; and means for configuring one or more local memory buffers to hold data for transfer between the parallel processing devices, wherein each of the one or more local memory buffers is connected between at least two of the parallel processing devices.

[C32] The GPGPU of C31, wherein each of the local memory buffers is connected directly between at least two of the parallel processing devices in the processing pipeline.

[C33] The GPGPU of C31, wherein the one or more local memory buffers include hardware-based data flow control mechanisms that enable the transfer of the data between the parallel processing devices.

[C34] The GPGPU of C31, further comprising means for configuring one or more of the parallel processing devices to transmit data to the one or more local memory buffers, and means for configuring one or more of the parallel processing devices to receive data from the one or more local memory buffers.

[C35] The GPGPU of C34, further comprising means for executing one or more application programming interfaces (APIs) for configuring the parallel processing devices to transmit data to the local memory buffers and to receive data from the local memory buffers.

[C36] The GPGPU of C31, further comprising means for determining the width required for each of the local memory buffers to hold data output from a previous processing device in the processing pipeline, and means for configuring each of the local memory buffers to have the determined width.

[C37] The GPGPU of C36, further comprising means for executing one or more application programming interfaces (APIs) to determine the width for each of the local memory buffers, configure each of the local memory buffers with the determined width, and determine the depth of each of the local memory buffers. |
The invention discloses platform-sealed secrets using a physical unclonable function (PUF) with trusted computing base (TCB) recoverability. Methods and apparatus relating to providing a platform-sealed secret using a physical unclonable function (PUF) with trusted computing base (TCB) recoverability are described. In an embodiment, a decode circuit is to decode an instruction to determine data to be cryptographically protected and a challenge for a Physical Unclonable Function (PUF) circuit. Execution circuitry executes the decoded instruction to cryptographically protect the data in accordance with a key, wherein the PUF circuitry is to generate the key in response to the challenge. Other embodiments are also disclosed and claimed. |
1. An apparatus for providing platform-sealed secrets, the apparatus comprising: a physical unclonable function (PUF) circuit; a decode circuit to decode an instruction having a field for an address of a memory buffer; and an execution circuit to execute the decoded instruction to: determine data to be cryptographically protected and determine a challenge; and cryptographically protect the data in accordance with a key, wherein the PUF circuit is to generate the key in response to the challenge.

2. The apparatus of claim 1, wherein the execution circuit is to cryptographically protect the data in accordance with the key and a security version number (SVN).

3. The apparatus of claim 1, wherein the execution circuit is to cause the cryptographically protected data to be stored in memory.

4. The apparatus of claim 1, wherein the execution circuit is to cryptographically protect the data in accordance with the key and a security version number (SVN), and wherein the execution circuit is to cause the cryptographically protected data and the SVN to be stored in memory.

5. The apparatus of claim 1, wherein the PUF circuit is to generate a plurality of keys in response to the challenge, wherein each key of the plurality of keys is to be used for a different usage.

6. The apparatus of claim 5, wherein the different usages comprise fuse protection or software-visible PUF usage.

7. The apparatus of claim 1, wherein the decode circuit is to decode a second instruction to determine the cryptographically protected data and a second challenge, wherein the execution circuit is to execute the decoded second instruction to cryptographically unprotect the protected data in accordance with a second key, and wherein the PUF circuit is to generate the second key in response to the second challenge.

8. The apparatus of claim 7, wherein the execution circuit is to execute the decoded second instruction to cryptographically unprotect the protected data in accordance with the second key and the SVN.

9. The apparatus of claim 8, comprising verification logic to determine the integrity of the deprotected data based on the SVN and a current SVN.

10. The apparatus of claim 9, wherein the deprotected data is returned in response to a successful integrity verification by the verification logic.

11. The apparatus of claim 9, wherein, in response to an unsuccessful integrity verification by the verification logic, a signal is generated in accordance with a policy.

12. The apparatus of claim 1, wherein the data includes a key corresponding to a hardware block.

13. The apparatus of claim 1, wherein the challenge is a 256-bit random value.

14. The apparatus of claim 1, wherein the decode circuit is to decode a second instruction to determine cryptographically protected data and a second challenge, wherein the execution circuit is to execute the decoded second instruction to cryptographically unprotect the protected data in accordance with a second key and in response to a determination that a configuration is active, and wherein the PUF circuit is to generate the second key in response to the second challenge.

15. The apparatus of claim 14, wherein the configuration is to be selected when the execution circuit executes the decoded instruction.
16. An apparatus for providing platform-sealed secrets, the apparatus comprising: a physical unclonable function (PUF) circuit; a decode circuit to decode an instruction having a field for an address of a memory buffer; and an execution circuit to execute the decoded instruction to: determine data to be cryptographically deprotected and determine a challenge; and cryptographically deprotect the data in accordance with a key, wherein the PUF circuit is to generate the key in response to the challenge.

17. The apparatus of claim 16, wherein the execution circuit is to cryptographically deprotect the protected data in accordance with the key and an SVN.

18. The apparatus of claim 17, comprising verification logic to determine the integrity of the deprotected data based on the SVN and a current SVN.

19. The apparatus of claim 18, wherein the deprotected data is returned in response to a successful integrity verification by the verification logic.

20. The apparatus of claim 18, wherein, in response to an unsuccessful integrity verification by the verification logic, a signal is generated in accordance with the policy chosen when an execution circuit executed a decoded instruction to cryptographically protect the data.

21. The apparatus of claim 16, wherein the data includes keys corresponding to hardware blocks.

22. The apparatus of claim 16, wherein the challenge is a 256-bit random value.

23. An apparatus comprising means for performing the method of any one of claims 1 to 22.

24. Machine-readable storage comprising machine-readable instructions which, when executed, are to carry out the method of any one of claims 1 to 22 or to realize the apparatus of any one of claims 1 to 22. |
Platform-sealed secrets using a physical unclonable function (PUF) with Trusted Computing Base (TCB) recoverability
Technical Field

The present disclosure relates generally to the field of electronics. More specifically, embodiments relate to providing platform-sealed secrets using a physically unclonable function (PUF) with Trusted Computing Base (TCB) recoverability.

Background

A physically unclonable function (PUF) generally refers to a physical object that, for a given input and set of conditions (a challenge), provides a physically defined output (a response) that can serve as a unique identifier for a semiconductor device. An example PUF is an array of transistor devices whose response is based on the unique physical variations that occur naturally during semiconductor fabrication. Due to this unique response, PUFs can be used to provide platform-unique entropy, which in turn can be used to generate unclonable cryptographic keys. Since the entropy generated by a PUF is unique to the platform, the same PUF circuit used on different platforms will generate different entropy, which in turn makes the cryptographic keys generated by the PUF unclonable.

Brief Description of the Drawings

A detailed description is provided with reference to the accompanying figures. In the figures, the leftmost digit(s) of a reference number identifies the figure in which the reference number first appears. The use of the same reference numbers in different figures indicates similar or identical items.

FIG. 1 illustrates a block diagram of a physically unclonable function (PUF) component that may be utilized in an embodiment.

FIG. 2 illustrates a block diagram of various components for wrapping and/or unwrapping secrets, according to one or more embodiments.

FIG. 3 illustrates a flowchart of a method for software sealing/unsealing of secrets, according to an embodiment.

FIG. 4 illustrates a flowchart of a method for cryptographic key programming, according to an embodiment.

FIG. 5 illustrates the security value in terms of key exposure, according to an embodiment.

FIGS. 6, 7, and 8 illustrate sample structure details, according to some embodiments.

FIG. 9 illustrates platform configurations to which wrapped blobs may be bound, according to an embodiment.

FIG. 10 illustrates sample 64-bit identifiers for programming, according to an embodiment.

FIGS. 11, 12, and 13 illustrate pseudocode for various instructions, according to some embodiments.

FIG. 14A is a block diagram illustrating an exemplary instruction format, according to an embodiment.

FIG. 14B is a block diagram illustrating the fields that make up the full opcode field in an instruction format, according to one embodiment.

FIG. 14C is a block diagram illustrating the fields that make up the register index field in an instruction format, according to one embodiment.

FIG. 14D is a block diagram illustrating the fields that make up the extended operation field in an instruction format, according to one embodiment.

FIG. 15 is a block diagram of a register architecture, according to one embodiment.

FIG. 16A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register-renaming, out-of-order issue/execution pipeline, according to an embodiment.

FIG. 16B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register-renaming, out-of-order issue/execution architecture core to be included in a processor, according to an embodiment.
FIG. 17 illustrates a block diagram of an SOC (System on Chip) package, according to an embodiment.

FIG. 18 is a block diagram of a processing system, according to an embodiment.

FIG. 19 is a block diagram of an embodiment of a processor having one or more processor cores, according to some embodiments.

FIG. 20 is a block diagram of a graphics processor, according to an embodiment.

Detailed Description

In the following description, numerous specific details are set forth in order to provide a thorough understanding of various embodiments. However, various embodiments may be practiced without these specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to obscure the particular embodiments. Furthermore, aspects of various embodiments may be implemented using various means, such as integrated semiconductor circuits ("hardware"), computer-readable instructions organized into one or more programs ("software"), or some combination of hardware and software. For the purposes of this disclosure, a reference to "logic" shall mean hardware (such as a logic circuit or, more generally, circuitry), software, firmware, or some combination thereof.

Some embodiments provide one or more techniques for providing platform-sealed secrets using a physically unclonable function (PUF) with trusted computing base (TCB) recoverability. For example, embodiments use PUF-derived key(s) to wrap secrets and bind them to the platform while supporting TCB recoverability. As discussed herein, "wrapping" or "key wrapping" generally refers to the act of using a key or secret to protect an item through cryptographic techniques such as encryption and/or integrity protection. In at least some embodiments, one or more of the instructions discussed herein may conform to the EVEX format (such as discussed with reference to FIGS. 14A-14D).

FIG. 1 illustrates a block diagram of a physically unclonable function (PUF) component 100 that may be utilized in embodiments. In general, a PUF provides platform-unique entropy that can be used to generate cryptographic keys, as shown in FIG. 1. For example, upon a platform reset or another trigger event, the PUF array logic 102 generates platform-unique entropy 104 (or a root key, as shown in FIG. 1). Another trigger event may be provided as needed, for example in other embodiments where the PUF circuit receives an external input to start key generation. As discussed herein, "entropy" generally refers to a (e.g., random) key or object used in a cryptographic algorithm that requires a key.

In an embodiment, the platform-unique entropy 104 is static, i.e., it retains the same value across boot or trigger events, and is unique to a platform (i.e., the same PUF circuit used on different platforms will generate different entropy). Traditionally, platform secrets have been stored in fuses and considered secure. However, recent research has shown that determined hardware attackers can scan fuses (e.g., using X-rays or other techniques) and thereby recover secrets. A PUF provides protection against such scanning, and its logic can be equipped with mechanisms that also resist side-channel attacks, such as those using electromagnetic (EM) radiation.

In some embodiments, the root key 104 is not used directly, but is instead used to derive other keys (e.g., via key derivation function (KDF) logic 106). In one embodiment, the KDF 106 may utilize National Institute of Standards and Technology (NIST) standards to derive keys. The derived key can then serve as a root key for a different usage.
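A minimal sketch of the derivation path from root key 104 through KDF 106 to a derived key, assuming 256-bit values as in FIG. 1. The PRF below is a deliberately non-cryptographic placeholder; a real implementation would use a NIST-approved KDF construction, which the text references but does not specify.

#include <stdint.h>
#include <stddef.h>

#define KEY_BYTES 32 /* 256-bit keys, challenges, and responses (FIG. 1) */

/* Placeholder PRF, NOT cryptographic: a real KDF 106 would follow a
 * NIST-approved construction (e.g., HMAC- or CMAC-based). This stub only
 * shows the data flow: derived = f(root_key, challenge). */
static void prf_stub(const uint8_t key[KEY_BYTES],
                     const uint8_t msg[KEY_BYTES],
                     uint8_t out[KEY_BYTES])
{
    for (size_t i = 0; i < KEY_BYTES; i++)
        out[i] = key[i] ^ msg[i];
}

/* KDF logic 106: mixes the platform-unique root key 104 with a caller
 * challenge so that each usage (fuse protection, SV-PUF, ...) and each
 * challenge yields its own derived key. */
static void derive_key(const uint8_t root_key[KEY_BYTES],
                       const uint8_t challenge[KEY_BYTES],
                       uint8_t derived[KEY_BYTES])
{
    prf_stub(root_key, challenge, derived);
}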
Accordingly, PUFs can provide enhanced security against hardware attacks and provide platform binding, since the generated key(s) are based on unique physical variations that occur on each platform during manufacturing. As shown in FIG. 1, the key(s), challenge(s), and response(s) may comprise 256 bits, but embodiments are not limited thereto, and more or fewer bits may be used. PUFs may be used to protect platform secrets (e.g., keys in fuses) and generally may not be exposed to software.

In some implementations, a software-visible PUF (SV-PUF) exposes PUF functionality to software through one or more instructions (also collectively referred to herein as an ISA (Instruction Set Architecture)). At least one embodiment uses the SV-PUF to wrap secrets and bind them to the platform using PUF-derived keys, and may also support Trusted Computing Base (TCB) recoverability. As discussed herein, "TCB" generally refers to all components of a platform or system that are critical to its security, such that a vulnerability in the TCB could compromise the security of the entire system. More specifically, a vulnerability in the TCB (which may include several firmware components, such as security engine firmware (involved in deriving the PUF root key), ucode or microcode (involved in wrapping and unwrapping of software secrets), power management firmware, etc.) may potentially lead to revealing the SV-PUF root keys or software secrets. Such vulnerabilities are fixed in the corresponding components, and update patches are released for supply to affected systems.

Due to TCB recoverability, updates to the TCB version number (also known as the Security Version Number, or SVN) can be communicated to software, and secrets can be migrated from the old TCB to the new TCB to allow the new TCB to protect those secrets. An attempt by an attacker to roll back to the old SVN makes the secrets unusable. Without TCB recoverability and migration, an attacker could potentially cause a rollback to a vulnerable old TCB version, which could lead to revealing secrets.

In view of this, embodiments use PUF-derived key(s) to wrap secrets and bind them to the platform, while supporting TCB recoverability. Software can generate blobs (or, more generally, (e.g., large) data objects, (e.g., large) binary objects, etc.) that will work only with the TCB that was current when the blob was generated, or that will work with an old TCB version number but with a warning instructing the software to re-wrap the secrets with the current TCB (also known as migrating to the new TCB). Instructions for supporting recoverable sealed blobs are introduced in an embodiment.

One embodiment provides software with the ability to wrap secrets using PUF-derived keys bound to the TCB version. These secrets can be made available across boots without ever being exposed in open or unprotected memory. This is done by introducing new instructions for wrapping/unwrapping that support TCB recoverability. The wrap instruction takes a software secret as an input operand and wraps it, i.e., encrypts and integrity-protects it, using a PUF-derived key. The generated wrapped blob is tied to a specific usage. For some embodiments, blobs may simply be generated to protect secrets that software intends to retrieve at a later point in time, or blobs may be generated to protect keys that need to be programmed into a cryptographic engine. As an example, Multi-Key Total Memory Encryption (MKTME) for persistent storage can be secured using these new instructions. Similarly, Total Storage Encryption (TSE) can be secured with these new instructions.
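The migration described above can be sketched in code. The helpers below are hypothetical stand-ins for the UNWRP/WRP flows detailed later in this description: when an unwrap succeeds but warns that the TCB has changed, the secret is re-wrapped so that it is bound to the current (patched) TCB.

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical blob container; the real layout is discussed later. */
typedef struct { uint8_t bytes[128]; uint32_t svn; } blob_t;

/* Stand-in for the unwrap flow: recovers the secret and reports whether
 * the blob was wrapped under an older TCB version. */
static bool unwrap_with_warning(const blob_t *in, uint8_t secret[64],
                                bool *tcb_changed)
{
    (void)in; (void)secret;
    *tcb_changed = true; /* placeholder: pretend a new TCB was installed */
    return true;
}

/* Stand-in for the wrap flow under the current TCB. */
static void wrap_with_current_tcb(const uint8_t secret[64], blob_t *out)
{
    (void)secret; (void)out; /* wrap elided */
}

/* Migration: on a TCB-changed warning, re-wrap the secret so subsequent
 * unwraps succeed against the current SVN. */
static bool migrate_blob(blob_t *blob)
{
    uint8_t secret[64];
    bool changed = false;
    if (!unwrap_with_warning(blob, secret, &changed))
        return false;
    if (changed)
        wrap_with_current_tcb(secret, blob);
    return true;
}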
Also, to use the secret held in the wrapped blob, another embodiment provides an unwrap instruction that takes the wrapped blob as an input parameter and unwraps the secret, i.e., decrypts the secret and verifies its integrity. The retrieved secret is then either returned to the software or programmed into a hardware engine, depending on the intended usage (which may be indicated by the software to the ISA at wrap time). The wrap instruction optionally allows platform and/or CPU (Central Processing Unit, also referred to herein as "processor") configurations to be included in the wrap. In one embodiment, the unwrap instruction will allow the blob to be unwrapped only if the platform and/or CPU configuration requested at wrap time is active at the time of unwrapping.

In the case of a vulnerability in the TCB, a TCB update can render blobs generated with previous TCBs (with potential security holes) unusable, by preventing the unwrapping of recoverable blobs. Optionally, the software can also choose to generate blobs that work with old TCBs but provide a warning when the TCB version has changed (i.e., a new TCB has been installed). The software can then migrate to the new TCB by executing the wrap again with the new TCB. This can be done by enhancing the wrap instruction to allow the security engine that generates/manages the PUF-derived keys to return the current TCB version in the wrapped blob. The software is then expected to provide the wrapped blob along with the TCB version with which it was generated to allow unwrapping. As discussed earlier, unwrapping then proceeds based on the software policy.

Thus, PUF circuitry/logic can provide strong protection against hardware attacks, and some embodiments extend such protection to software secrets as well. Additionally, the secret is never exposed in the clear in memory or otherwise to unprotected memory, or the secret is only exposed when it is explicitly requested by the owner software, thereby minimizing exposure to attacks. One or more embodiments provide hardware-manufacturer-agnostic key capabilities, i.e., software keys are never known to the hardware manufacturer, and neither are the PUF-derived keys used to protect the secrets. Such support can be provided while allowing TCB recoverability, thereby enhancing the security of wrapped blobs in the presence of the TCB vulnerabilities that inevitably occur.

FIG. 2 illustrates a block diagram of various components for wrapping and/or unwrapping using the SV-PUF instruction(s), according to one or more embodiments. Initially, the software requests that a secret be wrapped using a PUF-derived key by using the wrap instruction disclosed herein (202). In addition to providing the secret to be wrapped, the software also provides a challenge for generating a PUF-derived key from the root PUF key. As discussed herein, "secret to be wrapped" may interchangeably refer to "data to be cryptographically protected." The software may also include a policy for recoverability. Some embodiments support at least two policies: (1) allow unwrapping with old TCB versions, with a warning; and/or (2) disallow unwrapping with old TCBs, and output an error.
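The two policies can be summarized in code. The sketch below, with illustrative names, shows the SVN comparison an unwrap path might apply: matching SVNs unwrap normally, while an older blob SVN either triggers a migration warning or an error, depending on the policy chosen at wrap time.

#include <stdint.h>

/* Illustrative names for the two policies described above. */
typedef enum {
    POLICY_WARN_ON_OLD_TCB, /* unwrap succeeds; caller is told to re-wrap */
    POLICY_FAIL_ON_OLD_TCB  /* unwrap is refused outright */
} recoverability_policy_t;

typedef enum { UNWRAP_OK, UNWRAP_OK_MIGRATE, UNWRAP_ERROR } unwrap_status_t;

/* SVN comparison applied at unwrap time: the blob carries the SVN it was
 * wrapped with; the security engine supplies the current SVN. */
static unwrap_status_t check_svn(uint32_t blob_svn, uint32_t current_svn,
                                 recoverability_policy_t policy)
{
    if (blob_svn == current_svn)
        return UNWRAP_OK;
    if (policy == POLICY_WARN_ON_OLD_TCB && blob_svn < current_svn)
        return UNWRAP_OK_MIGRATE; /* warn: re-wrap with the current TCB */
    return UNWRAP_ERROR;          /* old TCB disallowed, or rollback */
}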
At 204, the wrap instruction takes the input provided by the software in the memory structure and activates/triggers the PUF circuit 100 to obtain the key to use. The security engine managing the PUF engine also returns a security version number, reflecting the version number of the TCB, to the microcode. Upon retrieving the key from the PUF and retrieving the SVN, the wrap instruction can use this key to encrypt and integrity-protect the secret provided by the software. In an embodiment, the wrapped blob includes the SVN at wrap time and is returned to the software in a memory location provided by the software.

At 206, at a later point in time when the software plans to use the blob, the software does so using an unwrap instruction. The unwrap instruction may comprise multiple instructions, one for each of the disclosed usages. For example, a first instruction takes the wrapped blob, along with the TCB version used to generate the blob, and retrieves the secret by checking the integrity of the blob and decrypting it. The retrieved secret is then returned back to the software (208). Another disclosed usage involves programming a hardware cryptographic engine with a key. As an example, persistent memory keys may be programmed into the MKTME engine using wrapped blobs. In this case, the instruction for programming the engine takes the wrapped blob, along with the TCB version used to generate the blob, and unwraps it as previously discussed, but does not return the retrieved key(s) to the software. Instead, at 210, the key(s) may be programmed directly into the target hardware engine(s) via a hardware interface, thereby never exposing the key(s) in the clear in memory or otherwise to unprotected memory. In an embodiment, unwrapping is only successful if the version number included with the blob is the same as the current SVN. If the TCB has been updated, the unwrap either outputs an error or gives a warning, depending on the recoverability policy chosen at wrap time. The next two sections describe the usages and instructions disclosed according to various embodiments.

Wrapping/unwrapping using SV-PUF

FIG. 3 illustrates a flowchart of a method 300 for software sealing/unsealing of secrets using SV-PUF, according to an embodiment. At operation 302, the software seeking to protect a secret invokes a new instruction, WRP, to generate a PUF-derived unique key (as previously discussed, the PUF root key can be mixed with the challenge using a KDF); the data to be wrapped is passed as an input operand, along with a challenge that is used as an input to the PUF circuit (e.g., the PUF block 100 discussed with reference to FIGS. 1-2). In an embodiment, the PUF circuit itself may provide multiple root keys for different usages. As an example, there may be one root key derived for standard platform use (e.g., protecting fuses) and another root key derived for SV-PUF use, but for simplicity this disclosure refers to one root key. Using the challenge, the wrap instruction obtains the PUF-derived key, along with the current SVN, from the security engine managing/hosting the PUF engine, and encrypts and integrity-protects the requested secret using the PUF-derived key (304). A wrapped blob (e.g., including the SVN at wrap time) is provided as an output of the instruction and is stored, for example, in a memory location specified by the software and provided as an input to the wrap instruction (306).
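A hedged sketch of operations 304-306: encrypt and integrity-protect the secret under the PUF-derived key and embed the wrap-time SVN in the blob. The blob layout and the encrypt/MAC stubs are illustrative only; the disclosure does not specify the cipher or MAC construction, and the stubs below are intentionally not cryptographically sound.

#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define KEY_BYTES 32
#define MAC_BYTES 16
#define SECRET_BYTES 64 /* matches the 64B BTENCDATA field discussed later */

/* Illustrative wrapped-blob layout: ciphertext, wrap-time SVN, and a MAC
 * over both, per the WRP flow (operations 302-306). */
typedef struct {
    uint8_t  ciphertext[SECRET_BYTES];
    uint32_t svn;               /* SVN at wrap time */
    uint8_t  mac[MAC_BYTES];
} wrapped_blob_t;

/* Stubs standing in for real authenticated encryption; they are NOT
 * cryptographically sound and only show which inputs bind which outputs. */
static void encrypt_stub(const uint8_t k[KEY_BYTES], const uint8_t *in,
                         uint8_t *out, size_t n)
{
    for (size_t i = 0; i < n; i++)
        out[i] = in[i] ^ k[i % KEY_BYTES];
}

static void mac_stub(const uint8_t k[KEY_BYTES], const wrapped_blob_t *b,
                     uint8_t mac[MAC_BYTES])
{
    memset(mac, 0, MAC_BYTES);
    for (size_t i = 0; i < SECRET_BYTES; i++)
        mac[i % MAC_BYTES] ^= b->ciphertext[i];
    for (size_t i = 0; i < 4; i++)
        mac[i] ^= (uint8_t)(b->svn >> (8 * i)) ^ k[i];
}

/* Operations 304-306: encrypt and integrity-protect the secret with the
 * PUF-derived key, embedding the current SVN in the blob. */
static void wrap(const uint8_t derived_key[KEY_BYTES],
                 const uint8_t secret[SECRET_BYTES],
                 uint32_t current_svn, wrapped_blob_t *out)
{
    encrypt_stub(derived_key, secret, out->ciphertext, SECRET_BYTES);
    out->svn = current_svn;
    mac_stub(derived_key, out, out->mac);
}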
At operation 308, the software executes a new instruction UNWRP (unpack) with the wrapped blob passed as an input operand, eg, when the software needs to access the secret. In an embodiment, the wrapped blob is provided with the same SVN that was returned during wrapping to allow successful unpacking. The UNWRP command uses the challenge passed along with the blob to fire/trigger the PUF circuit to retrieve the PUF-derived key that was used to wrap the blob (310). SVN is also provided to the security engine hosting the PUF to allow it to perform SVN checks. The PUF-derived key is then used to decrypt the wrapped blob and verify its integrity. If the integrity verification is successful and the current SVN is the same as the SVN at the time of wrapping, then at operation 312, the unpacked data is returned back to the requester software; otherwise, the unpacking depends on the recoverability policy selected at wrapping time Instead, generate a warning or output an error. The challenge used to fire the PUF may be a 256b random value chosen by software and provided for wrapping and unwrapping.Cryptographic key programming using SV-PUFFIG. 4 illustrates a flowchart of a method 400 for cryptographic key programming using SV-PUF, according to an embodiment.For key programming use, the software is designed to program the key into a hardware block on the platform. One example use is to program keys for persistent storage into the MKTME engine. In this use, during the provisioning phase (which can occur in an enterprise environment when a user receives a machine at an information technology center), the key used for persistent storage encryption (which can be equivalent to disk encryption) is derived using a PUF Keys are wrapped, similar to the use of wrapping described above. Operations 402, 404, 406, and 408 may use WRP/UNWRP instructions as previously discussed.In an embodiment, when the software wants to program the key (e.g., on every reboot, to set up a persistent memory key), the software invokes the instruction PCONFIG to program the key with a wrapped blob (410). The PCONFIG instruction unpacks the blob and verifies integrity as before, but in this use (instead of returning the unpacked secret back to software), the key is programmed to the target hardware engine through the hardware interface. In this way, the keys are not exposed in memory outside of the provisioning phase, which occurs only once during the life of the machine. A successful/failed programmed response is returned to the requesting software (414).Fig. 5 illustrates the security value according to the exposure of the key with SV-PUF according to an embodiment. In other words, Figure 5 shows limited exposure to provisioning with SV-PUFs. As indicated, exposure is limited to the stage of supply (eg, during manufacturing or at an information technology facility). During runtime, the keys are not exposed to unprotected memory, and are not exposed in the clear (ie, unencrypted), regardless of the toggle/reset cycle. As shown in Figure 1, N reset periods may be used, for example where N=2M, where M is the key length in bits.In at least one embodiment, the recoverability aspects for this usage are the same as described for the wrap/unwrap usage. Software needs to provide a wrapping policy when wrapping, which is then used by the unpack command to determine whether unpacking can complete successfully. 
Depending on the recoverability policy chosen, if the current SVN of the PUF-wrapping TCB is different from the TCB recorded in the wrapped blob, unwrapping will either provide a warning to the software or output an error.

ISA support for sealing/unsealing of software/hardware cryptography engines

In some embodiments, there are three new instructions disclosed herein:

(1) Wrapping support: WRP, an instruction to allow software to wrap secret information using a wrapping key and bind it to a specified target, taking a recoverability policy as input;

(2) Unwrapping support: UNWRP, an instruction to allow conditional unwrapping of wrapped blobs generated by WRP, based on the current security version number or TCB version; and

(3) Hardware key programming support: PCONFIG, an instruction to allow software to program keys and other target-specific information to a desired target, for example conditional on the current security version number or TCB version.

In an embodiment, the wrap target and hardware key programming target may be defined as follows:

(a) Wrap target: Software requests wrapping by specifying a target indicating the usage for which the software is requesting that the blob be generated. For the sealing/unsealing (also known as wrapping/unwrapping) usage, there is one target indicating to the ISA that the unwrapped secret is to be returned back to the software. For hardware key programming, there is a different target indicating to the ISA that the unwrapped secret is to be programmed into the desired hardware engine. Wrap targets are checked by the unwrapping instructions (UNWRP and PCONFIG).

(b) Hardware programming target: This target reflects the hardware engine into which the key needs to be programmed. The MKTME and TSE engines are used in this disclosure as example hardware engines.

In an embodiment, some sample details of the WRP instruction include:

· Ring-0 instruction, 64-bit
· Software calls WRP by passing input and output memory buffers
· Uses BIND_STRUCT (discussed next) as the input and output structure
· Operands:
  RAX: operational status
  RBX: linear address of the input memory buffer
  RCX: linear address of the output memory buffer
· Flags affected:
  ZF is cleared on success; otherwise ZF is set to 1
  CF, PF, AF, OF, and SF are cleared

As discussed herein, RAX, RBX, and RCX refer to general purpose registers. As discussed with reference to FIG. 2, software initially requests wrapping of a secret with a PUF-derived key by using the WRP instruction. In addition to providing the secret to wrap, the software also provides a challenge. As discussed herein, "secret to be wrapped" may interchangeably refer to "data to be cryptographically protected."

FIG. 6 illustrates a BIND_STRUCT structure 600 according to an embodiment. As shown, WRP operates using BIND_STRUCT as an input/output structure, which allows target-specific data to be specified. According to an embodiment, the fields of the structure of FIG. 6 are described as follows:

MAC: message authentication code for the output wrapped structure generated by WRP

BTID: the wrap target.
There are three targets for the usages disclosed herein: WRAP_DATA_CPU, MKTME_ENGINE_SVPUF, and TSE_ENGINE_SVPUF

SEQID: initialization vector for the authenticated encryption performed by the instruction

BTENCDATA: this field carries the secret that the software wants to wrap

BTDATA: this field carries information such as the challenge to fire/trigger the PUF and the configuration vector indicating to the instruction the platform and CPU configuration that needs to be included in the wrapping. Additionally, this field carries the recoverability policy to use. In the example implementation, two policies are supported: if the SVN used to generate the blob does not match the current SVN, either output an error on unwrapping, or give a warning to the software to allow it to perform a migration from the old TCB to the new TCB. This field may also carry the SVN at wrapping time, included as integrity-protected data in the wrapped blob.

FIG. 7 shows further details of the BTENCDATA field from FIG. 6, according to an embodiment. As shown, BTENCDATA can be a single 64B field that software can fill in as desired to carry keys or other secrets that the software wants to cryptographically protect. As an example, for MKTME/TSE key programming, this field carries two keys, each up to 256 bits in size: a data key and a tweak key for encryption using AES in XTS (XEX-based tweaked-codebook mode with ciphertext stealing) mode. In an embodiment, software can use a key to cryptographically protect any amount of data and then use the SV-PUF ISA to protect that key, thereby allowing arbitrarily large amounts of data to be protected with SV-PUF.

FIG. 8 illustrates a sample table for the BTDATA field of FIG. 6, according to an embodiment. This field carries the subfields that control wrapping using the PUF-derived key. One embodiment introduces RECOVERABILITY_POLICY as a new field, in addition to the challenge used to generate the PUF-derived key and the bit vector carrying the platform/CPU configuration to bind to. The configuration to bind to, and the mechanisms for doing so, are discussed next.

FIG. 9 illustrates platform/CPU configurations to which wrapped blobs may be bound, according to an embodiment. The WRP instruction microcode can use this bit vector when wrapping and bind the blob to the configuration by simply including it in the message authentication code (MAC) generated over the output BIND_STRUCT. In general, WRP performs no checks; the unwrapping instructions check the configuration and only allow unwrapping if the configuration expected by the software is active. Thus, the software should check the current configuration of the machine before requesting a binding, to ensure that it does not bind the secret to a configuration that is not active on the platform; binding to such a configuration would make the blob impossible to unwrap to retrieve the secret. As an example, if software requests binding to Boot Guard while Boot Guard is not enabled, the UNWRP instruction will check whether Boot Guard is enabled at unwrap time and will not allow the blob to be unwrapped, since the requested configuration is not present.

In FIG. 9, VM stands for Virtual Machine, SMEP stands for Supervisor Mode Execution Prevention, SMAP stands for Supervisor Mode Access Prevention, UEFI stands for Unified Extensible Firmware Interface, TPM stands for Trusted Platform Module, PTT stands for Platform Trust Technology, DGR stands for Devil's Gate Rock, NR stands for Nifty Rock, TXT stands for Trusted Execution Technology, and OEM stands for Original Equipment Manufacturer. Boot Guard is an optional processor feature that protects the system by preventing firmware replacement before secure boot begins.
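Pulling together the fields of FIGS. 6-8, a C view of BIND_STRUCT might look like the sketch below. Only the 64B size of BTENCDATA and the 256-bit challenge are given by the text; every other size, the field ordering, and the type names are illustrative assumptions.

```c
#include <stdint.h>

/* Sketch of BIND_STRUCT per FIG. 6 (layout and sizes assumed). */
typedef struct {
    uint8_t  mac[16];       /* MAC over the wrapped structure (size assumed) */
    uint32_t btid;          /* wrap target: WRAP_DATA_CPU, MKTME_ENGINE_SVPUF,
                               or TSE_ENGINE_SVPUF                           */
    uint8_t  seqid[12];     /* IV for the authenticated encryption (assumed) */
    uint8_t  btencdata[64]; /* secret(s) to wrap, e.g. AES-XTS data + tweak keys */
    struct {                /* BTDATA subfields per FIG. 8 */
        uint8_t  challenge[32];  /* 256-bit challenge that fires the PUF    */
        uint64_t config_vector;  /* platform/CPU configuration to bind to   */
        uint32_t recoverability; /* RECOVERABILITY_POLICY: error or warn
                                    on SVN mismatch                         */
        uint32_t svn;            /* SVN at wrap time, integrity protected   */
    } btdata;
} bind_struct_t;
```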
As another example of configuration, binding to software identities (e.g., process identities, enclave measurements, VM/TD (virtual machine/trusted domain) measurements) is allowed. The WRP instruction, if requested to bind to a software identity, picks up that identity from the hardware and includes it in the generated MAC. When unwrapping, the unwrap instruction uses the identity from the hardware to verify the MAC. If the software unwrapping the blob does not own the blob, the unwrapping will fail, thereby binding the blob to the software identity. Also, in an embodiment, only the software that originally wrapped the blob can use it to recover the unwrapped secret, since the blob is bound to that software's identity (or measurement).

In an embodiment, for recoverability, the WRP instruction obtains the current SVN from the PUF management engine (e.g., hardware/firmware) in addition to obtaining the PUF-derived key (based on the challenge provided). This SVN covers the TCB components, such as microcode and any other firmware (e.g., security engine firmware, power management firmware) that has access to the PUF-derived key or to the root key used to derive that key. After fetching the SVN, the WRP instruction integrity-protects it along with the other fields in the output blob.

In an embodiment, the UNWRP instruction takes the wrapped blob for the sealing/unsealing usage, in which the secret is returned to the software after unwrapping. If a blob generated for a different usage (indicated by the BTID field in FIG. 6) is passed to UNWRP, unwrapping will fail. Note that the BTID is included as part of the MAC when wrapping, and thus untrusted software cannot simply change the BTID to repurpose a blob generated for one usage for another. In other words, the WRP instruction ensures binding to the target/usage.

In an example, some sample details of the UNWRP instruction include:

· Ring-0 instruction, 64-bit
· Software calls UNWRP by passing a wrapped blob generated using WRP and a pointer to an output buffer to receive the unwrapped data
· As long as the correct challenge is provided and the current SVN known to the PUF manager is the same as the SVN provided in the wrapped blob (the SVN at the time of wrapping), the blob is successfully unwrapped
· Operands:
  RAX: operational status
  RBX: linear address of the input wrapped BIND_STRUCT
  RCX: linear address of the output buffer to receive the unwrapped data
· Flags affected:
  ZF is cleared on successful unwrapping; otherwise ZF is set to 1
  CF, PF, AF, OF, and SF are cleared
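A ring-0 caller following the register conventions above might wrap these instructions as in the sketch below. WRP and UNWRP are proposed instructions with no published encoding, so the UD2 byte sequence (0F 0B) stands in as an obvious placeholder; this illustrates only the operand plumbing, not real encodings.

```c
#include <stdint.h>

/* Hypothetical ring-0 helpers modeling the documented conventions:
 * RAX = status, RBX = input buffer, RCX = output buffer.
 * 0F 0B (UD2) stands in for the unpublished WRP/UNWRP encodings. */
static inline uint64_t wrp_call(void *in_bind_struct, void *out_bind_struct) {
    uint64_t status;
    __asm__ volatile(".byte 0x0f, 0x0b"   /* placeholder for WRP */
                     : "=a"(status)
                     : "b"(in_bind_struct), "c"(out_bind_struct)
                     : "cc", "memory");
    return status;                        /* ZF reports success/failure */
}

static inline uint64_t unwrp_call(void *wrapped_blob, void *out_buffer) {
    uint64_t status;
    __asm__ volatile(".byte 0x0f, 0x0b"   /* placeholder for UNWRP */
                     : "=a"(status)
                     : "b"(wrapped_blob), "c"(out_buffer)
                     : "cc", "memory");
    return status;
}
```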
In terms of the PCONFIG.MKTME_KEY_PROGRAM_SVPUF leaf: the PCONFIG instruction has initially been used with MKTME to program keys into the MKTME engine, as follows: (a) software sets the MKTME key-programming leaf value in EAX to invoke the appropriate function; (b) RBX, RCX, and RDX have leaf-specific usage; and (c) operational status is indicated in EAX. Thus, only one leaf function (MKTME_KEY_PROGRAM) is supported by that version of PCONFIG.

In an embodiment, SV-PUF introduces a new PCONFIG leaf to support MKTME key programming using wrapped blobs. Although an embodiment proposes an additional leaf for the PCONFIG instruction, this could be made more general as a new instruction. Additionally, although the MKTME engine may be cited herein as an example, a similar flow may also be used for the Total Storage Encryption (TSE) engine, as a new leaf or instruction for PCONFIG. Such a new leaf or new instruction would target the TSE engine for programming and would be expected to take TSE wrapped blobs as its targets.

In one embodiment, the PCONFIG leaf for MKTME programming using PUF-wrapped blobs is executed with the following parameters: (1) EAX: MKTME_KEY_PROGRAM_SVPUF; (2) RBX: KEYID_CTRL (shown in FIG. 10; for example, it may be the same as defined for MKTME); and (3) RCX: linear address of WRAPPED_KEY_PROGRAM_STRUCT.

More specifically, FIG. 10 illustrates a sample 64-bit KEYID_CTRL for MKTME programming, according to an embodiment. Recoverability behavior can be the same as described for the previous usage. In FIG. 10, KEYID denotes a key identifier, and ENC_ALG denotes the encryption algorithm (for use with that key identifier).

FIG. 11 shows sample pseudocode 1100 for the WRP instruction, according to an embodiment. FIG. 12 shows sample pseudocode 1200 for the UNWRP instruction, according to an embodiment. FIG. 13 shows sample pseudocode 1300 for the PCONFIG.MKTME_KEY_PROGRAM_SVPUF instruction, according to an embodiment. Referring to FIGS. 11, 12, and 13, one or more of WRP, UNWRP, and the PCONFIG.MKTME_KEY_PROGRAM_SVPUF leaf are enumerated by an extended feature bit in CPUID (CPU identifier); for example, when the bit is 0, WRP and UNWRP will raise #UD (undefined opcode) and the PCONFIG.MKTME_KEY_PROGRAM_SVPUF leaf will raise #GP(0) (i.e., a general protection fault). Terms used in the pseudocode are referenced herein with reference to other figures.
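As a sketch of the leaf's calling convention (EAX selects the leaf, RBX carries KEYID_CTRL, RCX carries the structure address, and status returns in EAX), a caller might look like the following. The 0F 01 C5 encoding is that of the existing PCONFIG instruction, but the leaf number used here is an assumed placeholder, not a value defined by this disclosure.

```c
#include <stdint.h>

#define MKTME_KEY_PROGRAM_SVPUF 0x1u   /* hypothetical leaf number */

/* Ring-0 sketch: program a PUF-wrapped key via the proposed PCONFIG leaf. */
static inline uint32_t pconfig_mktme_svpuf(uint64_t keyid_ctrl,
                                           void *wrapped_key_program_struct) {
    uint32_t status = MKTME_KEY_PROGRAM_SVPUF; /* leaf selected in EAX */
    __asm__ volatile(".byte 0x0f, 0x01, 0xc5"  /* PCONFIG */
                     : "+a"(status)
                     : "b"(keyid_ctrl), "c"(wrapped_key_program_struct)
                     : "cc", "memory");
    return status;                             /* operational status in EAX */
}
```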
Also, while some embodiments use PUFs as an example of platform-unique persistent entropy, embodiments are not limited thereto, and any other source of persistent entropy may be utilized. As an example, the platform root key may be stored in fuses, or may be derived from fuses at each boot. However, alternative implementations using other sources of persistent entropy may have different security properties (e.g., fuse-based keys may be less defensible against hardware attacks).

Additionally, some embodiments may find application in computing systems that include one or more processors (e.g., where each processor may include one or more processor cores), such as those discussed with reference to the other figures herein; such computing systems include, for example, a desktop computer, workstation, computer server, server blade, or mobile computing device. Mobile computing devices may include smartphones, tablets, UMPCs (Ultra-Mobile Personal Computers), laptops, Ultrabook™ computing devices, and wearable devices (such as smart watches, smart rings, smart bracelets, or smart glasses).

Instruction Set

An instruction set may include one or more instruction formats. A given instruction format may define various fields (e.g., number of bits, location of bits) to specify, among other things, the operation to be performed (e.g., opcode), the operand(s) on which that operation is to be performed, and/or additional data field(s) (e.g., masks).

Some instruction formats are broken down further through the definition of instruction templates (or subformats). For example, the instruction templates of a given instruction format may be defined to have different subsets of the instruction format's fields (the included fields are typically in the same order, but at least some have different bit positions because fewer fields are included) and/or defined to have a given field interpreted differently. Thus, each instruction of an ISA is expressed using a given instruction format (and, if defined, in a given one of the instruction templates of that instruction format) and includes fields for specifying the operation and the operands. For example, an exemplary ADD (addition) instruction has a specific opcode and an instruction format that includes an opcode field to specify that opcode and operand fields to select operands (source 1/destination and source 2); an occurrence of this ADD instruction in an instruction stream will have specific contents in the operand fields that select specific operands. A set of SIMD extensions referred to as Advanced Vector Extensions (AVX, including AVX1 and AVX2) and using the Vector Extensions (VEX) coding scheme has been released and/or published (see, e.g., the 64 and IA-32 Architectures Software Developer's Manual, September 2014; and the Advanced Vector Extensions Programming Reference, October 2014).

Exemplary Instruction Format

Embodiments of the instruction(s) described herein can be embodied in different formats. Additionally, exemplary systems, architectures, and pipelines are detailed below. Embodiments of the instruction(s) may execute on such systems, architectures, and pipelines, but are not limited to those detailed.

Although embodiments will be described in which the vector friendly instruction format supports the following: a 64-byte vector operand length (or size) with 32-bit (4-byte) or 64-bit (8-byte) data element widths (or sizes) (and thus a 64-byte vector consists of either 16 doubleword-size elements or 8 quadword-size elements); a 64-byte vector operand length (or size) with 16-bit (2-byte) or 8-bit (1-byte) data element widths (or sizes); a 32-byte vector operand length (or size) with 32-bit (4-byte), 64-bit (8-byte), 16-bit (2-byte), or 8-bit (1-byte) data element widths (or sizes); and a 16-byte vector operand length (or size) with 32-bit (4-byte), 64-bit (8-byte), 16-bit (2-byte), or 8-bit (1-byte) data element widths (or sizes); alternative embodiments may support larger, smaller, and/or different vector operand sizes (e.g., 256-byte vector operands) with larger, smaller, or different data element widths (e.g., 128-bit (16-byte) data element widths).

FIG. 14A is a block diagram illustrating an exemplary instruction format according to an embodiment. FIG. 14A shows an instruction format 1400 that is specific in the sense that it specifies the location, size, interpretation, and order of the fields, as well as the values for some of those fields. The instruction format 1400 may be used to extend the x86 instruction set, and thus some of the fields are similar or identical to those used in the existing x86 instruction set and extensions thereof (e.g., AVX).
The format remains consistent with the prefix encoding field, real opcode byte field, MOD R/M field, SIB field, displacement field, and immediate field of the existing x86 instruction set with extensions.

EVEX prefix (bytes 0-3) 1402 - is encoded in a four-byte form.

Format field 1482 (EVEX byte 0, bits [7:0]) - the first byte (EVEX byte 0) is the format field 1482, and it contains 0x62 (the unique value used, in one embodiment, for distinguishing the vector friendly instruction format).

The second through fourth bytes (EVEX bytes 1-3) include a number of bit fields providing specific capability.

REX field 1405 (EVEX byte 1, bits [7-5]) - consists of an EVEX.R bit field (EVEX byte 1, bit [7] - R), an EVEX.X bit field (EVEX byte 1, bit [6] - X), and an EVEX.B bit field (EVEX byte 1, bit [5] - B). The EVEX.R, EVEX.X, and EVEX.B bit fields provide the same functionality as the corresponding VEX bit fields and are encoded using 1's complement form, i.e., ZMM0 is encoded as 1111B and ZMM15 is encoded as 0000B. Other fields of the instructions encode the lower three bits of the register indices as is known in the art (rrr, xxx, and bbb), so that Rrrr, Xxxx, and Bbbb may be formed by adding EVEX.R, EVEX.X, and EVEX.B.

REX' field 1410 - this is the EVEX.R' bit field (EVEX byte 1, bit [4] - R') that is used to encode either the upper 16 or the lower 16 of the extended 32-register set. In one embodiment, this bit, along with others indicated below, is stored in bit-inverted format to distinguish it (in the well-known x86 32-bit mode) from the BOUND instruction, whose real opcode byte is 62, but which does not accept the value of 11 in the MOD field of the MOD R/M field (described below); alternative embodiments do not store this bit, and the others indicated below, in inverted format. A value of 1 is used to encode the lower 16 registers. In other words, R'Rrrr is formed by combining EVEX.R', EVEX.R, and the other RRR from other fields.

Opcode map field 1415 (EVEX byte 1, bits [3:0] - mmmm) - its content encodes an implied leading opcode byte (0F, 0F 38, or 0F 3).

Data element width field 1464 (EVEX byte 2, bit [7] - W) - is represented by the notation EVEX.W. EVEX.W is used to define the granularity (size) of the data type (either 32-bit data elements or 64-bit data elements). This field is optional in the sense that it is not needed if only one data element width is supported and/or data element widths are supported using some aspect of the opcodes.

EVEX.vvvv 1420 (EVEX byte 2, bits [6:3] - vvvv) - the role of EVEX.vvvv may include the following: 1) EVEX.vvvv encodes the first source register operand, specified in inverted (1's complement) form, and is valid for instructions with two or more source operands; 2) EVEX.vvvv encodes the destination register operand, specified in 1's complement form, for certain vector shifts; or 3) EVEX.vvvv does not encode any operand, in which case the field is reserved and should contain 1111b. Thus, the EVEX.vvvv field 1420 encodes the 4 low-order bits of the first source register specifier stored in inverted (1's complement) form. Depending on the instruction, an extra different EVEX bit field is used to extend the specifier size to 32 registers.

EVEX.U 1468 class field (EVEX byte 2, bit [2] - U) - if EVEX.U = 0, it indicates class A (supporting merging-writemasking) or EVEX.U0; if EVEX.U = 1, it indicates class B (supporting zeroing and merging-writemasking) or EVEX.U1.

Prefix encoding field 1425 (EVEX byte 2, bits [1:0] - pp) - provides additional bits for the base operation field.
In addition to providing support for the legacy SSE instructions in the EVEX prefix format, this also has the benefit of compacting the SIMD prefix (rather than requiring a byte to express the SIMD prefix, the EVEX prefix requires only 2 bits). In one embodiment, to support legacy SSE instructions that use a SIMD prefix (66H, F2H, F3H) in both the legacy format and in the EVEX prefix format, these legacy SIMD prefixes are encoded into the SIMD prefix encoding field; and at runtime they are expanded into the legacy SIMD prefix prior to being provided to the decoder's PLA (so the PLA can execute both the legacy and EVEX formats of these legacy instructions without modification). Although newer instructions could use the EVEX prefix encoding field's content directly as an opcode extension, certain embodiments expand in a similar fashion for consistency, but allow for different meanings to be specified by these legacy SIMD prefixes. An alternative embodiment may redesign the PLA to support the 2-bit SIMD prefix encodings, and thus not require the expansion.

Alpha field 1453 (EVEX byte 3, bit [7] - EH; also known as EVEX.EH, EVEX.rs, EVEX.RL, EVEX.writemask control, and EVEX.N; also illustrated with α) - its content distinguishes which one of the different augmentation operation types is to be performed.

Beta field 1455 (EVEX byte 3, bits [6:4] - SSS; also known as EVEX.s2-0, EVEX.r2-0, EVEX.rr1, EVEX.LL0, EVEX.LLB; also illustrated with βββ) - its content distinguishes which of the operations of the specified type is to be performed.

REX' field 1410 - this is the remainder of the REX' field and is the EVEX.V' bit field (EVEX byte 3, bit [3] - V'). This bit is stored in bit-inverted format. A value of 1 is used to encode the lower 16 registers. In other words, V'VVVV is formed by combining EVEX.V' and EVEX.vvvv.

Writemask field 1471 (EVEX byte 3, bits [2:0] - kkk) - its content specifies the index of a register in the writemask registers. In one embodiment, the specific value EVEX.kkk = 000 has special behavior implying no writemask is used for the particular instruction (this may be implemented in a variety of ways, including the use of a writemask hardwired to all ones or hardware that bypasses the masking hardware). When merging, vector masks allow any set of elements in the destination to be protected from updates during the execution of any operation (specified by the base operation and the augmentation operation); in another embodiment, the old value of each element of the destination is preserved where the corresponding mask bit has a 0. In contrast, when zeroing, vector masks allow any set of elements in the destination to be zeroed during the execution of any operation (specified by the base operation and the augmentation operation); in one embodiment, an element of the destination is set to 0 when the corresponding mask bit has a 0 value. A subset of this functionality is the ability to control the vector length of the operation being performed (that is, the span of elements being modified, from the first to the last one); however, it is not necessary that the elements that are modified be consecutive. Thus, the writemask field 1471 allows for partial vector operations, including loads, stores, arithmetic, logical, etc. While embodiments are described in which the writemask field's 1471 content selects one of a number of writemask registers that contains the writemask to be used (and thus indirectly identifies the masking to be performed), alternative embodiments instead or additionally allow the writemask field's content to directly specify the masking to be performed.
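As a compact illustration of the bit positions just described, the sketch below extracts a few EVEX prefix fields from the four prefix bytes. It is a minimal decoder fragment for illustration only; the struct and function names are assumptions, and it omits the inverted-bit combination rules (R'Rrrr, V'VVVV) discussed above.

```c
#include <stdint.h>

typedef struct {
    uint8_t R, X, B, mm, W, vvvv, U, pp, aaa;
} evex_fields_t;

/* Decode selected fields from a 4-byte EVEX prefix (byte 0 must be 0x62). */
static int evex_decode(const uint8_t p[4], evex_fields_t *f) {
    if (p[0] != 0x62) return -1;        /* format field 1482 */
    f->R    = (p[1] >> 7) & 1;          /* EVEX.R (stored inverted) */
    f->X    = (p[1] >> 6) & 1;          /* EVEX.X (stored inverted) */
    f->B    = (p[1] >> 5) & 1;          /* EVEX.B (stored inverted) */
    f->mm   =  p[1]       & 0x0f;       /* opcode map field 1415    */
    f->W    = (p[2] >> 7) & 1;          /* data element width 1464  */
    f->vvvv = (~(p[2] >> 3)) & 0x0f;    /* EVEX.vvvv, 1's complement */
    f->U    = (p[2] >> 2) & 1;          /* class field 1468         */
    f->pp   =  p[2]       & 0x03;       /* prefix encoding field 1425 */
    f->aaa  =  p[3]       & 0x07;       /* writemask field 1471     */
    return 0;
}
```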
The real opcode field 1430 (byte 4) is also known as the opcode byte. Part of the opcode is specified in this field.

MOD R/M field 1440 (byte 5) includes the MOD field 1442, the register index field 1444, and the R/M field 1446. The MOD field's 1442 content distinguishes between memory access and non-memory access operations. The role of the register index field 1444 can be summarized in two situations: encoding either the destination register operand or a source register operand, or being treated as an opcode extension and not used to encode any instruction operand. The content of the register index field 1444 specifies, directly or through address generation, the locations of the source and destination operands, be they in registers or in memory. These fields include a sufficient number of bits to select N registers from a PxQ (e.g., 32x512, 16x128, 32x1024, 64x1024) register file. While in one embodiment N may be up to three sources and one destination register, alternative embodiments may support more or fewer source and destination registers (e.g., may support up to two sources where one of these sources also acts as the destination; may support up to three sources where one of these sources also acts as the destination; or may support up to two sources and one destination). The role of the R/M field 1446 may include the following: encoding the instruction operand that references a memory address, or encoding either the destination register operand or a source register operand.

Scale, Index, Base (SIB) byte (byte 6) - the scale field's 1450 content allows for the scaling of the index field's content for memory address generation (e.g., for address generation that uses 2^scale * index + base). SIB.xxx 1454 and SIB.bbb 1456 - the contents of these fields have been previously referred to with regard to the register indices Xxxx and Bbbb.

Displacement field 1463A (bytes 7-10) - when the MOD field 1442 contains 10, bytes 7-10 are the displacement field 1463A, and it works the same as the legacy 32-bit displacement (disp32) and operates at byte granularity. This may be used as part of memory address generation (e.g., for address generation that uses 2^scale * index + base + displacement).

Displacement factor field 1463B (byte 7) - when the MOD field 1442 contains 01, byte 7 is the displacement factor field 1463B. The location of this field is the same as that of the legacy x86 instruction set 8-bit displacement (disp8), which works at byte granularity. Since disp8 is sign extended, it can only address between -128 and 127 byte offsets; in terms of 64-byte cache lines, disp8 uses 8 bits that can be set to only four really useful values, -128, -64, 0, and 64; since a greater range is often needed, disp32 is used; however, disp32 requires 4 bytes. In contrast to disp8 and disp32, the displacement factor field 1463B is a reinterpretation of disp8; when using the displacement factor field 1463B, the actual displacement is determined by the content of the displacement factor field multiplied by the size of the memory operand access (N). This type of displacement is referred to as disp8*N. This reduces the average instruction length (a single byte used for the displacement, but with a much greater range). Such a compressed displacement is based on the assumption that the effective displacement is a multiple of the granularity of the memory access, and hence the redundant low-order bits of the address offset do not need to be encoded. In other words, the displacement factor field 1463B substitutes for the legacy x86 instruction set 8-bit displacement. Thus, the displacement factor field 1463B is encoded the same way as an x86 instruction set 8-bit displacement (so no changes in the ModRM/SIB encoding rules), with the only exception that disp8 is overloaded to disp8*N. In other words, there are no changes in the encoding rules or encoding lengths, only in the interpretation of the displacement value by hardware (which needs to scale the displacement by the size of the memory operand to obtain a byte-wise address offset).
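The following short sketch captures the disp8*N rule just described: the stored 8-bit value is sign extended and then scaled by the memory operand size N to recover the byte-wise offset. The function name is illustrative.

```c
#include <stdint.h>

/* disp8*N: the stored 8-bit displacement is sign extended and scaled
 * by the memory-operand size N to yield the byte-wise address offset. */
static int64_t disp8xN(int8_t stored_disp8, unsigned n_bytes) {
    return (int64_t)stored_disp8 * (int64_t)n_bytes;
}
/* Example: stored_disp8 = 1 with a 64-byte operand (N = 64) addresses
 * offset +64 while consuming only a single displacement byte. */
```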
Immediate field 1472 allows for the specification of an immediate. This field is optional in the sense that it is not present in an implementation of the generic vector friendly format that does not support immediates and is not present in instructions that do not use an immediate.

Full Opcode Field

FIG. 14B is a block diagram illustrating the fields of the instruction format 1400 that make up the full opcode field 1474, according to one embodiment. Specifically, the full opcode field 1474 includes the format field 1482, the base operation field 1443, and the data element width (W) field 1463. The base operation field 1443 includes the prefix encoding field 1425, the opcode map field 1415, and the real opcode field 1430.

Register Index Field

FIG. 14C is a block diagram illustrating the fields of the format 1400 that make up the register index field 1445, according to one embodiment. Specifically, the register index field 1445 includes the REX field 1405, the REX' field 1410, the MODR/M.reg field 1444, the MODR/M.r/m field 1446, the VVVV field 1420, the xxx field 1454, and the bbb field 1456.

Augmentation Operation Field

FIG. 14D is a block diagram illustrating the fields of the instruction format 1400 that make up an augmentation operation field, according to one embodiment. When the class (U) field 1468 contains 0, it signifies EVEX.U0 (class A 1468A); when it contains 1, it signifies EVEX.U1 (class B 1468B). When U=0 and the MOD field 1442 contains 11 (signifying a no-memory-access operation), the alpha field 1453 (EVEX byte 3, bit [7] - EH) is interpreted as the rs field 1453A. When the rs field 1453A contains a 1 (round 1453A.1), the beta field 1455 (EVEX byte 3, bits [6:4] - SSS) is interpreted as the round control field 1455A. The round control field 1455A includes a one-bit SAE field 1496 and a two-bit round operation field 1498. When the rs field 1453A contains a 0 (data transform 1453A.2), the beta field 1455 (EVEX byte 3, bits [6:4] - SSS) is interpreted as a three-bit data transform field 1455B. When U=0 and the MOD field 1442 contains 00, 01, or 10 (signifying a memory access operation), the alpha field 1453 (EVEX byte 3, bit [7] - EH) is interpreted as the eviction hint (EH) field 1453B and the beta field 1455 (EVEX byte 3, bits [6:4] - SSS) is interpreted as a three-bit data manipulation field 1455C.

When U=1, the alpha field 1453 (EVEX byte 3, bit [7] - EH) is interpreted as the writemask control (Z) field 1453C. When U=1 and the MOD field 1442 contains 11 (signifying a no-memory-access operation), part of the beta field 1455 (EVEX byte 3, bit [4] - S0) is interpreted as the RL field 1457A; when it contains a 1 (round 1457A.1), the rest of the beta field 1455 (EVEX byte 3, bits [6-5] - S2-1) is interpreted as the round operation field 1459A, while when the RL field 1457A contains a 0 (VSIZE 1457.A2), the rest of the beta field 1455 (EVEX byte 3, bits [6-5] - S2-1) is interpreted as the vector length field 1459B (EVEX byte 3, bits [6-5] - L1-0).
When U=1 and the MOD field 1442 contains 00, 01, or 10 (signifying a memory access operation), the beta field 1455 (EVEX byte 3, bits [6:4] - SSS) is interpreted as the vector length field 1459B (EVEX byte 3, bits [6-5] - L1-0) and the broadcast field 1457B (EVEX byte 3, bit [4] - B).

Exemplary Register Architecture

FIG. 15 is a block diagram of a register architecture 1500 according to one embodiment. In the embodiment illustrated, there are 32 vector registers 1510 that are 512 bits wide; these registers are referenced as ZMM0 through ZMM31. The lower-order 256 bits of the lower 16 ZMM registers are overlaid on registers YMM0-15. The lower-order 128 bits of the lower 16 ZMM registers (the lower-order 128 bits of the YMM registers) are overlaid on registers XMM0-15. In other words, the vector length field 1459B selects between a maximum length and one or more other shorter lengths, where each such shorter length is half the length of the preceding length; and instruction templates without the vector length field 1459B operate on the maximum vector length. Further, in one embodiment, the class B instruction templates of the instruction format 1400 operate on packed or scalar single/double-precision floating point data and packed or scalar integer data. Scalar operations are operations performed on the lowest-order data element position in a ZMM/YMM/XMM register; the higher-order data element positions are either left the same as they were prior to the instruction or zeroed, depending on the embodiment.

Writemask registers 1515 - in the embodiment illustrated, there are 8 writemask registers (k0 through k7), each 64 bits in size. In an alternative embodiment, the writemask registers 1515 are 16 bits in size. In some embodiments, the vector mask register k0 cannot be used as a writemask; when the encoding that would normally indicate k0 is used for a writemask, it selects a hardwired writemask of 0xFFFF, effectively disabling writemasking for that instruction.

General-purpose registers 1525 - in the embodiment illustrated, there are sixteen 64-bit general-purpose registers that are used along with the existing x86 addressing modes to address memory operands. These registers are referenced by the names RAX, RBX, RCX, RDX, RBP, RSI, RDI, RSP, and R8 through R15.

Scalar floating point stack register file (x87 stack) 1545, on which is aliased the MMX packed integer flat register file 1550 - in the embodiment illustrated, the x87 stack is an eight-element stack used to perform scalar floating-point operations on 32/64/80-bit floating point data using the x87 instruction set extension; while the MMX registers are used to perform operations on 64-bit packed integer data, as well as to hold operands for some operations performed between the MMX and XMM registers.

Alternative embodiments may use wider or narrower registers. Additionally, alternative embodiments may use more, fewer, or different register files and registers.
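The ZMM/YMM/XMM aliasing described above can be pictured as a C union, as in the sketch below. This is an illustration of the overlay only (assuming a little-endian, low-order-bytes-first layout), not a description of how hardware implements the register file.

```c
#include <stdint.h>

/* The low-order 128 bits of a ZMM register alias the XMM register,
 * and the low-order 256 bits alias the YMM register. */
typedef union {
    uint8_t zmm[64];   /* full 512-bit register */
    uint8_t ymm[32];   /* lower-order 256 bits  */
    uint8_t xmm[16];   /* lower-order 128 bits  */
} vec_reg_t;
```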
Exemplary Core Architecture, Processor, and Computer Architecture

Processor cores may be implemented in different ways, for different purposes, and in different processors. For instance, implementations of such cores may include: 1) a general-purpose in-order core intended for general-purpose computing; 2) a high-performance general-purpose out-of-order core intended for general-purpose computing; and 3) a special-purpose core intended primarily for graphics and/or scientific (throughput) computing.

Implementations of different processors may include: 1) a CPU (central processing unit) including one or more general-purpose in-order cores intended for general-purpose computing and/or one or more general-purpose out-of-order cores; and 2) a coprocessor including one or more special-purpose cores intended primarily for graphics and/or scientific (throughput) computing. Such different processors lead to different computer system architectures, which may include: 1) the coprocessor on a separate chip from the CPU; 2) the coprocessor on a separate die in the same package as the CPU; 3) the coprocessor on the same die as the CPU (in which case such a coprocessor is sometimes referred to as special-purpose logic or as special-purpose cores, such as integrated graphics and/or scientific (throughput) logic); and 4) a system on a chip that may include on the same die the described CPU (sometimes referred to as the application core(s) or application processor(s)), the above-described coprocessor, and additional functionality. Exemplary core architectures are described next, followed by descriptions of exemplary processors and computer architectures.

Exemplary Core Architecture

FIG. 16A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register-renaming, out-of-order issue/execution pipeline according to an embodiment. FIG. 16B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register-renaming, out-of-order issue/execution architecture core to be included in a processor, according to an embodiment. The solid-lined boxes in FIGS. 16A-16B illustrate the in-order pipeline and in-order core, while the optional addition of the dashed-lined boxes illustrates the register-renaming, out-of-order issue/execution pipeline and core. Given that the in-order aspect is a subset of the out-of-order aspect, the out-of-order aspect will be described.

In FIG. 16A, a processor pipeline 1600 includes a fetch stage 1602, a length decode stage 1604, a decode stage 1606, an allocation stage 1608, a renaming stage 1610, a scheduling (also known as dispatch or issue) stage 1612, a register read/memory read stage 1614, an execute stage 1616, a write-back/memory write stage 1618, an exception handling stage 1622, and a commit stage 1624.

FIG. 16B shows a processor core 1690 that includes a front-end unit 1630 coupled to an execution engine unit 1650, with both coupled to a memory unit 1670. The core 1690 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As yet another option, the core 1690 may be a special-purpose core, such as, for example, a network or communication core, compression engine, coprocessor core, general-purpose computing graphics processing unit (GPGPU) core, graphics core, or the like.

The front-end unit 1630 includes a branch prediction unit 1632 coupled to an instruction cache unit 1634, which is coupled to an instruction translation lookaside buffer (TLB) 1636, which is coupled to an instruction fetch unit 1638, which is coupled to a decode unit 1640. The decode unit 1640 (or decoder) may decode instructions and generate as an output one or more micro-operations, microcode entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions. The decode unit 1640 may be implemented using various different mechanisms.
Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read-only memories (ROMs), etc. In one embodiment, the core 1690 includes a microcode ROM or other medium that stores microcode for certain macroinstructions (e.g., in the decode unit 1640 or otherwise within the front-end unit 1630). The decode unit 1640 is coupled to a rename/allocator unit 1652 in the execution engine unit 1650.

The execution engine unit 1650 includes the rename/allocator unit 1652 coupled to a retirement unit 1654 and a set 1656 of one or more scheduler units. The scheduler unit(s) 1656 represents any number of different schedulers, including reservation stations, central instruction windows, etc. The scheduler unit(s) 1656 is coupled to the physical register file unit(s) 1658. Each of the physical register file unit(s) 1658 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, and status (e.g., an instruction pointer that is the address of the next instruction to be executed). In one embodiment, the physical register file unit(s) 1658 comprises a vector register unit, a writemask register unit, and a scalar register unit. These register units may provide architectural vector registers, vector mask registers, and general-purpose registers. The physical register file unit(s) 1658 is overlapped by the retirement unit 1654 to illustrate the various ways in which register renaming and out-of-order execution may be implemented (e.g., using reorder buffer(s) and retirement register file(s); using future file(s), history buffer(s), and retirement register file(s); using register maps and a pool of registers; etc.). The retirement unit 1654 and the physical register file unit(s) 1658 are coupled to the execution cluster(s) 1660. The execution cluster(s) 1660 includes a set 1662 of one or more execution units and a set 1664 of one or more memory access units. The execution units 1662 may perform various operations (e.g., shifts, addition, subtraction, multiplication) on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point). While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions. The scheduler unit(s) 1656, physical register file unit(s) 1658, and execution cluster(s) 1660 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline that each have their own scheduler unit, physical register file unit(s), and/or execution cluster - and in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s) 1664). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order.

The set of memory access units 1664 is coupled to the memory unit 1670, which includes a data TLB unit 1672 coupled to a data cache unit 1674, which is coupled to a level 2 (L2) cache unit 1676.
In one exemplary embodiment, the memory access units 1664 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 1672 in the memory unit 1670. The instruction cache unit 1634 is further coupled to the level 2 (L2) cache unit 1676 in the memory unit 1670. The L2 cache unit 1676 is coupled to one or more other levels of cache and eventually to a main memory.

By way of example, the exemplary register-renaming, out-of-order issue/execution core architecture may implement the pipeline 1600 as follows: 1) the instruction fetch unit 1638 performs the fetch and length decode stages 1602 and 1604; 2) the decode unit 1640 performs the decode stage 1606; 3) the rename/allocator unit 1652 performs the allocation stage 1608 and renaming stage 1610; 4) the scheduler unit(s) 1656 performs the schedule stage 1612; 5) the physical register file unit(s) 1658 and the memory unit 1670 perform the register read/memory read stage 1614, and the execution cluster 1660 performs the execute stage 1616; 6) the memory unit 1670 and the physical register file unit(s) 1658 perform the write-back/memory write stage 1618; 7) various units may be involved in the exception handling stage 1622; and 8) the retirement unit 1654 and the physical register file unit(s) 1658 perform the commit stage 1624.

The core 1690 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, Calif.; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings), including the instruction(s) described herein. In one embodiment, the core 1690 includes logic to support a packed data instruction set extension (e.g., AVX1, AVX2), thereby allowing the operations used by many multimedia applications to be performed using packed data.

FIG. 17 illustrates a block diagram of an SOC package in accordance with an embodiment. As illustrated in FIG. 17, SOC 1702 includes one or more central processing unit (CPU) cores 1720, one or more graphics processor unit (GPU) cores 1730, an input/output (I/O) interface 1740, and a memory controller 1742. Various components of the SOC package 1702 may be coupled to an interconnect or bus such as discussed herein with reference to the other figures. Also, the SOC package 1702 may include more or fewer components, such as those discussed herein with reference to the other figures. Further, each component of the SOC package 1702 may include one or more other components, e.g., as discussed with reference to the other figures herein. In one embodiment, SOC package 1702 (and its components) is provided on one or more integrated circuit (IC) die, e.g., which are packaged into a single semiconductor device.

As illustrated in FIG. 17, SOC package 1702 is coupled to a memory 1760 via the memory controller 1742. In an embodiment, the memory 1760 (or a portion of it) can be integrated on the SOC package 1702.

The I/O interface 1740 may be coupled to one or more I/O devices 1770, e.g., via an interconnect and/or bus such as discussed herein with reference to other figures. I/O device(s) 1770 may include one or more of a keyboard, a mouse, a touchpad, a display, an image/video capture device (such as a camera or camcorder/video recorder), a touch screen, a speaker, or the like.

FIG. 18 is a block diagram of a processing system 1800, according to an embodiment.
In various embodiments, the system 1800 includes one or more processors 1802 and one or more graphics processors 1808, and may be a single-processor desktop system, a multiprocessor workstation system, or a server system having a large number of processors 1802 or processor cores 1807. In one embodiment, the system 1800 is a processing platform incorporated within a system-on-a-chip (SoC or SOC) integrated circuit for use in mobile, handheld, or embedded devices.

An embodiment of the system 1800 can include, or be incorporated within, a server-based gaming platform or a game console, including a game and media console, a mobile gaming console, a handheld game console, or an online game console. In some embodiments, the system 1800 is a mobile phone, smart phone, tablet computing device, or mobile Internet device. The data processing system 1800 can also include, couple with, or be integrated within a wearable device, such as a smart watch wearable device, smart eyewear device, augmented reality device, or virtual reality device. In some embodiments, the data processing system 1800 is a television or set-top box device having one or more processors 1802 and a graphical interface generated by one or more graphics processors 1808.

In some embodiments, the one or more processors 1802 each include one or more processor cores 1807 to process instructions which, when executed, perform operations for system and user software. In some embodiments, each of the one or more processor cores 1807 is configured to process a specific instruction set 1809. In some embodiments, the instruction set 1809 may facilitate complex instruction set computing (CISC), reduced instruction set computing (RISC), or computing via a very long instruction word (VLIW). Multiple processor cores 1807 may each process a different instruction set 1809, which may include instructions to facilitate the emulation of other instruction sets. A processor core 1807 may also include other processing devices, such as a digital signal processor (DSP).

In some embodiments, the processor 1802 includes cache memory 1804. Depending on the architecture, the processor 1802 can have a single internal cache or multiple levels of internal cache. In some embodiments, the cache memory is shared among various components of the processor 1802. In some embodiments, the processor 1802 also uses an external cache (e.g., a level 3 (L3) cache or last level cache (LLC)) (not shown), which may be shared among the processor cores 1807 using known cache coherency techniques. A register file 1806 is additionally included in the processor 1802 and may include different types of registers for storing different types of data (e.g., integer registers, floating point registers, status registers, and an instruction pointer register). Some registers may be general-purpose registers, while other registers may be specific to the design of the processor 1802.

In some embodiments, the processor 1802 is coupled to a processor bus 1810 to transmit communication signals, such as address, data, or control signals, between the processor 1802 and other components in the system 1800. In one embodiment, the system 1800 uses an exemplary "hub" system architecture, including a memory controller hub 1816 and an input/output (I/O) controller hub 1830. The memory controller hub 1816 facilitates communication between memory devices and other components of the system 1800, while the I/O controller hub (ICH) 1830 provides connections to I/O devices via a local I/O bus.
In one embodiment, the logic of the memory controller hub 1816 is integrated within the processor.

The memory device 1820 can be a dynamic random-access memory (DRAM) device, a static random-access memory (SRAM) device, a flash memory device, a phase-change memory device, or some other memory device having suitable performance to serve as process memory. In one embodiment, the memory device 1820 can operate as system memory for the system 1800, to store data 1822 and instructions 1821 for use when the one or more processors 1802 execute an application or process. The memory controller hub 1816 also couples with an optional external graphics processor 1812, which may communicate with the one or more graphics processors 1808 in the processors 1802 to perform graphics and media operations.

In some embodiments, the ICH 1830 enables peripherals to connect to the memory device 1820 and the processor 1802 via a high-speed I/O bus. The I/O peripherals include, but are not limited to, an audio controller 1846, a firmware interface 1828, a wireless transceiver 1826 (e.g., Wi-Fi, Bluetooth), a data storage device 1824 (e.g., hard disk drive, flash memory, etc.), and a legacy I/O controller 1840 for coupling legacy (e.g., Personal System 2 (PS/2)) devices to the system. One or more Universal Serial Bus (USB) controllers 1842 connect input devices, such as keyboard and mouse 1844 combinations. A network controller 1834 may also couple to the ICH 1830. In some embodiments, a high-performance network controller (not shown) couples to the processor bus 1810. It will be appreciated that the system 1800 shown is exemplary and not limiting, as other types of data processing systems that are differently configured may also be used. For example, the I/O controller hub 1830 may be integrated within the one or more processors 1802, or the memory controller hub 1816 and I/O controller hub 1830 may be integrated into a discrete external graphics processor, such as the external graphics processor 1812.

FIG. 19 is a block diagram of an embodiment of a processor 1900 having one or more processor cores 1902A-1902N, an integrated memory controller 1914, and an integrated graphics processor 1908. Those elements of FIG. 19 having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such. The processor 1900 can include additional cores up to and including additional core 1902N, represented by the dashed-lined boxes. Each of the processor cores 1902A-1902N includes one or more internal cache units 1904A-1904N. In some embodiments, each processor core also has access to one or more shared cache units 1906.

The internal cache units 1904A-1904N and the shared cache units 1906 represent a cache memory hierarchy within the processor 1900. The cache memory hierarchy may include at least one level of instruction and data cache within each processor core and one or more levels of shared mid-level cache, such as a level 2 (L2), level 3 (L3), level 4 (L4), or other level of cache, where the highest level of cache before external memory is classified as the LLC. In some embodiments, cache coherency logic maintains coherency between the various cache units 1906 and 1904A-1904N.

In some embodiments, the processor 1900 may also include a set 1916 of one or more bus controller units and a system agent core 1910.
The one or more bus controller units 1916 manage a set of peripheral buses, such as one or more peripheral component interconnect buses (e.g., PCI, PCI Express). The system agent core 1910 provides management functionality for the various processor components. In some embodiments, the system agent core 1910 includes one or more integrated memory controllers 1914 to manage access to various external memory devices (not shown).

In some embodiments, one or more of the processor cores 1902A-1902N include support for simultaneous multi-threading. In such embodiments, the system agent core 1910 includes components for coordinating and operating the cores 1902A-1902N during multi-threaded processing. The system agent core 1910 may additionally include a power control unit (PCU), which includes logic and components to regulate the power state of the processor cores 1902A-1902N and the graphics processor 1908.

In some embodiments, the processor 1900 additionally includes a graphics processor 1908 to execute graphics processing operations. In some embodiments, the graphics processor 1908 couples with the set of shared cache units 1906 and the system agent core 1910, including the one or more integrated memory controllers 1914. In some embodiments, a display controller 1911 is coupled with the graphics processor 1908 to drive graphics processor output to one or more coupled displays. In some embodiments, the display controller 1911 may be a separate module coupled with the graphics processor via at least one interconnect, or may be integrated within the graphics processor 1908 or the system agent core 1910.

In some embodiments, a ring-based interconnect unit 1912 is used to couple the internal components of the processor 1900. However, an alternative interconnect unit may be used, such as a point-to-point interconnect, a switched interconnect, or other techniques, including those well known in the art. In some embodiments, the graphics processor 1908 couples with the ring interconnect 1912 via an I/O link 1913.

The exemplary I/O link 1913 represents at least one of multiple varieties of I/O interconnects, including an on-package I/O interconnect that facilitates communication between various processor components and a high-performance embedded memory module 1918, such as an eDRAM (embedded DRAM) module. In some embodiments, each of the processor cores 1902A-1902N and the graphics processor 1908 use the embedded memory module 1918 as a shared last level cache.

In some embodiments, the processor cores 1902A-1902N are homogeneous cores executing the same instruction set architecture. In another embodiment, the processor cores 1902A-1902N are heterogeneous in terms of instruction set architecture (ISA), where one or more of the processor cores 1902A-1902N execute a first instruction set, while at least one of the other cores executes a subset of the first instruction set or a different instruction set. In one embodiment, the processor cores 1902A-1902N are heterogeneous in terms of microarchitecture, where one or more cores having relatively higher power consumption couple with one or more cores having lower power consumption. Additionally, the processor 1900 can be implemented on one or more chips or as an SoC integrated circuit having the illustrated components, in addition to other components.

FIG. 20 is a block diagram of a graphics processor 2000, which may be a discrete graphics processing unit or may be a graphics processor integrated with a plurality of processing cores.
In some embodiments, the graphics processor communicates via a memory-mapped I/O interface to registers on the graphics processor and with commands placed into the processor memory. In some embodiments, the graphics processor 2000 includes a memory interface 2014 to access memory. The memory interface 2014 can be an interface to local memory, one or more internal caches, one or more shared external caches, and/or to system memory.

In some embodiments, the graphics processor 2000 also includes a display controller 2002 to drive display output data to a display device 2020. The display controller 2002 includes hardware for one or more overlay planes for the display and the composition of multiple layers of video or user interface elements. In some embodiments, the graphics processor 2000 includes a video codec engine 2006 to encode, decode, or transcode media to, from, or between one or more media encoding formats, including, but not limited to, Moving Picture Experts Group (MPEG) formats (such as MPEG-2), Advanced Video Coding (AVC) formats (such as H.264/MPEG-4 AVC), the Society of Motion Picture & Television Engineers (SMPTE) 421M/VC-1, and Joint Photographic Experts Group (JPEG) formats (such as JPEG and Motion JPEG (MJPEG) formats).

In some embodiments, the graphics processor 2000 includes a block image transfer (BLIT) engine 2004 to perform two-dimensional (2D) rasterizer operations including, for example, bit-boundary block transfers. However, in one embodiment, 3D graphics operations are performed using one or more components of a graphics processing engine (GPE) 2010. In some embodiments, the graphics processing engine 2010 is a compute engine for performing graphics operations, including three-dimensional (3D) graphics operations and media operations.

In some embodiments, the GPE 2010 includes a 3D pipeline 2012 for performing 3D operations, such as rendering three-dimensional images and scenes using processing functions that act upon 3D primitive shapes (e.g., rectangle, triangle, etc.). The 3D pipeline 2012 includes programmable and fixed-function elements that perform various tasks within the elements and/or spawn execution threads to a 3D/media subsystem 2015. While the 3D pipeline 2012 can be used to perform media operations, an embodiment of the GPE 2010 also includes a media pipeline 2016 that is specifically used to perform media operations, such as video post-processing and image enhancement.

In some embodiments, the media pipeline 2016 includes fixed-function or programmable logic units to perform one or more specialized media operations, such as video decode acceleration, video de-interlacing, and video encode acceleration, in place of, or on behalf of, the video codec engine 2006. In some embodiments, the media pipeline 2016 additionally includes a thread spawning unit to spawn threads for execution on the 3D/media subsystem 2015. The spawned threads perform computations for the media operations on one or more graphics execution units included in the 3D/media subsystem 2015.

In some embodiments, the 3D/media subsystem 2015 includes logic for executing threads spawned by the 3D pipeline 2012 and the media pipeline 2016. In one embodiment, the pipelines send thread execution requests to the 3D/media subsystem 2015, which includes thread dispatch logic for arbitrating and dispatching the various requests to available thread execution resources.
Execution resources include an array of graphics execution units for processing 3D threads and media threads. In some embodiments, 3D/media subsystem 2015 includes one or more internal caches for thread instructions and data. In some embodiments, the subsystem also includes shared memory, including registers and addressable memory, for sharing data between threads and for storing output data. In the following description, numerous specific details are set forth in order to provide a more thorough understanding. It will be apparent, however, to one skilled in the art that the embodiments described herein may be practiced without one or more of these specific details. In other instances, well-known features have not been described so as not to obscure the details of the present embodiments. The following examples relate to further embodiments. Example 1 includes an apparatus comprising: a physically unclonable function (PUF) circuit; a decode circuit to decode an instruction having a field for an address of a memory buffer; and an execution circuit to execute the decoded instruction to: determine data to be cryptographically protected and determine a challenge; and cryptographically protect the data according to a key, wherein the PUF circuit is to generate the key in response to the challenge. Example 2 includes the apparatus of example 1, wherein the execution circuit is to cryptographically protect the data according to the key and a security version number (SVN). Example 3 includes the apparatus of example 1, wherein the execution circuitry is to cause the cryptographically protected data to be stored in memory. Example 4 includes the apparatus of example 1, wherein the execution circuitry is to cryptographically protect the data according to the key and a security version number (SVN), wherein the execution circuitry is to cause the cryptographically protected data and the SVN to be stored in memory. Example 5 includes the apparatus of example 1, wherein the PUF circuit is to generate a plurality of keys in response to the challenge, wherein each key of the plurality of keys is to be utilized for a different use. Example 6 includes the apparatus of example 5, wherein the different use includes fuse protection or software-visible usage of the PUF. Example 7 includes the apparatus of example 1, wherein the decode circuit is to decode a second instruction to determine the cryptographically protected data and a second challenge, wherein the execution circuit is to execute the decoded second instruction to cryptographically unprotect the protected data according to a second key, wherein the PUF circuit is to generate the second key in response to the second challenge. Example 8 includes the apparatus of example 7, wherein the execution circuit is to execute the decoded second instruction to cryptographically unprotect the protected data according to the second key and the SVN. Example 9 includes the apparatus of example 8, including verification logic to determine integrity of the deprotected data based on the SVN and a current SVN. Example 10 includes the apparatus of example 9, wherein, in response to a successful integrity verification by the verification logic, the deprotected data is returned.
Example 11 includes the apparatus of example 9, wherein, in response to an unsuccessful integrity verification by the verification logic, a signal is generated according to a policy selected when the execution circuit executed the decoded instruction. Example 12 includes the apparatus of example 1, wherein the data includes keys corresponding to hardware blocks. Example 13 includes the apparatus of example 1, wherein the challenge is a 256-bit random value. Example 14 includes the apparatus of example 1, wherein the decode circuitry is to decode a second instruction to determine the cryptographically protected data and a second challenge, wherein the execution circuitry is to execute the decoded second instruction to cryptographically unprotect the protected data according to a second key in response to a determination that a configuration is active, wherein the PUF circuit is to generate the second key. Example 15 includes the apparatus of example 14, wherein the configuration is to be selected when the execution circuit is used to execute the decoded instruction. Example 16 includes an apparatus comprising: a physically unclonable function (PUF) circuit; a decode circuit to decode an instruction having a field for an address of a memory buffer; and an execution circuit to execute the decoded instruction to: determine data to be cryptographically unprotected and determine a challenge; and cryptographically unprotect the data according to a key, wherein the PUF circuit is to generate the key in response to the challenge. Example 17 includes the apparatus of example 16, wherein the execution circuit is to cryptographically unprotect the protected data according to the key and an SVN. Example 18 includes the apparatus of example 17, including verification logic to determine integrity of the deprotected data based on the SVN and a current SVN. Example 19 includes the apparatus of example 18, wherein, in response to a successful integrity verification by the verification logic, the deprotected data is returned. Example 20 includes the apparatus of example 18, wherein, in response to an unsuccessful integrity verification by the verification logic, a signal is generated based on a policy selected when the data was cryptographically protected. Example 21 includes the apparatus of example 16, wherein the data includes keys corresponding to hardware blocks. Example 22 includes the apparatus of example 16, wherein the challenge is a 256-bit random value. Example 23 includes one or more non-transitory computer-readable media comprising one or more instructions that, when executed on a processor, configure the processor to perform one or more operations to: decode an instruction having a field for an address of a memory buffer; and execute the decoded instruction to: determine data to be cryptographically protected and determine a challenge; and cryptographically protect the data according to a key, wherein a physically unclonable function (PUF) circuit is to generate the key in response to the challenge. Example 24 includes the one or more non-transitory computer-readable media of example 23, further comprising one or more instructions that, when executed on at least one processor, configure the at least one processor to perform one or more operations to cause the data to be cryptographically protected according to the key and a security version number (SVN).
Example 25 includes the one or more non-transitory computer-readable media of example 23, further comprising one or more instructions that, when executed on at least one processor, configure the at least one processor to perform one or more operations to cause the cryptographically protected data to be stored in memory. Example 26 includes an apparatus comprising means for performing a method as set forth in any preceding example. Example 27 includes machine-readable storage comprising machine-readable instructions that, when executed, implement any method set forth in the preceding examples or realize any means set forth in the preceding examples. In various embodiments, one or more operations discussed with reference to FIG. 1 et seq. may be performed by one or more components (interchangeably referred to herein as "logic") discussed with reference to any of the figures. In various embodiments, the operations discussed herein (e.g., with reference to FIG. 1 and the following figures) may be implemented as hardware (e.g., logic circuits), software, firmware, or combinations thereof, which may be provided as a computer program product, including, for example, one or more tangible (e.g., non-transitory) machine-readable or computer-readable media having stored thereon instructions (or software procedures) used to program a computer to perform a process discussed herein. The machine-readable medium may include a storage device such as those discussed with reference to the figures. Additionally, such computer-readable media may be downloaded as a computer program product, wherein the program may be transferred from a remote computer (e.g., a server) to a requesting computer (e.g., a client). Reference in this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, and/or characteristic described in connection with the embodiment may be included in at least one implementation. The appearances of the phrase "in one embodiment" in various places in this specification may or may not all refer to the same embodiment. Also, in the specification and claims, the terms "coupled" and "connected," along with their derivatives, may be used. In some embodiments, "connected" may be used to mean that two or more elements are in direct physical or electrical contact with each other. "Coupled" may mean that two or more elements are in direct physical or electrical contact. However, "coupled" may also mean that two or more elements may not be in direct contact with each other, but yet still cooperate or interact with each other. Thus, although embodiments have been described in language specific to structural features and/or methodological acts, it is to be understood that claimed subject matter may not be limited to the specific features or acts described. Rather, the specific features and acts are disclosed as sample forms of implementing the claimed subject matter.
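The protect/unprotect flow in the examples above can be modeled end to end in software. The sketch below is a behavioral illustration only: the PUF is simulated with an HMAC over a hidden device secret (a real PUF derives its response from device-unique physical variation), and the cipher is a simple SHA-256 keystream with an encrypt-then-MAC tag that binds the SVN. All names and construction choices are assumptions, not the claimed hardware:

```python
# Software model of PUF-keyed protect/unprotect with SVN anti-rollback.
import hmac, hashlib, os

_DEVICE_SECRET = os.urandom(32)      # stands in for physical randomness

def puf_response(challenge: bytes) -> bytes:
    """Simulated PUF: the same challenge yields the same device-unique key."""
    return hmac.new(_DEVICE_SECRET, challenge, hashlib.sha256).digest()

def _keystream(key: bytes, n: int) -> bytes:
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def protect(data: bytes, challenge: bytes, svn: int) -> bytes:
    key = puf_response(challenge)
    ct = bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))
    blob = svn.to_bytes(4, "big") + ct
    tag = hmac.new(key, blob, hashlib.sha256).digest()  # binds SVN + data
    return blob + tag

def unprotect(blob: bytes, challenge: bytes, current_svn: int) -> bytes:
    key = puf_response(challenge)
    body, tag = blob[:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(key, body, hashlib.sha256).digest()):
        raise ValueError("integrity verification failed")  # policy-defined signal
    svn, ct = int.from_bytes(body[:4], "big"), body[4:]
    if svn < current_svn:
        raise ValueError("stale security version number")  # anti-rollback check
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, len(ct))))

challenge = os.urandom(32)                         # e.g., a 256-bit random value
blob = protect(b"hardware block key", challenge, svn=3)
print(unprotect(blob, challenge, current_svn=3))   # b'hardware block key'
```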
A semiconductor device (100) contains a Zener-triggered transistor (104) having a Zener diode (113) vertically integrated in a first current node (106) of the Zener-triggered transistor (104). The Zener diode (113) includes an n-type cathode (114) contacting the first current node (106), and a p-type anode (115) contacting the n-type cathode (114). |
What is claimed is:1. A semiconductor device, comprising:a substrate including a p-type semiconductor material, the substrate having a component surface; anda Zener-triggered transistor contacting the component surface, the Zener-triggered transistor including:a first current node of n-type semiconductor material contacting the p-type semiconductor material;a second current node of n-type semiconductor material contacting the p-type semiconductor material; anda Zener diode in the substrate, the Zener diode including:an n-type cathode contacting the first current node; and a p-type anode contacting the n-type cathode and contacting the p-type semiconductor material, the n-type cathode being located between the p-type anode and the component surface, wherein a breakdown potential of the Zener diode is lower than a breakdown potential between the first current node and the second current node.2. The semiconductor device of claim 1, wherein the n-type cathode is laterally surrounded by the first current node.3. The semiconductor device of claim 1, wherein the p-type anode has an average p-type dopant density of 2×10¹⁸ cm⁻³ to 1×10¹⁹ cm⁻³.4. The semiconductor device of claim 1, wherein the p-type anode has a width of less than 1 micron.5. The semiconductor device of claim 1, wherein:the Zener diode is a first Zener diode;the n-type cathode is a first n-type cathode;the p-type anode is a first p-type anode; andthe Zener-triggered transistor further includes a second Zener diode, the second Zener diode including:a second n-type cathode contacting the first current node; anda second p-type anode contacting the second n-type cathode and contacting the
p-type semiconductor material, wherein a breakdown potential of the second Zener diode is lower than the breakdown potential between the first current node and the second current node.6. The semiconductor device of claim 1, wherein:the Zener-triggered transistor is a lateral NPN bipolar junction transistor;the first current node is a collector of the lateral NPN bipolar junction transistor, the collector being located in the substrate and extending to the component surface of the substrate; the second current node is an emitter of the lateral NPN bipolar junction transistor, the emitter being located in the substrate and extending to the component surface of the substrate; andthe p-type semiconductor material provides a base of the lateral NPN bipolar junction transistor.7. The semiconductor device of claim 1, wherein:the Zener-triggered transistor is a vertical NPN bipolar junction transistor;the first current node is a collector of the vertical NPN bipolar junction transistor, the collector being located in the substrate and extending to the component surface of the substrate; the p-type semiconductor material provides a base of the vertical NPN bipolar junction transistor; andthe second current node is an emitter of the vertical NPN bipolar junction transistor, the emitter being located in the substrate so that the base of the vertical NPN bipolar junction transistor is between the collector and the emitter in a direction perpendicular to the component surface of the substrate.8. The semiconductor device of claim 1, wherein:the Zener-triggered transistor is a grounded gate n-channel metal oxide semiconductor (GGNMOS) transistor;the first current node is a drain of the GGNMOS transistor, the drain being located in the substrate and extending to the component surface of the substrate;the p-type semiconductor material provides a body of the GGNMOS transistor; and the second current node is a source of the GGNMOS transistor, the source being located in the substrate and extending to the component surface of the substrate.9. The semiconductor device of claim 1, further comprising a lateral diffused n-channel
metal oxide semiconductor (LDNMOS) transistor, the LDNMOS transistor including a p-type body located in the substrate, the p-type body having a same p-type dopant species as the p-type anode of the Zener diode, wherein the p-type body has an average p-type dopant density between a density substantially equal to an average p-type dopant density of the p-type anode and a density twice the average p-type dopant density of the p-type anode.10. The semiconductor device of claim 1, further comprising metal silicide at the component surface of the substrate, the metal silicide contacting the first current node, wherein the metal silicide is laterally separated from the n-type cathode of the Zener diode, laterally being in a direction parallel to the component surface.11. The semiconductor device of claim 1, wherein:the Zener-triggered transistor is a first Zener-triggered transistor;the Zener diode is a first Zener diode;the n-type cathode is a first n-type cathode;the p-type anode is a first p-type anode;the semiconductor device further comprises a second Zener-triggered transistor, including:a third current node of n-type semiconductor material contacting the p-type semiconductor material; anda fourth current node of n-type semiconductor material contacting the p-type semiconductor material; andthe semiconductor device further comprises a second Zener diode in the substrate, the second Zener diode including:a second n-type cathode contacting the third current node; anda second p-type anode contacting the second n-type cathode and contacting the p-type semiconductor material, wherein:a first lateral width of the first p-type anode is greater than a second lateral width of the second p-type anode; anda first average p-type dopant density of the first p-type anode is greater than a second average p-type dopant density of the second p-type anode.12. The semiconductor device of claim 1, further comprising:a ground node electrically coupled to the second current node; and
an input/output (I/O) node electrically coupled to the first current node.13. The semiconductor device of claim 1, further comprising a snubber circuit, wherein the first current node is electrically coupled to an output port of the snubber circuit.14. A method of forming a semiconductor device, comprising:providing a substrate including a p-type semiconductor material, the p-type semiconductor material including more than half silicon;forming an implant mask over the substrate, the implant mask exposing an area for a Zener diode of a Zener-triggered transistor;implanting boron ions into the substrate where exposed by the implant mask;implanting n-type dopant ions into the substrate where exposed by the implant mask; removing the implant mask;annealing the substrate to diffuse and activate the boron ions in the area for the Zener diode to form a p-type anode of the Zener diode and to diffuse and activate the n-type dopant ions in the area for the Zener diode to form an n-type cathode of the Zener diode; andforming a first current node of the Zener-triggered transistor, the first current node including n-type semiconductor material contacting the p-type semiconductor material of the substrate, wherein the n-type cathode contacts the n-type semiconductor material of the first current node and the p-type anode contacts the n-type cathode and contacts the p-type semiconductor material of the substrate.15. The method of claim 14, wherein a lateral width of the area for the Zener diode is less than 500 nanometers, the lateral width being a shorter of two perpendicular lateral dimensions, parallel to a surface of the substrate, of the area exposed by the implant mask.16. The method of claim 14, wherein:the implant mask exposes an area for a p-type body and an n-type source of a lateral diffused n-channel metal oxide semiconductor (LDNMOS) transistor; andannealing the substrate diffuses and activates the boron ions in the area for the p-type body to form the p-type body in the substrate, and diffuses and activates the n-type dopant ions in the area for the n-type source to form the n-type source in the substrate.17. The method of claim 16, wherein a first lateral width of the area for the Zener diode is less than a second lateral width of the area for a p-type body, the first lateral width being a shorter of two perpendicular lateral dimensions, parallel to a surface of the substrate, of the area
for the Zener diode exposed by the implant mask, and the second lateral width being a shorter of two perpendicular lateral dimensions, parallel to a surface of the substrate, of the area for the p-type body exposed by the implant mask.18. The method of claim 14, wherein:the boron ions are implanted at an implant dose of 1×10¹⁴ cm⁻² to 1×10¹⁵ cm⁻²; and the n-type dopant ions are implanted at an implant dose of 1×10¹⁴ cm⁻² to 1.5×10¹⁵ cm⁻².19. The method of claim 14, wherein:the Zener-triggered transistor is a first Zener-triggered transistor;the Zener diode is a first Zener diode;the p-type anode is a first p-type anode;the n-type cathode is a first n-type cathode;the implant mask exposes an area for a second Zener diode of a second Zener-triggered transistor; andannealing the substrate diffuses and activates the boron ions in the area for the second Zener diode to form a second p-type anode of the second Zener diode in the substrate, and diffuses and activates the n-type dopant ions in the area for the second Zener diode to form a second n-type cathode of the second Zener diode in the substrate; andfurther comprising forming a first current node of the second Zener-triggered transistor, the first current node of the second Zener-triggered transistor including n-type semiconductor material contacting the p-type semiconductor material of the substrate, wherein the second n-type cathode contacts the n-type semiconductor material of the first current node of the second Zener-triggered transistor and the second p-type anode contacts the second n-type cathode and contacts the p-type semiconductor material of the substrate.20. The method of claim 14, further comprising forming metal silicide on the first current node, wherein the metal silicide is laterally separated from the n-type cathode of the Zener diode, laterally being in a direction parallel to the component surface contacting the metal silicide.
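Read together, claims 14, 15, and 18 bound a process window: a boron dose of 1×10¹⁴ to 1×10¹⁵ cm⁻², an n-type dopant dose of 1×10¹⁴ to 1.5×10¹⁵ cm⁻², and a Zener opening narrower than 500 nanometers. A minimal sketch expressing those windows as a single checkable condition (the function name and sample values are invented for illustration):

```python
def within_claims(boron_dose_cm2: float, n_dose_cm2: float,
                  zener_width_nm: float) -> bool:
    """True when the parameters fall inside the claimed process windows."""
    return (1e14 <= boron_dose_cm2 <= 1e15      # claim 18, boron dose
            and 1e14 <= n_dose_cm2 <= 1.5e15    # claim 18, n-type dopant dose
            and zener_width_nm < 500)           # claim 15, opening width

print(within_claims(5e14, 8e14, 400))   # True: inside every window
print(within_claims(5e14, 8e14, 600))   # False: Zener opening too wide
```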
ZENER-TRIGGERED TRANSISTOR WITH VERTICALLY INTEGRATED ZENER DIODE[0001] This description relates to the field of semiconductor devices. More particularly, this description relates to transistors in semiconductor devices.BACKGROUND[0002] Semiconductor devices frequently include transistors to protect against electrostatic discharge (ESD) events. These transistors often rely on junction breakdown to turn on. In some cases the breakdown potential is too high to protect the internal circuit, which may result in device degradation or failure. External trigger circuits are sometimes added to reduce the breakdown potential, but the trigger circuit undesirably increases the area of the semiconductor device.SUMMARY[0003] This description introduces a semiconductor device having a Zener-triggered transistor which includes a Zener diode integrated in a first current node of the Zener-triggered transistor. The first current node includes n-type semiconductor material contacting a p-type semiconductor material in a substrate of the semiconductor device. The Zener diode includes an n-type cathode contacting the first current node, and a p-type anode contacting the n-type cathode and contacting the p-type semiconductor material.[0004] The semiconductor device may be formed by forming an implant mask over the substrate, the implant mask having an opening for the Zener diode. Boron and arsenic are implanted into the substrate in an area exposed by the opening in the implant mask. The substrate is subsequently heated to diffuse and activate the implanted boron and arsenic. The implanted boron provides p-type dopants for the p-type anode of the Zener diode, and the implanted arsenic provides n-type dopants for the n-type cathode of the Zener diode.BRIEF DESCRIPTION OF THE DRAWINGS[0005] FIG. 1 is a cross section of an example semiconductor device which includes a Zener- triggered transistor.[0006] FIG. 2A and FIG. 2B are cross sections of a semiconductor device which includes a Zener-triggered transistor, depicted in stages of an example method of formation.
[0007] FIG. 3 is a cross section of another example semiconductor device which includes a Zener-triggered transistor.[0008] FIG. 4 is a cross section of a further example semiconductor device which includes a Zener-triggered transistor.[0009] FIG. 5A through FIG. 5C are cross sections of a semiconductor device which includes a Zener-triggered transistor, depicted in stages of another example method of formation.[0010] FIG. 6 is a circuit diagram of an example semiconductor device including a Zener- triggered transistor in an application.[0011] FIG. 7 is a circuit diagram of an example semiconductor device including a Zener- triggered transistor in another application.DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS[0012] This description refers to the attached figures. The figures are not drawn to scale and they are provided merely to illustrate the description. Several aspects of the description are described below with reference to example applications for illustration. Numerous specific details, relationships, and methods are set forth to provide an understanding of the description. This description is not limited by the illustrated ordering of acts or events, as some acts may occur in different orders and/or concurrently with other acts or events. Furthermore, not all illustrated acts or events are required to implement a methodology in accordance with this description. In addition, although some of the embodiments illustrated herein are shown in two dimensional views with various regions having depth and width, these regions are illustrations of only a portion of a device that is actually a three dimensional structure. Accordingly, these regions will have three dimensions, including length, width, and depth, when fabricated on an actual device.[0013] FIG. 1 is a cross section of an example semiconductor device which includes a Zener-triggered transistor. The semiconductor device 100 includes a substrate 101. The substrate 101 may be a portion of a semiconductor wafer, for example. The substrate 101 includes a p-type semiconductor material 102. The p-type semiconductor material 102 may include primarily silicon, by way of example. Other semiconductor materials for the p-type semiconductor material 102, such as silicon with some germanium or carbon, are within the scope of this example. The substrate 101 has a component surface 103. The p-type semiconductor material 102 may extend to the component surface 103 in locations in the
semiconductor device 100.[0014] The semiconductor device 100 of this example includes a Zener-triggered transistor 104 contacting the component surface 103, and a lateral diffused n-channel metal oxide semiconductor (LDNMOS) transistor 105. For the purposes of this description, the terms “lateral” and “laterally” refer to directions parallel to the component surface 103, and similarly in subsequent examples herein. The Zener-triggered transistor 104 of this example is manifested as a lateral NPN bipolar junction transistor 104. The Zener-triggered transistor 104 includes a first current node 106 of n-type semiconductor material. In this example, the first current node 106 is manifested as a collector 106 of the lateral NPN bipolar junction transistor 104. The first current node 106 may be located in the substrate 101, as depicted in FIG. 1. The Zener-triggered transistor 104 includes a second current node 107 of n-type semiconductor material. In this example, the second current node 107 is manifested as an emitter 107 of the lateral NPN bipolar junction transistor 104. The p-type semiconductor material 102 provides a base 108 of the lateral NPN bipolar junction transistor 104. The semiconductor device 100 may include p-type base contact regions 109 having higher dopant densities than the p-type semiconductor material 102, to provide low resistance electrical connections to the base 108 of the lateral NPN bipolar junction transistor 104. The collector 106, the emitter 107, and the base 108 may be laterally separated at the component surface 103 by field oxide 110. The Zener-triggered transistor 104 may be electrically isolated in a vertical direction by an n-type buried layer (NBL) 111. For the purposes of this description, the terms “vertical” and “vertically” refer to directions perpendicular to the component surface 103, and similarly in subsequent examples herein. The Zener-triggered transistor 104 may further be electrically isolated in lateral directions by an isolation structure 112 extending from the component surface 103 to the NBL 111. The isolation structure 112 may be manifested as a deep trench with a silicon dioxide liner, or by n-type regions, sometimes referred to as sinkers. During operation of the semiconductor device 100, the NBL 111 may be biased with respect to the p-type semiconductor material 102 to reduce leakage current from the p-type semiconductor material 102. In another version of this example, the NBL 111 may be connected to the collector 106, to provide both vertical and lateral current flow through the Zener-triggered transistor 104.[0015] The Zener-triggered transistor 104 includes a Zener diode 113 that is vertically integrated into the first current node 106. The Zener diode 113 includes an n-type cathode 114
that contacts the first current node 106, and includes a p-type anode 115 that contacts the n-type cathode 114 and the p-type semiconductor material 102 in the base 108. The n-type cathode 114 is laterally surrounded by the first current node 106, and the p-type anode 115 is located under the n-type cathode 114, so that the n-type cathode 114 is between the p-type anode 115 and the component surface 103. The n-type cathode 114 has an average n-type dopant density of 1×10¹⁹ cm⁻³ to 5×10¹⁹ cm⁻³. N-type dopants in the n-type cathode 114 may include primarily arsenic. The p-type anode 115 has an average p-type dopant density of 2×10¹⁸ cm⁻³ to 1×10¹⁹ cm⁻³. P-type dopants in the p-type anode 115 may include primarily boron.[0016] The LDNMOS transistor 105 includes a p-type body 116 in the substrate 101. The p-type body 116 has an average p-type dopant density between a density that is substantially equal to the average p-type dopant density of the p-type anode 115 and a density that is twice the average p-type dopant density of the p-type anode 115. For the purposes of this description, the term “substantially equal” includes dopant densities that are equal within fabrication tolerances of processes, such as ion implant processes, used to fabricate the semiconductor device 100, and similarly in subsequent examples herein. The term “substantially equal” also includes dopant densities that are equal within measurement tolerances encountered in techniques used to measure dopant densities in the semiconductor device 100. The p-type body 116 includes the same species of p-type dopants as the p-type anode 115. The LDNMOS transistor 105 includes an n-type source 117 in the substrate 101. The p-type body 116 extends under and laterally around the n-type source 117, as indicated in FIG. 1. The n-type source 117 has an average n-type dopant density between a density that is substantially equal to the average n-type dopant density of the n-type cathode 114 and a density that is twice the average n-type dopant density of the n-type cathode 114. The n-type source 117 includes the same species of n-type dopants as the n-type cathode 114.[0017] The LDNMOS transistor 105 includes an n-type drain 118 in the substrate 101. The LDNMOS transistor 105 further includes a gate dielectric layer 119 on the component surface 103, partially overlapping the p-type body 116 and the n-type source 117, and optionally extending partway over the n-type drain 118, as depicted in FIG. 1. The LDNMOS transistor 105 includes a gate 120 on the gate dielectric layer 119. Gate sidewall spacers 121 of silicon nitride, silicon dioxide, or silicon oxynitride, may be disposed on sides of the gate 120, as depicted in FIG. 1. An n-type drain contact region 122 may be disposed in the n-type drain 118; the n-type drain contact region 122 has a higher density of n-type dopants to provide a low resistance electrical connection to the n-type drain 118.
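For orientation, the doping levels above can be tied to breakdown voltages with a standard textbook approximation. The sketch below uses Sze's empirical avalanche-breakdown formula for an abrupt, one-sided silicon junction to estimate the collector-base breakdown at an assumed light base doping (the description does not state this value), and compares it with the 5 to 10 volt Zener breakdown quoted in paragraph [0020] below. At the heavy anode doping here the actual mechanism is tunneling-dominated, so the formula is only a rough guide:

```python
# Back-of-the-envelope check (textbook approximation, not from this patent).

def avalanche_breakdown_volts(n_background_cm3: float, eg_ev: float = 1.12) -> float:
    """V_B ~= 60 * (Eg / 1.1)^1.5 * (N_B / 1e16)^-0.75  (Sze, silicon)."""
    return 60.0 * (eg_ev / 1.1) ** 1.5 * (n_background_cm3 / 1e16) ** -0.75

assumed_base_doping = 1e16            # cm^-3, hypothetical p-type base level
junction_bv = avalanche_breakdown_volts(assumed_base_doping)
zener_bv_max = 10.0                   # volts, upper value stated in [0020]

print(f"collector-base breakdown ~ {junction_bv:.0f} V")
assert zener_bv_max < junction_bv     # the Zener fires first, clamping the pulse
```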
[0018] The semiconductor device 100 may include metal silicide 123 at the component surface 103 to provide low resistance electrical connections to elements in the substrate 101. The metal silicide 123 may include titanium silicide, platinum silicide, cobalt silicide, or nickel silicide, by way of example. The metal silicide 123 may be disposed on the first current node 106, the second current node 107, and the p-type base contact regions 109 of the Zener-triggered transistor 104, and on the p-type body 116, the n-type source 117, and the n-type drain contact region 122 of the LDNMOS transistor 105. The metal silicide 123 on the first current node 106 may be laterally separated from the Zener diode 113 by a silicide block layer 124. The metal silicide 123 on the n-type drain contact region 122 may be laterally separated from the gate 120 by the silicide block layer 124.[0019] The semiconductor device 100 may have a dielectric layer 125 over the component surface 103. The dielectric layer 125 may include one or more sub-layers of silicon dioxide, silicon nitride, phosphosilicate glass (PSG), borophosphosilicate glass (BPSG), or similar dielectric materials. Contacts 126 of the semiconductor device 100 are disposed through the dielectric layer 125 to make electrical connections to the elements in the substrate 101, through the metal silicide 123, if present. The contacts 126 may include tungsten on a titanium-containing liner, for example. The semiconductor device 100 further includes interconnects 127 on the dielectric layer 125, making electrical connections to the contacts 126. The interconnects 127 may include primarily aluminum, with an adhesion layer on the dielectric layer 125, or may include copper on a diffusion barrier.[0020] A positive electrical pulse on the first current node 106 with respect to the second current node 107 may induce breakdown in the Zener diode 113, inducing current through the Zener diode 113 to turn on the Zener-triggered transistor 104. A breakdown potential of the Zener diode 113 is lower than a breakdown potential between the first current node 106 and the second current node 107. The Zener diode 113 may have a breakdown potential of 5 to 10 volts, by way of example. Thus, the Zener-triggered transistor 104 may be advantageously used as a protective component to reduce transient potentials on components connected to the first current node 106. Moreover, the potential difference at which the Zener diode 113 breaks down may be more repeatable than the potential difference at which the pn junction between the first current
node 106 and the base 108 breaks down, in large numbers of the semiconductor device 100 which are fabricated in semiconductor fabrication facilities, advantageously providing a more uniform protective component. Having the Zener diode 113 vertically integrated in the first current node 106 may advantageously reduce an area of the semiconductor device 100 compared to a semiconductor device having a Zener diode separate from a Zener-triggered transistor. Laterally separating the metal silicide 123 on the collector 106 from the Zener diode 113 may provide electrical resistance in the collector 106 that may advantageously reduce current crowding through the Zener diode 113. This may be especially advantageous in versions of the Zener-triggered transistor 104 having more than one Zener diode 113 vertically integrated in the collector 106, allowing additional instances of the Zener diode 113 to break down after one of the Zener diodes 113 breaks down.[0021] FIG. 2A and FIG. 2B are cross sections of a semiconductor device which includes a Zener-triggered transistor, depicted in stages of an example method of formation. Referring to FIG. 2A, the semiconductor device 200 includes a substrate 201. The substrate 201 may be implemented as a semiconductor wafer, for example. The substrate 201 includes a p-type semiconductor material 202 which includes primarily (e.g., more than half) silicon. The p-type semiconductor material 202 may extend to a component surface 203 in locations in the semiconductor device 200. Field oxide 210 may be formed at the component surface 203 to laterally separate elements of the semiconductor device 200. The field oxide 210 may be formed by a shallow trench isolation (STI) process or by a local oxidation of silicon (LOCOS) process, for example. A silicon dioxide layer 228 may be formed on the component surface 203 to protect the p-type semiconductor material 202 during subsequent process steps. The silicon dioxide layer 228 may be 5 to 25 nanometers thick, for example, and may be formed by a thermal oxidation process.[0022] The semiconductor device 200 includes an area for a Zener-triggered transistor 204 and an area for an LDNMOS transistor 205. The Zener-triggered transistor 204 may be electrically isolated in a vertical direction by an NBL 211. The Zener-triggered transistor 204 may further be electrically isolated in lateral directions by an isolation structure 212, manifested as a deep trench, for example, extending from the component surface 203 to the NBL 211.[0023] An implant mask 229 is formed over the silicon dioxide layer 228. The implant mask 229 exposes a first area for a Zener diode 213 in the area for the Zener-triggered transistor 204,
and exposes a second area for a p-type body 216 and an n-type source 217 in the area for the LDNMOS transistor 205. The implant mask 229 may include photoresist, and may be formed by a photolithographic process. The implant mask 229 may optionally include anti-reflection layers such as a bottom anti-reflection coating (BARC). The implant mask 229 may have a thickness of 400 nanometers to 700 nanometers, for example. The first area exposed by the implant mask 229 for the Zener diode 213 may have a lateral width 230 that is less than 500 nanometers, for example, 200 nanometers to 400 nanometers. The second area exposed by the implant mask 229 for the p-type body 216 and the n-type source 217 may have a lateral width 231 that is greater than 500 nanometers. The lateral width 230 of the first area exposed by the implant mask 229 for the Zener diode 213 is a lateral dimension that is a shorter of two perpendicular lateral dimensions of the first area exposed by the implant mask 229. The lateral width 231 of the second area exposed by the implant mask 229 for the p-type body 216 and the n-type source 217 is a lateral dimension that is a shorter of two perpendicular lateral dimensions of the second area exposed by the implant mask 229.[0024] Boron ions 232 are implanted through the silicon dioxide layer 228 into the substrate 201 in the first area exposed by the implant mask 229 and in the second area exposed by the implant mask 229, to form a Zener anode implanted region 233 in the substrate 201 under the first area and to form a body implanted region 234 in the substrate 201 under the second area. The boron ions 232 may be implanted at an implant dose of 1×10¹⁴ cm⁻² to 1×10¹⁵ cm⁻², at an implant energy of 10 kilo-electron volts (keV) to 30 keV, by way of example.[0025] The boron ions 232 may be implanted at an angle from a perpendicular direction to the component surface 203, to reduce channeling of the boron ions 232 in a crystal lattice of the substrate 201. For example, the boron ions 232 may be implanted at an angle of 4 degrees to 7 degrees from a perpendicular direction to the component surface 203, possibly in two or four implant steps rotated around the perpendicular direction to the component surface 203, to reduce directional shadowing of the boron ions 232 by the implant mask 229. Implanting the boron ions 232 at an angle from the perpendicular direction to the component surface 203, in combination with the first area exposed by the implant mask 229 for the Zener diode 213 having the lateral width 230 less than 500 nanometers, may result in a lower effective dose of the boron ions 232 in the substrate 201 in the first area exposed by the implant mask 229 than in the second area exposed by the implant mask 229 for the p-type body 216, having the lateral width 231 that is greater than 500 nanometers. The effective dose of the boron ions 232 in the first area exposed by the implant mask 229 is the number of boron ions 232 in the substrate 201 in the first area exposed by the implant mask 229, divided by an area of the first area exposed by the implant mask 229. Similarly, the effective dose of the boron ions 232 in the second area exposed by the implant mask 229 is the number of boron ions 232 in the substrate 201 in the second area exposed by the implant mask 229, divided by an area of the second area exposed by the implant mask 229. Due to the lateral width 231 of the second area being greater than 500 nanometers, the effective dose of the boron ions 232 in the second area may be close to the implant dose of the implanted boron ions 232, the implant dose being an unobstructed dose of the boron ions 232 received at the substrate 201. The implant dose of the boron ions 232 may be selected to provide a desired threshold potential for the LDNMOS transistor 205. The lateral width 230 of the first area may be selected to provide a desired effective dose of the boron ions 232 to attain a desired breakdown potential for the Zener diode 213.[0026] N-type dopant ions 235, which are implemented as arsenic ions 235 in this example, are implanted through the silicon dioxide layer 228 into the substrate 201 in the first area exposed by the implant mask 229 and in the second area exposed by the implant mask 229, to form a Zener cathode implanted region 236 in the substrate 201 under the first area and to form a source implanted region 237 in the substrate 201 under the second area. The arsenic ions 235 may be implanted at an implant dose of 1×10¹⁴ cm⁻² to 1.5×10¹⁵ cm⁻², at an implant energy of 10 kilo-electron volts (keV) to 40 keV, by way of example. The Zener anode implanted region 233 may extend further into the substrate 201 from the component surface 203 than the Zener cathode implanted region 236. Similarly, the body implanted region 234 may extend further into the substrate 201 from the component surface 203 than the source implanted region 237. The arsenic ions 235 may also be implanted at an angle from the perpendicular direction to the component surface 203, resulting in a similar reduction in an effective dose of the arsenic ions 235 in the first area exposed by the implant mask 229 compared to the implant dose of the arsenic ions 235. In other versions of this example, the n-type dopant ions 235 may include antimony ions. In further versions of this example, the n-type dopant ions 235 may include phosphorus ions.[0027] The implant mask 229 is removed after the boron ions 232 and the arsenic ions 235 are implanted. The implant mask 229 may be removed, for example, by an oxygen plasma process or an ozone process, followed by a wet clean process using an aqueous mixture of sulfuric acid and hydrogen peroxide.
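The effective-dose arithmetic of paragraph [0025] can be made concrete with a simplified geometric model (an assumption, not the patent's own equations): a tilted implant is shadowed by the mask edge over a strip roughly equal to the mask thickness times the tangent of the tilt angle, so the effective dose is the unobstructed dose scaled by the unshadowed fraction of the opening width. Narrow openings lose a larger fraction than wide ones:

```python
# Single-edge shadowing estimate; real masks with quad-rotated implants
# shadow differently, so treat these numbers as illustrative only.
import math

def effective_dose(dose_cm2: float, opening_nm: float,
                   mask_thickness_nm: float, tilt_deg: float) -> float:
    shadow_nm = mask_thickness_nm * math.tan(math.radians(tilt_deg))
    exposed_fraction = max(opening_nm - shadow_nm, 0.0) / opening_nm
    return dose_cm2 * exposed_fraction

DOSE = 5e14    # cm^-2, within the 1e14 to 1e15 boron range above
MASK = 550.0   # nm, within the 400 to 700 nm mask thickness range
TILT = 7.0     # degrees from the surface normal, per [0025]

for width in (300.0, 2000.0):  # narrow Zener opening vs. wide body opening
    print(f"{width:6.0f} nm opening -> "
          f"{effective_dose(DOSE, width, MASK, TILT):.2e} cm^-2")
```

Running this shows roughly a 22 percent dose reduction for a 300 nm opening versus about 3 percent for a 2000 nm opening, which is the mechanism the description exploits to set the Zener breakdown independently of the LDNMOS body dose.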
[0028] Referring to FIG. 2B, the substrate 201 is heated by an anneal process 238 to activate and diffuse the implanted boron and implanted arsenic. The anneal process 238 may be a furnace process or a radiant heating process, for example. The substrate 201 is heated to a temperature of 800 °C to 1100 °C, for 10 minutes to 60 minutes, by way of example. The anneal process 238 may be implemented with oxygen gas, so that the silicon dioxide layer 228 increases in thickness due to thermal oxidation of silicon in the substrate 201. The implanted boron diffuses into the substrate 201 and becomes activated to form a p-type anode 215 of the Zener diode 213 in the area for the Zener-triggered transistor 204, and to form the p-type body 216 in the area for the LDNMOS transistor 205. The implanted arsenic diffuses into the substrate 201 and becomes activated to form an n-type cathode 214 of the Zener diode 213 in the area for the Zener-triggered transistor 204, and to form the n-type source 217 in the area for the LDNMOS transistor 205. The p-type anode 215 extends further into the substrate 201 from the component surface 203 than the n-type cathode 214, in part due to boron having a higher diffusion coefficient than arsenic, at the temperature of the substrate 201 during the anneal process 238.[0029] Forming the p-type anode 215 and the n-type cathode 214 of the Zener diode 213 concurrently with the p-type body 216 and the n-type source 217 of the LDNMOS transistor 205 may advantageously reduce fabrication cost and fabrication complexity of the semiconductor device 200.[0030] FIG. 3 is a cross section of another example semiconductor device which includes a Zener-triggered transistor. The semiconductor device 300 includes a substrate 301, which includes a p-type semiconductor material 302. The p-type semiconductor material 302 may extend to a component surface 303 in locations in the semiconductor device 300. The semiconductor device 300 of this example includes a Zener-triggered transistor 304, contacting the component surface 303, and an LDNMOS transistor 305.[0031] The Zener-triggered transistor 304 of this example is manifested as a vertical NPN bipolar junction transistor 304. The Zener-triggered transistor 304 includes a first current node 306 of n-type semiconductor material, manifested as a collector 306 of the vertical NPN bipolar junction transistor 304, located in the substrate 301, and extending to the component surface 303. In this example, the first current node 306 includes a first segment 306a and a second segment
306b which is separate from the first segment 306a, as depicted in FIG. 3. The Zener-triggered transistor 304 includes a second current node 307 of n-type semiconductor material, manifested as an emitter 307 of the vertical NPN bipolar junction transistor 304. The emitter 307 includes at least a portion of an NBL 311 located in the substrate 301 below the collector 306. The semiconductor device 300 may include n-type regions 339, sometimes referred to as n-type sinkers 339, extending from the emitter 307 to the component surface 303, to provide an electrical connection to the emitter 307. The p-type semiconductor material 302 provides a base 308 of the vertical NPN bipolar junction transistor 304, located between the collector 306 and the emitter 307. The semiconductor device 300 may include p-type base contact regions 309, similar in function to the p-type base contact regions 109 of FIG. 1. The semiconductor device 300 may include field oxide 310 at the component surface 303 to laterally separate components of the semiconductor device 300.[0032] The Zener-triggered transistor 304 includes a first Zener diode 313a that is vertically integrated into the first segment 306a of the first current node 306, and a second Zener diode 313b that is vertically integrated into the second segment 306b of the first current node 306. The first Zener diode 313a includes a first n-type cathode 314a that contacts the first segment 306a, and includes a first p-type anode 315a that contacts the first n-type cathode 314a and the p-type semiconductor material 302 in the base 308. The first n-type cathode 314a is laterally surrounded by the first segment 306a, and the first p-type anode 315a is located under the first n-type cathode 314a, so that the first n-type cathode 314a is between the first p-type anode 315a and the component surface 303. The second Zener diode 313b includes a second n-type cathode 314b that contacts the second segment 306b, and includes a second p-type anode 315b that contacts the second n-type cathode 314b and the p-type semiconductor material 302 in the base 308. The second n-type cathode 314b is laterally surrounded by the second segment 306b, and the second p-type anode 315b is located under the second n-type cathode 314b, so that the second n-type cathode 314b is between the second p-type anode 315b and the component surface 303. The first n-type cathode 314a, the first p-type anode 315a, the second n-type cathode 314b, and the second p-type anode 315b have the dopant species and densities described in reference to the Zener diode 113 of FIG. 1.[0033] The LDNMOS transistor 305 includes an n-type source 317 in the substrate 301 and a p-type body 316 in the substrate 301, extending under and laterally around the n-type source 317,
as indicated in FIG. 3. The n-type source 317 and the p-type body 316 have the dopant species and densities described in reference to the LDNMOS transistor 105 of FIG. 1. The LDNMOS transistor 305 may further include an n-type drain 318 in the substrate 301, a gate dielectric layer 319 on the component surface 303, a gate 320 on the gate dielectric layer 319, with gate sidewall spacers 321 on sides of the gate 320, and an n-type drain contact region 322 in the n-type drain 318; similar to corresponding elements of the LDNMOS transistor 105 of FIG. 1.[0034] The semiconductor device 300 may include metal silicide 323 at the component surface 303 on the collector 306, the n-type regions 339, and the p-type base contact regions 309 of the Zener-triggered transistor 304, and on the p-type body 316, the n-type source 317, and the n-type drain contact region 322 of the LDNMOS transistor 305. The metal silicide 323 on the first segment 306a of the first current node 306 may be laterally separated from the first Zener diode 313a by a silicide block layer 324, and similarly, the metal silicide 323 on the second segment 306b of the first current node 306 may be laterally separated from the second Zener diode 313b by the silicide block layer 324. The metal silicide 323 on the n-type drain contact region 322 may be laterally separated from the gate 320 by the silicide block layer 324. The semiconductor device 300 may have a dielectric layer 325 over the component surface 303, with contacts 326 disposed through the dielectric layer 325, and interconnects 327 on the dielectric layer 325, making electrical connections to the contacts 326.[0035] A breakdown potential of the first Zener diode 313a and a breakdown potential of the second Zener diode 313b are both lower than a breakdown potential between the first current node 306 and the second current node 307. When a positive electrical pulse is applied to the first current node 306 with respect to the second current node 307, the first Zener diode 313a may break down, inducing current through the first segment 306a, and through the base 308 and the emitter 307. Having the metal silicide 323 on the first segment 306a laterally separated from the first Zener diode 313a by the silicide block layer 324 may provide resistance in the first segment 306a, so that a potential difference between the second segment 306b and the second current node 307 does not drop below the breakdown potential of the second Zener diode 313b before the second Zener diode 313b can break down, inducing current through the second segment 306b, and through the base 308 and the emitter 307. A similar process may occur if the second Zener diode 313b breaks down first. Thus, laterally separating the metal silicide 323 from the first Zener diode 313a and the second Zener diode 313b may advantageously reduce current
crowding through the Zener-triggered transistor 304.[0036] FIG. 4 is a cross section of a further example semiconductor device which includes a Zener-triggered transistor. The semiconductor device 400 includes a substrate 401, which includes a p-type semiconductor material 402. The p-type semiconductor material 402 may extend to a component surface 403 in locations in the semiconductor device 400. The semiconductor device 400 of this example includes a Zener-triggered transistor 404, contacting the component surface 403, and an LDNMOS transistor 405. The semiconductor device 400 may include field oxide 410 at the component surface 403 to laterally separate components of the semiconductor device 400.[0037] The Zener-triggered transistor 404 of this example is manifested as a grounded gate n-channel metal oxide semiconductor (GGNMOS) transistor 404. The Zener-triggered transistor 404 includes a first current node 406 of n-type semiconductor material, manifested as a drain 406 of the GGNMOS transistor 404. The Zener-triggered transistor 404 includes a second current node 407 of n-type semiconductor material, manifested as a source 407 of the GGNMOS transistor 404. The p-type semiconductor material 402 provides a body region 441 of the GGNMOS transistor 404. The body region 441 laterally separates the drain 406 from the source 407. The semiconductor device 400 may include p-type contact regions 409 to provide low resistance connections to the body region 441. The Zener-triggered transistor 404 also includes a gate dielectric layer 442 on the component surface 403 over the body region 441, extending partway over the drain 406 and the source 407. The Zener-triggered transistor 404 further includes a gate 443 on the gate dielectric layer 442 over the body region 441; the gate 443 may extend partway over the drain 406 and the source 407. Gate sidewall spacers 421 may be disposed on sides of the gate 443.[0038] The Zener-triggered transistor 404 may be electrically isolated in a vertical direction by an NBL 411. The Zener-triggered transistor 404 may further be electrically isolated in lateral directions by n-type sinkers 439 extending from the NBL 411 to the component surface 403. The NBL 411 may be biased with respect to the p-type semiconductor material 402 to reduce leakage current from the p-type semiconductor material 402.[0039] The Zener-triggered transistor 404 includes a Zener diode 413 that is vertically integrated into the first current node 406. The Zener diode 413 includes an n-type cathode 414 that contacts the first current node 406, and includes a p-type anode 415 that contacts the n-type
cathode 414 and the p-type semiconductor material 402 in the body region 441. The n-type cathode 414 is laterally surrounded by the first current node 406, and the p-type anode 415 is located under the n-type cathode 414, so that the n-type cathode 414 is between the p-type anode 415 and the component surface 403. The n-type cathode 414 and the p-type anode 415 have the dopant species and densities described in reference to the Zener diode 113 of FIG. 1.[0040] The LDNMOS transistor 405 includes an n-type source 417 in the substrate 401 and a p-type body 416 in the substrate 401, extending under and laterally around the n-type source 417, as indicated in FIG. 4. The n-type source 417 and the p-type body 416 have the dopant species and densities described in reference to the LDNMOS transistor 105 of FIG. 1. The LDNMOS transistor 405 may further include an n-type drain 418 in the substrate 401, a gate dielectric layer 419 on the component surface 403, a gate 420 on the gate dielectric layer 419, with the gate sidewall spacers 421 on sides of the gate 420, and an n-type drain contact region 422 in the n-type drain 418; similar to corresponding elements of the LDNMOS transistor 105 of FIG. 1.[0041] The semiconductor device 400 may include metal silicide 423 at the component surface 403 on the drain 406, the source 407, the gate 443, the p-type contact regions 409, and the n-type sinkers 439 of the Zener-triggered transistor 404, and on the p-type body 416, the n-type source 417, and the n-type drain contact region 422 of the LDNMOS transistor 405. The semiconductor device 400 may have a dielectric layer 425 over the component surface 403, with contacts 426 disposed through the dielectric layer 425, and interconnects 427 on the dielectric layer 425, making electrical connections to the contacts 426. In this example, the source 407, the body region 441 and the gate 443 are electrically coupled together through the metal silicide 423, the contacts 426 and the interconnects 427.[0042] A breakdown potential of the Zener diode 413 is lower than a breakdown potential between the first current node 406 and the second current node 407. A positive electrical pulse applied to the first current node 406 with respect to the second current node 407 may induce breakdown in the Zener diode 413, inducing current through the Zener diode 413 to turn on a parasitic bipolar transistor in parallel to the GGNMOS transistor 404. The first current node 406 of the Zener-triggered transistor 404 provides a collector of the parasitic bipolar transistor, the body region 441 of the Zener-triggered transistor 404 provides a base of the parasitic bipolar transistor, and the second current node 407 of the Zener-triggered transistor 404 provides an emitter of the parasitic bipolar transistor. In one version of this example, the NBL 411 may be
connected to the source 407 of the GGNMOS transistor 404, to provide an extended second current node 407 of the Zener-triggered transistor 404. Connecting the NBL 411 to the source 407 of the GGNMOS transistor 404 may enable vertical current flow through the Zener-triggered transistor 404 to provide additional current capacity. Having the Zener diode 413 vertically integrated in the first current node 406 may accrue the advantages described in reference to FIG. 1.[0043] FIG. 5A through FIG. 5C are cross sections of a semiconductor device which includes a Zener-triggered transistor, depicted in stages of another example method of formation. Referring to FIG. 5A, the semiconductor device 500 includes a substrate 501, such as a semiconductor wafer. The substrate 501 includes a p-type semiconductor material 502 which includes primarily silicon, extending to a component surface 503 in locations in the semiconductor device 500. Field oxide 510 may be formed at the component surface 503 to laterally separate elements of the semiconductor device 500. A silicon dioxide layer 528 may be formed on the component surface 503 to protect the p-type semiconductor material 502 during subsequent process steps.[0044] The semiconductor device 500 includes an area for a first Zener-triggered transistor 504 and an area for a second Zener-triggered transistor 544. An implant mask 529 is formed over the silicon dioxide layer 528. The implant mask 529 exposes a first area for a first Zener diode 513 in the area for the first Zener-triggered transistor 504, and exposes a second area for a second Zener diode 545 in the area for the second Zener-triggered transistor 544. The first area exposed by the implant mask 529 for the first Zener diode 513 may have a first lateral width 530 that is less than 500 nanometers, for example, 400 nanometers to 500 nanometers, to provide a first effective dose of subsequently-implanted boron ions 532 and arsenic ions 535. The second area exposed by the implant mask 529 for the second Zener diode 545 may have a second lateral width 546 that is less than the first lateral width 530, for example, 250 nanometers to 350 nanometers, to provide a second effective dose of the subsequently-implanted boron ions 532 and arsenic ions 535. The first lateral width 530 is the shorter of two perpendicular lateral dimensions of the first area exposed by the implant mask 529, and the second lateral width 546 is the shorter of two perpendicular lateral dimensions of the second area exposed by the implant mask 529.[0045] The boron ions 532 are implanted through the silicon dioxide layer 528 into the substrate 501 in the first area exposed by the implant mask 529 and in the second area exposed
by the implant mask 529, to form a first Zener anode implanted region 533 in the substrate 501 under the first area and to form a second Zener anode implanted region 547 in the substrate 501 under the second area. The arsenic ions 535 are implanted through the silicon dioxide layer 528 into the substrate 501 in the first area exposed by the implant mask 529 and in the second area exposed by the implant mask 529, to form a first Zener cathode implanted region 536 in the substrate 501 under the first area, and to form a second Zener cathode implanted region 548 in the substrate 501 under the second area. The second effective dose of the boron ions 532 in the second Zener anode implanted region 547 may be less than the first effective dose of the boron ions 532 in the first Zener anode implanted region 533, as a result of the second lateral width 546 being less than the first lateral width 530. Similarly, the second effective dose of the arsenic ions 535 in the second Zener cathode implanted region 548 may be less than the first effective dose of the arsenic ions 535 in the first Zener cathode implanted region 536. The implant mask 529 is removed after the boron ions 532 and the arsenic ions 535 are implanted.[0046] Referring to FIG. 5B, the substrate 501 is heated by an anneal process 538 to activate and diffuse the implanted boron and implanted arsenic. The implanted boron diffuses into the substrate 501 and becomes activated to form a first p-type anode 515 of the first Zener diode 513 in the area for the first Zener-triggered transistor 504, and to form a second p-type anode 549 of the second Zener diode 545 in the area for the second Zener-triggered transistor 544. The implanted arsenic diffuses into the substrate 501 and becomes activated to form a first n-type cathode 514 of the first Zener diode 513 in the area for the first Zener-triggered transistor 504, and to form a second n-type cathode 550 of the second Zener diode 545 in the area for the second Zener-triggered transistor 544. The first p-type anode 515 extends further into the substrate 501 from the component surface 503 than the first n-type cathode 514, and the second p-type anode 549 extends further into the substrate 501 from the component surface 503 than the second n-type cathode 550, in part due to boron having a higher diffusion coefficient than arsenic, at the temperature of the substrate 501 during the anneal process 538. Forming the first p-type anode 515 and the first n-type cathode 514 of the first Zener diode 513 concurrently with the second p-type anode 549 and the second n-type cathode 550 of the second Zener diode 545 may advantageously reduce fabrication cost and fabrication complexity of the semiconductor device 500.[0047] Referring to FIG. 5C, formation of the semiconductor device 500 is continued to form
the first Zener-triggered transistor 504 and the second Zener-triggered transistor 544. In this example, the first Zener-triggered transistor 504 and the second Zener-triggered transistor 544 are implemented as lateral NPN bipolar junction transistors. The first Zener-triggered transistor 504 includes a first current node 506 of n-type semiconductor material formed in the substrate 501, extending to the component surface 503, and a second current node 507 of n-type semiconductor material formed in the substrate 501, extending to the component surface 503. The first current node 506 is implemented as a collector 506 of the first Zener-triggered transistor 504, and the second current node 507 is implemented as an emitter 507 of the first Zener-triggered transistor 504. A portion of the p-type semiconductor material 502 under the first current node 506 and the second current node 507 provides a first base 508 of the first Zener-triggered transistor 504. The semiconductor device 500 may include first p-type base contact regions 509 contacting the first base 508.[0048] The first Zener diode 513 is vertically integrated in the first current node 506 of the first Zener-triggered transistor 504, so that the first n-type cathode 514 contacts the first current node 506, and the first p-type anode 515 contacts the first n-type cathode 514 and the first base 508. The first p-type anode 515 has a first anode lateral width 551, which may be less than 1 micron, due to the first lateral width 530 of FIG. 5A being less than 500 nanometers.[0049] The second Zener-triggered transistor 544 includes a first current node 552 of n-type semiconductor material formed in the substrate 501, extending to the component surface 503, and a second current node 553 of n-type semiconductor material formed in the substrate 501, extending to the component surface 503. The first current node 552 is implemented as a collector 552 of the second Zener-triggered transistor 544, and the second current node 553 is implemented as an emitter 553 of the second Zener-triggered transistor 544. A portion of the p-type semiconductor material 502 under the first current node 552 and the second current node 553 provides a second base 554 of the second Zener-triggered transistor 544. The semiconductor device 500 may include second p-type base contact regions 555 contacting the second base 554.[0050] The second Zener diode 545 is vertically integrated in the first current node 552 of the second Zener-triggered transistor 544, so that the second n-type cathode 550 contacts the first current node 552, and the second p-type anode 549 contacts the second n-type cathode 550 and the second base 554. The second p-type anode 549 has a second anode lateral width 556, which is less than the first anode lateral width 551, due to the second lateral width 546 of FIG. 5A
being less than the first lateral width 530 of FIG. 5A.[0051] The second Zener diode 545 may have a second breakdown potential that is higher than a first breakdown potential of the first Zener diode 513, as a result of the second effective dose of the boron ions 532 of FIG. 5A in the second Zener anode implanted region 547 of FIG. 5A being less than the first effective dose of the boron ions 532 in the first Zener anode implanted region 533 of FIG. 5A, and the second effective dose of the arsenic ions 535 of FIG. 5A in the second Zener cathode implanted region 548 of FIG. 5A being less than the first effective dose of the arsenic ions 535 in the first Zener cathode implanted region 536 of FIG. 5A. Thus, by adjusting the first lateral width 530 and second lateral width 546 in the implant mask 529 of FIG. 5A, the breakdown potentials of the first Zener diode 513 and the second Zener diode 545 may be provided with desired values for a specific application in the semiconductor device 500.[0052] FIG. 6 is a circuit diagram of an example semiconductor device including a Zener-triggered transistor in an application. The semiconductor device 600 includes a ground node 657 and an input/output (I/O) node 658. The ground node 657 may be manifested as a semiconductor material in a substrate of the semiconductor device 600, for example, corresponding to the p-type semiconductor material 102 of FIG. 1. The I/O node 658 may be manifested as a wire bond pad or bump bond pad of the semiconductor device 600. The semiconductor device 600 includes the Zener-triggered transistor 604, which may be manifested as an NPN bipolar junction transistor 604 as indicated in FIG. 6. Other manifestations of the Zener-triggered transistor 604, such as an NMOS transistor, are within the scope of this example. The Zener-triggered transistor 604 includes a first current node 606 and a second current node 607. A Zener diode 613 is vertically integrated into the first current node 606, for example as described in any of the examples herein. In this example, an n-type cathode 614 of the Zener diode 613 contacts the first current node 606, and a p-type anode 615 of the Zener diode 613 contacts a base 608 of the Zener-triggered transistor 604.[0053] A positive electrical pulse on the I/O node 658 with respect to the ground node 657 may induce breakdown in the Zener diode 613, inducing current through the Zener diode 613 to turn on the Zener-triggered transistor 604. The Zener-triggered transistor 604 may thus prevent voltage transients on the I/O node 658 from rising significantly above a breakdown potential of the Zener diode 613, and so protect components in the semiconductor device 600 that are electrically coupled to the I/O node 658.
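As an informal illustration of the trigger behavior just described (not part of the disclosure), a first-order behavioral model can be written in a few lines of Python; the breakdown potential and on-state clamp voltage below are hypothetical placeholder values.

```python
# Illustrative sketch: first-order behavioral model of the Zener-triggered
# clamp of paragraphs [0052]-[0053]. Numeric values are assumptions.

def clamp_response(v_io: float,
                   zener_breakdown_v: float = 6.2,    # assumed breakdown potential
                   clamp_on_v: float = 1.5) -> float:  # assumed on-state clamp voltage
    """Return the voltage seen at the I/O node after the protection reacts.

    Below breakdown the Zener diode is off, no base current flows, and the
    transistor stays off, so the transient passes through unchanged. At or
    above breakdown the Zener conducts, the transistor turns on, and the
    node is pulled down toward the transistor's on-state voltage.
    """
    if v_io < zener_breakdown_v:
        return v_io       # clamp inactive; protected circuits see the pulse
    return clamp_on_v     # clamp active; the transient is shunted to ground

# Example: a 12 V ESD-like pulse is clamped, a 3 V signal is untouched.
assert clamp_response(12.0) == 1.5
assert clamp_response(3.0) == 3.0
```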
[0054] FIG. 7 is a circuit diagram of an example semiconductor device including a Zener-triggered transistor in another application. The semiconductor device 700 includes a snubber circuit 759 having a switch 760 coupled between an input port 761 of the snubber circuit 759 and a filter 762 of the snubber circuit 759. The filter 762 may be manifested as a resistor-capacitor (RC) low-pass filter 762, as indicated in FIG. 7. The filter 762 is coupled between the switch 760 and an output port 763 of the snubber circuit 759. The snubber circuit 759 includes the Zener-triggered transistor 704 coupled between the switch 760 and the output port 763. The Zener-triggered transistor 704 may be manifested as a GGNMOS transistor 704, as indicated in FIG. 7. Other manifestations of the Zener-triggered transistor 704, such as an NPN bipolar junction transistor, are within the scope of this example. The Zener-triggered transistor 704 includes a first current node 706 and a second current node 707. The first current node 706 is coupled to the output port 763, and the second current node 707 is coupled to the input port 761. A Zener diode 713 is vertically integrated into the first current node 706, for example as described in any of the examples herein. In this example, an n-type cathode 714 of the Zener diode 713 contacts the first current node 706, and a p-type anode 715 of the Zener diode 713 contacts a body 741 of the Zener-triggered transistor 704.[0055] A positive electrical pulse on the output port 763 with respect to the input port 761 may induce breakdown in the Zener diode 713, inducing current through the Zener diode 713 to turn on a parasitic bipolar transistor of the Zener-triggered transistor 704. The Zener-triggered transistor 704 may thus prevent voltage transients on the output port 763 from rising significantly above a breakdown potential of the Zener diode 713, and so protect components in the semiconductor device 700 that are electrically coupled to the input port 761.[0056] Various features of the examples described herein may be combined in other manifestations of example semiconductor devices. In one example, the Zener-triggered transistors of FIG. 1 and FIG. 4 may have segmented first current nodes with separate Zener diodes, as described in reference to FIG. 3. In another example, the structure of FIG. 4 may include a silicide block layer similar to those shown in FIG. 1 or FIG. 3. Conversely, the structures of FIG. 1 or FIG. 3 may be free of the silicide block layer 124 or 324, respectively. In a further example, the structure of FIG. 5A through FIG. 5C may include an NBL similar to those shown in FIG. 1, FIG. 3, or FIG. 4. Conversely, the structures of FIG. 1 or FIG. 4 may be free of the NBL 111 or 411, respectively.
[0057] While various embodiments of this description have been described above, these embodiments have been presented by way of example only and not limitation. Numerous changes to the described embodiments can be made in accordance with this description without departing from the spirit or scope of the description. Thus, the breadth and scope of the present invention should not be limited by any of the above described embodiments. Rather, the scope of the description should be defined in accordance with the following claims and their equivalents. |
Techniques for enhancing machine learning (ML) model execution. The technique includes determining an amount of memory (604) used to process layers (602) of a machine learning network having multiple layers, smoothing (652) the amount of memory used to process the layers of the machine learning network based on a number of layers, identifying change layers (654) where the smoothed amount of memory used changes more than a memory change threshold amount, grouping the layers of the machine learning network into a first layer grouping based on the identified change layers, and outputting the first layer grouping. |
CLAIMSWhat is claimed is:1. A method comprising: determining an amount of memory used to process layers of a machine learning network having multiple layers; smoothing the amount of memory used to process the layers of the machine learning network based on a number of layers; identifying change layers where the smoothed amount of memory used changes more than a memory change threshold amount; grouping the layers of the machine learning network into a first layer grouping based on the identified change layers; and outputting the first layer grouping.2. The method of claim 1, further comprising: modeling the machine learning network based on the first layer grouping; associating a first cost with the first layer grouping; generating a second layer grouping by adjusting a group boundary of the first layer grouping; modeling the machine learning network based on the second layer grouping; associating a second cost with the second layer grouping; and outputting a lower cost layer grouping based on a comparison between the first cost and the second cost.3. The method of claim 2, wherein the first and second costs are based on at least one of expected number of memory accesses or processing cycles.4. The method of claim 2, wherein the group boundary is adjusted within a predefined range of values around the group boundary.5. The method of claim 1, wherein the first layer grouping comprises a first set of layers and a second set of layers.6. The method of claim 5, wherein a first number of layers of the first set of layers differs from a second number of layers of the second set of layers.7. The method of claim 1, further comprising: determining a minimum number of tiles for the layers of the first layer grouping based on the amount of memory used by the layers;
determining a number of tiles for a last layer of the first layer grouping based on the minimum number of tiles; and determining the number of tiles for other layers of the first layer grouping based on the number of tiles for the last layer.8. A non-transitory program storage device comprising instructions stored thereon to cause one or more processors to: determine an amount of memory used to process layers of a machine learning network having multiple layers; smooth the amount of memory used to process the layers of the machine learning network based on a number of layers; identify change layers where the smoothed amount of memory used changes more than a memory change threshold amount; group the layers of the machine learning network into a first layer grouping based on the identified change layers; and output the first layer grouping.9. The non-transitory program storage device of claim 8, wherein the instructions further cause the one or more processors to: model the machine learning network based on the first layer grouping; associate a first cost with the first layer grouping; generate a second layer grouping by adjusting a group boundary of the first layer grouping; model the machine learning network based on the second layer grouping; associate a second cost with the second layer grouping; and output a lower cost layer grouping based on a comparison between the first cost and the second cost.10. The non-transitory program storage device of claim 9, wherein the first and second costs are based on at least one of expected number of memory accesses or processing cycles.11. The non-transitory program storage device of claim 9, wherein the group boundary is adjusted within a predefined range of values around the group boundary.12. The non-transitory program storage device of claim 8, wherein the first layer grouping comprises a first set of layers and a second set of layers.13. The non-transitory program storage device of claim 12, wherein a first number of layers of the first set of layers differs from a second number of layers of the second set of layers.14. The non-transitory program storage device of claim 8, wherein the instructions further cause the one or more processors to: determine a minimum number of tiles for the layers of the first layer grouping based on the amount of memory used by the layers; determine a number of tiles for a last layer of the first layer grouping based on the minimum number of tiles; and determine the number of tiles for other layers of the first layer grouping based on the number of tiles for the last layer.15. A device, comprising: a memory; and one or more processors operatively coupled to the memory, wherein the one or more processors are configured to execute non-transitory instructions causing the one or more processors to: determine an amount of memory used to process layers of a machine learning network having multiple layers; smooth the amount of memory used to process the layers of the machine learning network based on a number of layers; identify change layers where the smoothed amount of memory used changes more than a memory change threshold amount; group the layers of the machine learning network into a first layer grouping based on the identified change layers; and output the first layer grouping.16. 
The device of claim 15, wherein the instructions further cause the one or more processors to: model the machine learning network based on the first layer grouping; associate a first cost with the first layer grouping; generate a second layer grouping by adjusting a group boundary of the first layer grouping; model the machine learning network based on the second layer grouping; associate a second cost with the second layer grouping; and output a lower cost layer grouping based on a comparison between the first cost and the
second cost.17. The device of claim 16, wherein the first and second costs are based on at least one of expected number of memory accesses or processing cycles.18. The device of claim 16, wherein the group boundary is adjusted within a predefined range of values around the group boundary.19. The device of claim 15, wherein the first layer grouping comprises a first set of layers and a second set of layers.20. The device of claim 15, wherein the instructions further cause the one or more processors to: determine a minimum number of tiles for the layers of the first layer grouping based on the amount of memory used by the layers; determine a number of tiles for a last layer of the first layer grouping based on the minimum number of tiles; and determine the number of tiles for other layers of the first layer grouping based on the number of tiles for the last layer. |
ANALYTIC TECHNIQUES FOR IMPROVED SUPER TILING MACHINE LEARNING PROCESSINGBACKGROUND[0001] Machine learning (ML) is becoming an increasingly important part of the computing landscape. Machine learning is a type of artificial intelligence (AI) and ML helps enable a software system to learn to recognize patterns from data without being directly programmed to do so. Neural networks (NN) are a type of ML which utilize a set of linked and layered functions (e.g., nodes, neurons, etc.) which are weighted to evaluate input data. In some NNs, sometimes referred to as convolution neural networks (CNNs), convolution operations may be performed in NN layers based on inputs received and weights. A convolution operation is a mathematical transformation applied to two functions to produce a third function which expresses how the shape of one function is modified by the second function. Examples of CNNs include deconvolutional neural networks, pooling neural networks, up-sample neural networks, deep neural networks, etc. CNNs are often used in a wide array of applications, typically for recognition and classification, such as image recognition and classification, prediction and recommendation systems, speech and language recognition and translation, etc.[0002] As ML becomes increasingly useful, there is a desire to execute complex ML techniques, such as NNs and CNNs, efficiently in devices with relatively limited compute and memory resources, such as embedded, or other low-power devices. To help efficiently run a given ML model on target hardware resources, the ML model may be analyzed and optimized to run using super tiling to tailor the ML model for the target hardware resources to be used.SUMMARY[0003] This disclosure relates to a technique for enhancing ML model execution. The technique includes determining an amount of memory used to process layers of a machine learning network having multiple layers, smoothing the amount of memory used to process the layers of the machine learning network based on a number of layers, identifying change layers where the smoothed amount of memory used changes more than a memory change threshold amount, grouping the layers of the machine learning network into a first layer grouping based on the identified change layers, and
outputting the first layer grouping.[0004] Another aspect of the present disclosure relates to a non-transitory program storage device comprising instructions stored thereon to cause one or more processors to: determine an amount of memory used to process layers of a machine learning network having multiple layers, smooth the amount of memory used to process the layers of the machine learning network based on a number of layers, identify change layers where the smoothed amount of memory used changes more than a memory change threshold amount, group the layers of the machine learning network into a first layer grouping based on the identified change layers, and output the first layer grouping.[0005] Another aspect of the present disclosure relates to a device, comprising: a memory, and one or more processors operatively coupled to the memory, wherein the one or more processors are configured to execute non-transitory instructions causing the one or more processors to: determine an amount of memory used to process layers of a machine learning network having multiple layers, smooth the amount of memory used to process the layers of the machine learning network based on a number of layers, identify change layers where the smoothed amount of memory used changes more than a memory change threshold amount, group the layers of the machine learning network into a first layer grouping based on the identified change layers, and output the first layer grouping. BRIEF DESCRIPTION OF THE DRAWINGS[0006] For a detailed description of various examples, reference will now be made to the accompanying drawings in which:[0007] FIG. 1 illustrates a dataflow through an example CNN, in accordance with aspects of the present disclosure.[0008] FIG. 2 illustrates tiling for a tensor, in accordance with aspects of the present disclosure. [0009] FIG. 3A is a block diagram illustrating super tile processing, in accordance with aspects of the present disclosure.[0010] FIG. 3B is a block diagram illustrating super tile processing resource usage, in accordance with aspects of the present disclosure.[0011] FIG. 4 illustrates super tile processing for multiple super tile passes, in accordance with aspects of the present disclosure.[0012] FIGs. 5A and 5B illustrate super tile processing for multiple super tile passes across multiple super tile groups, in accordance with aspects of the present disclosure.[0013] FIG. 6A is a line graph plotting the total volume of memory used for each layer of a CNN,
in accordance with aspects of the present disclosure.[0014] FIG. 6B is a line graph plotting a windowed total volume of memory for layers of a CNN, in accordance with aspects of the present disclosure.[0015] FIGs. 7A and 7B are flowcharts illustrating group boundary determination, in accordance with aspects of the present disclosure.[0016] FIG. 8 is a flow diagram illustrating a technique for determining a layer grouping, in accordance with aspects of the present disclosure.[0017] FIG. 9 is a block diagram of an example of a computing device, in accordance with aspects of the present disclosure.DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS[0018] FIG. 1 illustrates a dataflow through an example CNN 100, in accordance with aspects of the present disclosure. The CNN 100 shown here includes two layers, first layer 102 and second layer 104. While this example CNN includes two layers, it may be understood that other CNNs can include any number of layers. The layers represent a mathematical function performed on an input tensor and result in an output tensor. Examples of the mathematical functions include convolution/deconvolution functions, pooling, elementwise add, concatenate, etc. The tensors are generalized matrices of N dimensions and include one or more nodes, which contain values. As an example, for an image, a node may describe a pixel and may include values for an x and y coordinate of the pixel as well as values for the R, G, and B channels describing the color of the pixel. The tensor may have a height axis, here represented by H1, H2, and H3, and a width axis W1, W2, and W3 corresponding to the dimensions of the image, as well as a channel axis, represented by C1, C2, and C3, corresponding to the color channel information (RGB information). In this example, a first tensor 106 is input into the first layer 102 along with a set of operational parameters 108 to produce a second tensor 110. Similarly, the second tensor 110 may be input into the second layer 104, processed based on operational parameters 112, and output as a third tensor 114. The operational parameters 108 and 112 may include, for example, weights to apply to the processing of a given layer. Generally, the initial tensor, such as the first tensor 106, is the input into the CNN 100, and the last tensor, here the third tensor 114, is the output from the CNN 100. Tensors in between the input and output tensor, here the second tensor 110, may be referred to as intermediate tensors.[0019] In certain cases, a tensor may be split into tiles for processing, as shown in tensor 200 of FIG. 2, where the tiles may be sized based, for example, on the pipeline design of the processor.
For example, a tile may include one or more nodes based on a number of parallel pipelines available on a processor. Of note, going forward, tensors are shown as two-dimensional structures for the sake of clarity. In common implementations, all tiles of a given tensor are processed by a particular layer before processing starts on the next tensor and layer. For example, referring back to FIG. 1, processing of the first tensor 106 in the first layer 102 may be completed for the entire first tensor 106 and output to the second tensor 110 before processing of the second tensor 110 in the second layer 104.[0020] Generally, it is advantageous to be able to store as much information required to execute a CNN in a memory as close as possible to the processor to help performance. Generally, memory close to a processor may be referred to as on-chip memory, while memory that is relatively further from the processor may be referred to as system memory, main memory, or random-access memory (RAM), and even further memory may be referred to as storage, disk, or hard disk. Examples of on-chip memory include static random-access memory (SRAM) and cache memory. Cache memory may further be divided into levels, such as level 1 (L1), level 2 (L2), and level 3 (L3), with higher numbers generally indicating that the cache is further away (e.g., slower to access) from the processor. As an example of processing an intermediate input tensor in a corresponding layer, the input tensor may be stored in a level 3 (L3) memory cache, while weights, CNN model, and input tile and output information are stored in a level 2 (L2) cache. As portions of the tensor are processed, output may be stored temporarily in L2 cache and then output to another intermediate tensor, for example, in L3 cache as the input tensor is processed. Outputting the next tensor into the L3 cache helps prepare the system to process the next layer. In certain cases, the initial input tensor and final output may be stored in system memory. Storing and accessing intermediate tensors entirely in cache helps reduce the need to access external memory, such as system memory, like double data rate (DDR) memory, which can take a number of clock cycles (e.g., processing cycles) and reduce processing efficiency as the processor may need to stall while waiting for data.[0021] While the size of a memory may be fixed, the size required by an intermediate tensor can vary. For example, a CNN may have a half megabyte (MB) sized input tensor and may be associated with two intermediate tensors of 5 MB and 12 MB, respectively. If, for example, a near processor memory such as an L3 cache is only 8 MB, the 12 MB intermediate tensor will not be able to entirely fit within the L3 cache and a portion of the 12 MB intermediate tensor will likely
be stored in slower system memory. [0022]-[0023] [Text garbled in source; FIG. 3A illustrates super tile processing and FIG. 3B illustrates super tile processing resource usage.] A first portion 328 of a first tensor held in on-chip memory 322 is processed in a first layer 330 in conjunction with first ML network information 332 with model and/or weight information to produce a first layer output 334. The first layer output 334 is written back into the on-chip memory 322, overwriting portions of the on-chip memory 322 which were storing the first portion 328 to obtain a second portion 336 of a second tensor. In certain cases, the second portion 336 may be a different size than the first portion 328. When the second portion 336 is smaller in size as compared to the first portion 328, the remaining portions 338 of the first portion 328 may be discarded. In certain cases, output from the first layer 330 may be dynamically written over corresponding parts of the first portion 328 in the on-chip memory 322 as the output is generated. Once generated, the second portion 336 is processed in a second layer 340 in conjunction with second
ML network information 342 to produce a second layer output 344, which is written back into the on-chip memory 322, overwriting portions of the on-chip memory 322 which were storing the second portion 336 to obtain a third portion 346 of a third tensor.[0024] FIG. 4 illustrates super tile processing for multiple super tile passes 400, in accordance with aspects of the present disclosure. This example includes a layer group with at least the four intermediate tensors, a first tensor 402A-402D, second tensor 404A-404D, third tensor 406A-406D, and fourth tensor 408A-408D, which are shown here in a single dimension with 20 tiles, with other dimensions omitted for clarity. In this example, the layers have also been omitted. Of note, as the tensors 402-408 in this example are intermediate tensors, the first tensor 402 is an output tensor from a separate input tensor (not shown) and corresponding layer. As before, the first tensor 402 is input into a first layer to generate the second tensor 404, which is input into a second layer to generate the third tensor 406, which is input into a third layer to generate the fourth tensor 408. Four super tile passes are used to generate the complete fourth tensor 408, which may be input into another layer, for example, another layer outside of this layer group.[0025] Each of the layers discussed in this example is a 3x3 convolution layer. In a 3x3 convolution layer, each tile is processed along with one neighboring tile in each dimension for the layer. Each tensor includes two zero pads, represented by the -1 and 20 entries. These zero pads may be used as neighboring tiles when processing tiles on the edge of a given tensor. Here at the end of each super tile pass, the fourth tensor 408 has five completed tiles 410. As each layer is a 3x3 convolution layer, tile 5 of the third tensor 406A is used to generate tile 4 of the fourth tensor 408A. Likewise, tile 6 of the second tensor 404A is used to generate tile 5 of the third tensor 406A, and so forth. After the first super tile pass is completed, the second super tile pass is performed. As with the first super tile pass, five completed tiles 412 are generated after the second super tile pass is completed. As shown in FIG. 4, there may be overlapping areas between the super tile passes. For example, tiles 4 and 5 for the third tensor 406B may be used to generate the five completed tiles 412 of the fourth tensor 408B. Tiles 4 and 5 of the third tensor 406B were previously computed in the first super tile pass and stored. When generating the third tensor 406B, tiles 4 and 5 of the third tensor 406B are reloaded rather than being recomputed. Similarly, tiles 5 and 6 of the second tensor 404B and tiles 6 and 7 of the first tensor 402B may also be reloaded. In certain cases, a number of tiles included within a super tile may vary across super tile passes. For example, for the fourth super tile pass, the first tensor 402D may have two tiles, rather than eight tiles as in the
other super tile passes. In cases where the size of the tensors varies across the layer group, the size of the largest tensor may be used as a part of determining a size for the super tiles. In this example, as each prior layer requires more tiles to be calculated than the next, the size, and hence the memory space, required to calculate the tiles of the first tensor 402A for the first pass would be a limiting factor to the size of the overall super tile. That is, the size of the super tile (e.g., tile height) may be selected to allow the calculations needed for the first tensor 402A in the first pass to fit into a memory, such as the L3 cache.[0026] FIGs. 5A and 5B illustrate super tile processing 500 for multiple super tile passes across multiple super tile groups, in accordance with aspects of the present disclosure. Generally, a CNN may have any number of layers and in some cases, a particular CNN may have more layers than can be practically run as a single super tile. For example, for CNNs with relatively large input tensors and relatively small output tensors, it may be beneficial to execute the layers of the CNN in multiple super tiles, rather than a single super tile. In some cases, the layers of the CNN may be grouped into super tile groups 502A and 502B (collectively 502) with one or more layers grouped into each super tile group 502.[0027] Each super tile group may be associated with certain super tile group properties. These super tile group properties may include properties such as a number of layers in the super tile group, tile heights associated with the layers, and a context memory. In this example, a first super tile group 502A includes four layers 504, here layers 1, 2, 3, and 4. A second super tile group 502B, in this example, also includes four layers 518, here layers 5, 6, 7, and 8. It may be understood that each super tile group may have a different number of layers. Each layer may be associated with one or more tile heights. In some cases, each layer may be associated with a first tile height, a normal tile height, and a last tile height. The first tile height may indicate a number of tiles for each layer during the first run. In some cases, the first run may be a virtual or prewarming super tile pass, here labeled as pass 0 506. The virtual super tile pass may not produce a completed tile in the last tensor of the layer group. Rather, the virtual super tile pass computes a set of tiles which overlaps with tiles of the next, normal super tile pass and stores these (e.g., backed up) computed tiles for the next pass. In this example, the first tile height for the first layer is 3, for the second layer is 2, for the third layer is 1, and for the fourth layer is 0.[0028] The normal tile height may indicate a number of tiles for each layer during a steady state run of the super tile passes, here labeled as pass 1 508, pass 2 510, and pass 3 512. In this example,
the normal tile height for all of the layers is 5. It may be understood that the normal tile height for each layer may be different. The last tile height indicates a number of tiles for each layer for the last pass, here pass 4 514, of the super tile run. In this example, the last tile height for the first layer is 2, for the second layer is 3, for the third layer is 4, and for the fourth layer is 5.[0029] The context memory super tile group property refers to the stored or backed up tiles 516 for the passes. In this example, the context memory size is six tiles.[0030] Super tile groups and associated super tile group properties may be defined for a CNN to help tailor the execution of the CNN for certain hardware resources. Each CNN may have a unique combination of a number of layers, tensor dimensions for each layer, and the function each layer performs. For example, certain layers, such as layers performing a pooling function, convolution function, etc., may be associated with a down-sampling property where the layer takes an input tensor of a certain dimension and outputs a tensor with reduced dimensions. Other layers, such as layers performing a resizing function, deconvolution function, etc., may be associated with an upsampling property where the layer takes an input tensor of a certain dimension and outputs a tensor with increased dimensions.[0031] To help tailor the execution of the CNN for a given hardware resource, the CNN may be modeled to determine a total volume of memory (e.g., an amount of memory) needed for each layer of the CNN. This total volume of memory may include all memory needed to execute the layer of the CNN, including memory needed for the input tensor(s), output tensor(s), backed up tiles, operational parameters needed for the layer, etc. Super tile groups may be defined based on this total volume of memory.[0032] FIG. 6A is a line graph 600 plotting the total volume of memory used for each layer of a CNN, in accordance with aspects of the present disclosure. In FIG. 6A, 64 layers 602 of a CNN are shown on the X-axis and a total volume of memory used 604 per layer, in megabytes, is shown on the Y-axis. In this example, the total volume of memory used by layers of the CNN may vary quite a bit between layers. In accordance with aspects of the present disclosure, this local noise may be addressed by smoothing out the total volume of memory used across layers within a window.[0033] FIG. 6B is a line graph 650 plotting a windowed total volume of memory for layers of a CNN, in accordance with aspects of the present disclosure. Windowing is performed across the layers of the CNN to generate the windowed total volume data shown by plot 652. In some cases, a windowed total volume for a layer i may be a maximum total volume from layer i to layer i + W, where
W is a window size. For example, in the line graph 650, the window size may be set to 8 and thus the windowed total volume of layer 1 is the maximum total volume for layers 1 through 9. Referring back to line graph 600, layer 5 has the maximum total volume for layers 1 through 9, at 25 MB, so the windowed total volume of layer 1 is 25 MB. As another example, at layer 6, the windowed total volume of layer 6 is the maximum total volume for layers 6 through 14, or about 9 MB based on layers 8, 9, and 12. In some cases, W may be a predetermined value. For example, W may be a coded default value, received from a user, etc. In some cases, W may be dynamically determined based on one or more factors, for example, as a function of a total number of layers in the CNN, the types of layers (e.g., convolutional, deconvolutional, pooling, etc.), as a function of a number of certain types of layers, layer ordering, determined based on a cost function and modeling, etc.[0034] Based on the windowed total volume data, points where the total volume changes by a certain amount, which may be referred to as a volume change factor, may be identified. These identified points may be used to determine initial boundaries for the super tiling groups. In the example line graph 650, points may be identified between layers 5 and 6, layers 12 and 13, layers 24 and 25, and layers 49 and 50. While in this example there is a total volume change between layers 33 and 34 and layers 54 and 55, the total volume change at these points may be below the volume change factor and thus these points are not identified. Thus, five super tiling groups may be defined as including layers [1:5], [6:12], [13:24], [25:49], and [50:64]. If a relatively smaller volume change factor had been used, additional super tiling groups may be defined, such as [1:5], [6:12], [13:24], [25:49], [50:54], [55:64] or [1:5], [6:12], [13:24], [25:33], [34:49], [50:54], [55:64]. In certain cases, the volume change factor may be predetermined, for example, as a default value, received from a user, etc. In other cases, the volume change factor may be determined based on one or more factors, for example, based on a cache or memory size, a maximum total volume across all layers, the ratio of the maximum total volume to the minimum total volume, etc. The volume change factor may be chosen to balance noise reduction and a number of points identified. In some cases, multiple volume change factors may be used to determine multiple sets of super tiling groups for comparison, for example, via performance simulations (e.g., modeling).[0035] After the super tiling groups are identified, the super tiling groups may be refined. In some cases, super tiling groups may be refined based on a cost minimization performed across super tiling group variants. For example, an initial super tiling group variant may be the super tiling groups as identified based on the total volume changes. A cost factor may be determined and associated with
this initial super tiling group variant. This cost factor may be determined based on performance simulations (e.g., modeling) of the CNN being executed using the initial super tiling group variant. The performance simulations may account for memory access latencies, processing speed, and power consumption for a target hardware resource (e.g., the hardware resource for which CNN execution is being optimized). The cost factor is then associated with the initial super tiling group variant. A variant of the super tiling group is then determined by moving one or more group boundaries of the super tiling group within a refinement range N of the initial group boundary. In some cases, the refinement range may be both positive and negative and this range may be relatively small. As an example, an initial group boundary 654 may be identified between layers 24 and 25 between initial super tiling groups [13:24], [25:33], and a refinement range of N=1. The two determined variants of the initial group boundary then may be [13, 23], [24, 33], and [13, 25], [26, 33]. These determined variants may then be evaluated via performance simulations and associated with a cost factor. The variant with the relatively smallest cost factor may be selected as a final super tiling group configuration. In some cases, each group boundary of the initial group boundaries may be refined. In some cases, only group boundaries with a total volume change above or below a certain threshold size may be refined. In some cases, such as when two super tiling groups are within the refinement range of each other, the two super tiling groups may be merged. In some cases, different step sizes for the refinement range may be used, for example, adjusting the group boundary by two layers rather than one layer.[0036] In accordance with aspects of the present disclosure, a tile height and number of tiles may be configured for a super tiling group. In some cases, this determination may be based on back propagation from a tile height for the last layer of the super tiling group, such as layer 4 in the example shown in FIGs. 5A and 5B. To determine the tile height via back propagation, the volume of memory needed for each layer may be determined. Based on the volume of memory needed for each layer and an amount of memory available on the target hardware resource, a minimum number of tiles (e.g., passes) needed to process the layer while keeping memory usage of the tile within the amount of memory available on the target hardware resource may be determined. Once the minimum number of tiles is determined for each layer, a largest number of the minimum number of tiles for the layers is identified. In some cases, the number of tiles for layers of the group may be constant, except for the first and last pass. Based on this largest number of the minimum number of tiles, tile heights for the last layer may be determined for the first pass, last pass, and normal passes. Based on the tile heights
for the last layer, tile heights for the layer before the last layer can be determined. This process is then repeated until tile heights for the first layer are determined.[0037] FIGs. 7A and 7B are flowcharts illustrating group boundary determination, in accordance with aspects of the present disclosure. At block 702, a window size is determined. In some cases, the window size may be predetermined and retrieved, for example, from a memory. In some cases, the window size may be determined based on one or more factors, such as the total number of layers of a CNN, cost function, etc. At block 704, windowed total volume of the layers of the CNN may be determined based on the window size. For example, a layer may have a windowed total volume based on the maximum total volume of the layers within the window following that layer. At block 706, a change in the windowed total volume between a layer and a next layer is compared to a volume change factor. If the windowed total volume change is less than the volume change factor, at block 708, then the next layer, and layer after the next layer, are evaluated at block 706. If the windowed total volume change is greater than the volume change factor, at block 710, the boundary between the layers is marked as an initial super tile group boundary. At block 712, if there are additional layers, the additional layers are looped through. At block 714, if there are additional volume change factors to consider, the layers of the CNN are looped through again using the additional volume change factors. At block 716, one or more sets of marked initial super tile group boundaries may be output.[0038] At block 718, if there are sets of super tile groups that have not been refined, at block 720, the CNN may be modeled to determine a cost factor for a super tile group boundary within a refinement range. For example, a CNN may be modeled by executing the CNN with simulated inputs and using the super tile grouping being modeled. The modeling may use simulated target hardware, such as by using a virtual machine, and record operational information, such as memory usage, latencies of the memories being used, processor usage, power consumption, etc. In some cases, each variant of a super tile group boundary within a refinement range may be simulated and a cost factor associated with the variant. At block 722, the variant with the lowest cost factor of the variants of the super tile group boundary within the refinement range may be selected as the super tile group boundary. At block 724, if there are additional super tile group boundaries to evaluate, execution returns to 720 to evaluate those additional super tile group boundaries. If there are no more super tile group boundaries to evaluate, execution returns to 718. If there are no additional sets of super tile groups to evaluate at block 718, then, if there are multiple sets of refined super tile groups, at block
726, cost factors across the multiple sets of refined super tile groups are compared to select a set of refined super tile groups with a lowest cost factor at block 728. Otherwise, the refined super tile groups are output at block 730.[0039] FIG. 8 is a flow diagram illustrating a technique 800 for determining a layer grouping, in accordance with aspects of the present disclosure. At block 802, an amount of memory used to process the layers of a machine learning network having multiple layers is determined. For example, a CNN may be executed with simulated inputs to determine memory usage by layers of the CNN. At block 804, the amount of memory used to process the layers of the machine learning network may be smoothed based on a number of layers. For example, the amount of memory used to process the layers of the CNN may be smoothed using a window. The window may have a window size indicating a number of layers included in the window. In some cases, the smoothed amount of memory may be based on the largest amount of memory used by any layer within the rolling window. At block 806, layers where the smoothed amount of memory used changes more than a memory change threshold amount are identified. For example, points where the smoothed amount of memory used changes by more than a volume change factor may be identified as boundaries. At block 808, the layers of the machine learning network may be grouped into a first layer grouping based on the identified layers. For example, super tiling groups may be defined based on the identified boundaries. At block 810, the first layer grouping is output.[0040] As illustrated in FIG. 9, device 900 includes a processing element such as processor 905 that contains one or more hardware processors, where each hardware processor may have a single or multiple processor cores. Examples of processors include but are not limited to a central processing unit (CPU) or a microprocessor. Although not illustrated in FIG. 9, the processing elements that make up processor 905 may also include one or more other types of hardware processing components, such as graphics processing units (GPUs), application specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), and/or digital signal processors (DSPs). In certain cases, processor 905 may be configured to perform the tasks described in conjunction with FIGs. 7-8.[0041] The processor 905 is operatively and communicatively coupled to on-chip memory 925, such as a cache memory, SRAM, registers, etc. With respect to cache memory, cache memory may include one or more L1 caches, one or more L2 caches, and one or more L3 caches. The L1 cache may be integrated in a package with the processor 905. The L2 and/or L3 caches may also be
integrated in the processor package or may be in a package separate from the processor package. In certain cases, the L2 and/or L3 caches, or portions thereof, may be integrated with a memory controller, which helps manage memory traffic to the processor 905.[0042] FIG. 9 illustrates that memory 910 may be operatively and communicatively coupled to processor 905. Memory 910 may be a non-transitory computer readable storage medium (e.g., non-transitory program storage device) configured to store various types of data. For example, memory 910 may include one or more volatile devices such as random-access memory (RAM). In certain cases, the SRAM and circuits as described in FIGs. 4-8 may be part of the memory 910. Non-volatile storage devices 920 (e.g., non-transitory program storage device) can include one or more disk drives, optical drives, solid-state drives (SSDs), tape drives, flash memory, electrically erasable programmable read-only memory (EEPROM), and/or any other type of memory designed to maintain data for a duration of time after a power loss or shut down operation. The non-volatile storage devices 920 may also be used to store programs that are loaded into the RAM when such programs are executed. [0043] Persons of ordinary skill in the art are aware that software programs may be developed, encoded, and compiled in a variety of computing languages for a variety of software platforms and/or operating systems and subsequently loaded and executed by processor 905. In one example, the compiling process of the software program may transform program code written in a programming language to another computer language such that the processor 905 is able to execute the programming code. For example, the compiling process of the software program may generate an executable program that operates an ML network.[0044] After the compiling process, the encoded instructions may then be loaded as computer executable instructions or process steps to processor 905 from storage 920, from memory 910, and/or embedded within processor 905 (e.g., via a cache or on-board ROM). Processor 905 may be configured to execute the stored instructions or process steps in order to perform instructions or process steps to transform the computing device into a non-generic, particular, specially programmed machine or apparatus. Stored data, e.g., data stored by a storage device 920, may be accessed by processor 905 during the execution of computer executable instructions or process steps to instruct one or more components within the computing device 900. Storage 920 may be partitioned or split into multiple sections that may be accessed by different software programs. For example, storage 920 may include a section designated for specific purposes, such as storing program instructions or data for updating software of the computing device 900. In one example, the software to be updated
includes the ROM, or firmware, of the computing device. In certain cases, the computing device 900 may include multiple operating systems. For example, the computing device 900 may include a general-purpose operating system which is utilized for normal operations. The computing device 900 may also include another operating system, such as a bootloader, for performing specific tasks, such as upgrading and recovering the general-purpose operating system, and allowing access to the computing device 900 at a level generally not available through the general-purpose operating system. Both the general-purpose operating system and the other operating system may have access to the section of storage 920 designated for specific purposes.[0045] The one or more communications interfaces may include a radio communications interface for interfacing with one or more radio communications devices. In certain cases, elements coupled to the processor may be included on hardware shared with the processor. For example, the communications interfaces 925, storage 920, and memory 910 may be included, along with other elements such as the digital radio, in a single chip or package, such as in a system on a chip (SOC). The computing device 900 may also include input and/or output devices, not shown, examples of which include sensors, cameras, human input devices, such as a mouse, keyboard, or touchscreen, monitors, display screens, tactile or motion generators, speakers, lights, etc.[0046] In this description, the term “couple” may cover connections, communications, or signal paths that enable a functional relationship consistent with this description. For example, if device A generates a signal to control device B to perform an action: (a) in a first example, device A is coupled to device B by direct connection; or (b) in a second example, device A is coupled to device B through intervening component C if intervening component C does not alter the functional relationship between device A and device B, such that device B is controlled by device A via the control signal generated by device A.[0047] Modifications are possible in the described embodiments, and other embodiments are possible, within the scope of the claims.
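Because paragraphs [0033]-[0035] and the flow of FIG. 8 together describe a concrete algorithm, a compact sketch may help. The following Python code is an illustrative reading of those steps, not the patented implementation: the window size W, the volume change factor, the example memory volumes, and the stand-in cost function are all hypothetical.

```python
# Illustrative sketch of the windowed smoothing, change-layer grouping, and
# boundary refinement described above. Volumes are per-layer totals in MB.
from typing import Iterator, List, Tuple

def windowed_volumes(volumes: List[float], w: int) -> List[float]:
    """Paragraph [0033]: the windowed total volume for layer i is the maximum
    total volume from layer i to layer i + W (clipped at the last layer)."""
    n = len(volumes)
    return [max(volumes[i:min(i + w + 1, n)]) for i in range(n)]

def group_layers(volumes: List[float], w: int,
                 change_factor: float) -> List[Tuple[int, int]]:
    """Paragraph [0034]: mark a boundary wherever the smoothed volume changes
    by more than the volume change factor; return 1-based inclusive groups."""
    smoothed = windowed_volumes(volumes, w)
    cuts = [i + 1 for i in range(len(smoothed) - 1)
            if abs(smoothed[i + 1] - smoothed[i]) > change_factor]
    starts, ends = [0] + cuts, cuts + [len(volumes)]
    return [(s + 1, e) for s, e in zip(starts, ends)]

def boundary_variants(groups: List[Tuple[int, int]],
                      n: int = 1) -> Iterator[List[Tuple[int, int]]]:
    """Paragraph [0035]: yield groupings with one boundary shifted by up to
    +/- n layers; a caller-supplied cost function picks the cheapest."""
    yield groups
    for i in range(len(groups) - 1):
        for shift in (s for s in range(-n, n + 1) if s != 0):
            end = groups[i][1] + shift
            if groups[i][0] <= end < groups[i + 1][1]:
                variant = list(groups)
                variant[i] = (groups[i][0], end)
                variant[i + 1] = (end + 1, groups[i + 1][1])
                yield variant

# Toy run: a sharp drop after layer 3 splits six layers into two groups.
groups = group_layers([25, 25, 25, 9, 9, 9], w=1, change_factor=5)
print(groups)  # [(1, 3), (4, 6)]
# The lambda below is a placeholder for the performance simulation; a real
# cost would reflect memory accesses and processing cycles (claim 3).
best = min(boundary_variants(groups), key=lambda g: max(e - s + 1 for s, e in g))
```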
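Similarly, the back-propagated tile determination of paragraph [0036] (compare claim 7 and the overlaps of FIG. 4) can be sketched. The code below assumes every layer in the group is a 3x3 convolution, so each layer counted back from the group output needs one extra tile of lead-in per pass; that assumption, the memory budget, and the helper names are illustrative rather than taken from the disclosure.

```python
# Illustrative sketch of tile-count back propagation under a 3x3-convolution
# assumption; earlier-pass tiles are treated as reloaded, not recomputed.
import math
from typing import List

def passes_needed(layer_volumes_mb: List[float], budget_mb: float) -> int:
    """Minimum number of tiles (e.g., passes) so that the worst-case layer
    fits the on-chip memory budget; the largest minimum governs the group."""
    return max(math.ceil(v / budget_mb) for v in layer_volumes_mb)

def new_tiles_per_pass(total: int, layers: int, passes: int) -> List[List[int]]:
    """Newly computed tiles per pass, one row per layer (first layer of the
    group first). Layer k counted back from the group output must stay k
    tiles ahead, so its first pass is larger and its last pass smaller."""
    base = math.ceil(total / passes)
    table = []
    for k in range(layers - 1, -1, -1):
        row, done = [], 0
        for p in range(passes):
            target = min((p + 1) * base + k, total)  # tiles needed so far
            row.append(target - done)
            done = target
        table.append(row)
    return table

# 20 tiles, 4 layers, 4 passes, mirroring FIG. 4: the first layer computes
# 8 tiles on the first pass but only 2 on the last.
print(new_tiles_per_pass(20, 4, 4))
# [[8, 5, 5, 2], [7, 5, 5, 3], [6, 5, 5, 4], [5, 5, 5, 5]]
print(passes_needed([25.0, 12.0, 5.0], budget_mb=8.0))  # 4
```
|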
Various embodiments include devices and methods for controlling a robotic vehicle. Each electronic speed controller (ESC) of the robotic vehicle may receive open loop flight control information from a flight controller or another processing device of the robotic vehicle. In some embodiments, each ESC may store the provided open loop flight control information in a memory. In response to detecting a loss of control signals from the flight controller, each ESC may access the stored open loop flight control information and perform control of a motor associated with each ESC based on the open loop flight control information. The open loop flight control information may be a sequence of motor control instructions to be performed over a period of time, or parameterized information or vehicle state information that enables each ESC to generate a sequence of motor control instructions. |
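To make the claimed behavior concrete, the following Python sketch models a single ESC channel that stores open loop flight control information and replays it upon detecting a loss of control signals. The Motor interface, the timeout threshold, and the (duration, throttle) plan format are invented for illustration; this is a behavioral sketch, not flight software.

```python
# Illustrative sketch of one ESC channel with an open loop failsafe.
import time
from typing import List, Tuple

class ElectronicSpeedController:
    """Behavioral model only; the motor object is an assumed interface."""
    SIGNAL_TIMEOUT_S = 0.1  # assumed loss-of-signal detection threshold

    def __init__(self, motor) -> None:
        self.motor = motor
        self.open_loop_plan: List[Tuple[float, float]] = []  # (duration_s, throttle)
        self.last_command_time = time.monotonic()
        self.failsafe_engaged = False

    def store_open_loop_info(self, plan: List[Tuple[float, float]]) -> None:
        # Per claim 9: open loop flight control information received from the
        # flight controller is stored in memory accessible by the ESC.
        self.open_loop_plan = plan

    def on_control_signal(self, throttle: float) -> None:
        # Normal operation: apply the flight controller's command and record
        # the time so a loss of control signals can be detected later.
        self.last_command_time = time.monotonic()
        self.failsafe_engaged = False
        self.motor.set_throttle(throttle)

    def tick(self) -> None:
        # Per claim 1: on detecting a loss of control signals, fall back to
        # the stored sequence of motor control instructions (open loop).
        signal_lost = time.monotonic() - self.last_command_time > self.SIGNAL_TIMEOUT_S
        if signal_lost and not self.failsafe_engaged:
            self.failsafe_engaged = True
            for duration_s, throttle in self.open_loop_plan:
                self.motor.set_throttle(throttle)
                time.sleep(duration_s)
```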
CLAIMSWhat is claimed is:1. A method for controlling a robotic vehicle, comprising:receiving, by each electronic speed controller (ESC) of the robotic vehicle, open loop flight control information; and performing, by each ESC, control of a motor associated with each ESC based on the open loop flight control information in response to detecting a loss of control signals from a flight controller.2. The method of claim 1, wherein performing, by each ESC, control of a motor associated with each ESC based on the open loop flight control information comprises: adjusting, by each ESC, a closed-loop control of the motor associated with the ESC based on the open loop flight control information.3. The method of claim 1, wherein:the open loop flight control information received by each ESC comprises a sequence of motor control instructions; and performing control of a motor associated with each ESC based on the open loop flight control information comprises executing the sequence of motor control instructions.4. The method of claim 1, wherein:the open loop flight control information for each ESC comprises a parameterization of a time sequence of motor control instructions; and performing, by each ESC, control of a motor associated with each ESC based on the open loop flight control information comprises each ESC:determining a sequence of motor control instructions based upon the parameterization of the time sequence of motor control instructions; and executing the sequence of motor control instructions.5. The method of claim 1, wherein performing open loop flight control of the motor associated with each ESC based on the open loop flight control information comprises: determining, by each ESC, motor control instructions for the motor associated with the ESC based on the open loop flight control information; and executing, by each ESC, the determined motor control instructions.6. The method of claim 5, wherein:the open loop flight control information received by each ESC comprises vehicle state information; and determining motor control instructions for the motor associated with the ESC based on the open loop flight control information comprises determining, by each ESC, motor control instructions for executing an appropriate response to a loss of control signals event based on the vehicle state information.7. The method of claim 6, further comprising periodically receiving, from the flight controller, the vehicle state information by each ESC.8. The method of claim 1, further comprising:determining, by a processing device of the robotic vehicle, vehicle state information;determining, by the processing device, the open loop flight control information for each ESC based on the vehicle state information; and providing, by the processing device, the determined open loop flight control information to each ESC.9. The method of claim 8, wherein receiving, by each ESC of the robotic vehicle, open loop flight control information from the flight controller comprises each ESC:receiving the open loop flight control information from the processing device; and storing the received open loop flight control information in memory accessible by the ESC.10. 
The method of claim 8, wherein:providing, by the processing device, the determined open loop flight control information to each ESC comprises storing, by the processing device, the determined open loop flight control information in memory that is accessible by each ESC; and receiving, by each ESC, open loop flight control information comprises each ESC having access to the memory in which the processing device stores the determined open loop flight control information.11. The method of claim 8, wherein determining, by the processing device, the open loop flight control information for each ESC based on the vehicle state information comprises:determining, by the processing device, an appropriate response of the robotic vehicle to a loss of control signals event based on the vehicle state information;determining, by the processing device, a sequence of motor control instructions for each ESC based on the determined appropriate response; andproviding, by the processing device, the respective sequence of motor control instructions to each ESC,wherein receiving open loop flight control information by each ESC comprises receiving the respective sequence of motor control instructions from the processing device.12. The method of claim 11, wherein determining a sequence of motor control instructions for each ESC based on the determined appropriate response comprises performing, by the processing device, a forward simulation of robotic vehicle behavior in response to a sequence of ESC instructions beginning from the vehicle state information to determine a sequence of motor control instructions that will bring the robotic vehicle to an orientation that will enable the robotic vehicle to achieve a controlled landing or minimize damage to the robotic vehicle.13. A robotic vehicle, comprising:a memory;a processing device; andat least one electronic speed controller (ESC) coupled to the memory and the processing device and configured to control at least one motor, wherein each ESC is configured to:receive open loop flight control information; and perform control of a motor associated with each ESC based on the open loop flight control information in response to detecting a loss of control signals from a flight controller.14. The robotic vehicle of claim 13, wherein each ESC is further configured to perform control of the at least one motor associated with each ESC based on the open loop flight control information by adjusting a closed-loop control of the motor associated with the ESC based on the open loop flight control information.15. The robotic vehicle of claim 13, wherein:the open loop flight control information received by each ESC comprises a sequence of motor control instructions; andeach ESC is further configured to perform control of the at least one motor associated with each ESC based on the open loop flight control information by executing the sequence of motor control instructions.16. The robotic vehicle of claim 15, wherein:the open loop flight control information for each ESC comprises a parameterization of a time sequence of motor control instructions; andeach ESC is further configured to perform control of the at least one motor associated with each ESC based on the open loop flight control information by:determining a sequence of motor control instructions based upon the parameterization of the time sequence of motor control instructions; andexecuting the sequence of motor control instructions.17. 
The robotic vehicle of claim 13, wherein each ESC is further configured to perform open loop flight control of the at least one motor associated with each ESC based on the open loop flight control information by:determining motor control instructions for the at least one motor based on the open loop flight control information; andexecuting the determined motor control instructions.18. The robotic vehicle of claim 17, wherein: the open loop flight control information received by each ESC comprises vehicle state information; andeach ESC is further configured to determine motor control instructions for the at least one motor based on the open loop flight control information by determining motor control instructions for executing an appropriate response to a loss of control signals event based on the vehicle state information.19. The robotic vehicle of claim 13, wherein the processing device is a flight controller of the robotic vehicle and the at least one ESC is configured to periodically receive vehicle state information from the flight controller.20. The robotic vehicle of claim 13, wherein the processing device is configured to: determine vehicle state information;determine the open loop flight control information for each ESC based on the vehicle state information; andprovide the determined open loop flight control information to each ESC.21. The robotic vehicle of claim 20, wherein each ESC is further configured to:receive the determined open loop flight control information from the processing device; andstore the received open loop flight control information in memory accessible by the ESC.22. The robotic vehicle of claim 20, wherein:the processing device is further configured to provide the determined open loop flight control information to each ESC by storing the determined open loop flight control information in memory that is accessible by the ESC; andeach ESC is further configured to receive the open loop flight control information by having access to the memory in which the processing device stores the determined open loop flight control information.23. The robotic vehicle of claim 20, wherein the processing device is further configured to: determine an appropriate response of the robotic vehicle to a loss of control signals event based on the vehicle state information;determine a sequence of motor control instructions for each ESC based on the determined appropriate response; andprovide the respective sequence of motor control instructions to each ESC, and wherein the open loop flight control information received by each ESC comprises the respective motor control instructions.24. The robotic vehicle of claim 23, wherein the processing device is further configured to determine a sequence of motor control instructions for each ESC based on the determined appropriate response by performing a forward simulation of robotic vehicle behavior in response to a sequence of ESC instructions beginning from the vehicle state information to determine a sequence of motor control instructions that will bring the robotic vehicle to an orientation that will enable the robotic vehicle to achieve a controlled landing or minimize damage to the robotic vehicle.25. An electronic speed controller (ESC) for use in a robotic vehicle configured to: receive open loop flight control information; andperform control of a motor associated with the ESC based on the open loop flight control information in response to detecting a loss of control signals from a flight controller of the robotic vehicle.26. 
The ESC of claim 25, wherein the ESC is further configured to save received open loop flight control information in a memory accessible by the ESC.27. The ESC of claim 25, wherein:the open loop flight control information received by the ESC comprises a parameterization of a time sequence of motor control instructions; andthe ESC is further configured to perform control of a motor associated with each ESC based on the open loop flight control information by:determining a sequence of motor control instructions based upon the parameterization of the time sequence of motor control instructions; andexecuting the sequence of motor control instructions.28. The ESC of claim 25, wherein:the open loop flight control information received by the ESC comprises vehicle state information; andthe ESC is further configured to perform control of a motor associated with the ESC based on the open loop flight control information by determining motor control instructions for executing an appropriate response to a loss of control signals event based on the vehicle state information.29. A processing device for use in a robotic vehicle configured to:determine an appropriate response of the robotic vehicle to a loss of control signals based upon current vehicle state information;generate for each electronic speed controller (ESC) of the robotic vehicle a sequence of motor control instructions which when executed will cause the robotic vehicle to perform the appropriate response to a loss of control signals; andprovide to each ESC the sequence of motor control instructions.30. The processing device of claim 29, wherein the processing device is further configured to generate for each ESC a sequence of motor control instructions which when executed will cause the robotic vehicle to perform the appropriate response to a loss of control signals by performing a forward simulation of robotic vehicle behavior based on the vehicle state information and a sequence of ESC instructions to determine a sequence of predicted next vehicle states and a sequence of motor control instructions that will bring the robotic vehicle to an orientation that will enable the robotic vehicle to achieve a controlled landing or minimize damage to the robotic vehicle. |
CONTROLLING A ROBOTIC VEHICLE FOLLOWING FLIGHT CONTROLLER SIGNAL LOSSCLAIM OF PRIORITY[0001] The present Application for Patent claims priority to U.S. Non-Provisional Patent Application No. 16/053,117, entitled “CONTROLLING A ROBOTIC VEHICLE FOLLOWING FLIGHT CONTROLLER SIGNAL LOSS” filed on August 2, 2018, assigned to the assignee hereof and hereby expressly incorporated by reference herein.BACKGROUND[0002] Robotic vehicles (e.g., “UAVs” or “drones”) are typically controlled by a powerful main processor that handles numerous functions of the robotic vehicle, such as flight control and navigation, processing sensor data (e.g., input from cameras, sonar, gyroscope, accelerometer, etc.), receiving and processing GPS signals, controlling radios for communication, and the like.[0003] Rotorcraft-type robotic vehicles (i.e., robotic vehicles propelled by “helicopter”-style rotors) are in increasingly wide use. The main processor of rotorcraft robotic vehicles includes a flight controller that handles flight operations, among other things. However, many multi-rotor aerial robotic vehicles are dynamically unstable, and during normal operation such robotic vehicles rely on active control by the main (flight) controller to stabilize and close the loop on attitude, position, and velocity control using a variety of onboard sensors such as accelerometers, gyroscopes, barometers, GPS receivers, magnetometers, and other suitable sensors. In order to achieve closed-loop flight control, using a combination of the sensor data, the main processor continuously computes and transmits motor behavior or state information (e.g., power, revolutions per minute (RPM), or other suitable information) to electronic speed controllers (ESCs). The ESCs receive this motor behavior or state information and apply closed-loop motor control techniques to achieve a desired state for each motor, thereby performing the final step in the closed loop flight control.SUMMARY[0004] Various embodiments include methods that may be implemented within processing devices and electronic speed controllers (ESCs) of a robotic vehicle to enable open loop control of the robotic vehicle, for example, following loss of control signals from a flight controller. 
Various embodiments may include receiving, by each ESC of the robotic vehicle, open loop flight control information, and performing, by each ESC, control of a motor associated with each ESC based on the open loop flight control information in response to detecting a loss of control signals from the flight controller. In some embodiments, performing, by each ESC, control of a motor associated with each ESC based on the open loop flight control information may include adjusting, by each ESC, a closed-loop control of the motor associated with the ESC based on the open loop flight control information.[0005] In some embodiments, the open loop flight control information received by each ESC may include a sequence of motor control instructions and performing control of a motor associated with each ESC based on the open loop flight control information may include executing the sequence of motor control instructions.[0006] In some embodiments, the open loop flight control information for each ESC may include a parameterization of a time sequence of motor control instructions, and performing, by each ESC, control of a motor associated with each ESC based on the open loop flight control information may include each ESC determining a sequence of motor control instructions based upon the parameterization of the time sequence of motor control instructions, and executing the sequence of motor control instructions.[0007] In some embodiments, performing open loop flight control of the motor associated with each ESC based on the open loop flight control information may include determining, by each ESC, motor control instructions for the motor associated with the ESC based on the open loop flight control information, and executing, by each ESC, the determined motor control instructions.[0008] In some embodiments, the open loop flight control information received by each ESC may include vehicle state information, and determining motor control instructions for the motor associated with the ESC based on the open loop flight control information may include determining, by each ESC, motor control instructions for executing an appropriate response to a loss of control signals event based on the vehicle state information. Such embodiments may further include periodically receiving, from the flight controller, the vehicle state information by each ESC.[0009] Some embodiments may further include a processing device of the robotic vehicle determining vehicle state information, determining the open loop flight control information for each ESC based on the vehicle state information, and providing the determined open loop flight control information to each ESC. In such embodiments, receiving, by each ESC of the robotic vehicle, open loop flight control information from the flight controller may include each ESC receiving the open loop flight control information from the processing device, and storing the received open loop flight control information in memory accessible by the ESC. 
In such embodiments, providing the determined open loop flight control information to each ESC may include storing, by the processing device, the determined open loop flight control information in memory that is accessible by each ESC, and receiving, by each ESC, open loop flight control information may include each ESC having access to the memory in which the processing device stores the determined open loop flight control information.[0010] In some embodiments, determining the open loop flight control information for each ESC based on the vehicle state information may include the processing device determining an appropriate response of the robotic vehicle to a loss of control signals event based on the vehicle state information, determining a sequence of motor control instructions for each ESC based on the determined appropriate response, and providing the respective sequence of motor control instructions to each ESC, in which receiving open loop flight control information by each ESC may include receiving the respective sequence of motor control instructions from the processing device. In such embodiments, determining a sequence of motor control instructions for each ESC based on the determined appropriate response may include the processing device performing a forward simulation of robotic vehicle behavior in response to a sequence of ESC instructions beginning from the vehicle state information to determine a sequence of motor control instructions that will bring the robotic vehicle to an orientation that will enable the robotic vehicle to achieve a controlled landing or minimize damage to the robotic vehicle.[0011] Further embodiments include a robotic vehicle having a processing device and one or more ESCs configured to perform operations of any of the methods summarized above. Further embodiments include an ESC configured to perform operations of any of the methods summarized above. Further embodiments include a processing device for use in a robotic vehicle configured to perform operations of any of the methods summarized above.BRIEF DESCRIPTION OF THE DRAWINGS [0012] The accompanying drawings, which are incorporated herein and constitute part of this specification, illustrate example embodiments, and together with the general description given above and the detailed description given below, serve to explain the features of various embodiments.[0013] FIG. 1 is a system block diagram of a robotic vehicle operating within a communication system suitable for use with various embodiments.[0014] FIG. 2 is a component block diagram illustrating components of a robotic vehicle suitable for use with various embodiments.[0015] FIG. 3 is a component block diagram illustrating components of a controller suitable for use with robotic vehicles.[0016] FIG. 4 is a component block diagram illustrating components of a robotic vehicle suitable for use with various embodiments.[0017] FIG. 5 is a diagram illustrating vehicle trajectories according to various embodiments.[0018] FIG. 6 is a process flow diagram illustrating a method of controlling a robotic vehicle according to various embodiments.[0019] FIG. 7 is a process flow diagram illustrating a method of controlling a robotic vehicle according to some embodiments.[0020] FIG. 8 is a process flow diagram illustrating a method of controlling a robotic vehicle according to some embodiments.[0021] FIG. 9 is a process flow diagram illustrating an example method of generating open loop control information according to some embodiments.
DETAILED DESCRIPTION[0022] Various embodiments will be described in detail with reference to the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts. References made to particular examples and embodiments are for illustrative purposes and are not intended to limit the scope of the claims.[0023] Various embodiments improve the functions and reliability of robotic vehicles by providing methods, and robotic vehicle flight controllers and ESCs configured to perform the methods, of determining a sequence of fail-safe flight control instructions to be performed by the ESCs in the event of flight controller failure. In various embodiments, in response to detecting a loss of control signals from the flight controller, the ESCs may control one or more associated motors using the determined open loop flight control instructions.[0024] The main or flight controller of a robotic vehicle is typically a robust processing device capable of controlling numerous functions of the robotic vehicle, such as flight control and navigation, processing sensor data (e.g., input from cameras, sonar, gyroscope, accelerometer, etc.), receiving and processing GPS signals, controlling radios for communication, and the like. The main or flight controller may include a robust processing device with memory, data interfaces, avionics sensors and processors, and other components configured to monitor and control various components and functionality of the robotic vehicle. The main controller may be implemented as a “system-on-chip” (SOC), which is a set of interconnected electronic circuits within a single package or assembly typically, but not exclusively, including one or more processors, a memory, a communication interface, and a storage memory interface. Robotic vehicles leverage the capabilities of such a main controller by including increasingly complex hardware components and software-based functionality. As the complexity of robotic vehicle components and functionality increases, so does the likelihood of a hardware or software fault that causes a loss of control signals from the main controller.[0025] Many multi-rotor aerial robotic vehicles are dynamically unstable, and during normal operation such robotic vehicles rely on active control by the main (flight) controller to stabilize and close the loop on attitude, position, and velocity control using a variety of onboard sensors such as accelerometers, gyroscopes, barometers, GPS receivers, magnetometers, and other suitable sensors. In some cases, the main or flight controller controls rotor speeds by continuously sending RPM or other control signals to electronic speed controllers (ESCs) that power and control the motors that drive the rotors. In order to achieve closed-loop flight control, using a combination of the sensor data, the main or flight controller continuously computes and transmits motor behavior or state information (e.g., power, RPM, or other suitable information) to electronic speed controllers (ESCs). The ESCs receive this motor behavior or state information and apply closed-loop motor control techniques to achieve a desired state for each motor, thereby performing the final step in the closed loop flight control. In the event of a loss of control signals (e.g., due to a processor crash, communication bus failure, etc.)
from the main or flight controller, a rotorcraft robotic vehicle may rapidly become unstable. Such instability may lead to loss of controlled flight, erratic maneuvering, and a collision that may injure people or animals, as well as potentially damage the robotic vehicle or other property. Such a loss of control signals may occur for a variety of reasons, including a software issue in the main or flight controller that results in a processor stall or reboot, a failure in the main or flight controller or circuitry that communicates control signals to ESCs, and the like.[0026] Various embodiments provide methods, and ESCs configured to perform the methods, of controlling a robotic vehicle when control signals from a flight controller are interrupted. In various embodiments, from time to time each ESC may receive from the flight controller open loop flight control information, which each ESC may store in a memory associated with the ESC. In response to an interruption of control signals from the flight controller, each ESC may independently access the stored open loop flight control information and independently perform control of an associated motor (i.e., without feedback for flight control behavior) based on the open loop flight control information. As open loop flight control of robotic vehicle motors cannot react to changes in vehicle attitude, such open loop flight control by ESCs may be limited to brief maneuvers to position or posture the robotic vehicle for an emergency landing (e.g., a parachute recovery) or temporary control until control signals are again received from the flight controller (e.g., following a reboot). In some embodiments, each ESC may detect a loss of control signals from the flight controller based on a loss of sensor information from sensors of the robotic vehicle.[0027] As used herein, “open loop flight control” means flight control of a robotic vehicle without the use of sensor feedback to achieve a desired closed loop flight control. For example, without information from accelerometers, gyroscopes, and other sensors, the robotic vehicle may attempt to fly “blindly” (i.e., without sensor information) while ESCs still have control of the individual motors and are aware of their state from motor position sensors or sensor-less commutation techniques. “Open loop flight control information” refers to information that enables an ESC to perform open loop flight control of an associated motor, such as motor control in order to achieve non-trivial open loop flight control behavior. One example of trivial open loop flight control is continuing to execute a last known valid motor command. Another example of trivial open loop flight control is simply stopping the motors. Open loop flight control information is distinguishable from the control of individual motors by ESCs, since ESCs receive information from the motors and perform closed-loop control over the motors while implementing the open loop flight control instructions. In some embodiments, in addition to normal communication that is required for flight operations, a flight controller may provide open loop flight control information to one or more ESCs. 
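By way of illustration only, the following Python sketch shows one way the ESC-side behavior described above might be organized. All names here (EscController, set_motor_rpm) and the 0.1-second loss-of-signal threshold are hypothetical stand-ins chosen for this sketch, not elements of any particular embodiment:

    import time

    CONTROL_TIMEOUT_S = 0.1  # hypothetical loss-of-signal threshold

    class EscController:
        """Hypothetical ESC controller caching open loop flight control info."""

        def __init__(self):
            self.stored_instructions = []            # list of (duration_s, rpm)
            self.last_signal_time = time.monotonic()

        def on_open_loop_info(self, instructions):
            # Each newly received set replaces (overwrites) the stored set.
            self.stored_instructions = list(instructions)

        def on_control_signal(self, rpm, set_motor_rpm):
            # Normal operation: the flight controller closes the flight loop.
            self.last_signal_time = time.monotonic()
            set_motor_rpm(rpm)

        def signals_lost(self):
            return time.monotonic() - self.last_signal_time > CONTROL_TIMEOUT_S

        def run_open_loop_fallback(self, set_motor_rpm):
            # Execute the stored sequence "blindly" (no vehicle sensors); the
            # ESC still closes the loop on its own motor via set_motor_rpm.
            for duration_s, rpm in self.stored_instructions:
                set_motor_rpm(rpm)
                time.sleep(duration_s)
            set_motor_rpm(0.0)  # e.g., stop the motor after the maneuver

In this sketch the stored sequence is executed blindly with respect to vehicle attitude, while set_motor_rpm stands in for the closed-loop control that each ESC continues to maintain over its own motor.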
In some embodiments, the one or more ESCs may store the open loop flight control information and use the open loop flight control information to perform closed-loop control over one or more motors of the robotic vehicle in the event of a loss of flight control signals from the flight controller.[0028] In some embodiments, the open loop flight control information may include motor control instructions. In some embodiments, the motor control instructions may include a series of motor speeds (e.g., in RPM) to be applied to the corresponding motor over a period of time (e.g., 1-2 seconds). In some embodiments, the open loop flight control information may include a sequence of motor control instructions over a period of time similar to normal control instructions received from the flight controller. In some embodiments, the open loop flight control information may include a time-parameterized array of ESC commands that have the same format as a real-time ESC control (e.g., in RPM). For example, the flight controller may pre-determine motor control instructions appropriate for each ESC for responding to a loss of control signals event given the robotic vehicle’s current attitude, speed, and altitude, and send the pre-determined motor control instructions to each ESC (e.g., to a processor of each ESC) for storage in a local memory. The flight controller may determine the motor control instructions for each ESC by determining an appropriate vehicle response to a loss of control signals event based on state information of the robotic vehicle, such as an altitude, pitch, velocity, current motor RPMs, and/or other similar state information, and pre-calculating the sequence of motor control instructions that when executed independently by ESCs will result in the robotic vehicle executing that response.[0029] In some embodiments, the open loop flight control information for each ESC may be formatted as a parameterization of a time sequence of motor control instructions. For example, the flight controller may compress a sequence of motor control instructions, such as by determining a sequence of polynomials that may be used to parameterize the motor control instructions, and by sending to each ESC coefficients of polynomials rather than sending a complete array of motor control instructions.[0030] In some embodiments, each ESC may store the determined motor control instructions in a memory, and in the event of loss of flight controller signals, each ESC may retrieve the motor control instructions and control its associated motor.[0031] In some embodiments, the open loop flight control information provided by the flight controller to each ESC may include state information about the robotic vehicle that enables each ESC to determine on its own an appropriate series of motor controls to execute a suitable response to a loss of control signals event. Such state information may include, for example, the vehicle’s attitude (e.g., pitch, roll, and yaw angles), velocity, altitude, current motor RPMs, and/or other similar state information. Each ESC may store the received state information, and in response to detecting a loss of flight controller signals, each ESC may retrieve the state information and use the state information to generate motor control signals to establish control of its associated motor. 
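As a minimal sketch of the parameterization approach described above, assuming NumPy's least-squares polynomial fit purely for illustration rather than any particular compression scheme an actual flight controller would use, a few polynomial coefficients can stand in for a complete array of motor control instructions:

    import numpy as np

    def parameterize(times, rpms, degree=3):
        # Flight-controller side: compress an RPM-versus-time sequence into a
        # few polynomial coefficients (least-squares fit).
        return np.polyfit(times, rpms, degree)

    def reconstruct(coeffs, times):
        # ESC side: expand the coefficients back into per-timestep RPM commands.
        return np.polyval(coeffs, times)

    # Example: a 2-second maneuver sampled at 50 Hz, compressed to 4 coefficients.
    t = np.linspace(0.0, 2.0, 100)
    rpm_sequence = 8000.0 - 2000.0 * t      # e.g., ramp the motor speed down
    coeffs = parameterize(t, rpm_sequence)  # sent to the ESC instead of 100 samples
    rpm_commands = reconstruct(coeffs, t)   # what the ESC would execute

Here four coefficients take the place of one hundred RPM samples, and the ESC-side reconstruct step expands them back into the per-timestep commands to be executed.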
In some embodiments, the open loop flight control information may enable each ESC to generate motor control instructions for a relatively short period of time (e.g., for one or more seconds).[0032] In some embodiments, the flight controller (or the ESC controller) may determine the motor control instructions to follow a predetermined fail-safe trajectory, place the robotic vehicle in a particular fail-safe state, or to perform a particular fail-safe action. For example, the flight controller (or the ESC controller) may determine the motor control instructions that will bring the robotic vehicle to a level flight attitude, and then turn off the motors. As another example, the flight controller (or the ESC controller) may determine the motor control instructions to reduce a speed of the robotic vehicle and/or level off at a specified altitude. As another example, the flight controller (or the ESC controller) may determine the motor control instructions to ascend to a specified altitude (if appropriate), stop the motors, and deploy a parachute. As another example, the flight controller (or the ESC controller) may determine the motor control instructions to stop the motors in such a manner that the propellers are detached from the motor (e.g., by being unscrewed), and deploy a parachute. Other maneuvers in the event of loss of flight controller signals are possible when implementing various embodiments. [0033] Various embodiments may be implemented within a robotic vehicle operating within a variety of communication systems 100, an example of which is illustrated in FIG. 1. With reference to FIG. 1, the communication system 100 may include a robotic vehicle 102, a base station 104, an access point 106, a communication network 108, and a network element 110.[0034] The base station 104 and the access point 106 may provide wireless communications to access the communication network 108 over a wired and/or wireless communication backhaul 116 and 118, respectively. The base station 104 may include base stations configured to provide wireless communications over a wide area (e.g., macro cells), as well as small cells, which may include a micro cell, a femto cell, a pico cell, and other similar network access points. The access point 106 may be configured to provide wireless communications over a relatively smaller area. Other examples of base stations and access points are also possible.[0035] The robotic vehicle 102 may communicate with the base station 104 over a wireless communication link 112 and with the access point 106 over a wireless communication link 114. The wireless communication links 112 and 114 may include a plurality of carrier signals, frequencies, or frequency bands, each of which may include a plurality of logical channels. The wireless communication links 112 and 114 may utilize one or more radio access technologies (RATs). Examples of RATs that may be used in a wireless communication link include 3GPP Long Term Evolution (LTE), 3G, 4G, 5G, Global System for Mobile Communications (GSM), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Worldwide Interoperability for Microwave Access (WiMAX), Time Division Multiple Access (TDMA), and other cellular RATs used in mobile telephony communication technologies. 
Further examples of RATs that may be used in one or more of the various wireless communication links within the communication system 100 include medium-range protocols such as Wi-Fi, LTE-U, LTE-Direct, LAA, MuLTEfire, and relatively short-range RATs such as ZigBee, Bluetooth, and Bluetooth Low Energy (LE).[0036] The network element 110 may include a network server or another similar network element. The network element 110 may communicate with the communication network 108 over a communication link 122. The robotic vehicle 102 and the network element 110 may communicate via the communication network 108. The network element 110 may provide the robotic vehicle 102 with a variety of information, such as navigation information, weather information, information about environmental conditions, movement control instructions, and other information, instructions, or commands relevant to operations of the robotic vehicle 102.[0037] In various embodiments, a robotic vehicle may include winged or rotorcraft varieties of aerial robotic vehicles. FIG. 2 illustrates an example of an aerial robotic vehicle 200 that utilizes multiple rotors 202 driven by corresponding motors to provide lift-off (or take-off) as well as other aerial movements (e.g., forward progression, ascension, descending, lateral movements, tilting, rotating, etc.). The robotic vehicle 200 is illustrated as an example of a robotic vehicle that may utilize various embodiments, but is not intended to imply or require that various embodiments are limited to aerial robotic vehicles or rotorcraft robotic vehicles. Various embodiments may be used with winged robotic vehicles, land-based autonomous vehicles, and water-borne autonomous vehicles that utilize ESCs.[0038] With reference to FIGS. 1 and 2, the robotic vehicle 200 may be similar to the robotic vehicle 102. The robotic vehicle 200 may include a number of rotors 202, a frame 204, and landing columns 206 or skids. The frame 204 may provide structural support for the motors associated with the rotors 202. The landing columns 206 may support the maximum load weight for the combination of the components of the robotic vehicle 200 and, in some cases, a payload. For ease of description and illustration, some detailed aspects of the robotic vehicle 200 are omitted such as wiring, frame structure interconnects, or other features that would be known to one of skill in the art. For example, while the robotic vehicle 200 is shown and described as having a frame 204 having a number of support members or frame structures, the robotic vehicle 200 may be constructed using a molded frame in which support is obtained through the molded structure. While the illustrated robotic vehicle 200 has four rotors 202, this is merely exemplary and various embodiments may include more or fewer than four rotors 202.[0039] The robotic vehicle 200 may further include a control unit 210 that may house various circuits and devices used to power and control the operation of the robotic vehicle 200. The control unit 210 may include a flight controller 220, a power module 230, sensors 240, one or more cameras 244, an output module 250, an input module 260, and a radio 270. [0040] The flight controller 220 may include a processing device 221 configured with processor-executable instructions to control maneuvering and other operations of the robotic vehicle 200. The processing device 221 may be a multi-core processor or multi-processor assembly. 
The flight controller 220 may also include (e.g., as an SOC) or be coupled to a navigation unit 222, a memory 224, an inertial sensor/gyro/accelerometer unit 226 (which may include an accelerometer, a gyroscope, a magnetometer, an inertial measurement unit, and other similar components), and an avionics module 228, all coupled to the processing device 221. The flight controller 220 and/or the navigation unit 222 may be configured to communicate with a server through a wireless connection (e.g., a cellular data network) to receive data useful in navigation, provide real-time position reports, and assess data.[0041] The avionics module 228 may be coupled to the processing device 221 and/or the navigation unit 222, and may be configured to provide maneuvering control-related information such as altitude, attitude, airspeed, heading, and similar information that the navigation unit 222 may use for navigation purposes, such as dead reckoning between Global Navigation Satellite System (GNSS) position updates. The gyro/accelerometer unit 226 may include an accelerometer, a gyroscope, an inertial sensor, or other similar sensors. The avionics module 228 may include or receive data from the gyro/accelerometer unit 226 that provides data regarding the orientation and accelerations of the robotic vehicle 200 that may be used in navigation and positioning calculations, as well as providing data used in various embodiments for processing images.[0042] The flight controller 220 may further receive additional information from the sensors 240, such as an image sensor or optical sensor (e.g., a sensor capable of sensing visible light, infrared, ultraviolet, and/or other wavelengths of light). The sensors 240 may also include a radio frequency (RF) sensor, a barometer, a humidity sensor, a sonar emitter/detector, a radar emitter/detector, a microphone or another acoustic sensor, a lidar sensor, a time-of-flight (TOF) 3-D camera, or another sensor that may provide information usable by the flight controller 220 for movement operations, navigation and positioning calculations, and determining environmental conditions. The sensors 240 may also include one or more sensors configured to detect temperatures generated by one or more components of the robotic vehicle, such as thermometers, thermistors, thermocouples, positive temperature coefficient sensors, and other sensor components. [0043] The power module 230 may provide power to various components, including the flight controller 220, the sensors 240, the one or more cameras 244, the output module 250, the input module 260, and the radio 270. In addition, the power module 230 may include energy storage components, such as rechargeable batteries. The flight controller 220 may be configured with processor-executable instructions to control the charging of the power module 230 (i.e., the storage of harvested energy), such as by executing a charging control algorithm using a charge control circuit. Alternatively or additionally, the power module 230 may be configured to manage its own charging. The flight controller 220 may be coupled to the output module 250, which may output control signals for managing the motors that drive the rotors 202 and other components.[0044] The robotic vehicle 200 may be controlled through control of the individual motors of the rotors 202 as the robotic vehicle 200 progresses toward a destination. 
The flight controller 220 may receive data from the navigation unit 222 and use such data in order to determine the present position and orientation of the robotic vehicle 200, as well as the appropriate course towards the destination or intermediate sites. In various embodiments, the navigation unit 222 may include a GNSS receiver system (e.g., one or more global positioning system (GPS) receivers) enabling the robotic vehicle 200 to navigate using GNSS signals. Alternatively or in addition, the navigation unit 222 may be equipped with radio navigation receivers for receiving navigation beacons or other signals from radio nodes, such as navigation beacons (e.g., very high frequency (VHF) omni-directional range (VOR) beacons), Wi-Fi access points, cellular network sites, radio stations, remote computing devices, other robotic vehicles, etc.[0045] The radio 270 may be configured to receive navigation signals, such as signals from aviation navigation facilities, etc., and provide such signals to the processing device 221 and/or the navigation unit 222 to assist in robotic vehicle navigation. In various embodiments, the navigation unit 222 may use signals received from recognizable RF emitters (e.g., AM/FM radio stations, Wi-Fi access points, and cellular network base stations) on the ground.[0046] The navigation unit 222 may include a planning application that may perform calculations to plan a path of travel for the robotic vehicle within a volumetric space (“path planning”). In some embodiments, the planning application may perform path planning using information including information about aspects of a task to be performed by the robotic vehicle, information about environmental conditions, an amount of heat that may be generated by one or more components of the robotic vehicle in performing the task, as well as one or more thermal constraints.[0047] The radio 270 may include a modem 274 and a transmit/receive antenna 272. The radio 270 may be configured to conduct wireless communications with a variety of wireless communication devices (e.g., a wireless communication device (WCD) 290), examples of which include a wireless telephony base station or cell tower (e.g., the base station 104), a network access point (e.g., the access point 106), a beacon, a smartphone, a tablet, or another computing device with which the robotic vehicle 200 may communicate (such as the network element 110). The flight controller 220 may establish a bi-directional wireless communication link 294 via the modem 274 and the antenna 272 of the radio 270 and the wireless communication device 290 via a transmit/receive antenna 292. In some embodiments, the radio 270 may be configured to support multiple connections with different wireless communication devices using different radio access technologies.[0048] In various embodiments, the wireless communication device 290 may be connected to a server through intermediate access points. In an example, the wireless communication device 290 may be a server of a robotic vehicle operator, a third-party service (e.g., package delivery, billing, etc.), or a site communication access point. The robotic vehicle 200 may communicate with a server through one or more intermediate communication links, such as a wireless telephony network that is coupled to a wide area network (e.g., the Internet) or other communication devices. 
In some embodiments, the robotic vehicle 200 may include and employ other forms of radio communication, such as mesh connections with other robotic vehicles or connections to other information sources (e.g., balloons or other stations for collecting and/or distributing weather or other data harvesting information).[0049] In various embodiments, the control unit 210 may be equipped with an input module 260, which may be used for a variety of applications. For example, the input module 260 may receive images or data from an onboard camera 244 or sensor, or may receive electronic signals from other components (e.g., a payload).[0050] While various components of the control unit 210 are illustrated as separate components, some or all of the components (e.g., the flight controller 220, the output module 250, the radio 270, and other units) may be integrated together in a single device, circuit board or module, such as an SOC.[0051] FIG. 3 illustrates further components within a robotic vehicle flight controller 220 integrated as an SOC. With reference to FIGS. 1-3, a processing device 221 within the flight controller 220 may include one or more processors or processor cores 314, a working memory 316, a communication interface 318, and a storage memory interface 320. The storage memory interface 320 may be configured to enable the processors 314 to store data to and retrieve data from a storage memory 224, which may be integrated within the flight controller 220 SOC as illustrated or connected as a separate component. The flight controller 220 configured as an SOC may include a communication component 322, which may integrate a radio 270 with a wireless modem 274 configured to connect to an antenna 272 for establishing a wireless communication link, and/or the like.[0052] The flight controller 220 integrated as an SOC may further include a hardware interface 328 configured to enable the processing device 221 to interface with the navigation module 222, inertial sensor/gyroscope/accelerometer module 226, and avionics module 228, as well as communicate with and control various components of a robotic vehicle. In some embodiments, the hardware interface 328 may also provide an output 330 to one or more ESCs to provide open loop flight control information to the ESCs, as further described below.[0053] The processing device 221 may include a variety of different types of processors 314 and processor cores, such as a general purpose processor, a central processing unit (CPU), a digital signal processor (DSP), a graphics processing unit (GPU), an accelerated processing unit (APU), a subsystem processor of specific components of the processing device, such as an image processor for a camera subsystem or a display processor for a display, an auxiliary processor, a single-core processor, and a multicore processor. The processing device 221 may further embody other hardware and hardware combinations, such as a field programmable gate array (FPGA), an application-specific integrated circuit (ASIC), other programmable logic device, discrete gate logic, transistor logic, performance monitoring hardware, watchdog hardware, and time references. Integrated circuits may be configured such that the components of the integrated circuit reside on a single piece of semiconductor material, such as silicon. [0054] The flight controller 220 may include more than one processing device 221, thereby increasing the number of processors 314 and processor cores within the flight controller 220. 
The flight controller 220 may also include other processors (not shown) that are not within the processing device 221. The one or more processors 314 may each be configured for specific purposes that may be the same as or different from other processors 314 of the processing device 221 or flight controller 220 SOC. One or more of the processors 314 and processor cores of the same or different configurations may be grouped together.[0055] The working memory 316 of the processing device 221 may be a volatile or non-volatile memory configured for storing data and processor-executable instructions for access by the processor 314. The flight controller 220 and/or processing device 221 may include one or more storage memories 224 configured to store data for various purposes, including mission-related data (e.g., video data, navigation maps, mission planning, etc.). The working memory 316 may include volatile memories such as random access memory (RAM) or main memory, and cache memory.[0056] Some or all of the components of the flight controller 220 and the processing device 221 may be arranged differently and/or combined while still serving the functions of the various aspects. The flight controller 220 and the processing device 221 may not be limited to one of each of the components, and multiple instances of each component may be included in various configurations. Further, another processing device (similar to 221) may be included within or coupled to the flight controller 220 and configured to perform some or all of the operations of various embodiments associated with providing open loop flight control information to ESCs. For ease of reference, the term “processing device” is used generally to refer to either the flight controller, a processor within the flight controller, or a separate processing device within the robotic vehicle configured to perform operations of various embodiments.[0057] FIG. 4 is a component block diagram illustrating components of a robotic vehicle 400 suitable for use with various embodiments. With reference to FIGS. 1-4, the robotic vehicle 400 may be similar to the robotic vehicles 102, 200. The robotic vehicle 400 is illustrated as an example of a robotic vehicle, but is not intended to imply or require that various embodiments are limited to aerial robotic vehicles or rotorcraft robotic vehicles. Various embodiments may be used with winged robotic vehicles, land-based autonomous vehicles, and water-borne autonomous vehicles. [0058] The robotic vehicle 400 may include one or more electronic speed controllers (ESCs) 402 coupled to the control unit 210. The ESC 402 may handle functions including controlling aspects of the operation of each of the rotors 406 by way of the corresponding motors 404. The ESC 402 may be coupled to the power module 230. The power module 230 (e.g., an onboard battery) may be coupled to the motors 404 (e.g., via the ESC 402) and the flight controller 220. Each motor 404 may be associated with a respective motor driver 402b and a decoder 402a. Each decoder 402a may decode signals, such as control signals, from the flight controller 220 directed to a corresponding motor driver 402b.[0059] In normal operation, the flight controller 220 may send control signals to the ESC 402 for controlling power to the motors 404 that drive each of the rotors 406. The flight controller 220 may individually control speeds of each motor 404 via the control signals sent to each ESC 402. 
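For context, the per-motor closed-loop control applied by each ESC 402 can be pictured as a simple speed regulator. The proportional-integral loop below is only an illustrative stand-in, with made-up gains, for whatever commutation and control scheme a real ESC implements:

    class MotorSpeedController:
        """Hypothetical per-motor closed-loop speed regulator inside an ESC."""

        def __init__(self, kp=0.002, ki=0.0005):
            self.kp, self.ki = kp, ki   # illustrative gains, not tuned values
            self.integral = 0.0

        def step(self, target_rpm, measured_rpm, dt):
            # target_rpm may come from flight-controller signals in normal
            # flight or from stored open loop flight control information after
            # a loss of control signals; the motor loop stays closed either way.
            error = target_rpm - measured_rpm
            self.integral += error * dt
            duty = self.kp * error + self.ki * self.integral
            return max(0.0, min(1.0, duty))  # clamp to a valid PWM duty cycle

Whether the RPM setpoint comes from the flight controller in normal operation or from stored open loop flight control information after a loss of control signals, this per-motor loop remains closed.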
The flight controller 220 may drive the motors 404 “forward” at differing rotation rates to generate varying amounts of auxiliary thrust, or “backward” to produce varying amounts of mixed aerodynamic forces. By controlling the speeds of individual motors 404 corresponding to each of the rotors 406, the flight controller 220 may maintain controlled flight as the robotic vehicle 400 progresses toward a destination and/or operates in various flight modes.[0060] The flight controller 220 is typically a robust processing device capable of controlling numerous functions of the robotic vehicle, such as control of the motors 404 via the ESC 402, as well as other operations including flight control, processing sensor data, receiving and processing GPS signals, controlling radios for communication, and the like. As noted above, the consequences of a flight controller failure or reboot during flight operations of an aerial robotic vehicle can be catastrophic because the flight controller 220 will stop signaling the ESC 402, causing the ESCs to stop powering the motors 404. In some embodiments, the loss of control signals from the flight controller 220 may include a loss of signals from one or more of the navigation unit 222, the inertial sensor/gyro/accelerometer unit 226, and/or the avionics module 228.[0061] Each ESC 402 may include an ESC controller 408, which may be coupled to a corresponding memory 408a. The ESC controller 408 may be configured to receive open loop flight control information from the flight controller 220 or another processing device from time to time (e.g., via the output 330). The ESC controller 408 may store the received open loop flight control information in the memory 408a. In various embodiments, in response to detecting that control signals from the flight controller 220 have been lost, the ESC controller 408 may be configured to take control of the ESC 402 (i.e., begin issuing motor control signals to the ESC 402). In such embodiments, the ESC controller 408 may retrieve the stored open loop flight control information from the memory 408a and may issue motor control instructions to the motor 404 via the ESC 402. As described, in some embodiments the ESC controller 408 may issue motor control instructions to the motor 404 according to a stored sequence of instructions previously received from the flight controller 220 or another processing device. In some embodiments, the ESC controller 408 may use attitude information received from the flight controller 220 or another processing device to generate and issue a sequence of motor control instructions to the motor 404.[0062] FIG. 5 is a diagram illustrating examples of vehicle trajectories following a loss of control signals from a flight controller, showing a trajectory 500 of a conventional robotic vehicle and a trajectory 550 of a robotic vehicle implementing various embodiments. With reference to FIGS. 1-5, a robotic vehicle (e.g., 102, 200, 400) under control of a flight controller 220 may be in a dynamic state (e.g., shifting from one orientation to another) at any given instant.[0063] FIG. 5 illustrates both a conventional robotic vehicle (trajectory 500) and a robotic vehicle implementing various embodiments (trajectory 550) executing the same roll maneuver prior to a failure of the flight controller. 
Thus, in the illustrated examples the conventional robotic vehicle at state 502 and the robotic vehicle implementing various embodiments at state 552 are in a non-level state with the flight controller providing active flight control directing the ESCs to control flight motors to induce a counterclockwise rotation about the pitch axis (indicated by the curved arrows).[0064] The pitch-up commanded by the flight controller at states 502 and 552 results in the robotic vehicles reaching states 504, 554 in which the vehicle is approaching a level orientation but still rotating about the pitch axis just prior to the flight controller failure.[0065] Referring to the trajectory 500 of a conventional robotic vehicle, in the event of a failure of the flight controller (indicated by the dashed line), the ESCs of the robotic vehicle may continue to execute the same motor controls that were generated in response to control signals from the flight controller just prior to the failure. In the illustrated example, such motor controls caused the robotic vehicle to rotate about the pitch axis from state 502 to the state 504. Thus, the result of the ESCs continuing to execute the same motor controls will cause the robotic vehicle to continue rotating about the pitch axis through states 506, 508, 510, 512, and 514, resulting in a loss of controlled flight.[0066] Referring to the trajectory 550 of a robotic vehicle implementing various embodiments, the flight controller may periodically generate a set of open loop flight control information 553 to be executed in the event of a loss of flight control signals (e.g., as may occur upon failure, reboot, or pause of the flight controller). Thus, in addition to sending the control signals to the ESCs that induced the rotation about the pitch axis from state 552 to state 554, the flight controller may provide open loop flight control information to the ESCs for storage in a local memory (or may store the information in a memory accessed by ESCs), and the ESCs may be configured to access and implement the stored open loop flight control information if control signals from the flight controller are lost.[0067] In the event of a flight controller failure, each ESC may access the stored open loop flight control information and control one or more flight motors based on the open loop flight control information without information from accelerometers, gyroscopes, and other sensors. For example, in state 556 just after the flight controller failure, the ESCs may obtain from memory the stored open loop flight control information and begin controlling the associated motor(s) to counter the rotation about the pitch axis that existed in state 554 and reorient the robotic vehicle towards level flight in state 558. In addition to countering the pitch-axis motion that existed in state 554 by inducing a counter-rotational force about the pitch axis in state 556, the stored open loop flight control information may include instructions for the ESCs to control the associated motor(s) to substantially cancel or reduce the angular velocity of the robotic vehicle about the pitch axis induced in state 556. 
Thus, in state 558 just prior to achieving level flight, the stored open loop flight control information may include instructions for the ESCs to control the associated motor(s) to briefly induce a counter-rotational force about the pitch axis so that the robotic vehicle is stable upon achieving level flight in state 560.[0068] The stored open loop flight control information may further include instructions for the ESCs to control the associated motor(s) to reduce lift (or stop) in a balanced manner (e.g., with equal velocity to avoid inducing pitch or roll) so that the robotic vehicle can descend through state 562 to a landing in state 564. In some embodiments, the ESCs may subsequently perform a failsafe operation, such as stopping the motors upon reaching a stable (e.g., flat) orientation in state 560 as illustrated, enabling a freefall with little or no angular velocity in states 562 and 564. By generating such open loop flight control information in the flight controller, a robotic vehicle following the trajectory 550 is able to achieve a controlled flight state upon failure of the flight controller that may reduce, minimize, or eliminate damage upon landing or impact (state 564), especially in contrast to the uncontrolled flight of a conventional robotic vehicle illustrated in state 514.[0069] In various embodiments, the flight controller may determine the open loop flight control information to place the robotic vehicle into a desired state. The desired state may vary depending on the initial state information of the robotic vehicle (i.e., the state of the vehicle at the time the open loop flight control information is generated). In some embodiments, the flight controller may generate the open loop flight control information and provide the determined open loop flight control information to the one or more ESCs while providing normal flight control signals or instructions. In some embodiments, the flight controller may determine future motor control instructions at one or more of a plurality of time steps (e.g., into the future).[0070] In various embodiments, the flight controller may generate the open loop flight control information in a variety of ways. In some embodiments, the flight controller may use a mathematical model of the robotic vehicle to project or predict the motion of the robotic vehicle some period of time into the future (e.g., a period of milliseconds or seconds) based on a current known state of the vehicle (e.g., position, orientation, linear velocity, and pitch/roll/yaw angular velocities), and generate suitable flight control instructions to be executed by the ESCs during the projected time into the future to achieve a desired flight orientation (e.g., flat or stable flight) in view of the projected motions.[0071] In some embodiments, the flight controller may use a mathematical model of the robotic vehicle to perform a forward simulation of vehicle dynamics of the robotic vehicle executing a sequence of ESC instructions beginning from the current vehicle state including simulation of the vehicle’s response to stored flight control information. 
In some embodiments, the flight controller may perform a forward simulation of robotic vehicle behavior in response to a sequence of ESC instructions beginning from the current state to determine a sequence of motor control instructions that will result in a sequence of predicted next vehicle states to bring the robotic vehicle to the desired state.[0072] In some embodiments, the flight controller may use the same flight control algorithm(s) for providing normal flight operation signals or commands to the ESCs and to determine the open loop flight control information. In some embodiments, the flight controller may use the simulated vehicle state information from the forward simulation in combination with one or more flight control algorithm(s) to determine the open loop flight control information. In some embodiments, after completion of the forward simulation, the flight controller may determine the open loop flight control information and provide the determined open loop flight control information to the one or more ESCs.[0073] In some embodiments, the flight controller may determine the open loop flight control information without performing an explicit discrete simulation of vehicle dynamics and control. For example, the flight controller may determine time-parameterized, polynomial-form solutions for controlling the ESCs, which are substantially faster to compute and may be represented in more compact form. Such time-parameterized open loop flight control information may be more efficient to determine or generate and more efficient to transmit to the one or more ESCs than open loop flight control information determined based on forward simulations.[0074] FIG. 6 illustrates a method 600 of controlling a robotic vehicle according to various embodiments. With reference to FIGS. 1-6, the method 600 may be implemented in hardware components and/or software components of the robotic vehicle (e.g., 102, 200, 400), the operation of which may be controlled by a flight controller (e.g., 220) or another processing device and ESC controllers (e.g., 408 and/or the like) of the robotic vehicle.[0075] In block 602, the flight controller or another processing device may determine state information of the vehicle. For example, the flight controller may determine one or more of an altitude, pitch, velocity, current motor RPMs, and/or other similar state information as part of normal flight control operations. As another example, another processing device may access flight control information within memory of the flight controller or from attitude sensors (e.g., accelerometers and gyroscopes). [0076] In block 604, the flight controller or another processing device may provide open loop flight control information to the ESC controller (e.g., the ESC controller 408). The open loop flight control information may enable the ESC controller to control an associated motor. For example, the open loop flight control information may be a sequence (or parameterized sequence) of motor control instructions that can be performed by each ESC in response to a loss of control signals.
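As one concrete, non-authoritative reading of such a time-parameterized representation, a motor-speed schedule can be fit to polynomial coefficients on the flight controller side and re-expanded on the ESC side. The NumPy-based sketch below, including the degree-3 choice and the exponential spin-down profile, is an assumption for illustration only, not the embodiments' method.

    import numpy as np

    def compress_schedule(times_s, rpms, degree=3):
        """Fit rpm(t) with a polynomial; only degree + 1 coefficients
        need to be transmitted to the ESC instead of the full sample array."""
        return np.polyfit(times_s, rpms, degree)

    def expand_schedule(coeffs, t_s):
        """ESC side: evaluate the stored polynomial at elapsed time t_s."""
        return float(np.polyval(coeffs, t_s))

    # Example: a 1.5 s spin-down schedule sampled every 50 ms.
    t = np.arange(0.0, 1.5, 0.05)
    rpm = 8000.0 * np.exp(-2.0 * t)        # hypothetical target profile
    coeffs = compress_schedule(t, rpm)     # 4 floats instead of 30 samples
    print(expand_schedule(coeffs, 0.25))   # rpm command 250 ms after the loss

Transmitting four coefficients instead of a sample array is what makes such a representation cheaper to send and store, at the cost of some fitting error over the schedule's duration.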
As another example, the flight controller or another processing device may determine attitude information that is suitable for use by an ESC controller for generating motor controls in response to a loss of control signals from the flight controller.[0077] In block 606, the ESC may store the provided open loop flight control information in a memory (e.g., the memory 408a).[0078] The operations in blocks 602-604 may be performed periodically in parallel with the normal control of ESCs by the flight controller. Thus, while not illustrated in FIG. 6, the flight controller may transmit to each ESC motor control signals for maintaining controlled flight until some event (e.g., a processor reboot) causes a loss of control signals from the flight controller in block 608.[0079] In determination block 612, the ESC controller may determine whether a loss of control signals from the flight controller is detected.[0080] In response to determining that a loss of control signals from the flight controller is not detected (i.e., determination block 612 = “No”), the ESC controller may continue to store open loop flight control information received from the flight controller or another processing device. In some embodiments, each set of received open loop flight control information may be stored in memory to replace (e.g., overwrite) the previously stored set of such information.[0081] In response to determining that a loss of control signals from the flight controller is detected (i.e., determination block 612 = “Yes”), the ESC controller may access the stored open loop flight control information in block 614.[0082] In block 616, the ESC controller may control a motor or motors associated with the ESC based on the open loop flight control information. In some embodiments, the ESC controller may adjust a closed-loop control of the motor or motors associated with the ESC based on the open loop flight control information. [0083] FIG. 7 illustrates a method 700 of controlling a robotic vehicle according to some embodiments. With reference to FIGS. 1-7, the method 700 may be implemented in hardware components and/or software components of the robotic vehicle (e.g., 102, 200, 400), the operation of which may be controlled by a flight controller (e.g., 220) or another processing device and ESC controllers (e.g., 408 and/or the like) of the robotic vehicle. In blocks 502, 508, and 512, the flight controller or another processing device and the ESC controller, respectively, may perform operations of like-numbered blocks of the method 500 as described.[0084] In block 702, the flight controller or another processing device may determine a suitable robotic vehicle response to a loss of control signals event based upon the determined vehicle state information. The appropriate response may be a response that minimizes risk to other objects and people and/or minimizes damage to the robotic vehicle. For example, if the vehicle state information indicates that the robotic vehicle is moving at a high speed, the suitable response may be to slow down or stop before the motors are turned off in order to avoid crashing into people or objects. As another example, if the vehicle state information indicates that the robotic vehicle is in an unstable attitude, the suitable response may be to level the vehicle before the motors are turned off to enable a more controlled descent to the ground.
As another example, if the vehicle state information indicates that the robotic vehicle is close to the ground, the suitable response may be to level the vehicle and descend to a soft landing before the motors are turned off. As another example, if the vehicle state information indicates that the robotic vehicle is too high to land before motors stop but too low for a parachute deployment, the suitable response may be to climb to a higher altitude before the motors are turned off and a parachute deployed to enable a soft landing. A variety of suitable responses may be determined based on the current vehicle attitude information, and the suitable response to a loss of control signals event in one instant may not be appropriate a few seconds later when the attitude, speed, and altitude of the robotic vehicle have changed. In some embodiments, the flight controller or another processing device may also consider other information, such as camera images and terrain maps, when determining the suitable response to a loss of control signals event so as to avoid crashing into humans, animals, and other objects.[0085] In block 704, the flight controller or another processing device may generate motor control instructions for each ESC that will cause the robotic vehicle to perform the determined suitable response to a loss of control signals event when executed by the ESCs in an open loop manner. For example, the motor control instructions may include the motor speed (e.g., in RPM) to be achieved at various time instances following the loss of control signals. In some embodiments, the open loop flight control information for each ESC may include a parameterization of a time sequence of motor control instructions. For example, the flight controller or another processing device may compress a sequence of motor control instructions, such as by determining a sequence of polynomials that may be used to parameterize the motor control instructions, and by sending to each ESC coefficients of polynomials rather than sending a complete array of motor control instructions. In some embodiments, the open loop flight control information may include a sufficient number of motor control instructions for a relatively short period of time (e.g., a period of one or more seconds).[0086] In block 706, the flight controller or another processing device may provide the generated open loop flight control instructions to each ESC. This operation may be performed periodically in parallel with transmission of normal control signals to each ESC.[0087] In block 708, each ESC controller may store in a memory corresponding to the ESC (e.g., in a memory within the ESC) the open loop flight control instructions received from the flight controller.[0088] In determination block 512, each ESC controller may determine whether control signals from the flight controller have been lost. So long as control signals continue to be received from the flight controller (i.e., determination block 512 = “No”), each ESC controller may continue to receive and store open loop flight control instructions from the flight controller or another processing device in block 708.
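A minimal sketch of this ESC-side behavior follows, assuming a hypothetical ESCController class, a motor object with a set_rpm() method, and an invented 50 ms watchdog timeout; the block numbers in the comments map back to the description above.

    import time

    class ESCController:
        """Illustrative ESC controller with a control-signal watchdog."""

        def __init__(self, motor, timeout_s=0.05):
            self.motor = motor
            self.timeout_s = timeout_s        # silence longer than this = loss
            self.stored = []                  # open loop instructions (block 708)
            self.last_signal_t = time.monotonic()

        def on_control_signal(self, rpm_command):
            """Normal closed-loop path: execute and refresh the watchdog."""
            self.last_signal_t = time.monotonic()
            self.motor.set_rpm(rpm_command)

        def on_open_loop_update(self, instructions):
            """Overwrite the previously stored instruction set (block 708)."""
            self.stored = list(instructions)

        def tick(self):
            """Called at a fixed rate; detects loss of control signals
            (determination block 512 analogue)."""
            if time.monotonic() - self.last_signal_t > self.timeout_s:
                self.run_stored_instructions()     # block 712 analogue

        def run_stored_instructions(self):
            for delay_s, rpm in self.stored:       # (time offset, rpm) pairs
                self.motor.set_rpm(rpm)
                time.sleep(delay_s)
            self.motor.set_rpm(0)                  # failsafe: depower the motor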
In this manner, fresh open loop flight control instructions are received and stored in memory periodically so that the ESCs are ready to respond to a loss of control signals event at all times.[0089] In response to detecting a loss of control signals from the flight controller (i.e., determination block 512 = “Yes”), each ESC controller may access the stored open loop flight control instructions, and perform control of the motor associated with the ESC by executing the open loop flight control instructions in block 712. Thus, each ESC may execute the stored sequence of open loop flight control instructions as if the instructions were issued by the flight controller or another processing device. In some embodiments, the ESC controller may determine the open loop flight control instructions executed in block 712 based on parameterized information stored in memory that is accessed in block 710. In some embodiments, the ESC controller may determine the open loop flight control instructions executed in block 712 by performing other operations on the information stored in memory that is accessed in block 710, such as decompressing instructions that were stored in a compressed format. In some embodiments, each ESC may adjust a closed-loop control of the associated motor(s) based on the open loop flight control information. Once all stored open loop flight control instructions have been executed, the ESC may perform another operation, such as depowering the motor.[0090] FIG. 8 illustrates a method 800 of controlling a robotic vehicle according to various embodiments. With reference to FIGS. 1-8, the method 800 may be implemented in hardware components and/or software components of the robotic vehicle (e.g., 102, 200, 400), the operation of which may be controlled by a flight controller (e.g., 220) or another processing device and ESC controllers (e.g., 408 and/or the like) of the robotic vehicle. In blocks 502-514, the flight controller or another processing device and the ESC controller, respectively, may perform operations of like-numbered blocks of the method 500 as described.[0091] In block 802, the flight controller or another processing device may periodically provide vehicle state information to each ESC based on at least a subset of the vehicle information determined in block 502. For example, the flight controller or another processing device may provide each ESC with the current roll, pitch, and yaw angles of the robotic vehicle, as well as airspeed and altitude information. The subset of vehicle state information provided to each ESC may be information that is particularly suitable for enabling each ESC controller to determine an appropriate response to a loss of control signals event. In some embodiments, such state information may include, for example, an altitude, pitch, velocity, current motor RPMs, and/or other similar state information.[0092] In block 804, each ESC controller may store in memory the vehicle state information received from the flight controller or another processing device.[0093] In determination block 512, each ESC controller may determine whether control signals from the flight controller have been lost. So long as control signals continue to be received from the flight controller (i.e., determination block 512 = “No”), each ESC controller may continue to receive and store vehicle state information from the flight controller or another processing device in block 804.
In this manner, fresh vehicle state information is received and stored in memory periodically so that the ESCs are equipped with current vehicle state information at all times to respond to a loss of control signals event.[0094] In response to detecting a loss of control signals from the flight controller (i.e., determination block 512 = “Yes”), each ESC controller may access the stored vehicle state information in block 806.[0095] In block 808, each ESC controller may determine appropriate motor control instructions for an associated motor for responding to the loss of control signals event based on the stored vehicle state information. In some embodiments, each ESC controller may determine a series of motor control instructions for its associated motor that, when independently executed by all of the ESCs, will cause the robotic vehicle to perform a maneuver that places the vehicle in a safer condition before all motors are turned off. The appropriate maneuver to be performed in response to a loss of control signals may depend upon the current vehicle attitude (e.g., pitch, roll, and yaw), speed, and altitude. In some embodiments, each ESC controller may apply an algorithm to the current vehicle state information to generate the appropriate series of motor control instructions. In some embodiments, each ESC controller may use the current vehicle state information in a table look-up operation to obtain the appropriate series of motor control instructions from a database of instructions correlated to vehicle states.[0096] In block 810, the ESC controller may perform open loop flight control of an associated motor or motors by executing the determined open loop flight control instructions. The number or duration of motor control instructions determined in block 808 and executed in block 810 may be sufficient to stabilize or otherwise place the robotic vehicle in a safer condition before all motors are deenergized, such as stopping and/or leveling the vehicle for a brief period of time (e.g., 1 to 2 seconds). In some embodiments, each ESC may adjust a closed-loop control of the associated motor(s) based on the open loop flight control information.[0097] FIG. 9 illustrates an example method 900 for generating open loop flight control information by the flight controller or another processing device according to some embodiments. With reference to FIGS. 1-9, the method 900 may be implemented by a processing device (e.g., 220) within a flight controller (e.g., 210) or by another processing device with access to vehicle state information (e.g., gyroscopes, accelerometers, etc.). For general applicability, the term “processing device” is used in the description of the method 900. The method 900 illustrates an example of operations that may be performed in block 604 of the method 600 and blocks 702 and 704 of the method 700.[0098] In block 902, the processing device may determine or access vehicle orientation and vehicle state information. For example, a flight controller will have, or have immediate access to, the vehicle state information (e.g., horizontal and vertical velocities as well as pitch/roll/yaw rotational velocities) as part of the process of generating closed loop flight control instructions. As another example, a processing device performing the method 900 may access memory where vehicle state information is stored.
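One illustrative shape for the stored vehicle state accessed in block 902 (and stored by the ESCs in block 804), together with the kind of table look-up described for block 808, is sketched below; the field names, bin thresholds, and canned responses are all invented for the example rather than drawn from the embodiments.

    from dataclasses import dataclass

    @dataclass
    class VehicleState:
        roll: float          # rad
        pitch: float         # rad
        yaw: float           # rad
        speed_mps: float     # translational speed
        altitude_m: float

    def state_key(state):
        """Quantize the stored state into coarse bins for table look-up."""
        alt = ("low" if state.altitude_m < 5.0
               else "mid" if state.altitude_m < 30.0 else "high")
        spd = "slow" if state.speed_mps < 2.0 else "fast"
        return (alt, spd)

    # Hypothetical database of maneuvers correlated to vehicle states (block 808).
    RESPONSE_TABLE = {
        ("low", "slow"):  "level_and_land",
        ("low", "fast"):  "brake_then_land",
        ("mid", "slow"):  "level_then_cut_motors",
        ("mid", "fast"):  "brake_then_cut_motors",
        ("high", "slow"): "level_then_freefall",
        ("high", "fast"): "brake_then_freefall",
    }

    def lookup_response(state):
        return RESPONSE_TABLE[state_key(state)]

    print(lookup_response(VehicleState(0.0, 0.1, 0.0, 1.0, 3.0)))  # level_and_land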
As a further example, a processing device performing the method 900 may receive or access in memory the outputs of state sensors, including gyroscopes, accelerometers, etc. Determining or accessing vehicle orientation and vehicle state information enables the processor to determine the vehicle orientation and motions at the instance that the information is obtained.[0099] In block 904, the processor may determine or access the current control signals that the flight controller is issuing to the ESCs. For example, if the flight controller is executing the method 900, such information is generated as the control signals are issued. As another example, a processing device performing the method 900 may receive or intercept control signals as they are issued to the ESCs or access a memory where control signals are temporarily stored. Determining or accessing the current control signals being issued to the ESCs provides information to the processor regarding the forces being applied to the vehicle by the rotors at the instance that the information is obtained, as well as forces that will be applied to the vehicle in the next instant as the control signals are implemented and the motors respond accordingly.[00100] In block 906, the processor may determine a desired state for the robotic vehicle to achieve in the event of a loss of control signals. The desired state may be an orientation that will enable the robotic vehicle to achieve a controlled landing or minimize damage to the robotic vehicle, such as upon striking the ground. The desired state of the robotic vehicle may depend upon the current orientation and velocity state of the vehicle, particularly in situations in which the vehicle is currently in a highly unstable configuration that may be difficult to control using open loop control information, is traveling too fast to be stopped using open loop control information, or is too high to achieve a controlled landing using open loop control information. In some embodiments, this may be because the duration that open loop flight control can be implemented is limited by the predictability of future vehicle states (e.g., 1-2 seconds). As an example, if the robotic vehicle is relatively low and in a near stable flight orientation, the processor may determine the desired state to be level flight with zero velocity and the rotors providing equalized lift (e.g., to avoid inducing roll or pitch) that enables a controlled descent towards the ground within a matter of a few seconds. As another example, if the robotic vehicle is flying at a high altitude, the desired state may be level flight with zero velocity followed by turning the rotors off so that the robotic vehicle free falls in that configuration. As another example, if the robotic vehicle is flying in a highly unstable configuration, such as with a large pitch angle to maximize horizontal velocity, the processor may determine that the desired state is an orientation that will stop or minimize the horizontal velocity before the duration of open loop flight control ends so that the vehicle’s velocity upon impact is reduced.[00101] In block 908, the processor may apply a mathematical model of the robotic vehicle to the current state information and the current ESC control signals to project the vehicle orientation and velocity state a brief amount of time (referred to herein as a “time step”) into the future.
Said another way, the processor may use a mathematical model of the robotic vehicle to simulate the behavior of the robotic vehicle a brief time into the future. Such a mathematical model may calculate the orientation and velocity state of the robotic vehicle a few milliseconds into the future based upon the current orientation and current pitch, roll, yaw, and translational velocities. Such a mathematical model may also calculate projected pitch, roll, yaw, and translational velocities of the vehicle at the next time step based upon the accelerations applied to the vehicle in the six dimensions (i.e., X, Y, Z, pitch, roll, yaw) from air resistance on the structures of the vehicle and the forces on the vehicle generated by the rotors due to the current ESC control signals. Such calculations may be in the form of: P(t + Δt) = P(t) + Vp(t)*Δt and Vp(t + Δt) = Vp(t) + Ap(t)*Δt, where P(t) is the position or orientation along a particular dimension at time increment t; Vp(t) is the velocity along the particular dimension at time increment t; Ap(t) is the acceleration along the particular dimension at time increment t; and Δt is the time between each time increment (i.e., the duration of the simulation steps). The contribution to the change in position or orientation due to acceleration within a time increment is ignored in this approximation, which is valid for short time increments. The accelerations about each dimension or degree of freedom may be calculated using well-known equations of linear acceleration (e.g., F = mA) and angular acceleration (e.g., α = τ/I, i.e., torque divided by moment of inertia). In some embodiments, such equations of motion may be implemented in matrix format for the six degrees of freedom of the robotic vehicle, which may be processed by a processor optimized for vector operations, such as a graphics processing unit (GPU).[00102] In block 910, the processor may use the projected orientation and velocity state of the robotic vehicle (i.e., the information generated in block 908) to generate an open loop flight control instruction for each ESC suitable for directing the robotic vehicle towards the desired state. In some embodiments, the open loop flight control instructions may be generated using the same or similar flight control rules as used in normal closed loop flight control except that the orientation and velocity state information used in the rules is that generated by the mathematical model (i.e., by simulation) for the time step. The generated open loop flight control instruction may be one or a series of instructions suitable for a duration of the time steps used in simulating the future orientation and velocity of the vehicle. The open loop flight control instruction(s) generated in block 910 may be temporarily stored in a buffer or other memory.[00103] In determination block 912, the processor may determine whether a maximum simulation time into the future over which open loop flight control is feasible has been reached. Again, open loop flight control is only feasible for a finite period of time (e.g., 1-2 seconds) before errors in initial state conditions and unpredictable forces render the control instructions inappropriate. Similar to predicting the weather, small errors in orientation (e.g., gyroscope errors) and acceleration measurements used as the initial conditions for the simulation, as well as unknown external forces, like wind, will lead to large effects in orientation and velocities after a sufficient time.
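The block 908 update equations above amount to forward-Euler integration. A one-degree-of-freedom transcription in Python follows (generalizing to six degrees of freedom is a matter of vectorizing the same update); the proportional-derivative acceleration model standing in for the rotor and drag forces is invented purely for illustration.

    def project_state(p, v, accel_fn, dt, steps):
        """Forward-Euler projection of position/orientation p and velocity v
        along one degree of freedom:
            P(t + dt)  = P(t) + Vp(t) * dt
            Vp(t + dt) = Vp(t) + Ap(t) * dt
        accel_fn(p, v) returns the acceleration from rotor forces and drag."""
        for _ in range(steps):
            a = accel_fn(p, v)
            p, v = p + v * dt, v + a * dt
        return p, v

    # Example: pitch angle settling under a restoring torque (hypothetical
    # gains standing in for the vehicle's actual force model).
    pitch, rate = project_state(
        p=0.3, v=-0.5,
        accel_fn=lambda p, v: -8.0 * p - 2.0 * v,
        dt=0.005, steps=200)
    print(pitch, rate)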
Thus, using open loop flight control information to control a robotic vehicle beyond the time over which projecting robotic vehicle orientations and dynamics is feasible without gyroscope and accelerometer feedback could cause the robotic vehicle to enter a state that will result in more damage than if the motors were simply turned off. To account for this, the duration into the future that the processor may simulate vehicle orientations and generate open loop flight controls may be limited by determination block 912, such as to a matter of a few seconds.[00104] In response to determining that the maximum simulation time into the future has not been reached (i.e., determination block 912 = “No”), the processor may apply the mathematical model of the robotic vehicle to the projected vehicle state information and the last open loop ESC control signals to project the vehicle orientation and velocity state, as well as accelerations, in the next time step into the future in block 914. Said another way, in block 914, the processor continues the simulation of the robotic vehicle starting from the last projected orientation and velocity state (determined in block 908 or subsequently determined in block 914) and applying the last projected accelerations of the vehicle in the six dimensions (e.g., determined from the open loop flight control instruction determined in block 910 as applied to the rotors by the ESCs implementing the instruction). The projection of the vehicle orientation and velocity state information as well as updated accelerations may be calculated as described with reference to block 908.[00105] In determination block 916, the processor may determine whether the projected orientation and velocity state determined in block 914 has achieved the desired state. Said another way, the processor may determine whether sufficient open loop control instructions have been assembled to enable the robotic vehicle to achieve the desired state when executed by the ESCs starting from the current orientation and velocity state as determined in block 902.[00106] In response to determining that the projected vehicle state has not achieved the desired state (i.e., determination block 916 = “No”), the processor may again generate an open loop flight control instruction for each ESC suitable for the projected vehicle orientation and velocity state and accelerations in the simulation time step in block 910. Again, the open loop flight control instruction for each ESC may be generated for the time step so as to drive the robotic vehicle towards the desired state. [00107] The operations in blocks 910 through 916 may continue in a simulation loop until either the desired state is achieved (determination block 916) or the maximum simulation time is reached (as determined in determination block 912).[00108] In response to determining that the projected vehicle state has achieved the desired state (i.e., determination block 916 = “Yes”), the processor may generate open loop flight control instructions for each ESC suitable for maintaining the desired state while operating in an open loop manner. Said another way, the processor may generate open loop flight control instructions in block 918 configured to avoid disturbing the orientation of the vehicle once it achieves the desired state. For example, the flight control instructions may cause the ESCs to control the flight motors so that equal thrust and lift are applied, thus avoiding inducing a pitch or roll in the vehicle.
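Taken together, blocks 910 through 916 form a generate-project-check loop. A compact sketch follows, in which the model_step, control_rule, and reached_goal callables are hypothetical stand-ins for the mathematical model, the flight control rules, and the desired-state test; they are not named by the embodiments.

    def plan_open_loop(state, model_step, control_rule, reached_goal,
                       dt=0.01, max_time_s=1.5):
        """Assemble per-time-step ESC instructions until the desired state is
        projected to be reached (block 916) or the feasibility horizon for
        open loop control is exhausted (block 912)."""
        instructions = []
        t = 0.0
        while t < max_time_s:                    # determination block 912
            cmd = control_rule(state)            # block 910: instruction for step
            instructions.append(cmd)
            state = model_step(state, cmd, dt)   # block 914: project next state
            t += dt
            if reached_goal(state):              # determination block 916
                break
        return instructions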
As another example, the flight control instructions may cause the ESCs to stop powering the rotors so that the robotic vehicle can free fall without pitch or roll torque being applied by the rotors.[00109] In response to determining that the maximum simulation time has been reached (i.e., determination block 912 = “Yes”) or following generation of open loop flight control instructions for maintaining the desired state in block 918, the processor may provision the generated open loop flight control instructions (or information implementing the instructions) in block 920 so that the information can be accessed by each ESC if needed. For example, the processor may send the generated open loop flight control information to each ESC in block 920 so that each ESC may store the information in local memory. As another example, in block 920 the processor may store the generated open loop flight control information in a memory or memories that each ESC can access in the event a loss of flight control signals occurs.[00110] Various embodiments illustrated and described are provided merely as examples to illustrate various features of the claims. However, features shown and described with respect to any given embodiment are not necessarily limited to the associated embodiment and may be used or combined with other embodiments that are shown and described. Further, the claims are not intended to be limited by any one example embodiment. For example, one or more of the operations of the methods 500, 600, 700, and 800 may be substituted for or combined with one or more operations of the methods 500, 600, 700, and 800, and vice versa. [00111] The foregoing method descriptions and the process flow diagrams are provided merely as illustrative examples and are not intended to require or imply that the operations of various embodiments must be performed in the order presented. As will be appreciated by one of skill in the art, the order of operations in the foregoing embodiments may be performed in any order. Words such as “thereafter,” “then,” “next,” etc. are not intended to limit the order of the operations; these words are used to guide the reader through the description of the methods. Further, any reference to claim elements in the singular, for example, using the articles “a,” “an,” or “the” is not to be construed as limiting the element to the singular.[00112] Various illustrative logical blocks, modules, circuits, and algorithm operations described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and operations have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system.
Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the claims.[00113] The hardware used to implement various illustrative logics, logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but, in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Alternatively, some operations or methods may be performed by circuitry that is specific to a given function. [00114] In one or more aspects, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions or code on a non-transitory computer-readable storage medium or non-transitory processor-readable storage medium. The operations of a method or algorithm disclosed herein may be embodied in a processor-executable software module or processor-executable instructions, which may reside on a non-transitory computer-readable or processor-readable storage medium. Non-transitory computer-readable or processor-readable storage media may be any storage media that may be accessed by a computer or a processor. By way of example but not limitation, such non-transitory computer-readable or processor-readable storage media may include RAM, ROM, EEPROM, FLASH memory, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to store desired program code in the form of instructions or data structures and that may be accessed by a computer. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of non-transitory computer-readable and processor-readable media. Additionally, the operations of a method or algorithm may reside as one or any combination or set of codes and/or instructions on a non-transitory processor-readable storage medium and/or computer-readable storage medium, which may be incorporated into a computer program product.[00115] The preceding description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the claims. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the claims.
Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the following claims and the principles and novel features disclosed herein. |
An integrated circuit is provided with a low-power island including embedded memory power domains that may selectively couple to either an active-mode power supply voltage supplied on a first power rail or to a sleep-mode power supply voltage supplied on a second power rail. |
CLAIMSWhat is claimed is:1. An integrated circuit, comprising:a first power rail configured to supply an active-mode power supply voltage; a second power rail configured to supply a sleep-mode power supply voltage, wherein the active-mode power supply voltage is greater than the sleep-mode power supply voltage;an embedded memory including a power supply node; anda power multiplexer coupled between the power supply node and the first and second power rails, wherein the power multiplexer is configured to select for the first power rail during an active mode of operation for the embedded memory, and wherein the power multiplexer is further configured to select for the second power rail during a sleep mode of operation for the embedded memory.2. The integrated circuit of claim 1, further comprising:a system-on-a-chip (SoC) processor embedded memory power domain powered by the second power rail.3. The integrated circuit of claim 2, wherein the integrated circuit is incorporated into a system including a power management integrated circuit (PMIC) comprising: a first switch-mode power supply configured to supply the sleep-mode power supply voltage to the second power rail.4. The integrated circuit of claim 2, wherein the embedded memory comprises a plurality of embedded memories for a plurality of corresponding subsystems, and wherein the power multiplexer comprises a corresponding plurality of power multiplexers.5. The integrated circuit of claim 4, wherein each subsystem is configured to independently enter the sleep mode and the active mode.6. The integrated circuit of claim 4, wherein the plurality of subsystems includes at least one sensor subsystem.7. The integrated circuit of claim 4, wherein the plurality of subsystems includes at least one wireless interface subsystem.8. The integrated circuit of claim 2, further comprising an SoC processor coupled to a plurality of embedded memories in the SoC processor embedded memory power domain.9. The integrated circuit of claim 1, wherein the power multiplexer comprises a pair of PMOS transistors.10. The integrated circuit of claim 4, further comprising a controller configured to control the power multiplexers such that each power multiplexer disengages temporarily from the first power rail and from the second power rail when the corresponding subsystem switches between the active mode and the sleep mode.11. The integrated circuit of claim 3, wherein the PMIC further comprises a second switch-mode power supply configured to power the first power rail through a linear drop-out regulator.12. The integrated circuit of claim 3, further comprising a first decoupling capacitor coupled to the first power rail and a second decoupling capacitor coupled to the second power rail.13. The integrated circuit of claim 4, further comprising:a logic domain power rail;a plurality of logic power domains corresponding to the plurality of subsystems, each subsystem including a corresponding one of the logic power domains, wherein each logic power domain is coupled to the logic domain power rail.14. 
A method, comprising:powering a first power rail with an active mode power supply voltage;powering a second power rail with a sleep mode power supply voltage, wherein the active mode power supply voltage is greater than the sleep mode power supply voltage;for a subsystem within an integrated circuit, coupling an embedded memory power domain in the subsystem to the first power rail while the subsystem operates in an active mode; andcoupling the embedded memory power domain to the second power rail while the subsystem operates in a sleep mode.15. The method of claim 14, further comprising temporarily decoupling both the second power rail and the first power rail from an embedded memory power domain in a transitioning subsystem in the integrated circuit when the transitioning subsystem transitions between the active mode and sleep mode.16. The method of claim 14, wherein the subsystem is included within an integrated circuit having a processor embedded memory power domain coupled to the second power rail, the method further comprising powering the second power rail through a switch-mode power supply.17. An integrated circuit, comprising:a first power rail configured to supply an active-mode power supply voltage; a second power rail configured to supply a sleep-mode power supply voltage, wherein the active-mode power supply voltage is greater than the sleep-mode power supply voltage;a low-power domain having a plurality of subsystems, each subsystem including a memory power domain including one or more embedded memories; andmeans for selectively coupling active ones of the memory power domains to the first power rail and for selectively coupling dormant ones of the memory power domains to the second power rail.18. The integrated circuit of claim 17, further comprising a system-on-a-chip (SoC) memory power domain coupled to the second power rail.19. The integrated circuit of claim 18, wherein the integrated circuit is incorporated in a system including a switch-mode power supply configured to power the second power rail with a sleep-mode power supply voltage.20. The integrated circuit of claim 18, wherein the plurality of subsystems includes at least one sensor subsystem. |
POWER MULTIPLEXER FOR AN INTEGRATED CIRCUITCROSS REFERENCE TO RELATED APPLICATIONS[0001] This application claims priority to U.S. Application No. 14/836,694 filed August 26, 2015.TECHNICAL FIELD[0002] This relates to integrated circuit power management, and more particularly to a power multiplexer for increased integrated circuit power efficiency.BACKGROUND[0003] System on a chip (SoC) integrated circuits include assorted subsystems. For example, a smart phone SoC may integrate a modem, a graphics processor, Bluetooth, WiFi, and other subsystems. Each of these subsystems will typically have different timing requirements with regard to entering sleep mode, active mode, or shutdown as compared to the timing requirements for the SoC processor. Given these different timing requirements, it is conventional to power the subsystems independently from the SoC processor. For example, the subsystems may be organized into a "low-power island" powered by two power rails: an island embedded memory (MX) power rail and an island core logic (CX) power rail. The processor would similarly be powered by a processor CX power rail and a processor MX power rail.[0004] Each MX power rail provides the power supply voltage to the various embedded memories within a corresponding embedded memory power domain. The island MX power rail thus provides power to an island embedded memory power domain within the low-power island. Similarly, the processor MX power rail provides power to a processor embedded memory power domain for the SoC processor. In contrast, each CX power rail provides the power supply voltage to the core logic within a corresponding core logic power domain. The island CX power rail thus provides the power supply voltage to the core logic within an island core logic power domain in the low-power island whereas the processor CX power rail provides the power supply voltage to the core logic for the SoC processor. In general, the voltage levels required by the embedded memory power domains are different from those for the core logic power domains. For example, embedded memories require a higher power supply voltage to retain their stored values in the sleep mode as compared to the reduced power supply voltage for powering the logic gates in a sleep mode. If a common power rail were used for both the embedded memories and the core logic, the core logic would waste power during the sleep mode from, for example, unnecessary leakage current loss due to the elevated power supply voltage that would be required to maintain the stored states in the embedded memories. Having independent memory and core logic power domains thus saves power. However, the power grid formed by the conventional memory and core logic power domains faces several challenges that may be better appreciated through the following discussion of a conventional SoC 100 as shown in Figure 1.[0005] SoC 100 includes a low-power island 110 that includes corresponding subsystems. For example, low-power island 110 may include a sensor sub-system 114 that includes an island CX power domain 111 powered by an island CX power rail 115. In addition, low-power island 110 includes an MX power domain 112 powered by an island embedded memory (MX) power rail 120. An SoC processor (not illustrated) in the remainder of SoC 100 includes an SoC MX power domain 120 powered by an SoC MX power rail 130. A CX power domain and corresponding CX power rail for the SoC processor is not shown for illustration clarity. 
A power management integrated circuit (PMIC) 105 powers the various power rails within SoC 100. For example, PMIC 105 includes a dedicated switch-mode power supply (MX SMPS) 135 to provide power to SoC MX power rail 130. But switch-mode power supplies are relatively expensive in terms of die area demands so an island SMPS 140 is shared by both island CX power rail 115 and island MX power rail 120. Since the island MX and CX power supply voltages may be different as discussed above, each power rail 115 and 120 couples to island SMPS 140 through a corresponding island linear drop-out regulator (LDO) 150 and 145, respectively. Since each island power rail 115 and 120 has its own corresponding island LDO, their voltages may be independently controlled despite being commonly powered by island SMPS 140. Low-power island 110 is advantageous in that its island power rails 115 and 120 may be placed into sleep mode while the SoC processor is still in active mode. In this fashion, power is not needlessly wasted with regard to supplying low-power island 110 with active-level power supply voltages simply because the SoC processor is active.[0006] Island CX power rail 115 may be completely collapsed (discharged to ground) in the sleep mode. In contrast, island MX power domain 112 would lose its state if the MX power supply voltage on island MX power rail 120 were collapsed during sleep mode. Thus, the MX power supply voltage is maintained at a retention level during the sleep mode for low-power island 110. The MX power supply voltage carried on island MX power rail 120 must thus transition from the active mode power supply voltage level to the retention power supply voltage level when low-power island 110 transitions into the sleep mode. But note that island CX power rail 115, island MX power rail 120 (as well as SoC MX power rail 130) each requires a decoupling capacitor (C) to provide instantaneous power should the corresponding CX or MX power domain suddenly demand power. The capacitances of these decoupling capacitors are relatively large so that the instantaneous power demands may be met. A relatively large amount of charge must thus be discharged to ground from island MX power rail 120 when low-power island 110 transitions to the sleep mode, which reduces battery life accordingly. In addition, island MX LDO 145 wastes power when converting an active-mode power supply voltage from island SMPS 140 to the retention voltage for island MX power rail 120. Another problem with regard to SoC 100 is that the efficiency of a switch-mode power supply such as SMPS 140 tends to drop dramatically at the reduced current output levels associated with the sleep mode of operation for low-power island 110. The reduced power grid efficiency can be quite dramatic.[0007] Accordingly, there is a need in the art for improved power architectures for integrated circuits including independently-powered subsystems.SUMMARY[0008] A low-power island is provided that includes at least one subsystem. Each subsystem includes an embedded memory (MX) power domain having a power supply node. Each subsystem associates with a corresponding power multiplexer that couples a selected one of an active-mode MX power rail and a sleep-mode MX power rail to a power supply node for the subsystem's embedded memory power domain. A power source such as a power management integrated circuit powers the active-mode MX power rail with an active-mode MX power supply voltage. 
Similarly, the power source powers the sleep-mode MX power rail with a sleep-mode MX power supply voltage that is less than the active-mode MX power supply voltage. [0009] Each power multiplexer may select for the active-mode MX power rail while the corresponding subsystem operates in an active mode. Conversely, each power multiplexer may select for the sleep-mode MX power rail while the corresponding subsystem operates in a sleep mode. Since the sleep-mode MX power supply voltage and the active-mode MX power supply voltage need not be changed during shifts between the active mode and the sleep mode, there is no waste of power on associated decoupling capacitors for the corresponding power rails during these mode transitions.[0010] The sleep-mode MX power rail may also power an embedded memory power domain outside of the low-power island such as a processor embedded memory power domain. This aggregation of the low-power island memory power domains with the processor embedded memory power domain improves the efficiency of a switch-mode power supply supplying power to the sleep-mode MX power rail.[0011] These and additional advantageous features may be better appreciated with regard to the following detailed description of example implementations.BRIEF DESCRIPTION OF THE DRAWINGS[0012] Figure 1 is a block diagram of a conventional SoC integrated circuit including a low-power island.[0013] Figure 2 is a block diagram of an SoC integrated circuit including a low-power island in accordance with an aspect of the disclosure.[0014] Figure 3 is a flow chart for a method of operation for an SoC including a low-power island in accordance with an aspect of the disclosure.[0015] Aspects of the present disclosure and their advantages are best understood by referring to the detailed description that follows. It should be appreciated that like reference numerals are used to identify like elements illustrated in one or more of the figures.DETAILED DESCRIPTION[0016] An example system 200 with a power grid architecture that addresses the problems associated with conventional independent power domains is shown in Figure 2. System 200 includes an integrated circuit such as a system-on-a-chip (SoC) 205 in which a low-power island 210 includes one or more subsystems. For example, low-power island 210 may include a sensor subsystem 215, a wireless interface subsystem 220, and an always-on power management (AOP) subsystem 225. Each subsystem includes a core logic (CX) power domain such as illustrated by island CX power domains 230 and 240 in subsystems 215 and 220, respectively. For illustration clarity, an island CX domain in AOP subsystem 225 is not shown in Figure 2. Each subsystem 215, 220, and 225 also includes an island embedded memory (MX) power domain 245, 250, and 255, respectively. SoC 205 also includes an SoC processor (not illustrated) with an SoC embedded memory (MX) power domain 235. As discussed with regard to Figure 1, the various MX power domains in low-power island 210 are not powered by the same power supply voltage as the CX power domains because the island MX power domains retain their state during a sleep mode. In contrast, the power supply voltages for the island CX power domains in low-power island 210 may be completely discharged to ground during the sleep mode. 
However, it will be appreciated that the island CX power domains may remain powered during the sleep mode in alternative implementations.[0017] To address the shortcomings of conventional low-power island architectures such as discussed with regard to Figure 1, each MX domain in low-power island 210 may selectively couple to one of two power rails through a corresponding power multiplexer 280. If a subsystem is in an active (operational) mode of operation, its power multiplexer 280 selects for an active-mode MX power rail 265 that supplies an active-mode MX power supply voltage. In contrast, if a subsystem is in a sleep (retention) mode of operation, its power multiplexer 280 selects for a sleep-mode MX power rail 270 that supplies a sleep-mode MX power supply voltage. These power supply voltages will vary depending upon the requirements of a particular process node but the sleep-mode MX power supply voltage is lower than the active-mode MX power supply voltage across the various process nodes.[0018] In contrast to the island MX power domains, each CX power domain in low-power island 210 such as CX power domains 230 and 240 couples directly to an island CX power rail 285. In that regard, note that the power supply voltage for island CX power rail 285 may be completely collapsed (discharged to ground) during a sleep mode for low-power island 210. In such implementations, there is thus no need for a power multiplexer for island CX power domains 230 and 240 since island CX power rail 285 may be discharged to ground during the sleep mode. But such a discharge is undesirable for the island MX power domains because they need to retain their state during the sleep (retention) mode of operation. In contrast to conventional architectures such as discussed with regard to Figure 1, a decoupling capacitor (C) for active-mode MX power rail 265 does not needlessly discharge to ground when low-power island 210 switches from the active mode to the sleep mode because the active-mode MX power supply voltage supplied by active-mode MX power rail 265 does not change in response to such mode shifts. The corresponding decoupling capacitor C (or capacitors) for active-mode MX power rail 265 will thus not waste charge during the mode transitions from active mode to sleep mode for subsystems within low-power island 210. Similarly, a decoupling capacitor C for sleep-mode MX power rail 270 need not be discharged to ground during the mode transitions.[0019] To eliminate the power grid efficiency issues that conventional integrated circuit power grid architectures have when a switch-mode power supply must support a relatively low amount of output current during the sleep mode, sleep-mode MX power rail 270 may also supply power to SoC MX power domain 235. A power management integrated circuit (PMIC) 260 includes a switch-mode power supply 295 for powering sleep-mode MX power rail 270. Although the current drawn by any given MX power domain during the sleep mode is relatively small, the aggregation of SoC MX power domain 235 with island MX power domains 245, 250, and 255 with respect to being powered by sleep-mode MX power rail 270 greatly improves the efficiency of switch-mode power supply 295 as compared to the efficiency of conventional switch-mode power supply 135 during the sleep mode. 
In particular, note that only the low-power island MX power domains are powered by switch-mode power supply 135 in conventional SoC 100 during the sleep mode whereas all the MX power domains across SoC 205 may be powered by switch-mode power supply 295 during the sleep mode. Thus, switch-mode power supply 295 operates at higher efficiency due to its larger output current in the sleep mode as compared to conventional architectures.[0020] An island switch-mode power supply 290 in PMIC 260 powers island CX power rail 285 and active-mode MX power rail 265 through island linear drop-out regulators 292 and 291, respectively. This is advantageous as compared to conventional architectures since there is no linear drop-out regulator power loss with regard to down-converting an active mode power supply voltage into a sleep-mode power supply voltage. In contrast, note that linear drop-out regulator 145 in conventional SoC 100 of Figure 1 wastes power during the sleep mode because it must drop the active-mode power supply voltage from switch-mode power supply 140 to the sleep-mode power supply voltage. In addition, power multiplexers 280 allow the various subsystems in low-power island 210 to independently operate in the active and sleep modes. In this fashion, power is not wasted by needlessly supplying an active-mode power supply voltage to a dormant subsystem merely because another subsystem is in the active mode of operation. In addition, the transition from sleep mode to active mode for a dormant subsystem merely requires its power multiplexer 280 to select for active-mode MX power rail 265. This reduces the latency with regard to waking up into the active mode and thus conserves additional power as compared to conventional architectures.[0021] Each power multiplexer 280 may comprise any suitable set of switches such as a parallel arrangement of PMOS transistors. Alternatively, transmission gates may also be used to form power multiplexers 280. In some implementations, the plurality of power multiplexers 280 may be deemed to comprise a means for selectively coupling active ones of the island embedded memory (MX) power domains to active-mode MX power rail 265 and for selectively coupling dormant ones of the island embedded memory (MX) power domains to sleep-mode MX power rail 270. With regard to controlling power multiplexers 280, a suitable control circuit such as always-on subsystem 225 may control their operation. Alternatively, the SoC processor or an SoC state machine (not illustrated) may control power multiplexers 280.[0022] Regardless of where the controller for power multiplexers 280 is located, it may be configured to prevent the simultaneous coupling of both sleep-mode MX power rail 270 and active-mode MX power rail 265 to any given island MX domain because such simultaneous coupling could cause active-mode MX rail 265 to undesirably discharge to sleep-mode MX rail 270. Thus, each power multiplexer 280 may be controlled to temporarily disengage the power supply node to its corresponding MX power domain from both active-mode MX power rail 265 and sleep-mode MX power rail 270 during any mode transition (either active mode to sleep mode or sleep mode to active mode), as sketched below. 
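The sequencing just described is a break-before-make discipline. The sketch below models only the control ordering, assuming a hypothetical mux object with a set_switch() interface; an actual design would realize this in hardware rather than software.

    import time

    ACTIVE_RAIL, SLEEP_RAIL = "MX_265", "MX_270"

    def switch_mx_rail(mux, target_rail, disengage_s=50e-9):
        """Break-before-make: never couple both rails to the MX power supply
        node at once, which would discharge rail 265 into rail 270."""
        mux.set_switch(ACTIVE_RAIL, closed=False)
        mux.set_switch(SLEEP_RAIL, closed=False)
        time.sleep(disengage_s)   # brief float; domain rides its residual charge
        mux.set_switch(target_rail, closed=True)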
The disengagement period may be relatively brief (e.g., tens of nanoseconds) such that the corresponding MX power domain can continue to operate off the residual voltage on its power supply node until the power multiplexer 280 engages the desired power rail.[0023] A method of operation for a low-power island will now be discussed. The method includes an act 300 of powering a first power rail with an active mode power supply voltage as well as an act 305 of powering a second power rail with a sleep mode power supply voltage, wherein the active mode power supply voltage is greater than the sleep mode power supply voltage. The powering of active-mode MX power rail 265 is an example of act 300 whereas the powering of sleep-mode MX power rail 270 is an example of act 305.[0024] The method further comprises an act 310 that is performed for a subsystem within an integrated circuit and comprises coupling an embedded memory power domain in the subsystem to the first power rail while the subsystem operates in an active mode. The selection of active-mode MX power rail 265 by one of the power multiplexers 280 is an example of act 310.[0025] Finally, the method also includes an act 315 that comprises coupling the embedded memory power domain to the second power rail while the subsystem operates in a sleep mode. The selection of sleep-mode MX power rail 270 by one of the power multiplexers 280 is an example of act 315.[0026] As those of skill in this art will by now appreciate, and depending on the particular application at hand, many modifications, substitutions, and variations can be made in and to the materials, apparatus, configurations and methods of use of the devices of the present disclosure without departing from the spirit and scope thereof. In light of this, the scope of the present disclosure should not be limited to that of the particular implementations illustrated and described herein, as they are merely by way of some examples thereof, but rather, should be fully commensurate with that of the claims appended hereafter and their functional equivalents. |
The invention provides an apparatus and a method for right-shifting packed quadword data and extracting packed words. For example, one embodiment of a processor comprises: a decoder to decode a right-shift instruction to generate a decoded right-shift instruction; a first source register to store a plurality of packed quadword data elements, each of the packed quadword data elements including a sign bit; execution circuitry to execute the decoded right-shift instruction, the execution circuitry comprising shift circuitry with sign preservation logic to right-shift first and second packed quadword data elements from first and second packed quadword data element locations, respectively, in the first source register by an amount specified in an immediate value or in a control value in a second source register, the right-shifting to generate first and second right-shifted quadwords, the sign preservation logic to shift in the sign bit to any bit positions exposed by the right-shifting of the first and second quadwords; the execution circuitry to cause selection of 16 most significant bits of the first and second right-shifted quadwords, including the sign bit, to be written to 16 least significant bit regions of first and second quadword data element locations, respectively, of a destination register. |
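The base operation this abstract describes can be modeled in a few lines of C. The sketch below is a behavioral illustration only, not the claimed circuitry; the function name is invented, an arithmetic `>>` on signed values is assumed (true of mainstream compilers, though implementation-defined in C), and the abstract does not state what the upper 48 destination bits hold, so this sketch zeroes them:

```c
#include <stdint.h>

/* Behavioral sketch: right-shift two packed signed quadwords with sign
 * preservation, then write the 16 most significant bits of each result
 * (sign bit included) to the 16 least significant bits of the
 * corresponding destination quadword. */
void shift_right_extract_words(const int64_t src[2], unsigned count,
                               uint64_t dst[2])
{
    for (int lane = 0; lane < 2; ++lane) {
        /* Arithmetic shift replicates the sign bit into every bit
         * position exposed by the right shift. */
        int64_t shifted = src[lane] >> (count & 0x3F); /* 6-bit count */

        /* Select the 16 most significant bits, sign bit included. */
        uint16_t hi16 = (uint16_t)((uint64_t)shifted >> 48);

        dst[lane] = hi16; /* upper destination bits zeroed (assumption) */
    }
}
```

For example, with `src[0] = -1` (all ones), any shift count leaves `hi16 = 0xFFFF`, since every exposed bit position is filled with the sign bit.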
1. A processor comprising: a decoder to decode an instruction; a first source register to store a plurality of packed quadword data elements, each of the plurality of packed quadword data elements including a sign bit; and execution circuitry to execute the decoded instruction, the execution circuitry comprising shift circuitry with sign preservation logic to right-shift a first packed quadword data element and a second packed quadword data element from a first packed quadword data element location and a second packed quadword data element location, respectively, of the first source register by an amount specified in an immediate value or in a control value in a second source register, the right-shifting to generate a first right-shifted quadword and a second right-shifted quadword; the sign preservation logic to shift the sign bit into any bit positions exposed by the right-shifting of the first and second quadwords; the execution circuitry to cause selection of the 16 most significant bits of the first right-shifted quadword and the second right-shifted quadword, including the sign bit, to be written to the 16 least significant bit regions of a first quadword data element location and a second quadword data element location, respectively, of a destination register.2. The processor of claim 1, further comprising: rounding circuitry to perform a rounding operation on the 16 most significant bits of the first right-shifted quadword and the second right-shifted quadword in accordance with a rounding mode specified in a control register.3. The processor of claim 1 or 2, further comprising: saturation circuitry to saturate values encoded in the 16 most significant bits of the first right-shifted quadword and the second right-shifted quadword before they are stored in the destination register.4. The processor of claim 3, wherein one or more saturation flags are to be updated responsive to saturation of the 16 most significant bits of the first right-shifted quadword or the second right-shifted quadword.5. The processor of claim 1 or 4, wherein, if the amount specified in the immediate value or in the control value is above a threshold number, the 16 most significant bits of the first right-shifted quadword and the second right-shifted quadword all comprise the value of the sign bit.6. The processor of claim 1 or 5, wherein the immediate value or the control value in the second source register comprises a 6-bit value to indicate the shift amount.7. The processor of claim 1 or 6, wherein the first source register and the destination register comprise 128-bit packed data registers.8. The processor of claim 7, wherein the 128-bit packed data registers comprise xmm registers.9. A method comprising: decoding an instruction; storing a plurality of packed quadword data elements in a first source register, each of the plurality of packed quadword data elements including a sign bit; and executing the decoded instruction by: right-shifting a first packed quadword data element and a second packed quadword data element from a first packed quadword data element location and a second packed quadword data element location, respectively, of the first source register by an amount specified in an immediate value or in a control value in a second source register, the right-shifting to generate a first right-shifted quadword and a second right-shifted quadword; shifting the sign bit into any bit positions exposed by the right-shifting of the first and second quadwords; and selecting the 16 most significant bits of the first right-shifted quadword and the second right-shifted quadword, including the sign bit, to be written to the 16 least significant bit regions of a first quadword data element location and a second quadword data element location, respectively, of a destination register.10. The method of claim 9, further comprising: performing a rounding operation on the 16 most significant bits of the first right-shifted quadword and the second right-shifted quadword in accordance with a rounding mode specified in a control register.11. The method of claim 9 or 10, further comprising: saturating values encoded in the 16 most significant bits of the first right-shifted quadword and the second right-shifted quadword before they are stored in the destination register.12. The method of claim 11, wherein one or more saturation flags are updated responsive to saturation of the 16 most significant bits of the first right-shifted quadword or the second right-shifted quadword.13. The method of claim 9 or 12, wherein, if the amount specified in the immediate value or in the control value is above a threshold number, the 16 most significant bits of the first right-shifted quadword and the second right-shifted quadword all comprise the value of the sign bit.14. The method of claim 9 or 13, wherein the immediate value or the control value in the second source register comprises a 6-bit value to indicate the shift amount.15. The method of claim 9 or 14, wherein the first source register and the destination register comprise 128-bit packed data registers.16. The method of claim 15, wherein the 128-bit packed data registers comprise xmm registers.17. A machine-readable medium having program code stored thereon which, when executed by a machine, causes the machine to: decode an instruction; store a plurality of packed quadword data elements in a first source register, each of the plurality of packed quadword data elements including a sign bit; and execute the decoded instruction by: right-shifting a first packed quadword data element and a second packed quadword data element from a first packed quadword data element location and a second packed quadword data element location, respectively, of the first source register by an amount specified in an immediate value or in a control value in a second source register, the right-shifting to generate a first right-shifted quadword and a second right-shifted quadword; shifting the sign bit into any bit positions exposed by the right-shifting of the first and second quadwords; and selecting the 16 most significant bits of the first right-shifted quadword and the second right-shifted quadword, including the sign bit, to be written to the 16 least significant bit regions of a first quadword data element location and a second quadword data element location, respectively, of a destination register.18. The machine-readable medium of claim 17, further comprising program code to cause the machine to: perform a rounding operation on the 16 most significant bits of the first right-shifted quadword and the second right-shifted quadword in accordance with a rounding mode specified in a control register.19. The machine-readable medium of claim 17 or 18, further comprising program code to cause the machine to: saturate values encoded in the 16 most significant bits of the first right-shifted quadword and the second right-shifted quadword before they are stored in the destination register.20. The machine-readable medium of claim 19, wherein one or more saturation flags are updated responsive to saturation of the 16 most significant bits of the first right-shifted quadword or the second right-shifted quadword.21. The machine-readable medium of claim 17 or 20, wherein, if the amount specified in the immediate value or in the control value is above a threshold number, the 16 most significant bits of the first right-shifted quadword and the second right-shifted quadword all comprise the value of the sign bit.22. The machine-readable medium of claim 17 or 21, wherein the immediate value or the control value in the second source register comprises a 6-bit value to indicate the shift amount.23. The machine-readable medium of claim 17 or 22, wherein the first source register and the destination register comprise 128-bit packed data registers.24. The machine-readable medium of claim 23, wherein the 128-bit packed data registers comprise xmm registers.25. An apparatus comprising: means for decoding an instruction; means for storing a plurality of packed quadword data elements in a first source register, each of the plurality of packed quadword data elements including a sign bit; means for executing the decoded instruction by right-shifting a first packed quadword data element and a second packed quadword data element from a first packed quadword data element location and a second packed quadword data element location, respectively, of the first source register by an amount specified in an immediate value or in a control value in a second source register, the right-shifting to generate a first right-shifted quadword and a second right-shifted quadword; means for shifting the sign bit into any bit positions exposed by the right-shifting of the first and second quadwords; and means for selecting the 16 most significant bits of the first right-shifted quadword and the second right-shifted quadword, including the sign bit, to be written to the 16 least significant bit regions of a first quadword data element location and a second quadword data element location, respectively, of a destination register. |
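Claims 2-4 and their method and medium counterparts add rounding and saturation to the base operation. One plausible reading, sketched below in C, rounds the 16-bit field to nearest using the first discarded bit and saturates on overflow; the helper name, the round-to-nearest choice (the claims leave the mode to a control register), and the flag handling are assumptions, not the claimed design:

```c
#include <stdbool.h>
#include <stdint.h>

/* Sketch of the rounding/saturating variant suggested by claims 2-4.
 * shifted is a right-shifted quadword; its 16 MSBs are rounded using
 * the first discarded bit (bit 47), then clamped to the signed 16-bit
 * range, recording a saturation flag on overflow. */
static uint16_t round_saturate_hi16(int64_t shifted, bool *sat_flag)
{
    int32_t field = (int32_t)(shifted >> 48);       /* 16 MSBs, sign-extended */
    int32_t round = (int32_t)((shifted >> 47) & 1); /* first discarded bit    */
    int32_t value = field + round;                  /* round to nearest       */

    if (value > INT16_MAX) {        /* only overflow direction possible here */
        value = INT16_MAX;
        *sat_flag = true;           /* claim 4: update a saturation flag */
    }
    return (uint16_t)value;
}
```

Note that claim 5's behavior falls out of sign preservation: once the shift amount exceeds the threshold, all 16 selected bits are copies of the sign bit.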
Apparatus and method for shifting packed quadwords and extracting packed wordsBACKGROUNDTechnical FieldEmbodiments of the invention generally relate to the field of computer processors. More specifically, the embodiments relate to apparatus and methods for shifting packed data elements and extracting packed data elements.Description of Related ArtAn instruction set, or instruction set architecture (ISA), is the part of the computer architecture related to programming, including the native data types, instructions, register architecture, addressing modes, memory architecture, interrupt and exception handling, and external input and output (I/O). It should be noted that the term "instruction" generally refers herein to a macro-instruction - that is, an instruction that is provided to the processor for execution - as opposed to a micro-instruction or micro-op, which is the result of a processor's decoder decoding macro-instructions. A micro-instruction or micro-op can be configured to instruct an execution unit on the processor to perform operations to implement the logic associated with the macro-instruction.The ISA is distinguished from the microarchitecture, which is the set of processor design techniques used to implement the instruction set. Processors with different microarchitectures can share a common instruction set. For example, Pentium 4 processors, Core processors, and multiple processors from Advanced Micro Devices, Inc. of Sunnyvale, California implement nearly identical versions of the x86 instruction set (with some extensions that have been added with newer versions), but have different internal designs. For example, the same register architecture of the ISA may be implemented in different ways in different microarchitectures using well-known techniques, including dedicated physical registers and one or more dynamically allocated physical registers using a register renaming mechanism (e.g., the use of a register alias table (RAT), a reorder buffer (ROB), and a retirement register file). Unless otherwise specified, the phrases "register architecture," "register file," and "register" are used herein to refer to that which is visible to the software/programmer and the manner in which instructions specify registers. Where a distinction is required, the adjectives "logical," "architectural," or "software visible" will be used to indicate registers/register files in the register architecture, while different adjectives will be used to designate registers in a given microarchitecture (e.g., physical register, reorder buffer, retirement register, register pool).Multiply-accumulate is a common digital signal processing operation which computes the product of two numbers and adds that product to an accumulated value. Existing single instruction multiple data (SIMD) microarchitectures implement multiply-accumulate operations by executing a sequence of instructions. For example, a multiply-accumulate may be performed using a multiply instruction, followed by a 4-way addition, and then an accumulation with the destination quadword data to generate two 64-bit saturated results.
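As a concrete illustration of that instruction sequence, the following C sketch performs the same multiply-accumulate pattern in scalar code. The element layout and the four-products-per-lane grouping are illustrative assumptions, not the patent's exact instruction semantics:

```c
#include <stdint.h>

/* Saturating 64-bit addition: clamp instead of wrapping on overflow. */
static int64_t sat_add64(int64_t a, int64_t b)
{
    if (b > 0 && a > INT64_MAX - b) return INT64_MAX;
    if (b < 0 && a < INT64_MIN - b) return INT64_MIN;
    return a + b;
}

/* Multiply packed 16-bit elements, then accumulate four products into
 * each of two 64-bit destination quadwords with signed saturation. */
void mac_packed16(const int16_t x[8], const int16_t y[8], int64_t acc[2])
{
    for (int i = 0; i < 8; ++i) {
        int64_t prod = (int64_t)x[i] * (int64_t)y[i];
        acc[i / 4] = sat_add64(acc[i / 4], prod); /* 4 products per lane */
    }
}
```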
DRAWINGSA better understanding of the present invention can be obtained from the following detailed description in conjunction with the following drawings, in which:Figures 1A and 1B are block diagrams illustrating a generic vector friendly instruction format and instruction templates thereof according to embodiments of the invention;Figures 2A-2C are block diagrams illustrating an exemplary VEX instruction format according to embodiments of the invention;Figure 3 is a block diagram of a register architecture according to one embodiment of the invention;Figure 4A is a block diagram illustrating both an exemplary in-order fetch, decode, retire pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the invention;Figure 4B is a block diagram illustrating both an exemplary embodiment of an in-order fetch, decode, retire core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the invention;Figure 5A is a block diagram of a single processor core, along with its connection to an on-die interconnect network;Figure 5B illustrates an expanded view of part of the processor core in Figure 5A according to embodiments of the invention;Figure 6 is a block diagram of a single core processor and a multicore processor with integrated memory controller and graphics according to embodiments of the invention;Figure 7 illustrates a block diagram of a system in accordance with one embodiment of the present invention;Figure 8 illustrates a block diagram of a second system in accordance with an embodiment of the present invention;Figure 9 illustrates a block diagram of a third system in accordance with an embodiment of the present invention;Figure 10 illustrates a block diagram of a system on a chip (SoC) in accordance with an embodiment of the present invention;Figure 11 illustrates a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the invention;Figure 12 illustrates a processor architecture on which embodiments of the invention may be implemented;Figure 13 illustrates a plurality of packed data elements containing real and complex values in accordance with one embodiment;Figure 14 illustrates a packed data processing architecture in accordance with one embodiment of the invention;Figure 15 illustrates one embodiment of performing a right shift of quadword data elements based on values in an immediate;Figure 16 illustrates one embodiment of performing a right shift of quadword data elements based on values in a source register;Figure 17 illustrates a method for performing a right shift of quadword data elements in accordance with one embodiment of the invention;Figure 18 illustrates a method for performing a right shift of quadword data elements in accordance with another embodiment of the invention;Figure 19 illustrates one embodiment of performing a left shift of quadword data elements based on values in an immediate;Figure 20 illustrates one embodiment of performing a left shift of quadword data elements based on values in a source register;Figure 21 illustrates a method for performing a left shift of quadword data elements in accordance with one embodiment of the invention;Figure 22 illustrates a method for performing a left shift of quadword data elements in accordance with another embodiment of the invention.Detailed DescriptionIn the following description, numerous specific details are set forth. However, it will be apparent to one skilled in the art that embodiments of the invention may be practiced without some of these specific details. In other instances, well-known structures and devices are shown in block diagram form to avoid obscuring the underlying principles of the embodiments of the invention.Exemplary Processor Architectures, Instruction Formats, and Data TypesAn instruction set includes one or more instruction formats. A given instruction format defines various fields (number of bits, location of bits) to specify, among other things, the operation to be performed (opcode) and the operand(s) on which that operation is to be performed. Some instruction formats are further broken down through the definition of instruction templates (or subformats). For example, the instruction templates of a given instruction format may be defined to have different subsets of the instruction format's fields (the included fields are typically in the same order, but at least some have different bit positions because fewer fields are included) and/or defined to have a given field interpreted differently. Thus, each instruction of an ISA is expressed using a given instruction format (and, if defined, in a given one of the instruction templates of that instruction format) and includes fields for specifying the operation and the operands. For example, an exemplary ADD instruction has a specific opcode and an instruction format that includes an opcode field to specify that opcode and operand fields to select operands (source1/destination and source2); and an occurrence of this ADD instruction in an instruction stream will have specific contents in the operand fields that select specific operands.Embodiments of the instruction(s) described herein may be embodied in different formats. Additionally, exemplary systems, architectures, and pipelines are detailed below. Embodiments of the instruction(s) may be executed on such systems, architectures, and pipelines, but are not limited to those detailed.Generic Vector Friendly Instruction FormatA vector friendly instruction format is an instruction format that is suited for vector instructions (e.g., there are certain fields specific to vector operations). While embodiments are described in which both vector and scalar operations are supported through the vector friendly instruction format, alternative embodiments use only vector operations through the vector friendly instruction format.Figures 1A-1B are block diagrams illustrating a generic vector friendly instruction format and instruction templates thereof according to embodiments of the invention. Figure 1A is a block diagram illustrating a generic vector friendly instruction format and class A instruction templates thereof according to embodiments of the invention, while Figure 1B is a block diagram illustrating the generic vector friendly instruction format and class B instruction templates thereof according to embodiments of the invention.
In particular, class A and class B instruction templates are defined for the generic vector friendly instruction format 100, both of which include no memory access 105 instruction templates and memory access 120 instruction templates. The term generic in the context of the vector friendly instruction format refers to the instruction format not being tied to any specific instruction set.While embodiments of the invention will be described in which the vector friendly instruction format supports the following: a 64 byte vector operand length (or size) with 32 bit (4 byte) or 64 bit (8 byte) data element widths (or sizes) (and thus, a 64 byte vector consists of either 16 doubleword-size elements or, alternatively, 8 quadword-size elements); a 64 byte vector operand length (or size) with 16 bit (2 byte) or 8 bit (1 byte) data element widths (or sizes); a 32 byte vector operand length (or size) with 32 bit (4 byte), 64 bit (8 byte), 16 bit (2 byte), or 8 bit (1 byte) data element widths (or sizes); and a 16 byte vector operand length (or size) with 32 bit (4 byte), 64 bit (8 byte), 16 bit (2 byte), or 8 bit (1 byte) data element widths (or sizes); alternative embodiments may support larger, smaller, and/or different vector operand sizes (e.g., 256 byte vector operands) with larger, smaller, or different data element widths (e.g., 128 bit (16 byte) data element widths).The class A instruction templates in Figure 1A include: 1) within the no memory access 105 instruction templates, a no memory access, full round control type operation 110 instruction template and a no memory access, data transform type operation 115 instruction template; and 2) within the memory access 120 instruction templates, a memory access, temporal 125 instruction template and a memory access, non-temporal 130 instruction template. The class B instruction templates in Figure 1B include: 1) within the no memory access 105 instruction templates, a no memory access, write mask control, partial round control type operation 112 instruction template and a no memory access, write mask control, VSIZE type operation 117 instruction template; and 2) within the memory access 120 instruction templates, a memory access, write mask control 127 instruction template.The generic vector friendly instruction format 100 includes the following fields listed below in the order illustrated in Figures 1A-1B.Format field 140 - a specific value (an instruction format identifier value) in this field uniquely identifies the vector friendly instruction format, and thus occurrences of instructions in the vector friendly instruction format in instruction streams. As such, this field is optional in the sense that it is not needed for an instruction set that has only the generic vector friendly instruction format.Base operation field 142 - its content distinguishes different base operations.Register index field 144 - its content, directly or through address generation, specifies the locations of the source and destination operands, be they in registers or in memory. These include a sufficient number of bits to select N registers from a PxQ (e.g., 32x512, 16x128, 32x1024, 64x1024) register file. While in one embodiment N may be up to three source registers and one destination register, alternative embodiments may support more or fewer source and destination registers (e.g., may support up to two sources where one of these sources also acts as the destination, may support up to three sources where one of these sources also acts as the destination, or may support up to two sources and one destination).Modifier field 146 - its content distinguishes occurrences of instructions in the generic vector instruction format that specify memory access from those that do not; that is, between no memory access 105 instruction templates and memory access 120 instruction templates. Memory access operations read and/or write to the memory hierarchy (in some cases specifying the source and/or destination addresses using values in registers), while non-memory access operations do not (e.g., the source and destination are registers). While in one embodiment this field also selects between three different ways to perform memory address calculations, alternative embodiments may support more, fewer, or different ways to perform memory address calculations.Augmentation operation field 150 - its content distinguishes which one of a variety of different operations is to be performed in addition to the base operation. This field is context specific. In one embodiment of the invention, this field is divided into a class field 168, an alpha field 152, and a beta field 154. The augmentation operation field 150 allows common groups of operations to be performed in a single instruction rather than 2, 3, or 4 instructions.Scale field 160 - its content allows for the scaling of the index field's content for memory address generation (e.g., for address generation that uses 2^scale * index + base).Displacement field 162A - its content is used as part of memory address generation (e.g., for address generation that uses 2^scale * index + base + displacement).Displacement factor field 162B (note that the juxtaposition of displacement field 162A directly over displacement factor field 162B indicates that one or the other is used) - its content is used as part of address generation; it specifies a displacement factor that is to be scaled by the size of a memory access (N), where N is the number of bytes in the memory access (e.g., for address generation that uses 2^scale * index + base + scaled displacement). Redundant low-order bits are ignored and hence, the displacement factor field's content is multiplied by the memory operand's total size (N) in order to generate the final displacement to be used in calculating the effective address. The value of N is determined by the processor hardware at runtime based on the full opcode field 174 (described later herein) and the data manipulation field 154C. The displacement field 162A and the displacement factor field 162B are optional in the sense that they are not used for the no memory access 105 instruction templates, and/or different embodiments may implement only one or neither of the two.Data element width field 164 - its content distinguishes which one of a number of data element widths is to be used (in some embodiments for all instructions; in other embodiments for only some of the instructions).
If only one data element width is supported and/or data element widths are supported using some aspect of the opcodes, this field is unneeded, in the sense that it is optional.Write mask field 170 - its content controls, on a per data element position basis, whether that data element position in the destination vector operand reflects the result of the base operation and augmentation operation. Class A instruction templates support merging-writemasking, while class B instruction templates support both merging- and zeroing-writemasking. When merging, vector masks allow any set of elements in the destination to be protected from updates during the execution of any operation (specified by the base operation and the augmentation operation); in one embodiment, the old value of each element of the destination where the corresponding mask bit has a 0 is preserved. In contrast, when zeroing, vector masks allow any set of elements in the destination to be zeroed during the execution of any operation (specified by the base operation and the augmentation operation); in one embodiment, an element of the destination is set to 0 when the corresponding mask bit has a 0 value. A subset of this functionality is the ability to control the vector length of the operation being performed (that is, the span of elements being modified, from the first to the last); however, the elements that are modified are not necessarily consecutive. Thus, write mask field 170 allows for partial vector operations, including loads, stores, arithmetic, logical, and so on. While embodiments of the invention are described in which the content of write mask field 170 selects one of a number of write mask registers that contains the write mask to be used (and thus the content of write mask field 170 indirectly identifies the masking to be performed), alternative embodiments instead or additionally allow the content of write mask field 170 to directly specify the masking to be performed.Immediate field 172 - its content allows for the specification of an immediate. This field is optional in the sense that it is not present in an implementation of the generic vector friendly format that does not support immediates and it is not present in instructions that do not use an immediate.Class field 168 - its content distinguishes between different classes of instructions. Referring to Figures 1A-1B, the content of this field selects between class A and class B instructions. In Figures 1A-1B, rounded corner squares are used to indicate that a specific value is present in a field (e.g., class A 168A and class B 168B for class field 168, respectively, in Figures 1A-1B).Instruction Templates of Class AIn the case of the non-memory access 105 instruction templates of class A, the alpha field 152 is interpreted as an RS field 152A, whose content distinguishes which one of the different augmentation operation types is to be performed (e.g., round 152A.1 and data transform 152A.2 are respectively specified for the no memory access, round type operation 110 and the no memory access, data transform type operation 115 instruction templates), while the beta field 154 distinguishes which of the operations of the specified type is to be performed. In the no memory access 105 instruction templates, the scale field 160, the displacement field 162A, and the displacement scale field 162B are not present.
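The merging and zeroing forms of write masking described for write mask field 170 above can be summarized in a short sketch. This is a behavioral model with invented names, not the architected operation:

```c
#include <stdint.h>

/* Apply a write mask to a vector result. A set mask bit lets the new
 * element through; a clear bit either preserves the old destination
 * element (merging-writemasking) or forces it to zero
 * (zeroing-writemasking). Supports up to 8 elements here. */
void apply_writemask(int64_t dst[], const int64_t result[],
                     uint8_t mask, int nelems, int zeroing)
{
    for (int i = 0; i < nelems; ++i) {
        if (mask & (1u << i))
            dst[i] = result[i];   /* element position is updated */
        else if (zeroing)
            dst[i] = 0;           /* zeroing: clear the element  */
        /* else merging: dst[i] keeps its old value               */
    }
}
```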
No Memory Access Instruction Templates - Full Round Control Type OperationIn the no memory access, full round control type operation 110 instruction template, the beta field 154 is interpreted as a round control field 154A, whose content(s) provide static rounding. While in the described embodiments of the invention the round control field 154A includes a suppress all floating point exceptions (SAE) field 156 and a round operation control field 158, alternative embodiments may support both of these concepts, may encode both of these concepts into the same field, or may have only one or the other of these concepts/fields (e.g., may have only the round operation control field 158).SAE field 156 - its content distinguishes whether or not to disable exception event reporting; when the content of SAE field 156 indicates that suppression is enabled, a given instruction does not report any kind of floating point exception flag and does not raise any floating point exception handler.Round operation control field 158 - its content distinguishes which one of a group of rounding operations to perform (e.g., round up, round down, round towards zero, and round to nearest). Thus, the round operation control field 158 allows for the changing of the rounding mode on a per instruction basis. In one embodiment of the invention where a processor includes a control register for specifying rounding modes, the content of the round operation control field 150 overrides that register value.No Memory Access Instruction Templates - Data Transform Type OperationIn the no memory access, data transform type operation 115 instruction template, the beta field 154 is interpreted as a data transform field 154B, whose content distinguishes which one of a number of data transforms is to be performed (e.g., no data transform, swizzle, broadcast).In the case of the memory access 120 instruction templates of class A, the alpha field 152 is interpreted as an eviction hint field 152B, whose content distinguishes which one of the eviction hints is to be used (in Figure 1A, temporal 152B.1 and non-temporal 152B.2 are respectively specified for the memory access, temporal 125 instruction template and the memory access, non-temporal 130 instruction template), while the beta field 154 is interpreted as a data manipulation field 154C, whose content distinguishes which one of a number of data manipulation operations (also known as primitives) is to be performed (e.g., no manipulation, broadcast, up conversion of a source, and down conversion of a destination). The memory access 120 instruction templates include the scale field 160 and optionally the displacement field 162A or the displacement scale field 162B.Vector memory instructions perform vector loads from and vector stores to memory, with conversion support. As with regular vector instructions, vector memory instructions transfer data from/to memory in a data element-wise fashion, with the elements that are actually transferred dictated by the content of the vector mask that is selected as the write mask.Memory Access Instruction Templates - TemporalTemporal data is data likely to be reused soon enough to benefit from caching.
However, this is a hint, and different processors may implement it in different ways, including ignoring the hint entirely.Memory Access Instruction Templates - Non-TemporalNon-temporal data is data unlikely to be reused soon enough to benefit from caching in the first-level cache and should be given priority for eviction. This, too, is a hint, and different processors may implement it in different ways, including ignoring the hint entirely.Instruction Templates of Class BIn the case of the instruction templates of class B, the alpha field 152 is interpreted as a write mask control (Z) field 152C, whose content distinguishes whether the write masking controlled by the write mask field 170 should be a merging or a zeroing.In the case of the non-memory access 105 instruction templates of class B, part of the beta field 154 is interpreted as an RL field 157A, whose content distinguishes which one of the different augmentation operation types is to be performed (e.g., round 157A.1 and vector length (VSIZE) 157A.2 are respectively specified for the no memory access, write mask control, partial round control type operation 112 instruction template and the no memory access, write mask control, VSIZE type operation 117 instruction template), while the rest of the beta field 154 distinguishes which of the operations of the specified type is to be performed. In the no memory access 105 instruction templates, the scale field 160, the displacement field 162A, and the displacement scale field 162B are not present.In the no memory access, write mask control, partial round control type operation 112 instruction template, the rest of the beta field 154 is interpreted as a round operation field 159A, and exception event reporting is disabled (a given instruction does not report any kind of floating point exception flag and does not raise any floating point exception handler).Round operation control field 159A - just as with round operation control field 158, its content distinguishes which one of a group of rounding operations to perform (e.g., round up, round down, round towards zero, and round to nearest). Thus, the round operation control field 159A allows for the changing of the rounding mode on a per instruction basis. In one embodiment of the invention where a processor includes a control register for specifying rounding modes, the content of the round operation control field 150 overrides that register value.In the no memory access, write mask control, VSIZE type operation 117 instruction template, the rest of the beta field 154 is interpreted as a vector length field 159B, whose content distinguishes which one of a number of data vector lengths is to be performed on (e.g., 128, 256, or 512 bytes).In the case of the memory access 120 instruction templates of class B, part of the beta field 154 is interpreted as a broadcast field 157B, whose content distinguishes whether or not the broadcast type data manipulation operation is to be performed, while the rest of the beta field 154 is interpreted as the vector length field 159B. The memory access 120 instruction templates include the scale field 160 and optionally the displacement field 162A or the displacement scale field 162B.With regard to the generic vector friendly instruction format 100, a full opcode field 174 is shown including the format field 140, the base operation field 142, and the data element width field 164. While one embodiment is shown where the full opcode field 174 includes all of these fields, in embodiments that do not support all of them, the full opcode field 174 includes fewer than all of these fields. The full opcode field 174 provides the operation code (opcode).
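To keep the many numbered fields straight, the sketch below collects them into one illustrative C structure and shows the address computation that the scale and displacement fields feed. The field widths here are placeholders of our own choosing; the generic format itself does not fix them this way:

```c
#include <stdint.h>

/* Illustrative (not architected) decomposition of the generic vector
 * friendly instruction format 100 into the fields described above. */
typedef struct {
    uint8_t format;       /* format field 140                         */
    uint8_t base_op;      /* base operation field 142                 */
    uint8_t reg_index;    /* register index field 144                 */
    uint8_t modifier;     /* modifier field 146: memory access or not */
    uint8_t class_ab;     /* class field 168: class A or class B      */
    uint8_t alpha;        /* alpha field 152                          */
    uint8_t beta;         /* beta field 154                           */
    uint8_t scale;        /* scale field 160                          */
    int32_t displacement; /* displacement field 162A (or factor 162B) */
    uint8_t elem_width;   /* data element width field 164             */
    uint8_t write_mask;   /* write mask field 170                     */
    int32_t immediate;    /* immediate field 172, when present        */
} GenericVectorInsn;

/* Memory address generation as referenced by the scale and
 * displacement fields: 2^scale * index + base + displacement. */
static uint64_t effective_address(uint64_t base, uint64_t index,
                                  const GenericVectorInsn *insn)
{
    return (((uint64_t)1 << insn->scale) * index) + base
           + (uint64_t)(int64_t)insn->displacement;
}
```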
The augmentation operation field 150, the data element width field 164, and the write mask field 170 allow these features to be specified on a per instruction basis in the generic vector friendly instruction format.The combination of the write mask field and the data element width field creates typed instructions in that they allow the mask to be applied based on different data element widths.The various instruction templates found within class A and class B are beneficial in different situations. In some embodiments of the invention, different processors or different cores within a processor may support only class A, only class B, or both classes. For instance, a high performance general purpose out-of-order core intended for general purpose computing may support only class B, a core intended primarily for graphics and/or scientific (throughput) computing may support only class A, and a core intended for both may support both (of course, a core that has some mix of templates and instructions from both classes, but not all templates and instructions from both classes, is within the purview of the invention). Also, a single processor may include multiple cores, all of which support the same class or in which different cores support different classes. For instance, in a processor with separate graphics and general purpose cores, one of the graphics cores intended primarily for graphics and/or scientific computing may support only class A, while one or more of the general purpose cores may be high performance general purpose cores with out-of-order execution and register renaming intended for general purpose computing that support only class B. Another processor that does not have a separate graphics core may include one or more general purpose in-order or out-of-order cores that support both class A and class B. Of course, features from one class may also be implemented in the other class in different embodiments of the invention. Programs written in a high level language would be put (e.g., just in time compiled or statically compiled) into a variety of different executable forms, including: 1) a form having only instructions of the class(es) supported by the target processor for execution; or 2) a form having alternative routines written using different combinations of the instructions of all classes and having control flow code that selects the routines to execute based on the instructions supported by the processor that is currently executing the code.VEX Instruction FormatVEX encoding allows instructions to have more than two operands and allows SIMD vector registers to be longer than 128 bits. The use of a VEX prefix provides for a three-operand (or more) syntax. For example, previous two-operand instructions performed operations such as A = A + B, which overwrites a source operand. The use of a VEX prefix enables operands to perform nondestructive operations such as A = B + C.Figure 2A illustrates an exemplary AVX instruction format including a VEX prefix 202, a real opcode field 230, a Mod R/M byte 240, an SIB byte 250, a displacement field 262, and an IMM8 272. Figure 2B illustrates which fields from Figure 2A make up a full opcode field 274 and a base operation field 241. Figure 2C illustrates which fields from Figure 2A make up a register index field 244.The VEX prefix (bytes 0-2) 202 is encoded in a three-byte form. The first byte is the format field 290 (VEX byte 0, bits [7:0]), which contains an explicit C4 byte value (the unique value used for distinguishing the C4 instruction format). The second and third bytes (VEX bytes 1-2) include a number of bit fields providing specific capability. Specifically, REX field 205 (VEX byte 1, bits [7-5]) consists of a VEX.R bit field (VEX byte 1, bit [7] - R), a VEX.X bit field (VEX byte 1, bit [6] - X), and a VEX.B bit field (VEX byte 1, bit [5] - B). The other fields of the instructions encode the lower three bits of the register indexes as is known in the art (rrr, xxx, and bbb), so that Rrrr, Xxxx, and Bbbb may be formed by adding VEX.R, VEX.X, and VEX.B. The opcode map field 215 (VEX byte 1, bits [4:0] - mmmmm) includes content to encode an implied leading opcode byte. The W field 264 (VEX byte 2, bit [7] - W) is represented by the notation VEX.W and provides different functions depending on the instruction. The role of VEX.vvvv 220 (VEX byte 2, bits [6:3] - vvvv) may include the following: 1) VEX.vvvv encodes the first source register operand, specified in inverted (1's complement) form, and is valid for instructions with two or more source operands; 2) VEX.vvvv encodes the destination register operand, specified in 1's complement form for certain vector shifts; or 3) VEX.vvvv does not encode any operand, in which case the field is reserved and should contain 1111b. If the VEX.L 268 size field (VEX byte 2, bit [2] - L) = 0, it indicates a 128-bit vector; if VEX.L = 1, it indicates a 256-bit vector. The prefix encoding field 225 (VEX byte 2, bits [1:0] - pp) provides additional bits for the base operation field 241.The real opcode field 230 (byte 3) is also known as the opcode byte. Part of the opcode is specified in this field.The MOD R/M field 240 (byte 4) includes MOD field 242 (bits [7-6]), Reg field 244 (bits [5-3]), and R/M field 246 (bits [2-0]). The role of Reg field 244 may include the following: encoding either the destination register operand or a source register operand (the rrr of Rrrr), or being treated as an opcode extension and not used to encode any instruction operand. The role of R/M field 246 may include the following: encoding the instruction operand that references a memory address, or encoding either the destination register operand or a source register operand.Scale, Index, Base (SIB) - the content of scale field 250 (byte 5) includes SS 252 (bits [7-6]), which is used for memory address generation. The contents of SIB.xxx 254 (bits [5-3]) and SIB.bbb 256 (bits [2-0]) have been previously referred to with regard to the register indexes Xxxx and Bbbb.The displacement field 262 and the immediate field (IMM8) 272 contain data.Exemplary Register ArchitectureFigure 3 is a block diagram of a register architecture 300 according to one embodiment of the invention. In the embodiment illustrated, there are 32 vector registers 310 that are 512 bits wide; these registers are referenced as zmm0 through zmm31. The lower order 256 bits of the lower 16 zmm registers are overlaid on registers ymm0-15. The lower order 128 bits of the lower 16 zmm registers (the lower order 128 bits of the ymm registers) are overlaid on registers xmm0-15.General purpose registers 325 - in the embodiment illustrated, there are sixteen 64-bit general purpose registers that are used along with the existing x86 addressing modes to address memory operands. These registers are referenced by the names RAX, RBX, RCX, RDX, RBP, RSI, RDI, RSP, and R8 through R15.Scalar floating point stack register file (x87 stack) 345, on which is aliased the MMX packed integer flat register file 350 - in the embodiment illustrated, the x87 stack is an eight-element stack used to perform scalar floating point operations on 32/64/80-bit floating point data using the x87 instruction set extension, while the MMX registers are used to perform operations on 64-bit packed integer data, as well as to hold operands for some operations performed between the MMX and XMM registers.Alternative embodiments of the invention may use wider or narrower registers. Additionally, alternative embodiments of the invention may use more, fewer, or different register files and registers.Exemplary Core Architectures, Processors, and Computer ArchitecturesProcessor cores may be implemented in different ways, for different purposes, and in different processors. For instance, implementations of such cores may include: 1) a general purpose in-order core intended for general purpose computing; 2) a high performance general purpose out-of-order core intended for general purpose computing; 3) a special purpose core intended primarily for graphics and/or scientific (throughput) computing. Implementations of different processors may include: 1) a CPU including one or more general purpose in-order cores intended for general purpose computing and/or one or more general purpose out-of-order cores intended for general purpose computing; and 2) a coprocessor including one or more special purpose cores intended primarily for graphics and/or scientific (throughput) computing. Such different processors lead to different computer system architectures, which may include: 1) the coprocessor on a separate chip from the CPU; 2) the coprocessor on a separate die in the same package as the CPU; 3) the coprocessor on the same die as the CPU (in which case, such a coprocessor is sometimes referred to as special purpose logic, such as integrated graphics and/or scientific (throughput) logic, or as special purpose cores); and 4) a system on a chip that may include on the same die the described CPU (sometimes referred to as the application core(s) or application processor(s)), the above described coprocessor, and additional functionality. Exemplary core architectures are described next, followed by descriptions of exemplary processors and computer architectures. Detailed herein are circuits (units) that comprise exemplary cores, processors, and so on.Exemplary Core ArchitecturesFigure 4A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the invention. Figure 4B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the invention. The solid lined boxes in Figures 4A-4B illustrate the in-order pipeline and in-order core, while the optional addition of the dashed lined boxes illustrates the register renaming, out-of-order issue/execution pipeline and core.
Given that the in-order aspect is a subset of the out-of-order aspect, the out-of-order aspect will be described.In Figure 4A, a processor pipeline 400 includes a fetch stage 402, a length decode stage 404, a decode stage 406, an allocation stage 408, a renaming stage 410, a scheduling (also known as a dispatch or issue) stage 412, a register read/memory read stage 414, an execute stage 416, a write back/memory write stage 418, an exception handling stage 422, and a commit stage 424.Figure 4B shows a processor core 490 including a front end unit 430 coupled to an execution engine unit 450, with both coupled to a memory unit 470. The core 490 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As yet another option, the core 490 may be a special purpose core, such as, for example, a network or communication core, compression engine, coprocessor core, general purpose computing graphics processing unit (GPGPU) core, graphics core, or the like.The front end unit 430 includes a branch prediction unit 432 coupled to an instruction cache unit 434, which is coupled to an instruction translation lookaside buffer (TLB) 436, which is coupled to an instruction fetch unit 438, which is coupled to a decode unit 440. The decode unit 440 (or decoder) may decode instructions and generate as an output one or more micro-operations, micro-code entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions. The decode unit 440 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), etc. In one embodiment, the core 490 includes a microcode ROM or other medium that stores microcode for certain macroinstructions (e.g., in the decode unit 440 or otherwise within the front end unit 430). The decode unit 440 is coupled to a rename/allocator unit 452 in the execution engine unit 450.The execution engine unit 450 includes the rename/allocator unit 452 coupled to a retirement unit 454 and a set 456 of one or more scheduler units. The scheduler unit(s) 456 represent any number of different schedulers, including reservation stations, central instruction window, etc. The scheduler unit(s) 456 are coupled to the physical register file unit(s) 458. Each of the physical register file unit(s) 458 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, and status (e.g., an instruction pointer that is the address of the next instruction to be executed). In one embodiment, the physical register file unit(s) 458 comprise a vector register unit and a scalar register unit. These register units may provide architectural vector registers, vector mask registers, and general purpose registers.The physical register file unit(s) 458 are overlapped by the retirement unit 454 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using reorder buffer(s) and retirement register file(s); using future file(s), history buffer(s), and retirement register file(s); using register maps and a pool of registers; etc.). The retirement unit 454 and the physical register file unit(s) 458 are coupled to the execution cluster(s) 460. The execution cluster(s) 460 include a set 462 of one or more execution units and a set 464 of one or more memory access units. The execution units 462 may perform various operations (e.g., shifts, addition, subtraction, multiplication) and on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point). While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions. The scheduler unit(s) 456, physical register file unit(s) 458, and execution cluster(s) 460 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline that each have their own scheduler unit, physical register file unit(s), and/or execution cluster - and in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s) 464). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order.The set 464 of memory access units is coupled to the memory unit 470, which includes a data TLB unit 472 coupled to a data cache unit 474 coupled to a level 2 (L2) cache unit 476. In one exemplary embodiment, the memory access units 464 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 472 in the memory unit 470. The instruction cache unit 434 is further coupled to the level 2 (L2) cache unit 476 in the memory unit 470.The L2 cache unit 476 is coupled to one or more other levels of cache and eventually to a main memory.By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement the pipeline 400 as follows: 1) the instruction fetch unit 438 performs the fetch and length decode stages 402 and 404; 2) the decode unit 440 performs the decode stage 406; 3) the rename/allocator unit 452 performs the allocation stage 408 and renaming stage 410; 4) the scheduler unit(s) 456 perform the schedule stage 412; 5) the physical register file unit(s) 458 and the memory unit 470 perform the register read/memory read stage 414, and the execution cluster(s) 460 perform the execute stage 416; 6) the memory unit 470 and the physical register file unit(s) 458 perform the write back/memory write stage 418; 7) various units may be involved in the exception handling stage 422; and 8) the retirement unit 454 and the physical register file unit(s) 458 perform the commit stage 424.The core 490 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, California; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, California), including the instruction(s) described herein. In one embodiment, the core 490 includes logic to support a packed data instruction set extension (e.g., AVX1, AVX2), thereby allowing the operations used by many multimedia applications to be performed using packed data.It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways, including time sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that the physical core is simultaneously multithreading), or a combination thereof (e.g., time sliced fetching and decoding and simultaneous multithreading thereafter, such as in Intel Hyper-Threading Technology).While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor also includes separate instruction and data cache units 434/474 and a shared L2 cache unit 476, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor.Specific Exemplary In-Order Core ArchitectureFigures 5A-5B illustrate a block diagram of a more specific exemplary in-order core architecture, which core would be one of several logic blocks (including other cores of the same type and/or different types) in a chip. The logic blocks communicate through a high-bandwidth interconnect network (e.g., a ring network) with some fixed function logic, memory I/O interfaces, and other necessary I/O logic, depending on the application.Figure 5A is a block diagram of a single processor core, along with its connection to the on-die interconnect network 502 and with its local subset of the level 2 (L2) cache 504, according to embodiments of the invention.
In one embodiment, the instruction decoder 500 supports the x86 instruction set with a packed data instruction set extension. An L1 cache 506 allows low-latency accesses to cache memory by the scalar and vector units. While in one embodiment (to simplify the design) the scalar unit 508 and the vector unit 510 use separate register sets (respectively, scalar registers 512 and vector registers 514) and data transferred between them is written to memory and then read back in from the first level (L1) cache 506, alternative embodiments of the invention may use a different approach (e.g., use a single register set, or include a communication path that allows data to be transferred between the two register files without being written and read back). The local subset 504 of the L2 cache is part of a global L2 cache that is divided into separate local subsets, one per processor core. Each processor core has a direct access path to its own local subset 504 of the L2 cache. Data read by a processor core is stored in its L2 cache subset 504 and can be accessed quickly, in parallel with other processor cores accessing their own local L2 cache subsets. Data written by a processor core is stored in its own L2 cache subset 504 and is flushed from other subsets, if necessary. The ring network ensures coherency for shared data. The ring network is bidirectional to allow agents such as processor cores, L2 caches, and other logic blocks to communicate with each other within the chip. In some embodiments, each ring data path is 1024 bits wide per direction. FIG. 5B is an expanded view of part of the processor core of FIG. 5A, in accordance with an embodiment of the present invention. FIG. 5B includes the L1 data cache 506A, part of the L1 cache 504, as well as more detail regarding the vector unit 510 and the vector registers 514. Specifically, the vector unit 510 is a 16-wide vector processing unit (VPU) (see the 16-wide ALU 528), which executes one or more of integer, single-precision floating point, and double-precision floating point instructions. The VPU supports swizzling the register inputs with swizzle unit 520, numeric conversion with numeric convert units 522A-B, and replication on the memory input with replication unit 524.
Processor with integrated memory controller and graphics
FIG. 6 is a block diagram of a processor 600 that may have more than one core, may have an integrated memory controller, and may have integrated graphics, in accordance with an embodiment of the present invention. The solid lined boxes in FIG. 6 illustrate a processor 600 with a single core 602A, a system agent 610, and a set of one or more bus controller units 616, while the optional addition of the dashed lined boxes illustrates an alternative processor 600 with multiple cores 602A-N, a set 614 of one or more integrated memory controller units in the system agent unit 610, and special purpose logic 608. Thus, different implementations of the processor 600 may include: 1) a CPU with the special purpose logic 608 being integrated graphics and/or scientific (throughput) logic (which may include one or more cores), and the cores 602A-N being one or more general purpose cores (e.g., general purpose in-order cores, general purpose out-of-order cores, or a combination of the two); 2) a coprocessor with the cores 602A-N being a large number of special purpose cores intended primarily for graphics and/or scientific (throughput) work; and 3) a coprocessor with the cores 602A-N being a large number of general purpose in-order cores.
Thus, the processor 600 may be a general purpose processor, a coprocessor, or a special purpose processor such as, for example, a network or communication processor, a compression engine, a graphics processor, a GPGPU (general purpose graphics processing unit), a high-throughput many integrated core (MIC) coprocessor (including 30 or more cores), an embedded processor, or the like. The processor may be implemented on one or more chips. The processor 600 may be a part of, and/or may be implemented on, one or more substrates using any of a number of process technologies such as, for example, BiCMOS, CMOS, or NMOS. The memory hierarchy includes one or more levels of cache within the cores 602A-N, a set 606 of one or more shared cache units, and external memory (not shown) coupled to the set 614 of integrated memory controller units. The set 606 of shared cache units may include one or more mid-level caches, such as second level (L2), third level (L3), fourth level (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof. While in one embodiment a ring based interconnect unit 612 interconnects the integrated graphics logic 608, the set 606 of shared cache units, and the system agent unit 610/integrated memory controller unit(s) 614, alternative embodiments may use any number of well-known techniques for interconnecting such units. In one embodiment, coherency is maintained between one or more cache units 606 and the cores 602A-N. In some embodiments, one or more of the cores 602A-N are capable of multithreading. The system agent 610 includes those components coordinating and operating the cores 602A-N. The system agent unit 610 may include, for example, a power control unit (PCU) and a display unit. The PCU may be, or may include, the logic and components needed for regulating the power state of the cores 602A-N and the integrated graphics logic 608. The display unit is for driving one or more externally connected displays. The cores 602A-N may be homogeneous or heterogeneous in terms of architecture instruction set; that is, two or more of the cores 602A-N may be capable of executing the same instruction set, while others may be capable of executing only a subset of that instruction set or a different instruction set.
Exemplary computer architectures
FIGS. 7-10 are block diagrams of exemplary computer architectures. Other system designs and configurations known in the art for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, microcontrollers, cell phones, portable media players, handheld devices, and various other electronic devices are also suitable. In general, a huge variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are generally suitable. Referring now to FIG. 7, shown is a block diagram of a system 700 in accordance with one embodiment of the present invention. The system 700 may include one or more processors 710, 715, which are coupled to a controller hub 720.
In one embodiment, the controller hub 720 includes a graphics memory controller hub (GMCH) 790 and an input/output hub (IOH) 750 (which may be on separate chips); the GMCH 790 includes memory and graphics controllers to which are coupled a memory 740 and a coprocessor 745; the IOH 750 couples input/output (I/O) devices 760 to the GMCH 790. Alternatively, one or both of the memory and graphics controllers are integrated within the processor (as described herein), the memory 740 and the coprocessor 745 are coupled directly to the processor 710, and the controller hub 720 is in a single chip with the IOH 750. The optional nature of the additional processor 715 is denoted in FIG. 7 with broken lines. Each processor 710, 715 may include one or more of the processing cores described herein and may be some version of the processor 600. The memory 740 may be, for example, dynamic random access memory (DRAM), phase change memory (PCM), or a combination of the two. For at least one embodiment, the controller hub 720 communicates with the processor(s) 710, 715 via a multi-drop bus, such as a frontside bus (FSB), a point-to-point interface, or a similar connection. In one embodiment, the coprocessor 745 is a special purpose processor such as, for example, a high-throughput MIC processor, a network or communication processor, a compression engine, a graphics processor, a GPGPU, an embedded processor, or the like. In one embodiment, the controller hub 720 may include an integrated graphics accelerator. There can be a variety of differences between the physical resources 710, 715 in terms of a spectrum of metrics of merit including architectural, microarchitectural, thermal, and power consumption characteristics, and the like. In one embodiment, the processor 710 executes instructions that control data processing operations of a general type. Embedded within the instructions may be coprocessor instructions. The processor 710 recognizes these coprocessor instructions as being of a type that should be executed by the attached coprocessor 745. Accordingly, the processor 710 issues these coprocessor instructions (or control signals representing coprocessor instructions) on a coprocessor bus or other interconnect to the coprocessor 745. The coprocessor 745 accepts and executes the received coprocessor instructions. Referring now to FIG. 8, shown is a block diagram of a first more specific exemplary system 800 in accordance with an embodiment of the present invention. As shown in FIG. 8, the multiprocessor system 800 is a point-to-point interconnect system and includes a first processor 870 and a second processor 880 coupled via a point-to-point interconnect 850. Each of the processors 870 and 880 may be some version of the processor 600. In one embodiment of the invention, the processors 870 and 880 are respectively the processors 710 and 715, while the coprocessor 838 is the coprocessor 745. In another embodiment, the processors 870 and 880 are respectively the processor 710 and the coprocessor 745. The processors 870 and 880 are shown including integrated memory controller (IMC) units 872 and 882, respectively. The processor 870 also includes, as part of its bus controller unit, point-to-point (P-P) interfaces 876 and 878; similarly, the second processor 880 includes P-P interfaces 886 and 888. The processors 870, 880 may exchange information via a point-to-point (P-P) interface 850 using P-P interface circuits 878, 888.
As shown in FIG. 8, the IMCs 872 and 882 couple the processors to respective memories, namely a memory 832 and a memory 834, which may be portions of main memory locally attached to the respective processors. The processors 870, 880 may each exchange information with a chipset 890 via individual P-P interfaces 852, 854 using point-to-point interface circuits 876, 894, 886, 898. The chipset 890 may optionally exchange information with the coprocessor 838 via a high performance interface 892. In one embodiment, the coprocessor 838 is a special purpose processor such as, for example, a high-throughput MIC processor, a network or communication processor, a compression engine, a graphics processor, a GPGPU, an embedded processor, or the like. A shared cache (not shown) may be included in either processor, or outside of both processors yet connected with the processors via a P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode. The chipset 890 may be coupled to a first bus 816 via an interface 896. In one embodiment, the first bus 816 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another I/O interconnect bus, although the scope of the present invention is not so limited. As shown in FIG. 8, various I/O devices 814 may be coupled to the first bus 816, along with a bus bridge 818 which couples the first bus 816 to a second bus 820. In one embodiment, one or more additional processor(s) 815, such as coprocessors, high-throughput MIC processors, GPGPUs, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processor, are coupled to the first bus 816. In one embodiment, the second bus 820 may be a low pin count (LPC) bus. In one embodiment, various devices may be coupled to the second bus 820 including, for example, a keyboard and/or mouse 822, communication devices 827, and a storage unit 828 such as a disk drive or other mass storage device which may include instructions/code and data 830. Further, an audio I/O 824 may be coupled to the second bus 820. Note that other architectures are possible. For example, instead of the point-to-point architecture of FIG. 8, a system may implement a multi-drop bus or other such architecture. Referring now to FIG. 9, shown is a block diagram of a second more specific exemplary system 900 in accordance with an embodiment of the present invention. Like elements in FIGS. 8 and 9 bear like reference numerals, and certain aspects of FIG. 8 have been omitted from FIG. 9 in order to avoid obscuring other aspects of FIG. 9. FIG. 9 illustrates that the processors 870, 880 may include integrated memory and I/O control logic ("CL") 972 and 982, respectively. Thus, the CL 972, 982 include integrated memory controller units and include I/O control logic. FIG. 9 illustrates that not only are the memories 832, 834 coupled to the CL 972, 982, but also that I/O devices 914 are coupled to the control logic 972, 982. Legacy I/O devices 915 are coupled to the chipset 890. Referring now to FIG. 10, shown is a block diagram of an SoC 1000 in accordance with an embodiment of the present invention. Similar elements in FIG. 6 bear like reference numerals. Also, dashed lined boxes are optional features on more advanced SoCs.
In FIG. 10, an interconnect unit(s) 1002 is coupled to: an application processor 1010 which includes a set of one or more cores 602A-N with cache units 604A-N, and shared cache unit(s) 606; a system agent unit 610; a bus controller unit(s) 616; an integrated memory controller unit(s) 614; a set 1020 of one or more coprocessors which may include integrated graphics logic, an image processor, an audio processor, and a video processor; a static random access memory (SRAM) unit 1030; a direct memory access (DMA) unit 1032; and a display unit 1040 for coupling to one or more external displays. In one embodiment, the coprocessor(s) 1020 include a special purpose processor such as, for example, a network or communication processor, a compression engine, a GPGPU, a high-throughput MIC processor, an embedded processor, or the like. Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches. Embodiments of the invention may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. Program code, such as the code 830 illustrated in FIG. 8, may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices, in known fashion. For purposes of this application, a processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor. The program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired. In fact, the mechanisms described herein are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language. One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which, when read by a machine, causes the machine to fabricate logic to perform the techniques described herein.
Such representations, known as "IP cores", may be stored on a tangible, machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor. Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks; any other type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks; semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs) and static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, and electrically erasable programmable read-only memories (EEPROMs); phase change memory (PCM); magnetic or optical cards; or any other type of media suitable for storing electronic instructions. Accordingly, embodiments of the invention also include non-transitory, tangible machine-readable media containing instructions or containing design data, such as Hardware Description Language (HDL), which defines the structures, circuits, apparatuses, processors, and/or system features described herein. Such embodiments may also be referred to as program products.
Emulation (including binary translation, code morphing, etc.)
In some cases, an instruction converter may be used to convert an instruction from a source instruction set to a target instruction set. For example, the instruction converter may translate (e.g., using static binary translation, or dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction to one or more other instructions to be processed by the core. The instruction converter may be implemented in software, hardware, firmware, or a combination thereof. The instruction converter may be on processor, off processor, or part on and part off processor. FIG. 11 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set, in accordance with an embodiment of the present invention. In the illustrated embodiment, the instruction converter is a software instruction converter, although alternatively the instruction converter may be implemented in software, firmware, hardware, or various combinations thereof. FIG. 11 shows that a program in a high level language 1102 may be compiled using a first compiler 1104 to generate first binary code (e.g., x86) 1106 that may be natively executed by a processor 1116 with at least one first instruction set core. In some embodiments, the processor 1116 with at least one first instruction set core represents any processor that can perform substantially the same functions as an Intel processor with at least one x86 instruction set core by compatibly executing or otherwise processing: 1) a substantial portion of the instruction set of the Intel x86 instruction set core, or 2) object code versions of applications or other software targeted to run on an Intel processor with at least one x86 instruction set core, in order to achieve substantially the same result as an Intel processor with at least one x86 instruction set core.
The first compiler 1104 represents a compiler operable to generate binary code 1106 (e.g., object code) of the first instruction set that can, with or without additional linkage processing, be executed on the processor 1116 with at least one first instruction set core. Similarly, FIG. 11 shows that the program in the high level language 1102 may be compiled using an alternative instruction set compiler 1108 to generate alternative instruction set binary code 1110 that may be natively executed by a processor 1114 without at least one first instruction set core (e.g., a processor with cores that execute the MIPS instruction set of MIPS Technologies of Sunnyvale, Calif. and/or that execute the ARM instruction set of ARM Holdings of Sunnyvale, Calif.). The instruction converter 1112 is used to convert the first binary code 1106 into code that may be natively executed by the processor 1114 without a first instruction set core. This converted code is not likely to be the same as the alternative instruction set binary code 1110, because an instruction converter capable of this is difficult to make; however, the converted code will accomplish the general operation and be made up of instructions from the alternative instruction set. Thus, the instruction converter 1112 represents software, firmware, hardware, or a combination thereof that, through emulation, simulation, or any other process, allows a processor or other electronic device that does not have a first instruction set processor or core to execute the first binary code 1106.
Apparatus and method for digital signal processing instructions
Digital signal processing (DSP) instructions are described below. In one embodiment, the circuitry and logic to perform the DSP operations is integrated within the execution engine unit 450 shown in FIG. 4B, within the various cores described above (see, e.g., the cores 602A-N in FIGS. 6 and 10), and/or within the vector unit 510 shown in FIG. 5A. For example, the various source and destination registers may be SIMD registers within the physical register file unit(s) 458 in FIG. 4B and/or the vector registers 310 in FIG. 3. The multiplication circuits, adder circuits, accumulation circuits, and other circuitry described below may be integrated within the execution components of the architectures described above including, by way of example and not limitation, the execution unit(s) 462 in FIG. 4B. It should be noted, however, that the underlying principles of the invention are not limited to these specific architectures. One embodiment of the invention includes circuitry and/or logic for processing digital signal processing (DSP) instructions. In particular, one embodiment comprises a multiply-accumulate (MAC) architecture with eight 16x16-bit multipliers and two 64-bit accumulators. The instruction set architecture (ISA) described below can process various multiply and MAC operations on 128-bit packed (8-bit, 16-bit, or 32-bit data elements) integer, fixed point, and complex data types.
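As a rough illustration of the data flow implied by this MAC architecture, the following C sketch models eight 16x16-bit multiplies whose products are summed into a 64-bit accumulator. It is a behavioral approximation only, assuming signed operands; the function name and interface are invented for the example and do not correspond to any instruction defined herein.

```c
#include <stdint.h>

/* Hypothetical software model of the MAC datapath described above:
 * eight 16x16-bit products accumulated at 64-bit precision. */
static int64_t mac8(const int16_t a[8], const int16_t b[8], int64_t acc)
{
    for (int i = 0; i < 8; i++) {
        /* Each 16x16 product fits in 32 bits; widening before the add
         * models accumulation into a 64-bit accumulator without
         * intermediate overflow. */
        acc += (int32_t)a[i] * (int32_t)b[i];
    }
    return acc;
}
```

The FIR-style result sequences discussed next amount to repeated applications of this kernel, one dot product per output element.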
In addition, some instructions have direct support for highly efficient Fast Fourier Transform (FFT) and Finite Impulse Response (FIR) filtering, as well as for post-processing of accumulated data by shift, round, and saturate operations. One embodiment of the new DSP instructions uses opcode encoding based on the VEX.128 prefix, and several of the SSE/SSE2/AVX instructions that handle post-processing of data are used with the DSP ISA. The VEX-encoded 128-bit DSP instructions with memory operands may have relaxed memory alignment requirements. In one embodiment, the instructions also support a variety of integer and fixed point data types, including: 1) a Q31 data type for signals requiring analog-to-digital conversion (ADC) and digital-to-analog conversion (DAC) with more than 16 bits; 2) a Q15 data type, which is common in DSP algorithms; 3) a complex 16-bit data type; and 4) a complex 32-bit data type. The instruction set architecture described herein targets a wide range of standard DSP operations (e.g., FFT, filtering, pattern matching, correlation, polynomial evaluation, etc.) and statistical operations (e.g., mean, moving average, variance, etc.). Target applications of embodiments of the invention include sensors, audio, classification tasks for computer vision, and speech recognition. The DSP ISA described herein includes a wide range of instructions applicable to deep neural networks (DNN), automatic speech recognition (ASR), sensor fusion with Kalman filtering, and other major DSP applications. Given a sequence of weights {w1, w2, ..., wk} and an input sequence {x1, x2, x3, ..., xn}, many image processing and machine learning tasks require computing the result sequence {y1, y2, y3, ..., y(n+1-k)} defined by y(i) = w1*x(i) + w2*x(i+1) + ... + wk*x(i+k-1). FIG. 12 illustrates an exemplary processor 1255 on which embodiments of the invention may be implemented, including a plurality of cores 0-N for simultaneously executing a plurality of instruction threads. The illustrated embodiment includes DSP instruction decode circuitry/logic 1231 within the decoder 1230 and DSP instruction execution circuitry/logic 1241 within the execution unit 1240. These pipeline components may perform the operations described herein responsive to the decoding and execution of the DSP instructions. While details of only a single core (Core 0) are shown in FIG. 12, it will be understood that each of the other cores of the processor 1255 may include similar components. Prior to describing specific details of the embodiments of the invention, a description of the various components of the exemplary processor 1255 is provided directly below. The plurality of cores 0-N may each include a memory management unit 1290 for performing memory operations (e.g., load/store operations), a set of general purpose registers (GPRs) 1205, a set of vector registers 1206, and a set of mask registers 1207. In one embodiment, multiple vector data elements are packed into each vector register 1206, which may have a 512-bit width for storing two 256-bit values, four 128-bit values, eight 64-bit values, sixteen 32-bit values, and so on. However, the underlying principles of the invention are not limited to any particular size/type of vector data. In one embodiment, the mask registers 1207 include eight 64-bit operand mask registers used for performing bit masking operations on the values stored in the vector registers 1206 (e.g., implemented as the mask registers k0-k7 described herein).
However, the underlying principles of the invention are not limited to any particular mask register size/type. Each core 0-N may include a dedicated first level (L1) cache 1212 and second level (L2) cache 1211 for caching instructions and data according to a specified cache management policy. The L1 cache 1212 includes a separate instruction cache 1220 for storing instructions and a separate data cache 1221 for storing data. The instructions and data stored within the various processor caches are managed at the granularity of cache lines, which may be of a fixed size (e.g., 64, 128, or 512 bytes in length). Each core of this exemplary embodiment has an instruction fetch unit 1210 for fetching instructions from main memory 1200 and/or a shared third level (L3) cache 1216. The instruction fetch unit 1210 includes various well known components, including: a next instruction pointer 1203 for storing the address of the next instruction to be fetched from memory 1200 (or one of the caches); an instruction translation look-aside buffer (ITLB) 1204 for storing a map of recently used virtual-to-physical instruction addresses to improve the speed of address translation; a branch prediction unit 1202 for speculatively predicting instruction branch addresses; and branch target buffers (BTBs) 1201 for storing branch addresses and target addresses. As mentioned, the decode unit 1230 includes DSP instruction decode circuitry/logic 1231 for decoding the DSP instructions described herein into micro-operations or "uops", and the execution unit 1240 includes DSP instruction execution circuitry/logic 1241 for executing the DSP instructions. A writeback/retirement unit 1250 retires the executed instructions and writes back the results.
Shift packed quadwords right and extract packed words
One embodiment of the invention includes an instruction that performs a right shift of two or more packed signed quadwords and extracts a signed word from a specified position of each shifted signed quadword. As used herein, a packed word comprises a 16-bit packed data element, and a quadword comprises a 64-bit packed data element. One embodiment maintains and propagates the most significant bit (the sign bit) of each packed quadword during the shift and extract operations. In one particular implementation, an arithmetic right shift is performed on the bits within each of two packed quadwords of a 128-bit source register or memory location (e.g., xmm2/m128), using a 6-bit count specifying the shift amount stored in imm8[5:0]. In another implementation, the 6-bit count is specified in another source register. For example, in one embodiment, bits [5:0] and/or [69:64] of xmm3/m128 may encode the shift amounts for the first and second quadwords, respectively. In one embodiment, however the shift amount is determined, the most significant 16 bits [63:48] of each shifted quadword are then extracted and written to bits [15:0] of the corresponding quadword position in a destination register (e.g., xmm1). In one embodiment, the shifted upper 16 bits from each quadword are rounded. In particular, a two-bit rounding control field may be specified in a control register (e.g., MXCSR[1:0]) to indicate one of several different rounding modes to apply when the instruction is executed. Four rounding modes may be specified: round to nearest, round up, round down, and round toward zero. Round to nearest means that the rounded result is closest to the infinitely precise result.
If two values are equally close, the result is the even value (the one whose least significant bit is zero). In one embodiment, the default rounding mode is round to nearest, because it provides the most accurate and statistically unbiased estimate of the true result and is suitable for most applications. Round up means that the rounded result is the closest value that is not less than the infinitely precise result. Round down means that the rounded result is the closest value that is not greater than the infinitely precise result, and round toward zero means that the rounded result is the closest value whose magnitude is not greater than that of the infinitely precise result. It should be noted, however, that the underlying principles of the invention are not limited to any particular rounding type. In addition, saturation may be performed on the resulting 16 bits. For example, the 16-bit value may be rounded according to the selected rounding mode and saturated to a word. If saturation occurs, a saturation flag may be set in a control register (e.g., the MXCSR status register). In one embodiment, the xmm1, xmm2, and xmm3 registers are 128-bit packed data registers storing a double quadword value, two quadword values, four doubleword values, eight word values, or sixteen bytes. FIG. 13 illustrates exemplary data elements and bit distributions for exemplary source and/or destination registers. As illustrated, data elements may be packed into a source and/or destination register as bytes (8 bits), words (16 bits), doublewords (32 bits), and/or quadwords (64 bits). The operations described herein may be performed in response to the execution of a single instruction. For example, VPSRARSWQ xmm1, xmm2/m128, imm8 performs an arithmetic right shift by an amount based on the immediate (imm8) on the packed signed quadwords in xmm2/m128, and selects the most significant word (including the sign bit) from each shifted result to store into the destination xmm1. Similarly, VPSRARSWQ xmm1, xmm2, xmm3/m128 performs an arithmetic right shift by an amount based on the value in xmm3/m128 on the packed signed quadwords in xmm2, and selects the most significant word (including the sign bit) from each shifted result to store into the destination xmm1. FIG. 14 illustrates an exemplary architecture for executing the instructions and performing the operations described herein. Although many of the illustrated functional units are not required by the instructions described herein, certain components of the illustrated architecture may be used. Eight multipliers 1405 are included for multiplying the data elements in SRC1 1401 with the data elements in SRC2 1402 in accordance with the instruction being executed, to generate a plurality of products. If no multiplications are performed, the values may be provided directly to the adder networks 1410-1411, which add, subtract, and perform various logical operations on the data elements in accordance with the instruction. Depending on the implementation, the shift circuitry described herein may be implemented by the multipliers 1405 or by the adder networks 1410-1411. The accumulation circuits 1420-1421 may combine these results with previously accumulated results (if any) stored in the SRC3/DEST register 1460, although certain embodiments described herein do not perform accumulation.
These results may then be saturated by the saturation circuitry 1440 (i.e., if one or more of the values is greater than the maximum supported value, the maximum value is output), and the results are stored back to the destination register (SRC1/DEST) 1460 via the output multiplexer 1450. Illustrated in FIG. 15 is one embodiment of an architecture for right-shifting packed signed quadwords by an amount specified in an immediate 1501, preserving the sign bit (b63 in the example), and writing the most significant 16 bits of each resulting right-shifted quadword to the 16 least significant bits of the destination 1460. In particular, SRC1 1401 is illustrated with two packed signed quadwords, identified as quadword 0 (stored in bits 63:0) and quadword 1 (stored in bits 127:64). In response to the value contained in the immediate 1501 (e.g., imm8[5:0]), the shift unit 1503 right-shifts the value in each quadword by N bits and stores the results in a temporary register or memory location 1520. One embodiment of the illustrated circuitry includes sign preservation logic to shift the sign bit into the positions of all bits exposed by the shift operation (i.e., so that the shifted result is sign-extended). Following the shift, the most significant 16 bits of each shifted quadword are rounded (in accordance with the rounding mode) and saturated by the round/saturate circuitry 1504 (if needed) and copied into the positions of the 16 least significant bits (bits [15:0]) of the destination register 1460. As shown, because of the sign extension performed during the shift operation, the sign is preserved in the resulting words within the destination 1460. Given that 6 immediate bits are used to identify the shift amount in this embodiment, N may take a range of values between 0 and 64 (i.e., 2^6 = 64). In the particular example shown in FIG. 15, bits b64 and b63 are shown shifted by a value of N between 0 and 64. In one embodiment, the shift unit 1503 inserts zeroes into the bit positions from which the values were shifted out. Thus, in the illustrated example, the most significant bit positions formerly occupied by b64, b63, and b62 are padded with zeroes. As mentioned, in one embodiment, the 16-bit results may be extracted from each of the right-shifted quadwords without affecting the arithmetic flags within the processor. Additionally, if necessary, the shifted upper 16 bits from each quadword may be rounded based on the rounding control and saturated to a word value. If saturation occurs, the round/saturate circuitry may set a saturation flag 1510 (e.g., within the MXCSR status register). In one embodiment, the shift unit 1503 is integrated within the adder networks 1410-1411 of FIG. 14, and the round/saturate circuitry 1504 is integrated within the saturation circuitry 1440. Alternatively, the shift unit 1503 and the rounding circuitry may be implemented as circuitry/logic separate from the architectural components shown in FIG. 14. FIG. 16 illustrates an embodiment in which the shift value (N), specifying the amount by which the shift unit 1503 is to right-shift the two quadwords, is specified in another source register such as SRC3 1402. The 6-bit value may be stored in the least significant or most significant bit positions of a packed data element such as a packed byte or packed word, with the bits other than these 6 bits either set to 0 or ignored.
In one embodiment, the operation of the shift unit 1503 is otherwise substantially the same as described above with respect to FIG. 15. A method in accordance with one embodiment of the present invention is illustrated in FIG. 17. The method may be implemented within the context of the processor/system architectures described herein, but is not limited to any particular system architecture. At 1701, a first instruction is fetched having fields for an opcode, an immediate, a first source operand identifying packed quadword data elements, and a packed quadword data destination operand. At 1702, the instruction is decoded (e.g., into a plurality of micro-operations to be performed on the architectures described herein). At 1703, at least two quadwords associated with the first source operand are retrieved (e.g., from a cache, memory, etc.) and stored in a first source register. The decoded instruction is then scheduled for execution. At 1704, the decoded instruction is executed to right-shift the at least two packed quadword data elements based on the value in the immediate, generating right-shifted quadwords. As described, the immediate may include a 6-bit field encoding the right-shift value to be used by the instruction. The right-shifted quadwords may be stored, for example, in a temporary register or memory location. The sign bit (b63, the most significant bit of each quadword) is shifted into the positions of the bits exposed by the shift. For example, if a quadword is right-shifted by 4 bits, the sign bit is copied 4 times to fill the exposed bit positions. At 1705, the 16 most significant bits of each right-shifted quadword are written to the 16 least significant bit positions of the first and second packed quadword regions of the destination register (identified by the destination operand). In the example provided herein, this means bits 15:0 of the first and second quadword data element locations within the destination register. A method in accordance with another embodiment of the present invention is illustrated in FIG. 18. The method may be implemented within the context of the processor/system architectures described herein, but is not limited to any particular system architecture. At 1801, an instruction is fetched having fields for an opcode, a first source operand identifying packed quadword data elements, a second source operand identifying a shift value, and a packed quadword data destination operand. At 1802, the instruction is decoded (e.g., into a plurality of micro-operations to be performed on the architectures described herein). At 1803, at least two quadwords associated with the first source operand are retrieved (e.g., from a cache, memory, etc.) and stored in a first source register. The shift value is retrieved and stored in a second source register. The decoded instruction is then scheduled for execution. At 1804, the decoded instruction is executed to right-shift the at least two packed quadword data elements based on the shift value, generating right-shifted quadwords. As described, the shift value may be a 6-bit field encoding the right-shift value to be used by the instruction. The right-shifted quadwords may be stored, for example, in a temporary register or memory location.
The sign bit (b63, the most significant bit of each quadword) is shifted into the positions of the bits exposed by the shift. For example, if a quadword is right-shifted by 4 bits, the sign bit is copied 4 times to fill the exposed bit positions. At 1805, the 16 most significant bits of each right-shifted quadword are written to the 16 least significant bit positions of the first and second packed quadword regions of the destination register (identified by the destination operand). In the example provided herein, this means bits 15:0 of the first and second quadword data element locations within the destination register. The shift instructions described herein may be executed within the context of a larger instruction stream, all of which is processed by the architecture shown in FIG. 14. As an example, the architecture may be used to execute various forms of multiply-add and multiply-accumulate instructions that process complex numbers having real and imaginary components. In such implementations, the real and imaginary numbers may be stored as data elements within the data element locations of the source and destination registers.
Shift packed quadwords left and extract packed words
One embodiment of the present invention includes an instruction that performs a left shift of two or more packed signed quadwords and extracts a signed word from a specified position of each shifted signed quadword. As used herein, a packed word comprises a 16-bit packed data element, and a quadword comprises a 64-bit packed data element. One embodiment maintains and propagates the sign bit of each packed quadword during the shift and extract operations. In one particular implementation, an arithmetic left shift is performed on the bits within each of two packed quadwords of a 128-bit source register or memory location (e.g., xmm2/m128), using a 6-bit count specifying the shift amount stored in imm8[5:0]. In another implementation, the 6-bit count is specified in another source register. For example, in one embodiment, bits [5:0] and/or [69:64] of xmm3/m128 may encode the shift amounts for the first and second quadwords, respectively. In one embodiment, however the left shift amount is determined, the most significant 16 bits [63:48] of each shifted quadword are then extracted and written to bits [15:0] of the corresponding quadword position in a destination register (e.g., xmm1). In one embodiment, the shifted upper 16 bits from each quadword are rounded. In particular, a two-bit rounding control field may be specified in a control register (e.g., MXCSR[1:0]) to indicate one of several different rounding modes to apply when the instruction is executed. Four rounding modes may be specified: round to nearest, round up, round down, and round toward zero. Round to nearest means that the rounded result is closest to the infinitely precise result. If two values are equally close, the result is the even value (the one whose least significant bit is zero). In one embodiment, the default rounding mode is round to nearest, because it provides the most accurate and statistically unbiased estimate of the true result and is suitable for most applications. Round up means that the rounded result is the closest value that is not less than the infinitely precise result. Round down means that the rounded result is the closest value that is not greater than the infinitely precise result, and round toward zero means that the rounded result is the closest value whose magnitude is not greater than that of the infinitely precise result.
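Because the right-shift and left-shift instructions share the round/saturate/extract post-processing just described, one 64-bit lane of the right-shift form can be summarized in the following C sketch. This is a behavioral illustration rather than the patented circuit: it assumes the round-to-nearest (ties to even) mode, masks the shift count to 6 bits, and uses an invented helper name.

```c
#include <stdint.h>

/* Behavioral model of one quadword lane: arithmetic right shift,
 * round bits [63:48] to nearest (ties to even), then saturate to a
 * signed word. Assumes the compiler implements >> on signed values as
 * an arithmetic shift, which common compilers do. */
static int16_t shift_round_sat_word(int64_t quad, unsigned count)
{
    count &= 0x3f;                         /* 6-bit count, as in imm8[5:0] */
    int64_t shifted = quad >> count;       /* sign fills exposed positions */

    int64_t  word = shifted >> 48;                           /* bits [63:48] */
    uint64_t rem  = (uint64_t)shifted & 0xFFFFFFFFFFFFULL;   /* bits [47:0]  */
    uint64_t half = 0x800000000000ULL;                       /* weight of bit 47 */
    if (rem > half || (rem == half && (word & 1)))
        word += 1;                         /* round to nearest, ties to even */

    /* Saturate to a signed word; real hardware would also set a
     * saturation flag (e.g., in the MXCSR status register). */
    if (word >  32767) return  32767;
    if (word < -32768) return -32768;
    return (int16_t)word;
}
```

A 128-bit form would apply this once per quadword lane and pack each result into bits [15:0] of the corresponding lane of the destination. The left-shift form differs only in the shift direction and in filling the vacated low-order positions with zeroes while the sign is preserved.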
It should be noted, however, that the underlying principles of the invention are not limited to any particular rounding type. In addition, saturation may be performed on the resulting 16 bits. For example, the 16-bit value may be rounded according to the selected rounding mode and saturated to a word. If saturation occurs, a saturation flag may be set in a control register (e.g., the MXCSR status register). The operations described herein may be performed in response to the execution of a single instruction. For example, VPSLLRSWQ xmm1, xmm2/m128, imm8 performs a logical left shift by an amount based on the immediate (imm8) on the packed signed quadwords in xmm2/m128, and selects the most significant word (including the sign bit) from each shifted result to store into the destination xmm1. Similarly, VPSLLVRSWQ xmm1, xmm2, xmm3/m128 performs a logical left shift by an amount based on the value in xmm3/m128 on the packed signed quadwords in xmm2, and selects the most significant word (including the sign bit) from each shifted result to store into the destination xmm1. Illustrated in FIG. 19 is one embodiment of an architecture for left-shifting packed signed quadwords by an amount based on an immediate 1901, preserving the sign bit (b63 in the example), and writing the most significant 16 bits of each resulting left-shifted quadword to the 16 least significant bits of the destination 1460. In particular, SRC1 1401 is illustrated with two packed signed quadwords, identified as quadword 0 (stored in bits 63:0) and quadword 1 (stored in bits 127:64). In response to the value contained in the immediate 1901 (e.g., imm8[5:0]), the shift unit 1503 left-shifts the value in each quadword by N bits and stores the results in a temporary register or memory location 1520. One embodiment of the illustrated circuitry includes sign preservation logic to maintain the sign bit during the shift operation. In one embodiment, the shift unit 1503 shifts zeroes into the positions of the least significant bits exposed by the shift of each quadword. Following the shift, the most significant 16 bits of each shifted quadword are rounded (in accordance with the rounding mode) and saturated by the round/saturate circuitry 1504 (if needed) and copied into the positions of the 16 least significant bits (bits [15:0]) of the destination register 1460. Given that 6 immediate bits are used to identify the shift amount in this embodiment, N may take a range of values between 0 and 64 (i.e., 2^6 = 64). In the particular example shown in FIG. 19, bit b63 (the sign bit) and bit b62 are shown shifted by a value of N between 0 and 64. In one embodiment, the shift unit 1503 inserts zeroes into the bit positions from which the values were shifted out. Thus, in the illustrated example, the least significant bit positions vacated by b0, b1, b2, and so on are filled with zeroes. As mentioned, in one embodiment, the 16-bit results may be extracted from each of the left-shifted quadwords without affecting the arithmetic flags within the processor. Because of the sign preservation performed during the shift operation, the sign is copied to the most significant bit position of each resulting word within the destination 1460 (i.e., bit [15]). Additionally, if necessary, the shifted upper 16 bits from each quadword may be rounded based on the rounding control and saturated to a word value.
If saturation occurs, the round/saturate circuitry may set a saturation flag 1510 (e.g., within the MXCSR status register). FIG. 20 illustrates an embodiment in which the shift value (N), specifying the amount by which the shift unit 1503 is to left-shift the two quadwords, is specified in another source register such as SRC3 1402. The 6-bit value may be stored in the least significant or most significant bit positions of a packed data element such as a packed byte or packed word, with the bits other than these 6 bits either set to 0 or ignored. In one embodiment, the operation of the shift unit 1503 is otherwise substantially the same as described above with respect to FIG. 19. A method in accordance with one embodiment of the present invention is illustrated in FIG. 21. The method may be implemented within the context of the processor/system architectures described herein, but is not limited to any particular system architecture. At 2101, a first instruction is fetched having fields for an opcode, an immediate, a first source operand identifying packed quadword data elements, and a packed quadword data destination operand. At 2102, the instruction is decoded (e.g., into a plurality of micro-operations to be performed on the architectures described herein). At 2103, at least two quadwords associated with the first source operand are retrieved (e.g., from a cache, memory, etc.) and stored in a first source register. The decoded instruction is then scheduled for execution. At 2104, the decoded instruction is executed to left-shift the at least two packed quadword data elements based on the value in the immediate, generating left-shifted quadwords. As described, the immediate may include a 6-bit field encoding the left-shift value to be used by the instruction. The left-shifted quadwords may be stored, for example, in a temporary register or memory location. The sign bit (b63, the most significant bit of each quadword) is shifted into the positions of the bits exposed by the shift. For example, if a quadword is left-shifted by 4 bits, the sign bit is copied 4 times to fill the exposed bit positions. At 2105, the 16 most significant bits of each left-shifted quadword are written to the 16 least significant bit positions of the first and second packed quadword regions of the destination register (identified by the destination operand). In the example provided herein, this means bits 15:0 of the first and second quadword data element locations within the destination register. A method in accordance with another embodiment of the present invention is illustrated in FIG. 22. The method may be implemented within the context of the processor/system architectures described herein, but is not limited to any particular system architecture. At 2201, an instruction is fetched having fields for an opcode, a first source operand identifying packed quadword data elements, a second source operand identifying a shift value, and a packed quadword data destination operand. At 2202, the instruction is decoded (e.g., into a plurality of micro-operations to be performed on the architectures described herein). At 2203, at least two quadwords associated with the first source operand are retrieved (e.g., from a cache, memory, etc.) and stored in a first source register.
The shift value is retrieved and stored in a second source register. The decoded instruction is then scheduled for execution. At 2204, the decoded instruction is executed to left-shift the at least two packed quadword data elements based on the shift value, generating left-shifted quadwords. As described, the shift value may be a 6-bit field encoding the left-shift value to be used by the instruction. The left-shifted quadwords may be stored, for example, in a temporary register or memory location. The sign bit (b63, the most significant bit of each quadword) is shifted into the positions of the bits exposed by the shift. For example, if a quadword is left-shifted by 4 bits, the sign bit is copied 4 times to fill the exposed bit positions. At 2205, the 16 most significant bits of each left-shifted quadword are written to the 16 least significant bit positions of the first and second packed quadword regions of the destination register (identified by the destination operand). In the example provided herein, this means bits 15:0 of the first and second quadword data element locations within the destination register. The shift instructions described herein may be executed within the context of a larger instruction stream, all of which is processed by the architecture shown in FIG. 14. As an example, the architecture may be used to execute various forms of multiply-add and multiply-accumulate instructions that process complex numbers having real and imaginary components. In such implementations, the real and imaginary numbers may be stored as data elements within the data element locations of the source and destination registers. Although in the embodiments described above the data is shifted and written to the destination in 16-bit chunks (words), the underlying principles of the invention are not limited to any particular number of bits. For example, other embodiments may operate in a similar manner on bytes, 32-bit data elements, 64-bit data elements, or even 128-bit data elements. In the foregoing specification, embodiments of the invention have been described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes may be made to these embodiments without departing from the broader spirit and scope of the invention. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. Embodiments of the invention may include the various steps described above. The steps may be embodied in machine-executable instructions which may be used to cause a general purpose or special purpose processor to perform the steps. Alternatively, these steps may be performed by specific hardware components that contain hardwired logic for performing the steps, or by any combination of programmed computer components and custom hardware components. As described herein, instructions may refer to specific configurations of hardware, such as application specific integrated circuits (ASICs) configured to perform certain operations or having predetermined functionality, or to software instructions stored in memory embodied in a non-transitory computer readable medium. Thus, the techniques shown in the figures may be implemented using code and data stored on one or more electronic devices (e.g., end stations, network elements, etc.) and executed on those one or more electronic devices.
Such electronic devices store and communicate code and data (internally and/or with other electronic devices over a network) using computer machine-readable media, such as non-transitory computer machine-readable storage media (e.g., magnetic disks; optical disks; random access memory; read only memory; flash memory devices; phase change memory) and transitory computer machine-readable communication media (e.g., electrical, optical, acoustical, or other forms of propagated signals, such as carrier waves, infrared signals, digital signals, etc.). In addition, such electronic devices typically include a set of one or more processors coupled to one or more other components, such as one or more storage devices (non-transitory machine-readable storage media), user input/output devices (e.g., a keyboard, a touchscreen, and/or a display), and network connections. The coupling of the set of processors to the other components is typically through one or more buses and bridges (also termed bus controllers). The storage device and the signals carrying the network traffic respectively represent one or more machine-readable storage media and machine-readable communication media. Thus, the storage device of a given electronic device typically stores code and/or data for execution on the set of one or more processors of that electronic device. Of course, one or more parts of an embodiment of the invention may be implemented using different combinations of software, firmware, and/or hardware. Numerous specific details have been set forth in order to provide a thorough understanding of the invention. However, it will be apparent to one skilled in the art that the invention may be practiced without some of these specific details. In certain instances, well-known structures and functions have not been described in detail in order to avoid obscuring the subject matter of the invention. Therefore, the scope and spirit of the invention should be determined in accordance with the appended claims. |
Methods and apparatus for improving system performance using redundant arithmetic are disclosed. In one embodiment, one or more dependency chains are formed. A dependency chain may comprise two or more instructions. A first instruction may generate a result in a redundant form. A second instruction may accept the result from the first instruction as a first input operand. The instructions in the dependency chain may execute separately from instructions not in the dependency chain. |
1. A method for implementing redundant arithmetic, comprising: forming a chain of instructions, the chain of instructions comprising a first instruction and a second instruction from a group of instructions, wherein a result of the first instruction is an input of the second instruction, the second instruction accepting a redundant form of the result of the first instruction without having to convert the result into a conventional form, the second instruction further accepting one or more inputs in the conventional form in addition to the input in the redundant form; and executing the instructions in the chain of instructions in an instruction pipeline different from instruction pipelines associated with instructions not in the chain of instructions. 2. The method of claim 1, wherein the redundant form comprises two bit vectors and the conventional form comprises one bit vector. 3. The method of claim 2, wherein the two bit vectors in the redundant form comprise a sum vector and a carry vector. 4. The method of claim 2, wherein an instruction at the top of the chain of instructions accepts all of its inputs in the conventional form, and wherein an instruction at the bottom of the chain of instructions generates its result in the conventional form. 5. A computer system, comprising: a processor to receive instructions which, when executed by the processor, cause the processor to perform a method for implementing redundant arithmetic comprising: forming a chain of instructions, the chain of instructions comprising a first instruction and a second instruction from a group of instructions, wherein the group of instructions is to execute in a predetermined execution sequence, wherein a result of the first instruction is an input of the second instruction, the second instruction accepting a redundant form of the result of the first instruction without having to convert the result into a conventional form, the second instruction further accepting one or more inputs in the conventional form in addition to the input in the redundant form; and executing the instructions in the chain of instructions not in the predetermined execution sequence. 6. The computer system of claim 5, wherein the redundant form comprises two bit vectors and the conventional form comprises one bit vector. 7. The computer system of claim 6, wherein the two bit vectors in the redundant form comprise a sum vector and a carry vector. 8. The computer system of claim 6, wherein an instruction at the top of the chain accepts all of its inputs in the conventional form, and wherein an instruction at the bottom of the chain generates its result in the conventional form. 9. An article of manufacture, comprising: a machine-accessible medium including data that, when accessed by a machine, cause the machine to perform operations for implementing redundant arithmetic comprising: forming a chain of instructions, the chain comprising a first instruction and a second instruction, wherein a result of the first instruction is an input of the second instruction, the second instruction accepting a redundant form of the result of the first instruction without having to convert the result into a conventional form, the second instruction further accepting one or more inputs in the conventional form in addition to the input in the redundant form. 10. The article of manufacture of claim 9, wherein the redundant form comprises two bit vectors and the conventional form comprises one bit vector. 11. 
The article of manufacture of claim 10, wherein the two bit vectors in the redundant form comprise a sum vector and a carry vector.12. The article of manufacture of claim 9, wherein an instruction at the top of the chain accepts all of its inputs in the conventional form.13. The article of manufacture of claim 9, wherein an instruction at the bottom of the chain generates its result in the conventional form.14. The article of manufacture of claim 9, further comprising:capturing dependency information between the first instruction and the second instruction, the dependency information used to form the chain of instructions. 15. The article of manufacture of claim 14, wherein the dependency information includes information indicating one instruction using a result of another instruction as its input. |
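As a concrete illustration of the claimed chain formation, the following is a minimal Python sketch, written for this summary rather than taken from the patent, of one way a trace of instructions could be grouped into dependency chains; all names (Insn, form_chains, and the three-address trace format) are hypothetical.

```python
# Minimal sketch (not from the patent): group a linear instruction trace into
# dependency chains, where a chain links an instruction that produces a
# redundant-form result to a later instruction that consumes it.

from dataclasses import dataclass

@dataclass
class Insn:
    dest: str                   # register written by this instruction
    srcs: tuple                 # registers read by this instruction
    redundant_ok: bool = True   # can this instruction accept redundant operands?

def form_chains(trace):
    """Return a list of chains; each chain is a list of trace indices."""
    producer = {}   # dest register -> index of the producing instruction
    chain_of = {}   # trace index -> the chain it belongs to
    chains = []
    for i, insn in enumerate(trace):
        for src in insn.srcs:
            j = producer.get(src)
            # Link i to j only if i can consume j's result in redundant form.
            if j is not None and insn.redundant_ok:
                if j in chain_of:
                    chain_of[j].append(i)
                    chain_of[i] = chain_of[j]
                else:
                    chain = [j, i]
                    chains.append(chain)
                    chain_of[j] = chain_of[i] = chain
                break
        producer[insn.dest] = i
    return chains

# Example mirroring the description: A=B+C; F=D+E; G=H+A
trace = [Insn("A", ("B", "C")), Insn("F", ("D", "E")), Insn("G", ("H", "A"))]
print(form_chains(trace))   # [[0, 2]] -- the first and third instructions chain
```

Under this sketch, instructions not assigned to any chain would remain in the main instruction pipeline, matching the separate scheduling the description discusses below.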
FIELD OF THE INVENTIONThe present invention relates generally to the field of processor design. More specifically, the present invention is directed to a method of and apparatus for implementing redundant arithmetic to enhance system performance.BACKGROUNDAs software applications become more complex, hardware designers pursue different approaches to improve the performance of the system. Hardware circuits are limited by physical properties that constrain practical logic designs. One important limitation is the fact that a real logic element cannot respond instantaneously to a change of signals on its inputs. There is a finite period of time after the input signal changes before the output signal will change. This time delay is dependent upon the type of circuit used, the number of inputs, and the specific geometry and physical composition of each specified component in a circuit.One of the areas of the hardware that experiences this delay is the computation processing functions in the arithmetic logic unit (ALU) of the processor. FIG. 1A illustrates an exemplary full adder 100. The full adder 100 adds the three bits A0, B0 and Carry-in (Cin) together and produces a sum bit S0 and a Carry-out (C0). FIG. 1B illustrates the most basic type of carry propagate n-bit adder (CPA), the carry-ripple adder, implemented using n cascaded full adders. The result is all the sum bits (S0, S1, . . . , Sn-1) with one sum bit per result position (0 to n-1), where n is the number of bits per add operand. The carry-out bit C0 from the first full adder is propagated to the second full adder, and so on. The carry-out bit C1 from the second full adder must wait for the carry-out bit C0 from the first full adder, and so on. The wait time at each adder is a propagation time delay tpd. Thus, for the n-bit adder, the total time delay is n times tpd. This propagation delay before the output signals become available creates a bottleneck for the speed performance of the adder, especially when the numbers to be added are wide. There are various carry propagate adders designed to speed up the execution of add and subtract operations. Some of these include carry skip, carry select, carry lookahead, and complex hybrid implementations. All CPAs, however, suffer from increased delay as the precision of the operation increases.One scalable approach to reducing the execution time is to implement redundant arithmetic. With a redundant arithmetic implementation, each position's adder in FIG. 1B is not chained to the previous adder. Each adder performs the addition without having to wait for the carry out (Cout) bit from the previous adder. Each result position is represented by two bits, the sum bit and the Cout bit from the previous full adder. This is referred to as a redundant form, as compared to the conventional form where there is one bit per result position. Only selected instructions can operate with redundant operands. Thus, an operand in the redundant form needs to be converted back to the conventional form, with one bit per position, for the instructions that require conventional operands. Generally, an optimal carry propagate adder is used to convert the redundant form into the conventional form. Using redundant arithmetic does not necessarily produce faster execution. The execution delay may increase when the redundant processing is always performed before the resulting operand is converted back to the conventional form. 
For example, the redundant addition of two conventional operands producing a redundant result can increase the delay when the next scheduled instruction requires an input operand in the conventional form instead of the redundant form.BRIEF DESCRIPTION OF THE DRAWINGSThe present invention is illustrated by way of example in the following drawings in which like references indicate similar elements. The following drawings disclose various embodiments of the present invention for purposes of illustration only and are not intended to limit the scope of the invention.FIGS. 1A and 1B illustrate an exemplary prior art full adder and carry-ripple adder.FIGS. 2A and 2B illustrate an exemplary prior implementation of redundant adders in carry-save representation.FIG. 3 illustrates an exemplary prior carry propagate adder used as a converter.FIG. 4 illustrates an exemplary flow diagram of one embodiment of the technique disclosed.FIG. 5A illustrates an exemplary representation of a sequence of instructions comprising dependent and non-dependent instructions.FIG. 5B illustrates an exemplary representation of one embodiment of dependency chains.FIG. 6 illustrates an exemplary extended carry-save representation of a redundant form.FIGS. 7A, 7B and 7C are simplified representations of exemplary compressions using the redundant form.FIG. 7D is an exemplary illustration of one embodiment of the redundant adder arrays with the multiplexors.FIG. 8 illustrates exemplary scenarios where multiple instructions can be executed in one cycle.FIGS. 9A, 9B, 9C and 9D illustrate exemplary addition operations using the extended carry save format.FIGS. 10A, 10B, 10C and 10D illustrate exemplary subtraction operations using the extended carry save format.FIG. 11 illustrates an exemplary computer system that can be used with the technique disclosed.DETAILED DESCRIPTIONA method and apparatus for improving system performance using redundant arithmetic are disclosed. The following detailed description sets forth numerous specific details and specific nomenclature to provide a thorough understanding of the invention. However, those of ordinary skill in the art will appreciate that these specific details are not required in order to practice the invention. In other instances, well-known methods, procedures, protocols, components, algorithms, and circuits have not been described in detail so as not to obscure the invention.Some portions of the detailed descriptions that follow are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the methods used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of processes leading to a desired result. The processes are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. 
It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as "processing" or "computing" or "calculating" or "determining" or "displaying" or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.The present invention also relates to apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method. The required structure for a variety of these systems will appear from the description below. In addition, the present invention is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the invention as described herein.FIG. 2A illustrates an exemplary representation of a prior one-bit redundant adder implemented with 4-2 carry-save adders. The 4-2 carry-save adder is implemented with two full adder arrays, the top full adder array 205 and the bottom full adder array 210. The carry-save adder 200 can compress five inputs of the same weight or bit position to three outputs of different weights. For example, referring to FIG. 2A, the five inputs Ai, Bi, Di, Ei, and Xi have the same weight or bit position. The two outputs Si and Ci+1 have different weights. The signal Xi+1 is an intermediate carry that travels one position to the left.FIG. 2B illustrates an exemplary representation of a prior 4-bit redundant adder. In this exemplary representation, there are four bit positions (0 to 3). The full adder 215 generates the least significant sum bit S0 and the carry out bit C1. The full adder 220 generates the next least significant sum bit S1 and the carry out bit C2, and so on. The input bits A (bits A0 to A3) and the input bits B (bits B0 to B3) correspond to the sum bit vector and the carry bit vector for the first operand. 
Similarly, the input bits D (bits D0 to D3) and the input bits E (bits E0 to E3) correspond to the sum bit vector and the carry bit vector for the second operand. The redundant result includes the sum bit vector S (bits S0 to S3) and the carry bit vector C (bits C1 to C4). The carry bit C4 is the overflow bit. The C0 bit is the carry-in bit for the adder 215.FIG. 3 illustrates an exemplary representation of a prior carry propagate adder. The input to the 4-bit carry propagate adder (CPA) 300 is in the redundant form and comprises a sum bit vector S and a carry bit vector C. The CPA 300 may be used to convert the result in the redundant form to the conventional result (CR) in the conventional form. The conventional result is required by instructions that do not operate with operands in the redundant form.Redundant arithmetic can be used for add/subtract, compare, memory and instruction address calculations, etc. Generally, when a system implements redundant arithmetic, the redundant operations are performed without regard to the instruction type of the next scheduled instruction. This may cause unnecessary delay when the next scheduled instruction can only operate with an operand in the conventional form. Furthermore, when an instruction has two or more input operands, the format of the operands may be one of three different combinations: all conventional, some redundant and some conventional, or all redundant. Even though the instruction remains the same, each combination of operands may lead to a different execution time.A method of and an apparatus for implementing redundant arithmetic to improve processing performance are disclosed. In one embodiment, redundant processing effectively splits the operations into finer partitions, and the redundant values are essentially the intermediate results of these operations. Performance improvement is obtained by eliminating the unnecessary conversions back to the conventional results before starting the dependent operations.In one embodiment, the redundant operands generated by the executed instructions are detected. The redundant operands and their associated instructions are used to form the dependency chains. FIG. 4 is an exemplary flow diagram illustrating one embodiment of this method. In block 405, as each instruction is retrieved from the main instruction pipeline and executed for the first time, the information specific to redundant arithmetic optimization is collected and stored in the memory. The information includes the instruction type, the format of the operands, and the format of the result.In block 410, as each instruction is executed, trace sequence information of the instruction sequence is kept so that the dependency chains can be formed. For example, the trace sequence information may include instructions that share the same operand. In one embodiment, the dependency chain comprises instructions that can accept the redundant result of a previous instruction as an input operand. In block 415, based on the trace sequence information and the information about the instruction type, one or more dependency chains are formed. For example, the first instruction is A=B+C, and the second instruction is E=D+A. The redundant result A from the first instruction is used as the input operand for the second instruction. In one embodiment, the operand D of the second instruction may be in the redundant form or in the conventional form. 
Thus, in this example, the dependency chain includes the first instruction and the second instruction.It would be appreciated by one skilled in the art that there may be situations when the previous instruction in the above example is not the most immediate previous instruction. For example, the first instruction is A=B+C, the second instruction is F=D+E, and the third instruction is G=H+A. The redundant result A of the first instruction is also used as the input operand for the third instruction. Thus, in this example, the dependency chain includes the first instruction and the third instruction. The third instruction is referred to as the dependent instruction because it depends on the first instruction for its input operand.An instruction may have one or more input operands. In one embodiment, the dependent instruction needs to be able to accept the input operand in the redundant form. This allows the dependent instruction to take advantage of the time savings resulting from the redundant arithmetic. In another embodiment, when the dependent instruction can accept multiple input operands, the dependency chain allows the execution of the previous instruction in the dependency chain to be bypassed. For example, considering the above instruction sequence A=B+C, F=D+E and G=H+A, the add operation B+C in the first instruction may be bypassed and the operands B and C may be used as the input operands for the last instruction. The last instruction is the dependent instruction. In this example, the input operands for the dependent instruction are H, B and C.The first instruction in a dependency chain operates with its input operands in the conventional form. For example, the first instruction in the dependency chain may accept its input operands from the registers in the conventional form. The result of the first instruction is in the redundant form. The second instruction in the same dependency chain then accepts the result of the first instruction as its input operand.In block 420, the instructions in each dependency chain are executed. In one embodiment, the dependency chains are stored and executed from a separate pipeline (a daughter pipeline) other than the main instruction pipeline. By chaining the instructions in the dependency chains, where each instruction can operate with redundant operands, the extra conversions can be reduced. By speeding up the instructions on the critical path, performance can be improved.In one embodiment, in order to take advantage of the redundant operations, the daughter pipeline may need to be able to examine four levels of dependent instructions. In another embodiment, two carry propagate adders may be required for one redundant adder to convert the results to the conventional form in parallel. The conversions may be pipelined with one carry propagate adder.In one embodiment, the performance of redundant arithmetic can be improved with the scheduling of the instructions in the dependency chains. FIG. 5A illustrates an exemplary set of instructions corresponding to an exemplary code section of a program. Each number from one to ten represents one instruction in a sequence of ten instructions. The instruction group 505 represents the normal sequence of instructions scheduled to be executed starting with the first instruction. In this exemplary instruction sequence, the fourth instruction 507 is dependent on the first instruction 506. The ninth instruction 509 is dependent on the fourth instruction 507. 
The instructions 506, 507 and 509 form one dependency chain. Similarly, the tenth instruction 510 is dependent on the sixth instruction 508, and together they form another dependency chain. In the normal execution sequence, the instruction group 505 would be executed in sequence from the first instruction 506 to the tenth instruction 510. Although the second instruction 525 and the third instruction 530 are not dependent on the first instruction 506, their execution takes cycle time between the first instruction 506 and the fourth instruction 507. By the time the fourth instruction 507 is executed, it does not matter whether redundant arithmetic was used in the first instruction 506. This is because by the time the fourth instruction 507 is scheduled, the redundant result from the first instruction 506 has already been converted to the conventional form. Therefore, in the normal execution sequence, using redundant arithmetic for the first instruction 506 does not add any performance benefit. It would be appreciated by one skilled in the art that the flow of FIG. 5A is a simplified example used to demonstrate one embodiment of the method disclosed.FIG. 5B illustrates an exemplary representation of one embodiment of the dependency chains. The instruction group 512 represents the first dependency chain. The top of the chain is the first instruction 506. The bottom of the chain is the ninth instruction 509. The first dependency chain also contains the fourth instruction 507. The instruction group 515 represents the second dependency chain, with the top being the sixth instruction 508 and the bottom being the tenth instruction 510. The instruction group 520 contains the remaining instructions from the instruction group 505 and includes the instructions that are not dependent on any other instructions.In one embodiment, the processing performance can be improved by scheduling the dependency chains separately from the non-dependent instructions. For example, referring to FIG. 5B, the first dependency chain in the instruction group 512 may be scheduled separately from the second dependency chain in the instruction group 515. Both dependency chains 512 and 515 may be scheduled separately from the non-dependent instructions in the instruction group 520. For example, each chain can be executed sequentially in a separate thread from the main execution of the instructions not in the dependency chains. This way, the benefit of redundant arithmetic can be exploited to provide additional performance advantage. Although there are different processing implementations, it would be apparent to one skilled in the art that performance can be improved by using the redundant operand dependency chain method without departing from the scope of the invention.FIG. 6 illustrates an exemplary extended carry-save representation of a redundant form that may be used with one embodiment of the method being disclosed. Generally, the redundant carry-save form consists of a sum bit vector and a carry bit vector. For each bit position, there is a corresponding sum bit and carry bit. With the extended carry-save format shown in FIG. 6, an extra bit is required to capture the correct value of the operand. In one embodiment, a radix-2 extended carry-save format is used because it allows for fast addition and because the addition operation makes up a large percentage of the executed instructions. The extended carry-save format has a sum bit vector 605, a carry bit vector 610 and the extra bit 615. 
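To make the extended carry-save format concrete, the following is a minimal Python sketch, written for this description rather than taken from the patent, of one plausible software model of the format and of a single carry-free compression step; the names (ecs_value, ecs_add_conventional) and the fixed word width are hypothetical choices.

```python
# Minimal sketch (assumed model, not from the patent): a radix-2 extended
# carry-save operand is a triple (sum_vec, carry_vec, extra) whose value is
# sum_vec + carry_vec + extra, taken modulo 2**N for an N-bit machine.

N = 8
MASK = (1 << N) - 1

def ecs_value(op):
    """Conventional (single bit vector) value of an extended carry-save triple."""
    s, c, extra = op
    return (s + c + extra) & MASK

def ecs_from_conventional(a, b):
    """Add two conventional operands by direct mapping: the first operand
    becomes the sum vector, the second the carry vector, extra bit = 0."""
    return (a & MASK, b & MASK, 0)

def ecs_add_conventional(op, y):
    """One carry-free compression: add a conventional operand y to a
    redundant operand without propagating carries across positions.
    Per position: sum bit = XOR of the three bits; carry bit = majority,
    moved one position left. The input's extra bit drops into position 0
    of the result carry vector, and an addition result's extra bit is 0."""
    s, c, extra = op
    new_s = s ^ c ^ y
    new_c = (((s & c) | (s & y) | (c & y)) << 1) | extra
    return (new_s & MASK, new_c & MASK, 0)

# Example: (3 + 5) + 6 with no carry-propagate addition until the very end.
r = ecs_from_conventional(3, 5)
r = ecs_add_conventional(r, 6)
assert ecs_value(r) == (3 + 5 + 6) & MASK   # 14
```

Note that a single carry-propagate addition (here, the ordinary + inside ecs_value) is still needed to recover the conventional form; deferring exactly that conversion is what the dependency chains above are meant to achieve.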
With the extended carry-save format, all add and subtract operations are handled differently. The value of the bit 615 in the 0th bit position may be inferred from the instruction type for an addition or subtraction operation. For example, when the instruction is an addition, the value of the bit 615 is a zero, and when the instruction is a subtraction, the value of the bit 615 is a one.The bit position indices used here may be different from other formats. For example, the indices are based on the weight of the signal: 0=2^0, 1=2^1, 2=2^2, and so on. The carry bits are indexed by Cin indices and are not shifted by one as with the conventional Cout indices. The bit C0 is the carry bit at position 0, not 1. Mathematically, 2's complement bit vectors have the following weights at each bit position (n equals the number of bits of precision): -2^(n-1) . . . 2^3 2^2 2^1 2^0. The value of a 2's complement number is determined by -2^(n-1)*A(n-1) + . . . + 2^3*A3 + 2^2*A2 + 2^1*A1 + 2^0*A0. Therefore, addition involves adding the sum of the two bit vectors as shown: (-2^(n-1)*A(n-1) + . . . + 2^3*A3 + 2^2*A2 + 2^1*A1 + 2^0*A0) ± (-2^(n-1)*B(n-1) + . . . + 2^3*B3 + 2^2*B2 + 2^1*B1 + 2^0*B0). When applying traditional redundant arithmetic to two input operands in the conventional form, the result is a sum bit vector and a carry bit vector (the accompanying diagram is not reproduced here; in it, the dashed parallelogram encloses the result bits of the redundant addition and the vertical dashed line marks the statically determined n-bit precision of the values). In one embodiment, the Cn bit is discarded and there is no value for the C bit.FIG. 7A is a simplified representation of an instruction sequence that can be compressed using the redundant adder. In one embodiment, the compression applies to a 4-2 adder (compressor) array used as the redundant adder adding four bit vectors. Each shaded rectangle in FIG. 7A represents one bit vector in the extended carry-save format. Each bit vector has n bits. Each enclosed box 702 represents one operation. The operation may be an addition or a subtraction. There are three operations in this representation: the first operation on the top two bit vectors 705 and 710, the second operation on the bottom two bit vectors 715 and 720, and the third operation on the results of the first operation and the second operation. The executions of the three operations can be compressed into one cycle using a redundant adder. For example, each of the top two bit vectors can be associated with an operand in the conventional form. Each of the bottom two bit vectors can be associated with an operand in the conventional form. Using the 4-2 compressor, the addition of the top two bit vectors is performed by the top full adder and the addition of the bottom two bit vectors is performed by the bottom full adder.FIG. 7B is another simplified representation of an instruction sequence that can be compressed using the redundant adder. Each rectangle represents a bit vector. There are three consecutive operations in this representation. The first operation uses the two input operands 725 and 730. The second operation uses the result of the first operation and the bit vector 735. The third operation uses the result of the second operation and the bit vector 740. In one embodiment, the executions of these three operations are compressed into one operation using one redundant adder and the extended carry-save formats for the operands. 
For example, the four bit vectors 725, 730, 735 and 740 can be used to represent the four operands in the conventional form, and the four bit vectors can be added using one redundant adder.FIG. 7C is another simplified representation of an instruction sequence that can be compressed using the redundant adder. Each of the rectangles represents a bit vector. The top three rectangles 745, 750 and 755 represent the bit vectors corresponding to the operands in a first dependency chain. The bottom three rectangles 760, 765 and 770 represent the bit vectors corresponding to the operands in a second dependency chain. The operations associated with the second dependency chain are independent of the operations associated with the first dependency chain. In this case, both dependency chains can be executed in parallel by compressing their operations using one adder. For example, the first dependency chain can include the instructions A+B=C and C+D=E. The operands A, B and D are represented by the bit vectors 745, 750 and 755. The two add operations may be compressed into one addition and processed by one full adder array. The second dependency chain can include the instructions V+W=X and X+Y=Z. The operands V, W and Y can be represented by the bit vectors 760, 765 and 770. The execution of the second dependency chain is similar to that of the first dependency chain in that the two operations can be compressed into one addition and processed by one full adder array.In one embodiment, the 4-2 redundant adder may be viewed as two 3-2 adders, the top adder and the bottom adder. This may require modifying the 4-2 adder to accept the additional input signals. In one embodiment, two two-input multiplexors are inserted between the two redundant adder arrays. FIG. 7D is an exemplary illustration of one embodiment of the redundant adder arrays with the multiplexors. There are two bit positions 773 and 776 illustrated in FIG. 7D. The intermediate signal 784 between the two adder cells 774 and 775 is supplied to the multiplexor 780. Normally, the intermediate signal 784 is sent directly to the full adder 775 by the multiplexor 780. However, when the top operation and the bottom operation are independent of each other, as illustrated in FIG. 7C, the intermediate signal 784 is not sent to the full adder 775. In this situation, the full adder 775 may be used to accept another bit vector for a different add operation. For example, the multiplexor 780 is used to send another input operand 793 to the full adder 775 so that the adder 775 can be used to process the add operation corresponding to the second dependency chain illustrated in FIG. 7C. Using the multiplexor 780 to send another signal 793 to the second full adder 775 allows the top layer of full adders 774 and 786 to perform one add operation while allowing the bottom layer of full adders 775 and 787 to perform another add operation. The four pairs of bits 797, 798, 799 and 800 represent the redundant results of the four full adders 786, 787, 774 and 775. Using this approach, six different bit vectors 790, 791, 792, 793, 794 and 795 may be added by one adder in one cycle. Performance can be improved since multiple dependent and independent instructions can be compressed into one operation during one cycle time.FIG. 8 illustrates exemplary scenarios where multiple operations can be executed in one cycle. 
The operations associated with the scenarios 820, 825 and 830 can be executed in one cycle using the exemplary redundant adder illustrated in FIG. 7D. Each circle represents an instruction. The fully shaded circle represents an instruction having its input operands in the conventional form. The non-shaded circle represents an instruction having two input operands in the redundant form. The lightly shaded circle represents an instruction having one input operand in the redundant form and one input operand in the conventional form.In the scenario 820, there are two independent instructions 805 and 810, each having its input operands in the conventional form. The results of the instructions 805 and 810 are in the redundant form. The instruction 815 depends on the instructions 805 and 810. The two results are used as the input operands by the instruction 815. For example, the instruction 805 can be A+B=C, the instruction 810 can be X+Y=Z, and the instruction 815 can be C+Z=D, where C and Z are in the redundant form. Normally, the instructions 805, 810 and 815 are executed in sequence. Since the input operands of the instructions 805 and 810 are in the conventional form, each input operand may be represented by one bit vector. Furthermore, since the redundant adder can add four bits per position, all three operations can be performed in one cycle. For example, the two input operands (bit vectors) associated with the instruction 805 and one input for the instruction 810 can be added by the top full adder. The resulting sum bit from the first adder is passed through the multiplexor and is added to the second input for the instruction 810 by the bottom full adder. Thus, the three operations 805, 810 and 815 can be performed in one cycle by adding four bit vectors. The scenario 820 is similar to the representation illustrated in FIG. 7A.In the scenario 825, the instruction 822 generates a result in the redundant form. The instruction 823 accepts one input operand from the instruction 822. The instruction 823 generates a result in the redundant form. The instruction 824 accepts that result as its input operand. The conventional form input operands of the instructions 822, 823 and 824 can be added in one cycle. For example, the instructions 822, 823 and 824 can be A+B=C, C+D=F and F+G=H, with the input operands A, B, D and G in the conventional form, and the input operands C and F in the redundant form. The conventional form input operands A, B, D and G can be represented by four bit vectors. In this example, the bit vectors corresponding to the input operands A, B and D can be added by the top full adder. G and the two bit vectors corresponding to the F operand can be added by the bottom full adder. The sum and carry bits of the top full adder are passed through the multiplexor and are added by the bottom full adder. This allows the three instructions to be added in one cycle. The scenario 825 is similar to the representation illustrated in FIG. 7B.In the scenario 830, there are two dependency chains independent of each other, with each chain having two instructions. The instruction 828 accepts the input operand in the redundant form from the instruction 826. For example, the first dependency chain may contain the instructions A+B=C and C+D=E, and the second dependency chain may contain the instructions F+G=H and H+J=K. There are six operands, or six bit vectors, being added in this example: A, B and D, and F, G and J. 
The operands are in the conventional form, and each operand is represented by one bit vector. Using the redundant adder illustrated in FIG. 7D, the above four add instructions can be performed in one cycle by adding the six bit vectors. For example, the three operands associated with the first dependency chain can be added by the top full adder. Similarly, the three operands associated with the second dependency chain can be added by the bottom full adder. The multiplexor is configured to pass the bit corresponding to one of the operands in the second dependency chain to the bottom full adder. The results of the two dependency chains are in the redundant form. The scenario 830 is similar to the representation illustrated in FIG. 7C. The operations corresponding to the instructions in the scenarios 820, 825 and 830 can be executed in one cycle using the method illustrated in FIG. 7D. It would be appreciated by one skilled in the art that the dependency chain can be formed with two or more operations using the method described. For example, as long as the current instruction can operate with the input operand in the redundant form, and as long as the previous instruction in the dependency chain can generate its result in the redundant form, the dependency chain can continue to grow. In one embodiment, when the previous instruction in the dependency chain is the type of instruction that generates its result in the conventional form, the dependency chain may end at the previous instruction. In another embodiment, the current instruction can be used to start another dependency chain using the same redundant operand requirement discussed above.For each addition operation, there are four possible combinations of input operands. For example, the input operands can be two conventional input operands, one conventional input operand and one redundant input operand, one redundant input operand and one conventional input operand, or two redundant input operands. FIG. 9A illustrates an exemplary addition operation with two conventional input operands using the extended carry-save format. The redundant adder/subtractor comprises a chain of 4-2 compressors as illustrated in FIG. 7D. In one embodiment, for all add operations, the C0in bit is set to zero. When the two input operands are in the conventional form, the first and second input operands are mapped directly to the sum vector 902 and the carry vector 904 of the result in the redundant form. The exemplary bit representations of the two input operands 905, 906 and the respective bit representations of the sum vector 907 and the carry vector 908 are shown on the right of FIG. 9A. The C0in bit 909 in the extended carry-save representation is set to zero for the add operation.FIG. 9B illustrates an exemplary addition operation with one redundant input operand and one conventional input operand using the 4-2 compressor illustrated in FIG. 7D and the extended carry-save format. In this example, the first input operand is in the redundant form and the second input operand is in the conventional form. The conventional input operand 910 is mapped to the input of the adder with its value in the sum vector 912. The associated carry vector 914 has all of its bits set to zero. The C0in bit 915 for this second input operand 910 is set to zero. Generally, at every bit position, four bits can be added. 
However, for the rightmost bit position, shown by the enclosing rectangle 916, five bits are added, including the C0in bit of the redundant form input operand using the extended carry-save format. In one embodiment, the C0in bit 915 is mapped to the zero position 917 in the carry vector of the result. As for all add operations using the extended carry-save format, the extra bit 918 of the result is set to zero. The exemplary bit representations of the two input operands and the respective bit representations of the sum vector and the carry vector are shown on the right of FIG. 9B. Only the sum vector bit representation is shown for the second input operand because the associated carry vector bit representation is all zero.FIG. 9C illustrates another exemplary addition operation with one redundant input operand and one conventional input operand using the 4-2 compressor of FIG. 7D and the extended carry-save format. In this example, the first input operand is in the conventional form and the second input operand is in the redundant form. The conventional input operand 920 is mapped to the input of the adder with its value in the sum vector 922. The bits in the carry vector 924 are set to zero. The C0in bit 925 for this first input operand is set to zero. The five bits in the zero position, shown by the enclosing rectangle 926, are added together. The other bit positions of the input operands have four bits to be added. The result of the add operation is stored in the sum vector and the carry vector of the result. The C0in bit 928 of the second input operand is mapped to the zero position 929 in the carry vector of the result. Since this is an add operation, the extra bit 930 of the result is set to zero. The exemplary bit representations of the two input operands and the respective bit representations of the sum vector and the carry vector are shown on the right of FIG. 9C. Only the sum vector bit representation is shown for the first input operand because the associated carry vector bit representation is all zero.FIG. 9D illustrates an exemplary addition operation with two redundant input operands using the 4-2 compressor illustrated in FIG. 7D and the extended carry-save format. The five bits in the zero position enclosed by the rectangle 940 are added, and the sum is stored in the zero position of the sum vector 942. The C0in bit 944 of the second input operand is mapped to the zero position 946 in the carry vector of the result. Since this is an add operation, the extra bit 948 of the result is set to zero. The exemplary bit representations of the two input operands and the respective bit representations of the sum vector and the carry vector are shown on the right of FIG. 9D.For a subtract operation, generally the second input operand is inverted, a 1 is added, and the subtract operation is changed to an add operation. The inversion of the second operand changes every bit in the second input operand from a 1 to a 0 and from a 0 to a 1. For example, the subtraction A-B is equivalent to A+(invert B)+1. FIG. 10A illustrates an exemplary subtraction operation with two conventional input operands using the 4-2 compressor illustrated in FIG. 7D and the extended carry-save format. In one embodiment, the first input operand 1005 is mapped to the sum vector 1008 of the result. The second input operand 1010 is inverted and then mapped to the carry vector 1012 of the result. For the subtract operation, the C0in bit 1015 is set to 1. 
There is no computation performed in this operation except for the small delay of the inverter 1018. The exemplary bit representations of the two input operands and the respective bit representations of the sum vector and the carry vector are shown on the right of FIG. 10A.FIG. 10B illustrates an exemplary subtraction operation with one redundant input operand and one conventional input operand using the 4-2 compressor illustrated in FIG. 7D and the extended carry-save format. In this example, the first input operand is in the redundant form and the second input operand is in the conventional form. The sum vector 1022 contains the inverted bits from the bit vector 1020. Instead of adding a 1 to the zero bit position of the sum vector, the 1 is inserted into the C0in bit 1025 of the result. The five bits in the zero position, shown by the enclosing rectangle 1026, are added together. The other bit positions of the input operands have four bits to be added. The result of the add operation is stored in the sum vector 1027 and the carry vector 1028. The C0in bit 1029 of the second input operand is mapped to the zero position in the carry vector 1028 of the result. The exemplary bit representations of the two input operands and the respective bit representations of the sum vector and the carry vector are shown on the right of FIG. 10B. Only the sum vector bit representation is shown for the second input operand because the associated carry vector bit representation is all zero. Note that for a subtract operation, the C0in bit 1025 of the result is a 1.FIG. 10C illustrates another exemplary subtraction operation with one redundant input operand and one conventional input operand using the extended carry-save format. In this example, the first input operand is in the conventional form and the second input operand is in the redundant form. With the second input operand in the redundant form, the following equivalence is observed, where A is the first input operand in the conventional form, B is the second input operand in the redundant form, Bs and Bc are the sum vector and the carry vector for the second input operand, Cbin is the extra bit in the extended carry-save format, and a tilde denotes bitwise inversion: A - B = A - (Bs + Bc + Cbin) = A + ~Bs + ~Bc + 1 + ~Cbin. This follows from the 2's complement identities -Bs = ~Bs + 1 and -Bc = ~Bc + 1, together with -Cbin = ~Cbin - 1 for the single bit Cbin (for example, when Cbin = 0, ~Cbin = 1); the correction terms +1, +1 and -1 net to the single +1 shown above. In one embodiment, both the sum vector 1030 and the carry vector 1032 associated with the second input operand are inverted into the vectors 1034 and 1036. Instead of adding a 1 to the zero bit position of the inverted sum vector 1034, the 1 is inserted into the C0in bit 1040 of the result. The C0in bit 1033 of the second input operand is inverted and inserted in the C0in bit 1039. The exemplary bit representations of the two input operands and the respective bit representations of the sum vector and the carry vector are shown on the right of FIG. 10C. Note that for a subtract operation, the C0in bit 1040 of the result is a 1.FIG. 10D illustrates an exemplary subtraction operation with two redundant input operands using the extended carry-save format. The sum vector 1042 and the carry vector 1043 of the second input operand are inverted into the vectors 1044 and 1045. Instead of adding a 1 to the zero bit position of the inverted sum vector 1044, the 1 is inserted into the C0in bit 1050 of the result. The C0in bit 1046 of the second input operand is inverted and inserted in the C0in bit 1047. 
An add operation is performed and the result is stored in the sum vector 1048 and the carry vector 1049. The C0in bit 1047 is then copied to the zero bit position in the carry vector 1049 of the result. The exemplary bit representations of the two input operands and the respective bit representations of the sum vector and the carry vector are shown on the right of FIG. 10D. Note that the justification for the inversion of the sum vector and the carry vector associated with the second operand is shown at the top of FIG. 10D. Consistent with the previous exemplary subtract operations, the C0in bit 1050 of the result is a 1. Because of the inversion of the input operand, the inverter delay for the subtract operations may need to be hidden to provide for zero-cycle direct mapping.It would be appreciated by one skilled in the art that the redundant operands can also be used in executing other operations, such as, for example, compare and shift operations, without departing from the scope of the invention. For example, to compare two input operands, the redundant subtract operation is first performed. Then, zero detection logic and conversion are applied. Conversion is needed to produce the sign bit of the redundant operands and the result of the subtraction.FIG. 11 illustrates an embodiment of a computer system that can be used with the present invention. The various components shown in FIG. 11 are provided by way of example. Certain components of the computer in FIG. 11 can be deleted for a particular implementation of the invention. The computer shown in FIG. 11 may be any type of computer, including a general-purpose computer.FIG. 11 illustrates a system bus 1100 to which various components are coupled. A processor 1102 performs the processing tasks required by the computer. The processor 1102 may be any type of processing device capable of implementing the blocks necessary to perform the redundant arithmetic operations discussed above. In one embodiment, the processor 1102 comprises an arithmetic logic unit (ALU) 1103 implemented to perform redundant arithmetic operations. A read-only memory (ROM) 1106 and a random access memory (RAM) 1108 are coupled to bus 1100 and provide a storage mechanism for various data and information used by the computer. Although ROM 1106 and RAM 1108 are shown coupled to bus 1100, in alternate embodiments, ROM 1106 and RAM 1108 are coupled directly to processor 1102 or coupled to a dedicated memory bus (not shown). In one embodiment, the ROM 1106 may comprise code to monitor the execution sequence of instructions processed by the processor 1102. The ROM 1106 may also comprise code to form dependency chains using the execution sequence information and the redundant arithmetic information. In another embodiment, the monitoring function and the dependency chain formation function may be implemented in hardware logic. A video display 1110 is coupled to bus 1100 and displays various information and data to the user of the computer. A disk drive 1112 is coupled to bus 1100 and provides for the long-term mass storage of information. The disk drive 1112 may be used to store various data sets and other data generated by and used by the system.From the above description and drawings, it will be understood by those of ordinary skill in the art that the particular embodiments shown and described are for purposes of illustration only and are not intended to limit the scope of the invention. 
Those of ordinary skill in the art will recognize that the invention may be embodied in other specific forms without departing from its spirit or essential characteristics. References to details of particular embodiments are not intended to limit the scope of the claims. |
Detailed herein are systems, apparatuses, and methods for a computer architecture with instruction set support to mitigate page-fault-based and/or cache-based side-channel attacks. In an embodiment, a processor includes a decoder to decode an instruction into a decoded instruction, the instruction comprising a first field that indicates an instruction pointer to a user-level event handler; and an execution unit to execute the decoded instruction to, after a swap of an instruction pointer that indicates where an event occurred from a current instruction pointer register into a user-level event handler pointer register, push the instruction pointer that indicates where the event occurred onto call stack storage, and change a current instruction pointer in the current instruction pointer register to the instruction pointer to the user-level event handler. |
1. A processor comprising:a decoder to decode an instruction into a decoded instruction, the instruction comprising a first field that indicates an instruction pointer to a user-level event handler; andan execution unit to execute the decoded instruction to, after a swap of an instruction pointer that indicates where an event occurred from a current instruction pointer register into a user-level event handler pointer register, push the instruction pointer that indicates where the event occurred onto call stack storage, and change a current instruction pointer in the current instruction pointer register to the instruction pointer to the user-level event handler.2. The processor of claim 1, wherein the instruction further comprises a second field that indicates a number of bits by which to change a stack pointer to the call stack storage, and the execution unit is to execute the decoded instruction to also change the stack pointer by the number of bits.3. The processor of any one of claims 1-2, wherein the execution unit is to execute the decoded instruction to also change a stack pointer to the call stack storage to protect a stack red zone from being overwritten by the instruction pointer that indicates where the event occurred.4. The processor of any one of claims 1-3, wherein the execution unit is to execute the decoded instruction only when the processor is not in an event-notify mode.5. The processor of claim 4, wherein the event-notify mode is set in an event-notify status register.6. The processor of any one of claims 1-5, wherein the execution unit is to execute the decoded instruction to also, after the swap of the instruction pointer that indicates where the event occurred from the current instruction pointer register into the user-level event handler pointer register, push the instruction pointer that indicates where the event occurred onto shadow stack storage.7. The processor of claim 6, wherein the shadow stack storage is not user-level writable.8. The processor of claim 6, wherein, on completion of execution of the user-level event handler, the processor is to pull a first instruction pointer from the call stack storage and a second instruction pointer from the shadow stack storage, and execute starting from the first instruction pointer only when the first instruction pointer and the second instruction pointer match.9. A method comprising:decoding an instruction into a decoded instruction with a decoder of a processor, the instruction comprising a first field that indicates an instruction pointer to a user-level event handler; andexecuting the decoded instruction with an execution unit of the processor to, after a swap of an instruction pointer that indicates where an event occurred from a current instruction pointer register into a user-level event handler pointer register, push the instruction pointer that indicates where the event occurred onto call stack storage, and change a current instruction pointer in the current instruction pointer register to the instruction pointer to the user-level event handler.10. The method of claim 9, wherein the instruction further comprises a second field that indicates a number of bits by which to change a stack pointer to the call stack storage, and the executing the decoded instruction with the execution unit is to also change the stack pointer by the number of bits.11. The method of any one of claims 9-10, wherein the executing the decoded instruction with the execution unit is also to change a stack pointer to the call stack storage to protect a stack red zone from 
being overwritten by the instruction pointer that indicates where the event occurred.12. The method of any one of claims 9-11, wherein the executing the decoded instruction with the execution unit is only when the processor is not in an event-notify mode.13. The method of claim 12, further comprising setting the event-notify mode in an event-notify status register of the processor.14. The method of any one of claims 9-13, wherein the executing the decoded instruction with the execution unit is also to, after the swap of the instruction pointer that indicates where the event occurred from the current instruction pointer register into the user-level event handler pointer register, push the instruction pointer that indicates where the event occurred onto shadow stack storage.15. The method of claim 14, further comprising, on completion of execution of the user-level event handler, pulling, by the processor, a first instruction pointer from the call stack storage and a second instruction pointer from the shadow stack storage, and executing starting from the first instruction pointer only when the first instruction pointer and the second instruction pointer match. |
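To illustrate the claimed dispatch-and-return checking, here is a small Python behavioral model, written for this description and not part of the patent, of how a processor might push the interrupted instruction pointer onto both a call stack and a shadow stack and later verify that the two match before resuming; all names (Cpu, dispatch_handler, checked_return) are hypothetical.

```python
# Minimal behavioral model (assumed, not from the patent) of the claimed
# user-level event dispatch: the interrupted instruction pointer is pushed
# onto both the ordinary call stack and a shadow stack, and execution only
# resumes from it when the two copies still match.

class Cpu:
    def __init__(self):
        self.rip = 0                 # current instruction pointer register
        self.handler_ptr = None      # user-level event handler pointer register
        self.call_stack = []         # user-writable call stack storage
        self.shadow_stack = []       # not user-level writable in hardware

    def dispatch_handler(self, handler_ip):
        # Swap: the IP where the event occurred moves into the handler
        # pointer register, then is pushed onto both stacks.
        event_ip = self.rip
        self.handler_ptr = event_ip
        self.call_stack.append(event_ip)
        self.shadow_stack.append(event_ip)
        self.rip = handler_ip        # redirect into the user-level handler

    def checked_return(self):
        ip = self.call_stack.pop()
        if ip != self.shadow_stack.pop():
            raise RuntimeError("call stack / shadow stack mismatch")
        self.rip = ip                # safe to resume

cpu = Cpu()
cpu.rip = 0x401000               # pretend an event occurs here
cpu.dispatch_handler(0x400500)   # enter the user-level event handler
cpu.checked_return()             # resumes at 0x401000 only if stacks agree
assert cpu.rip == 0x401000
```

In this model, a tampered return address (for example, overwriting cpu.call_stack[-1]) would fail the match on return, which is the property the shadow-stack claims above describe.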
FIELD OF THE INVENTIONThe disclosure relates generally to electronics, and, more specifically, an embodiment of the disclosure relates to circuitry to implement an instruction that causes a processor to operate in a side-channel protected mode.BACKGROUNDCertain classes of software-based side-channel attacks involve one software program (an attacker) obtaining information about another program (a victim) by exploiting a common underlying resource (e.g., a central processing unit (CPU)). Exemplary side-channel attacks include page fault-based attacks and cache-based attacks. Page fault-based attacks are side-channel attacks that target programs executed inside a trusted execution environment, in which the operating system (OS) is not in the trusted computing base. An attacker such as a malicious OS can perform a side-channel attack by observing the sequence of page faults during a program's execution, either by actively manipulating the page table or by passively observing changes in control bits of a page table entry. In this manner, the attacker can obtain the memory access pattern of the program during execution. If the memory access pattern depends on the secret information being processed, the attacker can infer the secret information indirectly. Cache-based side-channel attacks are more general attacks based on caches that are shared by programs executed by a CPU. The timing differences between a victim's cache misses and cache hits enable an attacker such as a malicious program to infer cache access patterns (e.g., which cache location is accessed and when it is accessed) of the victim. Based on those patterns, the attacker can infer secret information being processed by the victim program.BRIEF DESCRIPTION OF THE DRAWINGSThe present invention is illustrated by way of example and not limitation in the accompanying figures, in which like references indicate similar elements.Figure 1 illustrates a high-level view of side-channel attack mitigation according to some embodiments.Figures 2A-2B illustrate an embodiment of hardware to process instructions to support side-channel attack mitigation.Figures 3A-3D illustrate an exemplary data cache unit, an exemplary instruction cache unit, an exemplary instruction translation lookaside buffer unit, and an exemplary data translation lookaside buffer during execution of a series of exemplary instructions.Figures 4A-4B illustrate an exemplary data cache unit, an exemplary instruction cache unit, an exemplary instruction translation lookaside buffer unit, and an exemplary data translation lookaside buffer before and after an eviction event.Figure 5 illustrates embodiments of an ENBEGIN instruction, an ENEND instruction, a MOVCIP instruction, a PRELOAD CACHE instruction, a PRELOAD TLB instruction, and an ENCALL instruction.Figure 6 illustrates an embodiment of hardware to process the exemplary instructions illustrated in Figure 5 .Figure 7 illustrates an embodiment of a method performed by a processor to process an ENBEGIN instruction.Figure 8 illustrates an embodiment of a method performed by a processor to process an ENEND instruction.Figure 9 illustrates an embodiment of a method performed by a processor to process a MOVCIP instruction.Figure 10 illustrates an embodiment of a method performed by a processor to process a PRELOAD CACHE instruction.Figure 11 illustrates an embodiment of a method performed by a processor in response to an event that occurs while in an event-notify mode.Figure 12 illustrates an embodiment of a method performed by a processor 
to process a PRELOAD PAGE instruction.Figure 13 illustrates an embodiment of a method performed by a processor to process an ENCALL instruction.Figure 14 illustrates a computer system including a branch predictor and a branch address calculator (BAC) in a pipelined processor core according to embodiments of the disclosure.Figure 15 illustrates an example code flow for an event-notify mode according to embodiments of the disclosure.Figure 16 illustrates a stack used in an event-notify mode according to embodiments of the disclosure.Figure 17A is a block diagram illustrating a generic vector friendly instruction format and class A instruction templates thereof according to embodiments of the disclosure.Figure 17B is a block diagram illustrating the generic vector friendly instruction format and class B instruction templates thereof according to embodiments of the disclosure.Figure 18A is a block diagram illustrating fields for the generic vector friendly instruction formats in Figures 17A and 17B according to embodiments of the disclosure.Figure 18B is a block diagram illustrating the fields of the specific vector friendly instruction format in Figure 18A that make up a full opcode field according to one embodiment of the disclosure.Figure 18C is a block diagram illustrating the fields of the specific vector friendly instruction format in Figure 18A that make up a register index field according to one embodiment of the disclosure.Figure 18D is a block diagram illustrating the fields of the specific vector friendly instruction format in Figure 18A that make up the augmentation operation field 1750 according to one embodiment of the disclosure.Figure 19 is a block diagram of a register architecture according to one embodiment of the disclosure.Figure 20A is a block diagram of a single processor core, along with its connection to the on-die interconnect network and with its local subset of the Level 2 (L2) cache, according to embodiments of the disclosure.Figure 20B is an expanded view of part of the processor core in Figure 20A according to embodiments of the disclosure.Figure 21 is a block diagram of a processor that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to embodiments of the disclosure.Figure 22 is a block diagram of a system in accordance with one embodiment of the present disclosure.Figure 23 is a block diagram of a more specific exemplary system in accordance with an embodiment of the present disclosure.Figure 24 is a block diagram of a second more specific exemplary system in accordance with an embodiment of the present disclosure.Figure 25 is a block diagram of a system on a chip (SoC) in accordance with an embodiment of the present disclosure.Figure 26 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the disclosure.DETAILED DESCRIPTIONIn the following description, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. 
In other instances, well-known circuits, structures, and techniques have not been shown in detail in order not to obscure the understanding of this description.

References in the specification to "one embodiment," "an embodiment," "an example embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to affect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.

Detailed herein are embodiments of a set of instructions and hardware support to detect and protect against side-channel attacks. In particular, the disclosed embodiments relate to a set of instructions that software programs can leverage to eliminate the ability of an attacker to obtain secret information. The instructions place a processor in a side-channel protected mode (referred to as an "event-notify mode"). In the event-notify mode, certain events that might be indicative of a side-channel attack cause user-level program execution to redirect through a user-level event handler. The user-level event handler allows the user-level program to prevent an attacker from observing cache or memory access patterns by pinning critical or sensitive information in a cache (to prevent cache-based attacks) or a translation lookaside buffer (TLB) (to prevent page fault-based attacks). With the instructions and hardware support, user-level programs can incorporate a lightweight protection mechanism against side-channel attacks.

Figure 1 illustrates a high-level view of side-channel attack mitigation according to some embodiments. As illustrated, a program flow 100 of a user-level application includes an ENBEGIN instruction 105, a preamble routine 110, a security-critical routine 115, and an ENEND instruction 120. Here, the dashing of the preamble routine 110 and the security-critical routine 115 indicates that they are specific to the user-level application and may vary from one application to another. The preamble routine 110 and the security-critical routine 115 include instructions related to the user-level application being protected. To provide protection, the processor executing the application enters an event-notify mode 106. A user-level application can instruct the processor to enter the event-notify mode 106 to proactively protect secret information from side-channel attacks. In particular, the user-level application wraps the preamble routine 110 and the security-critical routine 115 with the ENBEGIN instruction 105 and the ENEND instruction 120. Note that although Figure 1 illustrates the ENBEGIN instruction 105 as separate from the preamble routine 110, in some embodiments, the ENBEGIN instruction is included early in the preamble routine 110.

Before the processor executes the security-critical routine 115, the preamble routine 110 causes the processor to load all security-critical code and/or data into cache(s) and/or TLB(s).
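For illustration only, this wrap pattern can be modeled in ordinary software. The following minimal C sketch treats the per-thread event-notify state as a structure and models ENBEGIN and ENEND as functions over it; the names (enbegin, enend, preamble, and the handler) are hypothetical stand-ins for the instructions, not an actual ISA or library interface.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical software model of the per-thread event-notify state. */
    typedef struct {
        bool event_notify;   /* event-notify status flag */
        uintptr_t erip;      /* entry point of the user-level event handler */
    } thread_state;

    static void enbegin(thread_state *t, uintptr_t handler) {
        t->event_notify = true;   /* enter event-notify mode */
        t->erip = handler;        /* remember where to redirect on an event */
    }

    static void enend(thread_state *t) {
        t->event_notify = false;  /* exit event-notify mode; T-bits would also be cleared here */
    }

    static void handler(void)           { puts("handler: re-pin code/data and retry"); }
    static void preamble(void)          { puts("preamble: preload security-critical code/data"); }
    static void security_critical(void) { puts("security-critical routine"); }

    int main(void) {
        thread_state t = {0};
        enbegin(&t, (uintptr_t)handler);  /* models ENBEGIN <handler> */
        preamble();                       /* load caches/TLBs */
        security_critical();              /* protected region */
        enend(&t);                        /* models ENEND */
        return 0;
    }

Under this model, a tracked eviction would be delivered by invoking handler() and re-running preamble(), mirroring the re-pinning flow of Figure 1.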
Absent an eviction event 125 during the event-notify mode 106, the processor executes the preamble routine 110 and the security-critical routine 115, and then exits the event-notify mode 106 upon executing the ENEND instruction.

If an eviction event 125 occurs while the processor is processing instructions in the event-notify mode 106, the processor raises an exception to redirect the user-level application flow to a user-level event handler 130. Again, the dashing of the user-level event handler 130 indicates that it is specific to the user-level application and may vary from one application to another. When the redirection occurs, the processor exits the event-notify mode. Once in the user-level event handler, the software program can implement a variety of side-channel mitigation measures. One example of such a mitigation measure is for the user-level event handler to issue an ENBEGIN instruction 105 and then call the preamble routine 110 (as illustrated), or simply to call the preamble routine 110 (if the preamble routine includes the ENBEGIN instruction 105). The preamble routine 110 causes the processor to reload the security-critical code and/or data into cache(s) and/or TLB(s). In this manner, the user-level event handler 130 and the preamble routine 110 effectively "pin" the code and/or data in the cache(s)/TLB(s). Because a pre-condition of a successful page fault- or cache-based side-channel attack is causing or monitoring evictions, an attacker cannot observe or manipulate the security-critical code and/or data. Furthermore, since the security-critical code and/or data is pre-loaded into the cache(s)/TLB(s), an attacker cannot obtain information based on a victim's execution or cache footprint before and after execution.

Exemplary Core Architecture

Figures 2A-2B illustrate an embodiment of hardware to process instructions to support side-channel attack mitigation. In particular, Figure 2A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the invention. Figure 2B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the invention. The solid lined boxes in Figures 2A-2B illustrate the in-order pipeline and in-order core, while the optional addition of the dashed lined boxes illustrates the register renaming, out-of-order issue/execution pipeline and core. Given that the in-order aspect is a subset of the out-of-order aspect, the out-of-order aspect will be described.

In Figure 2A, a processor pipeline 200 includes a fetch stage 202, a length decode stage 204, a decode stage 206, an allocation stage 208, a renaming stage 210, a scheduling (also known as a dispatch or issue) stage 212, a register read/memory read stage 214, an execute stage 216, a write back/memory write stage 218, an exception handling stage 222, and a commit stage 224.

Figure 2B shows processor core 290 including a front end unit 230 coupled to an execution engine unit 250, and both are coupled to a memory unit 270. The core 290 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type.
As yet another option, the core 290 may be a special-purpose core, such as, for example, a network or communication core, compression engine, coprocessor core, general-purpose computing graphics processing unit (GPGPU) core, graphics core, or the like.

The front end unit 230 includes a branch prediction unit 232 coupled to an instruction cache unit 234, which is coupled to an instruction TLB 236, which is coupled to an instruction fetch unit 238, which is coupled to a decode unit 240. The decode unit 240 (or decoder) may decode instructions, and generate as an output one or more micro-operations, micro-code entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions. The decode unit 240 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), etc. In one embodiment, the core 290 includes a microcode ROM or other medium that stores microcode for certain macroinstructions (e.g., in decode unit 240 or otherwise within the front end unit 230). The decode unit 240 is coupled to a rename/allocator unit 252 in the execution engine unit 250.

The execution engine unit 250 includes the rename/allocator unit 252 coupled to a retirement unit 254 and a set of one or more scheduler unit(s) 256. The scheduler unit(s) 256 represents any number of different schedulers, including reservation stations, central instruction window, etc. The scheduler unit(s) 256 is coupled to the physical register file(s) unit(s) 258. Each of the physical register file(s) units 258 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, status (e.g., an instruction pointer that is the address of the next instruction to be executed), etc. In one embodiment, the physical register file(s) unit 258 comprises a vector registers unit, a write mask registers unit, and a scalar registers unit. These register units may provide architectural vector registers, vector mask registers, and general-purpose registers. The physical register file(s) unit(s) 258 is overlapped by the retirement unit 254 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using a reorder buffer(s) and a retirement register file(s); using a future file(s), a history buffer(s), and a retirement register file(s); using register maps and a pool of registers; etc.). The retirement unit 254 and the physical register file(s) unit(s) 258 are coupled to the execution cluster(s) 260. The execution cluster(s) 260 includes a set of one or more execution units 262 and a set of one or more memory access units 264. The execution units 262 may perform various operations (e.g., shifts, addition, subtraction, multiplication) on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point). While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions.
The scheduler unit(s) 256, physical register file(s) unit(s) 258, and execution cluster(s) 260 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline that each have their own scheduler unit, physical register file(s) unit, and/or execution cluster - and in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s) 264). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order.

The set of memory access units 264 is coupled to the memory unit 270, which includes a data TLB unit 272 coupled to a data cache unit 274 coupled to a level 2 (L2) cache unit 276. In one exemplary embodiment, the memory access units 264 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 272 in the memory unit 270. The instruction cache unit 234 is further coupled to the level 2 (L2) cache unit 276 in the memory unit 270. The L2 cache unit 276 is coupled to one or more other levels of cache and eventually to a main memory.

By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement the pipeline 200 as follows: 1) the instruction fetch unit 238 performs the fetch and length decoding stages 202 and 204; 2) the decode unit 240 performs the decode stage 206; 3) the rename/allocator unit 252 performs the allocation stage 208 and renaming stage 210; 4) the scheduler unit(s) 256 performs the schedule stage 212; 5) the physical register file(s) unit(s) 258 and the memory unit 270 perform the register read/memory read stage 214; 6) the execution cluster 260 performs the execute stage 216; 7) the memory unit 270 and the physical register file(s) unit(s) 258 perform the write back/memory write stage 218; 8) various units may be involved in the exception handling stage 222; and 9) the retirement unit 254 and the physical register file(s) unit(s) 258 perform the commit stage 224.

The core 290 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, CA; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, CA), including the instruction(s) described herein.
In one embodiment, the core 290 includes logic to support a packed data instruction set extension (e.g., AVX1, AVX2), thereby allowing the operations used by many multimedia applications to be performed using packed data.

It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways including time sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that physical core is simultaneously multithreading), or a combination thereof (e.g., time sliced fetching and decoding and simultaneous multithreading thereafter such as in the Intel® Hyperthreading technology).

While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor also includes separate instruction and data cache units 234/274 and a shared L2 cache unit 276, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a Level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor.

The core 290 raises the user-level event handler whenever certain events that might be associated with an attack occur ("security-critical events") during the event-notify mode. Exemplary security-critical events include eviction of entries in the data TLB unit 272, the data cache unit 274, the instruction cache unit 234, or the instruction TLB 236. In addition, the core 290 raises the user-level event handler in response to other security-critical events such as an external interrupt or exception.

Several features of the core 290 support the event-notify mode, including one or more registers, entry-level tracking of cache or TLB entries impacted by security-critical events (described with reference to Figures 3A-3D and 4A-4B), and the addition of instructions including ENBEGIN and ENEND (described with reference to Figure 5). With regard to register support, the core 290 includes one or more registers to support the flow and status of the event-notify mode. In an exemplary embodiment, each thread supported by the core 290 includes an event-notify status flag in a register, a user-level event handler (e.g., trampoline) pointer (ERIP) register, and a current instruction pointer (RIP) register. The event-notify status flag indicates whether the core 290 is operating in event-notify mode for the thread. In one embodiment, the RIP stores the current (e.g., next) instruction pointer of the processor. In one embodiment, a CIP register (e.g., separate from the RIP register) stores a copy of the instruction pointer (IP) of the program flow when a security-critical event occurs.
In one embodiment, the ERIP register stores the location of (e.g., a trampoline to) the user-level event handler that the core 290 redirects program flow to after occurrence of a security-critical event, for example, and then stores a copy of the instruction pointer (IP) of the program flow where a security-critical event occurs (e.g., the IP for the instruction following the last instruction retired before the event).

In certain embodiments, the event-notify status flag and the ERIP register need not be saved during a context-switch. In one embodiment, when a security-critical event occurs, the core 290 uses the information in the ERIP register prior to any context-switch, and the event-notify status flag can be cleared without saving as the core 290 exits the event-notify mode upon occurrence of a security-critical event.

Table 1 below illustrates an example instruction sequence in reference to Figures 3A-3D (where I1 is instruction one, I2 is instruction two, etc.), in which ENPF is to only set T-bits in a cache and ENPFPG is to only set T-bits in a TLB (e.g., instead of a single instruction setting T-bits in both the cache and the TLB). Figure 3A illustrates these data structures after execution of I1, Figure 3B after execution of I2 and I3, Figure 3C after execution of I4 and I5, and Figure 3D after execution of I6.

Table 1: Example instruction sequence

    Instruction                 IP (e.g., PC)    Accessed virtual address
    I1: ENBEGIN                 0x1000           -
    I2: ENPF (%rsi)             0x1004           0x4000
    I3: ENPFPG (%rsi)           0x1008           0x4000
    I4: ENPF 0x1000(%rsi)       0x100c           0x5000
    I5: ENPFPG 0x1000(%rsi)     0x1010           0x5000
    I6: ENEND                   0x1014           -

Figures 3A-3D and 4A-4B illustrate an exemplary instruction TLB unit 236, an exemplary data TLB unit 272, an exemplary instruction cache unit 234, and an exemplary data cache unit 274. At a high level, an entry in a cache or TLB includes a per-thread tracking bit that indicates whether the entry is tracked in the event-notify mode. Rather than redirect program flow to the user-level event handler 130 on any cache eviction, the tracking or "T" bits allow the core 290 to redirect program flow only when an eviction of security-critical code or data occurs. The T-bits can be set or cleared independently for each hardware thread even if multiple threads share code or data and access the shared code/data in the event-notify mode. In an exemplary embodiment, an access of an entry in a TLB or cache while operating in the event-notify mode brings the data into a tracked state. In other words, if the program flow (e.g., the preamble routine 110 and the security-critical routine 115) causes a memory access that hits within the cache or TLB, the core 290 (e.g., logic controlling the cache or TLB) sets the T-bit of the associated entry for the thread associated with the access. Further, if the program flow causes a memory access that misses the cache or TLB and results in retrieval of data from another memory, the core 290 sets the T-bit of the cache/TLB entry in which the retrieved data is stored. In some embodiments, as described below, a mask value may enable or disable eviction tracking for a particular cache and/or TLB, thereby preventing the T-bit from being set in the masked cache(s)/TLB(s).
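For illustration only, this set-on-access tracking can be modeled in software. The C sketch below assumes a toy direct-mapped cache with per-thread T-bits and a tracking mask; all names are hypothetical and nothing here corresponds to an actual hardware interface.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NUM_THREADS 2
    #define NUM_SETS    4   /* toy direct-mapped cache */

    /* Hypothetical model of per-thread tracking ("T") bits: an access or
     * fill while a thread is in event-notify mode marks the touched entry
     * as tracked for that thread, unless tracking is masked for the cache. */
    typedef struct {
        bool     valid;
        uint64_t tag;
        bool     tbit[NUM_THREADS];
    } cache_entry;

    typedef struct {
        cache_entry sets[NUM_SETS];
        bool        tracking_masked;   /* from the ENBEGIN mask operand */
    } cache;

    static void access_addr(cache *c, uint64_t addr, int thread, bool in_en_mode) {
        uint64_t tag = addr / NUM_SETS;
        cache_entry *e = &c->sets[addr % NUM_SETS];
        if (!e->valid || e->tag != tag) {           /* miss: fill from memory */
            e->valid = true;
            e->tag = tag;
            for (int t = 0; t < NUM_THREADS; t++) e->tbit[t] = false;
        }
        if (in_en_mode && !c->tracking_masked)      /* hit or fill: track it */
            e->tbit[thread] = true;
    }

    int main(void) {
        cache dcache = {0};
        access_addr(&dcache, 0x4000, /*thread=*/0, /*in_en_mode=*/true);
        printf("T0 bit for 0x4000: %d\n", dcache.sets[0x4000 % NUM_SETS].tbit[0]);
        return 0;
    }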
Figures 3A-3D illustrate the state of the instruction TLB unit 236, the data TLB unit 272, the instruction cache unit 234, and the data cache unit 274 during execution of the exemplary set of instructions illustrated in Table 1. Figures 4A-4B illustrate the state of the caches and TLBs before and after an eviction.

As illustrated in each of Figures 3A-3D and 4A-4B, and assuming a two-thread core 290: each entry in the instruction TLB unit 236 includes a virtual address 302, a physical address 304, a first thread T-bit ("T0") 306, and a second thread T-bit ("T1") 308; each entry in the data TLB unit 272 includes a virtual address 318, a physical address 320, a first thread T-bit ("T0") 322, and a second thread T-bit ("T1") 324; each entry in the instruction cache unit 234 includes a tag 310, data 312, a first thread T-bit ("T0") 314, and a second thread T-bit ("T1") 316; and each entry in the data cache unit 274 includes a tag 326, data 328, a first thread T-bit ("T0") 330, and a second thread T-bit ("T1") 332.

Figure 3A illustrates the state of the TLBs 236, 272 and the caches 234, 274 after the processor executes the first instruction. Initially, the core 290 is processing the program and the program counter proceeds to a memory location that includes the ENBEGIN instruction. The core 290 fetches the instruction from memory, resulting in updates to the instruction TLB unit 236 and the instruction cache unit 234. In this example, the instruction TLB unit 236 is updated with an entry having a virtual address 302 of 0x1000 and a physical address 304 of 0xc000, and the instruction cache unit 234 is updated with an entry having a tag 310 of 0xc000 and a data value that includes instructions beginning at virtual address 0x1000. Further, the T-bits 306, 308, 314, and 316 of the new entries for the first and second thread remain '0' as the core 290 has yet to enter event-notify mode. Because the instruction does not involve data, the data TLB unit 272 and the data cache unit 274 remain unchanged.

Figure 3B illustrates the state of the TLBs 236, 272 and the caches 234, 274 after the processor executes the second (ENPF) instruction and third (ENPFPG) instruction. The second instruction causes the data cache unit 274 to be updated with an entry having a tag 326 of 0x9000. Further, the T-bit 330 corresponding to the new entry is set. The third instruction causes the data TLB unit 272 to be updated with an entry having a virtual address 318 of 0x4000 and a physical address 320 of 0x9000. Further, the T-bit 322 corresponding to the new entry is set.

Figure 3C illustrates the state of the TLBs 236, 272 and the caches 234, 274 after the processor executes the fourth (ENPF) instruction and fifth (ENPFPG) instruction. The fourth instruction causes the data cache unit 274 to be updated with an entry having a tag 326 of 0xa000. Further, the T-bit 330 corresponding to the new entry is set. The fifth instruction causes the data TLB unit 272 to be updated with an entry having a virtual address 318 of 0x5000 and a physical address 320 of 0xa000. Further, the T-bit 322 corresponding to the new entry is set.

Figure 3D illustrates the state of the TLBs 236, 272 and the caches 234, 274 after the processor executes the sixth (ENEND) instruction. When the core 290 processes the ENEND instruction, it clears (or causes logic associated with the cache(s)/TLB(s) to clear) all of the set T-bits along with other operations (e.g., clearing the event-notify status flag).

Figures 4A-4B illustrate the state of the instruction TLB unit 236, the data TLB unit 272, the instruction cache unit 234, and the data cache unit 274 before and after a security-critical event, such as a cache eviction.
Again assuming a two-thread core 290: each entry in the instruction TLB unit 236 includes a virtual address 302, a physical address 304, a first thread T-bit ("T0") 306, and a second thread T-bit ("T1") 308; each entry in the data TLB unit 272 includes a virtual address 318, a physical address 320, a first thread T-bit ("T0") 322, and a second thread T-bit ("T1") 324; each entry in the instruction cache unit 234 includes a tag 310, data 312, a first thread T-bit ("T0") 314, and a second thread T-bit ("T1") 316; and each entry in the data cache unit 274 includes a tag 326, data 328, a first thread T-bit ("T0") 330, and a second thread T-bit ("T1") 332.

Figure 4A illustrates the state of the TLBs 236, 272 and the caches 234, 274 at some point in time, e.g., after executing the first five instructions in Table 1. If a security-critical event occurs prior to exiting the event-notify mode, the core 290 clears (or causes logic associated with the cache(s)/TLB(s) to clear) all of the set T-bits along with other operations (e.g., clearing the event-notify status flag), as was the case with the ENEND instruction described above with reference to Figure 3D. The state after clearing T-bits is illustrated in Figure 4B. In addition to clearing T-bits, the core 290 performs additional operations such as redirecting program flow for any threads that have a set T-bit for the cache/TLB entry associated with the security-critical event. Additional detail regarding these operations is found below with reference to Figure 11. In the example illustrated between Figures 4A and 4B, the evicted entry in the data cache unit 274 has a set T-bit for the first thread, so the core 290 would cause the program flow of the associated program executing within the first thread to redirect to the user-level event handler.

Exemplary Instructions

Figure 5 illustrates embodiments of an ENBEGIN instruction 510, an ENEND instruction 520, a MOVCIP instruction 530, a PRELOAD (e.g., PREFETCH) CACHE instruction 540, a PRELOAD (e.g., PREFETCH) TLB instruction 550, and an ENCALL instruction 560. The ENBEGIN instruction 510 includes an operation code (OPCODE) field 502 that includes a value that decode circuitry can use to identify the ENBEGIN instruction 510. An immediate, register, or memory location operand 512 includes or identifies the entry-point (i.e., a memory address) of the user-level event handler. In some embodiments, the ENBEGIN instruction 510 also includes an immediate, register, or memory location operand 514 that includes or identifies a value that enables or disables (e.g., masks) security-critical events based on the affected cache or TLB. For example, the operand 514 may be a 4-bit value where each bit indicates whether to monitor for security-critical events in the instruction TLB unit 236, the data TLB unit 272, the instruction cache unit 234, and the data cache unit 274, respectively. In some embodiments, one or both of the operands 512 or 514 may be omitted and their respective contents located in implicit register(s) associated with the ENBEGIN instruction 510. Upon executing the ENBEGIN instruction 510, execution circuitry places the core 290 in event-notify mode by setting the event-notify status flag and loads the location of the entry-point of the user-level event handler in the ERIP register.

The ENEND instruction 520 includes an opcode field 502 that includes a value that decode circuitry can use to identify the ENEND instruction 520.
Upon executing the ENEND instruction 520, execution circuitry removes the core 290 from event-notify mode by clearing the event-notify status flag and clearing any set T-bits in the cache(s)/TLB(s) or causing any set T-bits to be cleared.

The (optional) MOVCIP instruction 530 includes an opcode field 502 that includes a value that decode circuitry can use to identify the MOVCIP instruction 530. The MOVCIP instruction 530 allows a software program to store the value held in the CIP register. In some embodiments, the MOVCIP instruction 530 includes a register or memory location operand 532 that identifies the location where the value in the CIP register should be stored. Upon executing the MOVCIP instruction with operand 532, execution circuitry stores the value from the CIP register to the identified location. In some embodiments, the MOVCIP instruction 530 includes no operands, and upon executing the MOVCIP instruction 530, execution circuitry pushes the contents of the CIP register onto a stack for the software program. As described elsewhere herein, the user-level event handler can use the MOVCIP instruction 530 to store the location of the main flow of the software program so that it can be resumed after the core 290 redirects execution to the user-level event handler when a security-critical event occurs in event-notify mode. In certain embodiments, the MOVCIP instruction is not utilized when using an ENCALL instruction. In certain embodiments, a CIP register (e.g., separate from RIP and ERIP) is not utilized.

The PRELOAD CACHE instruction 540 (e.g., ENPF instruction) includes an opcode field 502 that includes a value that decode circuitry can use to identify the PRELOAD CACHE instruction 540. The PRELOAD CACHE instruction 540 provides a simple way to preload code into the instruction cache unit 234 or data into the data cache unit 274. A register or memory location operand 542 includes or identifies a memory location of a data structure containing the data to be "pinned" into the cache (e.g., the memory address contained within the cache line to be prefetched and tracked). The data structure may correspond to the format of entries in the data cache unit 274. In one embodiment, an immediate, register, or memory location operand 544 includes or identifies whether the data structure is loaded into the instruction cache unit 234 or into the data cache unit 274. For example, a "1" might indicate that the data structure is to be loaded into the instruction cache unit 234 and a "0" might indicate that the data structure is to be loaded into the data cache unit 274. An immediate, register, or memory location operand 546 includes or identifies permissions associated with the cached entries (e.g., a "1" indicates read-only while a "0" indicates read or write permission). In some embodiments, one or more of operands 542, 544, and 546 may be omitted and their respective contents located in implicit register(s) associated with the PRELOAD CACHE instruction 540. For example, RAX might store a value associated with the description of operand 542, RBX might store a value associated with the description of operand 544, and RCX might store a value associated with the description of operand 546.
Upon executing the PRELOAD CACHE instruction 540, execution circuitry loads the data in the data structure from memory into the identified cache with the identified permissions in certain embodiments.

The PRELOAD TRANSLATION LOOKASIDE BUFFER (TLB) instruction 550 includes an opcode field 502 that includes a value that decode circuitry can use to identify the PRELOAD TLB instruction 550. The PRELOAD TLB instruction 550 (e.g., ENPFPG instruction) provides a simple way to preload a translation (e.g., a virtual address to physical address mapping) into the TLB (e.g., data TLB unit 272 in Figure 2B). An immediate value, register, or memory location operand 552 includes or identifies a memory location of a data structure (e.g., memory page) that is to have its translation prefetched into the TLB (e.g., with a specified permission) and sets the tracking bit (e.g., T-bit) for that entry in the TLB. Permission associated with a TLB entry may be such that a first value (e.g., a "1") indicates read-only while a second, different value (e.g., a "0") indicates read or write permission. In some embodiments, operand 552 may be omitted and its respective contents located in implicit register(s) associated with the PRELOAD TLB instruction 550. For example, RAX might store a value associated with the description of operand 552. Upon executing the PRELOAD TLB instruction 550, execution circuitry loads the translation for the address into the designated TLB.

The ENCALL instruction 560 includes an operation code (OPCODE) field 502 that includes a value that decode circuitry can use to identify the ENCALL instruction 560. An immediate, register, or memory location operand 562 includes or identifies the entry-point (e.g., a memory address or pointer to the memory address) of a user-level event handler (e.g., with the entry-point set by software to point to the user-level event handler). In one embodiment, operand 562 identifies the relative offset of the user-level event handler. In some embodiments, the ENCALL instruction 560 also includes an immediate, register, or memory location operand 564 that includes or identifies a number of bits (or bytes) by which to change (e.g., decrement or increment depending on the stack) a stack pointer to a (e.g., call) stack. In some embodiments, one or both of the operands 562 or 564 may be omitted and their respective contents located in implicit register(s) associated with the ENCALL instruction 560. Upon executing the ENCALL instruction 560, execution circuitry (e.g., for a core 290 in event-notify mode) is to (e.g., after a swap of an instruction pointer that indicates where an event occurred from a current instruction pointer register into a user-level event handler pointer register (e.g., ERIP)) push the instruction pointer that indicates where the event occurred onto call stack storage, and change a current instruction pointer in the current instruction pointer register (e.g., RIP) to the instruction pointer to the user-level event handler.
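As a rough illustration of these ENCALL semantics, the following C sketch models the architectural effect after the RIP/ERIP swap: the IP where the event occurred is pushed onto a call stack and RIP is redirected to the handler entry point. The structure, field names, and addresses are hypothetical.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical model of ENCALL as described above: after the hardware
     * has swapped RIP and ERIP, ENCALL pushes the event IP (now in ERIP)
     * onto the call stack and redirects RIP to the user-level handler. */
    typedef struct {
        uintptr_t rip;            /* current instruction pointer */
        uintptr_t erip;           /* after the swap: IP where the event hit */
        uintptr_t stack[64];
        int       sp;             /* grows upward in this toy model */
    } cpu_state;

    static void encall(cpu_state *c, uintptr_t handler_ip) {
        c->stack[c->sp++] = c->erip;  /* push IP where the event occurred */
        c->rip = handler_ip;          /* execute the handler next */
    }

    int main(void) {
        cpu_state c = { .rip = 0x2000 /* trampoline */, .erip = 0x10f0 /* event IP */ };
        encall(&c, 0x3000 /* handler entry point */);
        printf("RIP=0x%lx, pushed event IP=0x%lx\n",
               (unsigned long)c.rip, (unsigned long)c.stack[0]);
        return 0;
    }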
In one embodiment, instead of using both a register (e.g., CIP) that stores a copy (e.g., but does not control execution) of the instruction pointer that indicates where the event occurred and a register (e.g., ERIP) that indicates a trampoline to the user-level event handler, a single register (e.g., ERIP) is used to store an IP to a trampoline routine (e.g., the routine including an ENCALL instruction) to the user-level event handler, and then (e.g., after a swap of RIP and ERIP) that single register (e.g., ERIP) is used to store a copy of the instruction pointer that indicates where the event occurred (e.g., and that copy is then pushed to a stack). In one embodiment, a register swap operation to swap contents of RIP and ERIP is part of an event detection routine (e.g., that also causes jump 1509 in Figure 15). Certain embodiments herein utilize an ENCALL instruction without also utilizing a MOVCIP instruction or a PRELOAD instruction (e.g., but can utilize the PRELOAD CACHE or PRELOAD TLB instructions discussed herein).

Figure 6 illustrates an embodiment of hardware to process the exemplary instructions illustrated in Figure 5. As illustrated, storage 601 stores instruction(s) 603 to be executed, including an ENBEGIN instruction, an ENEND instruction, a MOVCIP instruction, and a PRELOAD CACHE instruction. The instruction is received by decode circuitry 605. For example, the decode circuitry 605 receives this instruction from fetch logic/circuitry. Decode circuitry 605 may correspond to the decode unit 240 in Figure 2B, and the fetch logic/circuitry may correspond to the instruction fetch unit 238 in Figure 2B.

As illustrated in Figure 5, the instructions 603 include a field for an opcode and zero or more operands, depending on the instruction 603. The decode circuitry 605 decodes the instruction into one or more operations. In some embodiments, this decoding includes generating a plurality of micro-operations to be performed by execution circuitry (such as the execution engine unit 250 in Figure 2B). For example, the decode circuitry 605 may break a preload operation into multiple memory read operations based on a known size of the data structure being loaded into the instruction cache unit 234 or the data cache unit 274.

In some embodiments, register renaming, register allocation, and/or scheduling circuitry 607 provides functionality for one or more of: 1) renaming logical operand values to physical operand values (e.g., a register alias table in some embodiments), 2) allocating status bits and flags to the decoded instruction, and 3) scheduling the decoded instruction for execution on execution circuitry out of an instruction pool (e.g., using a reservation station in some embodiments). Registers (register file) and/or memory 608 store data as operands of the instruction to be operated on by execution circuitry, including the above-described RIP register 630 (e.g., current instruction pointer register), ERIP register 650, and the register including the event-notify status flag 640. Exemplary register types include packed data registers, general-purpose registers, and floating-point registers. In one embodiment, a save of processor extended states (e.g., as caused by decoding and executing of an XSAVE instruction) is to save the contents of the ERIP register 650 but not save the contents of the event-notify status flag 640 storage.
In one embodiment, setting an IP into RIP sets execution to begin next at that IP, whereas storing data into ERIP (or CIP) does not.

Execution circuitry 609 executes the decoded instruction. The execution of the decoded instruction causes the execution circuitry to perform operations based on the decoded instruction, as detailed below with reference to Figures 7-13.

Write back (retirement) circuitry 611 commits the result of the execution of the decoded instruction (if any). In some embodiments, retirement/write back circuitry architecturally commits the destination register into the registers or memory and retires the instruction.

Figure 7 illustrates an embodiment of a method performed by a processor to process an ENBEGIN instruction. For example, the stages of the pipeline 200 in Figure 2A, the core 290 of Figure 2B, or the hardware illustrated in Figure 6 perform this method.

At 701, an instruction is fetched. For example, an ENBEGIN instruction is fetched by the fetch stage 202 or the instruction fetch unit 238. As described above with reference to Figure 5, the ENBEGIN instruction includes a field for an opcode, the opcode to indicate that execution circuitry is to set a flag in a first register that indicates an event-notify mode and to store the event handler entry point operand in a second register. The ENBEGIN instruction may include an explicit operand or be associated with an implicit operand that identifies an event handler entry point. The ENBEGIN instruction may further include an explicit operand or be associated with an implicit operand that identifies which cache(s) or TLB(s) should trigger redirection to the event handler upon the occurrence of a security-critical event.

At 703, the fetched instruction is decoded. For example, the fetched ENBEGIN instruction is decoded by the decode stage 206, the decode unit 240, or the decode circuitry 605.

At 705, data values associated with the explicit or implicit operands of the decoded instruction are retrieved. For example, if the implicit or explicit operand(s) include a reference to a register or a memory location that contains the entry-point address of the user-level event handler, the entry-point address is retrieved.

At 707, the decoded instruction is executed by execution circuitry such as the execution stage 216, the execution engine unit 250 (e.g., an execution unit 262), or execution circuitry 609. For the ENBEGIN instruction, the execution will cause execution circuitry to set the event-notify status flag in a register to indicate that the core 290 is in event-notify mode and to store the address of the user-level event handler in the second register (e.g., the ERIP register, described above).

At 709, the result of the executed instruction is written. For example, the write back/memory write stage 218, memory access unit(s) 264, execution unit(s) 262, or write back circuitry 611 sets the event-notify status flag in storage (e.g., register) 640 and writes the address of the user-level event handler to the ERIP register 650.

Figure 8 illustrates an embodiment of a method performed by a processor to process an ENEND instruction. For example, the stages of the pipeline 200 in Figure 2A, the core 290 of Figure 2B, or the hardware illustrated in Figure 6 perform this method.

At 801, an instruction is fetched. For example, an ENEND instruction is fetched by the fetch stage 202 or the instruction fetch unit 238.
As described above with reference to Figure 5, the ENEND instruction includes a field for an opcode, the opcode to indicate that execution circuitry is to clear a flag in a first register that indicates an event-notify mode and to cause a tracking bit in at least one of a data TLB, a data cache, an instruction TLB, and an instruction cache to be cleared. For example, if operand 514 masked eviction events in the instruction TLB and instruction cache, the opcode may cause only T-bits in the data TLB and data cache to be cleared. In some embodiments, all of the T-bits in the TLBs and caches are cleared upon exiting the event-notify mode.

At 803, the fetched instruction is decoded. For example, the fetched ENEND instruction is decoded by the decode stage 206, the decode unit 240, or the decode circuitry 605.

At 805, the decoded instruction is executed by execution circuitry such as the execution stage 216, the execution engine unit 250 (e.g., an execution unit 262), or execution circuitry 609. For the ENEND instruction, the execution will cause execution circuitry to clear a flag in the first register that indicates the event-notify mode (e.g., the event-notify status flag). The execution will further cause the T-bit in at least one of the data TLB, the data cache, the instruction TLB, and the instruction cache to be cleared. For example, the execution will cause the execution circuitry to reset or overwrite T-bits in the cache(s) or TLB(s), or cause logic associated with the cache(s) or TLB(s) to reset or overwrite the T-bits.

At 807, the result of the executed instruction is written. For example, the write back/memory write stage 218, memory access unit(s) 264, execution unit(s) 262, or write back circuitry 611 clears the event-notify status flag in storage (e.g., register) 640.

Figure 9 illustrates an embodiment of a method performed by a processor to process a MOVCIP instruction. For example, the stages of the pipeline 200 in Figure 2A, the core 290 of Figure 2B, or the hardware illustrated in Figure 6 perform this method.

At 901, an instruction is fetched. For example, a MOVCIP instruction is fetched by the fetch stage 202 or the instruction fetch unit 238. As described above with reference to Figure 5, the MOVCIP instruction includes a field for an opcode, the opcode to indicate that execution circuitry is to store a value stored in a first instruction pointer register (e.g., a CIP register). In some embodiments, the MOVCIP instruction further includes a register or memory location operand that identifies the location where the value in the first instruction pointer register should be stored. In other embodiments where the MOVCIP instruction does not include an operand, the instruction indicates that execution circuitry is to store the value in the first instruction pointer register onto a stack in a memory.

At 903, the fetched instruction is decoded. For example, the fetched MOVCIP instruction is decoded by the decode stage 206, the decode unit 240, or the decode circuitry 605.

At 905, data values associated with the decoded instruction are retrieved. In particular, the value of the first instruction pointer register (e.g., from a CIP register) is retrieved.

At 907, the decoded instruction is executed by execution circuitry such as the execution stage 216, the execution engine unit 250 (e.g., an execution unit 262), or execution circuitry 609.
For the MOVCIP instruction, the execution will cause execution circuitry to store the value stored in the first instruction pointer register (e.g., the CIP register, as retrieved at 905) in the location identified by the operand (if the instruction includes an operand that identifies the location to store the pointer) or onto the stack in the memory (if the instruction does not include an operand identifying the location to store the pointer).

At 909, the result of the executed instruction is written. For example, the write back/memory write stage 218, memory access unit(s) 264, execution unit(s) 262, or write back circuitry 611 writes the value from the first instruction pointer register to the stack in memory or to the location specified by the operand (if present). For example, if the execution circuitry buffered the write at 907, the buffered operation is performed at 909.

Figure 10 illustrates an embodiment of a method performed by a processor to process a PRELOAD (e.g., PREFETCH or ENPF) instruction. For example, the stages of the pipeline 200 in Figure 2A, the core 290 of Figure 2B, or the hardware illustrated in Figure 6 perform this method.

At 1001, an instruction is fetched. For example, a PRELOAD CACHE instruction is fetched by the fetch stage 202 or the instruction fetch unit 238. As described above with reference to Figure 5, the PRELOAD CACHE instruction includes a field for an opcode, the opcode to indicate that execution circuitry is to load a cache identified with a cache selector value with data at a location in a memory. The PRELOAD CACHE instruction may include an explicit operand or be associated with an implicit operand that identifies the location of the data in the memory as described herein. The PRELOAD CACHE instruction may further include an explicit operand or be associated with an implicit operand that includes or identifies the cache selector value (e.g., whether the destination for the data in memory is an instruction cache or a data cache) as described herein. The PRELOAD CACHE instruction may further include an explicit operand or be associated with an implicit operand that includes or identifies the read/write permissions associated with the loaded cache entries as described herein.

At 1003, the fetched instruction is decoded. For example, the fetched PRELOAD CACHE instruction is decoded by the decode stage 206, the decode unit 240, or the decode circuitry 605.

At 1005, data values associated with the decoded instruction are retrieved. In particular, the data in the memory that is to be loaded into the identified cache (i.e., instruction or data) is retrieved. Further, if any of the operands are implicit operands, the data stored in the location of each implicit operand is retrieved (e.g., the read/write permission value; the value indicating whether the data from memory is loaded into the data or the instruction cache).

At 1007, the decoded instruction is executed by execution circuitry such as the execution stage 216, the execution engine unit 250 (e.g., an execution unit 262), or execution circuitry 609. For the PRELOAD CACHE instruction, the execution will cause execution circuitry to cause the retrieved data to load into the cache identified with the cache selector value.
For example, the execution circuitry loads the data in the data cache or the instruction cache, as identified by the cache selector value, and subject to the read/write permissions, if specified.

At 1009, the result of the executed instruction is written. For example, the write back/memory write stage 218, memory access unit(s) 264, execution unit(s) 262, or write back circuitry 611 writes the data from the memory into the identified cache.

Figure 11 illustrates an embodiment of a method performed by a processor in response to an event that occurs while in the event-notify mode. For example, the stages of the pipeline 200 in Figure 2A, the core 290 of Figure 2B, the hardware illustrated in Figure 6, and/or any microcode associated therewith (collectively, firmware/hardware) perform this method.

The method begins at 1101 upon the occurrence of a cache or TLB eviction, interrupt, or other security-critical event. For example, the method begins when the firmware/hardware is in the event-notify mode and identifies an eviction of a cache or TLB entry that has a set T-bit. Further, if the event-notify mode was enabled only for certain cache(s) or TLB(s), e.g., via operand 514, the method begins if all of the requisite conditions are satisfied (e.g., the core 290 is in event-notify mode, an eviction occurs of an entry with a set T-bit, and the eviction is in a cache or TLB that is not masked).

At 1103, the firmware/hardware clears the event-notify status flag in the register to take the processor out of the event-notify mode. At 1105, the firmware/hardware clears the set T-bits in the cache(s) and/or TLB(s). At 1107, the firmware/hardware stores the instruction pointer of the software program flow in an instruction pointer register (e.g., swaps RIP and ERIP). For example, if the instruction pointer (e.g., in RIP) of the main software program flow was at 0x10F0 when the security-critical event occurred, the firmware/hardware writes the value 0x10F0 to the ERIP register and writes the IP for the trampoline routine (e.g., the routine including an ENCALL instruction) into RIP (e.g., by swapping the contents of RIP and ERIP). At 1109, once the instruction pointer of the software program flow is stored, the firmware/hardware loads the instruction pointer register (e.g., the program counter, RIP) with the entry-point of the trampoline (e.g., the value initially stored in the ERIP register) to the user-level event handler for the software program, which will cause the software program to redirect its program flow to the event handler. In one embodiment, the trampoline includes an ENCALL instruction that, when decoded and executed, causes a jump of the execution to the user-level event handler (e.g., after pushing the security-critical event IP from ERIP onto a stack). At 1111, the firmware/hardware (and any supporting software) handle the interrupt or other exception that was the trigger of the security-critical event (e.g., for external interrupts or exceptions) to defeat a possible side-channel attack. Once the firmware/hardware returns to the software program, the firmware/hardware begins execution of the software program at the instruction that caused the user-level event handler to be invoked, as indicated at 1113.
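For illustration only, the response sequence of 1103-1109 can be condensed into a small C model: clear the status flag, clear the T-bits, and exchange RIP with ERIP so that execution resumes at the trampoline while the interrupted IP is preserved. All names and addresses below are hypothetical.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical model of the firmware/hardware response to a tracked
     * eviction in event-notify mode (steps 1103-1109 above). */
    typedef struct {
        bool      event_notify;
        uintptr_t rip;    /* current instruction pointer */
        uintptr_t erip;   /* trampoline entry point while in EN mode */
    } thread_regs;

    static void clear_tbits(void) { /* would walk caches/TLBs here */ }

    static void on_security_critical_event(thread_regs *t) {
        t->event_notify = false;          /* 1103: exit event-notify mode */
        clear_tbits();                    /* 1105: clear set T-bits */
        uintptr_t tmp = t->rip;           /* 1107-1109: swap RIP <-> ERIP */
        t->rip  = t->erip;                /* redirect to trampoline/handler */
        t->erip = tmp;                    /* remember the interrupted IP */
    }

    int main(void) {
        thread_regs t = { .event_notify = true, .rip = 0x10f0, .erip = 0x2000 };
        on_security_critical_event(&t);
        printf("resume at 0x%lx; event IP saved: 0x%lx\n",
               (unsigned long)t.rip, (unsigned long)t.erip);
        return 0;
    }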
By storing the instruction pointer of the software program when the security-critical event occurs in the ERIP register and loading the current instruction pointer with the trampoline IP (e.g., with the trampoline routine including an ENCALL instruction), when the software program continues execution, the firmware/hardware redirects the execution back to the instruction that caused the user-level event handler to be invoked.

Figure 12 illustrates an embodiment of a method performed by a processor to process a PRELOAD TLB instruction. At 1201, an instruction is fetched. For example, a PRELOAD TLB instruction is fetched by the fetch stage 202 or the instruction fetch unit 238. As described above with reference to Figure 5, the PRELOAD TLB instruction includes a field for an opcode, the opcode to indicate that execution circuitry is to access a page at a specified address and prefetch its translation into a translation lookaside buffer (TLB) (e.g., with a specified permission). The PRELOAD TLB instruction may include an explicit operand or be associated with an implicit operand that identifies the location of the specified address in the memory as described herein. The PRELOAD TLB instruction may further include an explicit operand or be associated with an implicit operand that includes or identifies the read/write permissions associated with the loaded TLB entry as described herein.

At 1203, the fetched instruction is decoded. For example, the fetched PRELOAD TLB instruction is decoded by the decode stage 206, the decode unit 240, or the decode circuitry 605.

At 1205, data values associated with the decoded instruction are retrieved. In particular, the memory page for the specified address is retrieved. Further, if any of the operands are implicit operands, the data stored in the location of each implicit operand is retrieved (e.g., the read/write permission value).

At 1207, the decoded instruction is executed by execution circuitry such as the execution stage 216, the execution engine unit 250 (e.g., an execution unit 262), or execution circuitry 609. For the PRELOAD TLB instruction, the execution will cause execution circuitry to cause an access of the page at the specified address, prefetch its translation as an entry in the translation lookaside buffer (TLB) with the specified permission, and set a tracking bit for the entry in the TLB. For example, the execution circuitry loads the translation for the specified (virtual) address into an entry in a TLB and sets the tracking bit (T-bit) according to the disclosure herein.

At 1209, the result of the executed instruction is written. For example, the write back/memory write stage 218, memory access unit(s) 264, execution unit(s) 262, or write back circuitry 611 writes the data into the TLB.

In certain embodiments, the A ("accessed") bit is set in the page-table entry for each page that is prefetched by a PRELOAD TLB instruction (e.g., ENPFPG instruction). In certain embodiments, if the page is writable, its D ("dirty") bit is also set, e.g., to mitigate controlled-channel attacks against secure enclaves.
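As a rough sketch of the PRELOAD TLB semantics just described, the following C model installs a translation with a permission, marks it tracked, and sets the accessed and (for writable pages) dirty bits. The table layout, index function, and page numbers are hypothetical.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define TLB_ENTRIES 4

    /* Hypothetical model of PRELOAD TLB (ENPFPG): install a virtual-to-
     * physical translation, set its tracking bit, and set the A/D bits. */
    typedef struct {
        bool     valid, tbit, read_only;
        bool     accessed, dirty;        /* stand-ins for the PTE A/D bits */
        uint64_t vpage, ppage;
    } tlb_entry;

    static tlb_entry tlb[TLB_ENTRIES];

    static void preload_tlb(uint64_t vpage, uint64_t ppage, bool read_only) {
        tlb_entry *e = &tlb[vpage % TLB_ENTRIES];
        e->valid = true;
        e->vpage = vpage;
        e->ppage = ppage;
        e->read_only = read_only;
        e->tbit = true;              /* entry is tracked in event-notify mode */
        e->accessed = true;          /* A bit set for every prefetched page */
        e->dirty = !read_only;       /* D bit set only if the page is writable */
    }

    int main(void) {
        preload_tlb(0x4, 0x9, /*read_only=*/false);   /* page 0x4000 -> 0x9000 */
        tlb_entry *e = &tlb[0x4 % TLB_ENTRIES];
        printf("tracked=%d accessed=%d dirty=%d\n", e->tbit, e->accessed, e->dirty);
        return 0;
    }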
Figure 13 illustrates an embodiment of a method performed by a processor to process an ENCALL instruction. At 1301, an instruction is fetched. For example, an ENCALL instruction is fetched by the fetch stage 202 or the instruction fetch unit 238. As described above with reference to Figure 5, the ENCALL instruction includes a field for an opcode, the opcode to indicate that execution circuitry is to push an instruction pointer that indicates where an event (e.g., exception) occurred onto call stack storage, and change a current instruction pointer in a current instruction pointer register to an instruction pointer to a user-level event handler. The ENCALL instruction may include an explicit operand or be associated with an implicit operand that identifies the entry-point (e.g., a memory address or pointer to the memory address) of a user-level event handler. The ENCALL instruction may include an explicit operand or be associated with an implicit operand that identifies a number of bits (or bytes) by which to change (e.g., decrement or increment depending on the stack) a stack pointer to a (e.g., call) stack.

At 1303, the fetched instruction is decoded. For example, the fetched ENCALL instruction is decoded by the decode stage 206, the decode unit 240, or the decode circuitry 605.

At 1305, data values associated with the decoded instruction are retrieved. In particular, the contents of the ERIP register are retrieved in one embodiment. Further, if any of the operands are implicit operands, the data stored in the location of each implicit operand is retrieved (e.g., the number of bits by which to change the stack pointer).

At 1307, the decoded instruction is executed by execution circuitry such as the execution stage 216, the execution engine unit 250 (e.g., an execution unit 262), or execution circuitry 609. For one embodiment of the ENCALL instruction, the execution causes execution circuitry to cause a push of the instruction pointer that indicates where the event occurred onto the call stack storage, and a change of the current instruction pointer in the current instruction pointer register to the instruction pointer to the user-level event handler.

At 1309, the result of the executed instruction is written. For example, the write back/memory write stage 218, memory access unit(s) 264, execution unit(s) 262, or write back circuitry 611 writes the data to the stack and changes the current IP (RIP) register to point to the user-level event handler (e.g., so that it is executed next).

Figure 14 illustrates a computer system 1400 including a branch predictor 1420 and a branch address calculator 1442 (BAC) in a pipelined processor core 1409(1)-1409(N) according to embodiments of the disclosure. Referring to Figure 14, a pipelined processor core (e.g., 1409(1)) includes an instruction pointer generation (IP Gen) stage 1411, a fetch stage 1430, a decode stage 1440, and an execution stage 1450. In one embodiment, each core of a processor is an instance of processor core 1409(1)-1409(N), where N is any positive integer. In certain embodiments, each processor core 1409(1)-1409(N) instance supports multithreading (e.g., executing two or more parallel sets of operations or threads on a first and second logical core), and may do so in a variety of ways including time sliced multithreading, simultaneous multithreading (e.g., where a single physical core provides a logical core for each of the threads that physical core is simultaneously multithreading), or a combination thereof (e.g., time sliced fetching and decoding and simultaneous multithreading thereafter). In the depicted embodiment, each single processor core 1409(1) to 1409(N) includes an instance of branch predictor 1420.
Branch predictor 1420 may include a branch target buffer (BTB) 1424. In certain embodiments, branch target buffer 1424 stores (e.g., in a branch predictor array) the predicted target instruction corresponding to each of a plurality of branch instructions (e.g., branch instructions of a section of code that has been executed multiple times). In the depicted embodiment, a branch address calculator (BAC) 1442 is included which accesses (e.g., includes) a call stack 1444, e.g., a return stack buffer (RSB) embodiment of a call stack. In certain embodiments, return stack buffer 1444 is to store (e.g., in a last-in, first-out (LIFO) stack data structure) the return addresses, e.g., of any CALL instructions (e.g., that push their return address on the stack).

In certain embodiments, a branch address calculator (BAC) 1442 is to calculate addresses for certain types of branch instructions and/or to verify branch predictions made by a branch predictor (e.g., BTB). In certain embodiments, the branch address calculator performs branch target and/or next sequential linear address computations. In certain embodiments, the branch address calculator performs static predictions on branches based on the address calculations.

In certain embodiments, the branch address calculator 1442 contains a return stack buffer 1444 to keep track of the return addresses of the CALL instructions. In one embodiment, the branch address calculator attempts to correct any improper prediction made by the branch predictor 1420 to reduce branch misprediction penalties. As one example, the branch address calculator verifies branch prediction for those branches whose target can be determined solely from the branch instruction and instruction pointer.

In certain embodiments, the branch address calculator 1442 maintains the return stack buffer 1444 utilized as a branch prediction mechanism for determining the target address of return instructions, e.g., where the return stack buffer operates by monitoring all "call subroutine" and "return from subroutine" branch instructions. In one embodiment, when the branch address calculator detects a "call subroutine" branch instruction, the branch address calculator pushes the address of the next instruction onto the return stack buffer, e.g., with a top of stack pointer marking the top of the return stack buffer. By pushing the address immediately following each "call subroutine" instruction onto the return stack buffer, the return stack buffer contains a stack of return addresses in this embodiment. When the branch address calculator later detects a "return from subroutine" branch instruction, the branch address calculator pops the top return address off of the return stack buffer, e.g., to verify the return address predicted by the branch predictor 1420. In one embodiment, for an indirect branch type, the branch address calculator is to (e.g., always) predict taken for a conditional branch, for example, and if the branch predictor does not predict taken for the indirect branch, the branch address calculator overrides the branch predictor's missed prediction or improper prediction.

Turning to the specific circuitry in Figure 14, certain features are provided to validate branch predictions made by the branch predictor 1420.
Each branch predictor 1420 entry (e.g., in BTB 1424) may further include a valid field and a bundle address (BA) field which are used to increase the accuracy and validate branch predictions performed by the branch predictor 1420, as is discussed in more detail below. In one embodiment, the valid field and the BA field each consist of a one-bit field. In other embodiments, however, the size of the valid and BA fields may vary. In one embodiment, a fetched instruction is sent (e.g., by BAC 1442 from line 1437) to the decoder 1446 to be decoded, and the decoded instruction is sent to the execution unit 1454 to be executed.

Depicted computer system 1400 includes a network device 1401, input/output (I/O) circuit 1403 (e.g., keyboard), display 1405, and a system bus (e.g., interconnect) 1407.

In one embodiment, the branch instructions stored in the branch predictor 1420 are pre-selected by a compiler as branch instructions that will be taken. In certain embodiments, the compiler code 1404, as shown stored in the memory 1402 of Figure 14 , includes a sequence of code that, when executed, translates source code of a program written in a high-level language into executable machine code. In one embodiment, the compiler code 1404 further includes additional branch predictor code 1406 that predicts a target instruction for branch instructions (for example, branch instructions that are likely to be taken (e.g., pre-selected branch instructions)). The branch predictor 1420 (e.g., BTB 1424 thereof) is thereafter updated with the target instruction for a branch instruction. As discussed below, depicted core (e.g., branch predictor 1420 thereof) includes access to one or more registers (e.g., registers in any figure herein). In certain embodiments, a core includes one or more of general purpose register(s) 1408, flag storage register(s) 1412, user-level event handler trampoline pointer (ERIP) register 1414, or current instruction pointer (RIP) register 1416. In one embodiment, each logical core has its own flag storage register(s) 1412, user-level event handler pointer (ERIP) register 1414, current instruction pointer (RIP) register 1416, or any combination thereof.

In certain embodiments, each entry for the branch predictor 1420 (e.g., in BTB 1424 thereof) includes a tag field and a target field. In one embodiment, the tag field of each entry in the BTB stores at least a portion of an instruction pointer (e.g., memory address) identifying a branch instruction. In one embodiment, the tag field of each entry in the BTB stores an instruction pointer (e.g., memory address) identifying a branch instruction in code. In one embodiment, the target field stores at least a portion of the instruction pointer for the target of the branch instruction identified in the tag field of the same entry. Moreover, in other embodiments, the entries for the branch predictor 1420 (e.g., in BTB 1424 thereof) include one or more other fields. In certain embodiments, an entry does not include a separate field to assist in the prediction of whether the branch instruction is taken, e.g., if a branch instruction is present (e.g., in the BTB), it is considered to be taken.

As shown in Figure 14 , the IP Gen mux 1413 of IP generation stage 1411 receives an instruction pointer from line 1414A. The instruction pointer provided via line 1415A is generated by the incrementer circuit 1415, which receives a copy of the most recent instruction pointer (e.g., from RIP register 1416) from the path 1413A.
The incrementer circuit 1415 may increment the present instruction pointer by a predetermined amount, to obtain the next sequential instruction from a program sequence presently being executed by the core.

In one embodiment, upon receipt of the IP from IP Gen mux 1413, the branch predictor 1420 compares a portion of the IP with the tag field of each entry in the branch predictor 1420 (e.g., BTB 1424). If no match is found between the IP and the tag fields of the branch predictor 1420, the IP Gen mux will proceed to select the next sequential IP as the next instruction to be fetched in this embodiment. Conversely, if a match is detected, the branch predictor 1420 reads the valid field of the branch predictor entry which matches with the IP. If the valid field is not set (e.g., has a logical value of 0) the branch predictor 1420 considers the respective entry to be "invalid" and will disregard the match between the IP and the tag of the respective entry in this embodiment, e.g., and the branch target of the respective entry will not be forwarded to the IP Gen Mux. On the other hand, if the valid field of the matching entry is set (e.g., has a logical value of 1), the branch predictor 1420 proceeds to perform a logical comparison between a predetermined portion of the instruction pointer (IP) and the branch address (BA) field of the matching branch predictor entry in this embodiment. If an "allowable condition" is present, the branch target of the matching entry will be forwarded to the IP Gen mux, and otherwise, the branch predictor 1420 disregards the match between the IP and the tag of the branch predictor entry.

More specifically, in one embodiment, the BA field indicates where the respective branch instruction is stored within a line of cache memory 1432. In certain embodiments, a processor is able to initiate the execution of multiple instructions per clock cycle, wherein the instructions are not interdependent and do not use the same execution resources.

For example, each line of the instruction cache 1432 shown in Figure 14 includes multiple instructions (e.g., six instructions). Moreover, in response to a fetch operation by the fetch unit 1434, the instruction cache 1432 responds (e.g., in the case of a "hit") by providing a full line of cache to the fetch unit 1434 in this embodiment. The instructions within a line of cache may be grouped as separate "bundles." For example, as shown in Figure 14 , the first three instructions in a cache line 1433 may be addressed as bundle 0, and the second three instructions may be addressed as bundle 1. Each of the instructions within a bundle is independent of the others (e.g., can be simultaneously issued for execution). The BA field provided in the branch predictor 1420 entries is used to identify the bundle address of the branch instruction which corresponds to the respective entry in certain embodiments. For example, in one embodiment, the BA identifies whether the branch instruction is stored in the first or second bundle of a particular cache line.

In one embodiment, the branch predictor 1420 performs a logical comparison between the BA field of a matching entry and a predetermined portion of the IP to determine if an "allowable condition" is present. For example, in one embodiment, the fifth bit position of the IP (e.g., IP[4]) is compared with the BA field of a matching (e.g., BTB) entry. In one embodiment, an allowable condition is present when IP[4] is not greater than the BA.
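The tag match, valid-field check, and bundle-address comparison just described can be condensed into a brief sketch. This is an illustrative software model only, not the specific circuitry of Figure 14; the full-IP tag, the 16-byte bundle granularity implied by IP[4], and the entry layout are assumptions.

#include <stdint.h>
#include <stdio.h>

/* Illustrative BTB entry: tag (here a full IP for simplicity), predicted
 * target, one-bit valid field, and one-bit bundle address (BA) field. */
typedef struct {
    uint64_t tag;
    uint64_t target;
    unsigned valid : 1;
    unsigned ba    : 1;
} btb_entry_t;

/* Returns 1 and sets *target when the entry is valid, the tag matches,
 * and the "allowable condition" holds: the IP's fifth bit position
 * (IP[4]) is not greater than the BA field. */
static int btb_predict(const btb_entry_t *e, uint64_t ip, uint64_t *target) {
    unsigned ip_bundle = (unsigned)((ip >> 4) & 1);  /* IP[4] */
    if (!e->valid || e->tag != ip)
        return 0;                 /* invalid entry or no tag match */
    if (ip_bundle > e->ba)
        return 0;                 /* branch already passed: disregard match */
    *target = e->target;
    return 1;                     /* forward target to the IP Gen mux */
}

int main(void) {
    btb_entry_t e = { .tag = 0x401000, .target = 0x402000,
                      .valid = 1, .ba = 1 };
    uint64_t t;
    if (btb_predict(&e, 0x401000, &t))
        printf("forward target 0x%llx to IP Gen mux\n",
               (unsigned long long)t);
    return 0;
}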
Such an allowable condition helps prevent the apparently unnecessary prediction of a branch instruction that may not be executed. That is, when less than all of the IP is considered when doing a comparison against the tags of the branch predictor 1420, it is possible to have a match with a tag that is not a true match. Nevertheless, a match between the IP and a tag of the branch predictor indicates that a particular line of cache, which includes a branch instruction corresponding to the respective branch predictor entry, may be about to be executed. Specifically, if the bundle address of the IP is not greater than the BA field of the matching branch predictor entry, then the branch instruction in the respective cache line is soon to be executed. Hence, a performance benefit can be achieved by proceeding to fetch the target of the branch instruction in certain embodiments.

As discussed above, if an "allowable condition" is present, the branch target of the matching entry will be forwarded to the IP Gen mux in this example. Otherwise, the branch predictor will disregard the match between the IP and the tag. In one embodiment, the branch target forwarded from the branch predictor is initially sent to a Branch Prediction (BP) resteer mux 1428, before it is sent to the IP Gen mux. The BP resteer mux 1428, as shown in Figure 14 , may also receive instruction pointers from other branch prediction devices. In one embodiment, the input lines received by the BP resteer mux will be prioritized to determine which input line will be allowed to pass through the BP resteer mux onto the IP Gen mux.

In addition to forwarding a branch target to the BP resteer mux, upon detecting a match between the IP and a tag of the branch predictor, the BA of the matching branch predictor entry is forwarded to the Branch Address Calculator (BAC) 1442. The BAC 1442 is shown in Figure 14 to be located in the decode stage 1440, but may be located in other stage(s). The BAC may also receive a cache line from the fetch unit 1434 via line 1437.

The IP selected by the IP Gen mux is also forwarded to the fetch unit 1434, via data line 1435 in this example. Once the IP is received by the fetch unit 1434, the cache line corresponding to the IP is fetched from the instruction cache 1432. The cache line received from the instruction cache is forwarded to the BAC, via data line 1437.

Upon receipt of the BA in this example, the BAC will read the BA to determine where the pre-selected branch instruction (e.g., identified in the matching branch predictor entry) is located in the next cache line to be received by the BAC (e.g., the first or second bundle of the cache line). In one embodiment, it is predetermined where the branch instruction is located within a bundle of a cache line (e.g., in a bundle of three instructions, the branch instruction will be stored as the second instruction).

In alternative embodiments, the BA includes additional bits to more specifically identify the address of the branch instruction within a cache line. Therefore, the branch instruction would not be limited to a specific instruction position within a bundle.

After the BAC determines the address of the pre-selected branch instruction within the cache line, and has received the respective cache line from the fetch unit 1434, the BAC will decode the respective instruction to verify the IP truly corresponds to a branch instruction.
If the instruction addressed by BA in the received cache line is a branch instruction, no correction for the branch prediction is necessary. Conversely, if the respective instruction in the cache line is not a branch instruction (i.e., the IP does not correspond to a branch instruction), the BAC will send a message to the branch predictor to invalidate the respective branch predictor entry, to prevent similar mispredictions on the same branch predictor entry. Thereafter, the invalidated branch predictor entry will be overwritten by a new branch predictor entry.

In addition, in one embodiment, the BAC will increment the IP by a predetermined amount and forward the incremented IP to the BP resteer mux 1428, via data line 1445, e.g., the data line 1445 coming from the BAC will take priority over the data line from the branch predictor. As a result, the incremented IP will be forwarded to the IP Gen mux and passed to the fetch unit in order to correct the branch misprediction by fetching the instructions that sequentially follow the IP.

Updating the Branch Predictor Entries

In one embodiment, the branch predictor is updated by the BAC and the Branch Resolution Unit (BRU) 1456. For example, when the compiler translates a "high-level" branch instruction into a machine level instruction for execution, the compiler will provide a "predict instruction" to be executed prior to the respective branch instruction. The predict instruction can be used to update the branch predictor.

In one embodiment, the predict instruction includes two immediate operands. The first immediate operand is an offset of the respective branch instruction's memory address. The second immediate operand is an offset of the branch instruction's target address. Alternatively, the predict instruction may identify a branch register (BR) 1458 (or a general purpose register (GPR) 1408) storing the address of the branch instruction and/or the branch target.

The predict instruction may also include an "important hint" (ih) field, which when set by the branch predictor of the compiler, indicates the respective branch instruction is likely to be taken. The branch predictor of the compiler may statically set the ih field of a predict instruction based on the operation (op) code of the respective branch instruction (e.g., unconditional branch, return branch, conditional branch, etc.). Alternatively, the branch predictor may generate a profile for the respective branch instruction, and set the ih field of the predict instruction according to the history of the respective branch instruction.

As a result, in one embodiment, when the BAC receives a predict instruction which has an ih field that is set, the BAC will forward, via data path 1452, at least part of the branch instruction's memory address and the target of the branch instruction to the branch predictor, as shown in Figure 14 . Upon receipt of the data, the branch predictor will proceed to update an entry of the branch predictor, with the data received from the BAC in this example.

In addition, the branch predictor entries can also be updated by the Branch Resolution Unit (BRU) 1456, shown in Figure 14 . More specifically, certain branch instructions are referred to as indirect branching instructions, wherein the branch target is stored in a branch register(s) 1458.
In one embodiment, the branch registers are provided in the BRU 1456 as shown in Figure 14 .

Registers in computer system 1400 (e.g., internal registers 1410) may include one or more of flag storage register(s) 1412, user-level event handler pointer (ERIP) register 1414, or current instruction pointer (RIP) register 1416, e.g., in addition to other control registers. In one embodiment, each logical core has its own respective flag storage register(s) 1412, user-level event handler pointer (ERIP) register 1414, current instruction pointer (RIP) register 1416, or any combination thereof. In one embodiment, a plurality of logical cores share a single register, e.g., share one or more general purpose (e.g., data) registers 1408.

In certain embodiments, special instructions, prior to the indirect branch instructions, are used to store the branch targets in the branch registers. That is, when the compiler is translating a higher level indirect branch instruction into a machine level instruction, the compiler generates a set branch register (set_BR) instruction that is to be executed prior to the actual indirect branch instruction. When executed, the set_BR instruction will write the target address of an indirect branch instruction into a branch register.

For example, the set_BR instruction may transfer the value of the branch target from a register (e.g., GPR) 1408 to a branch register 1458. Alternatively, the branch target may be included in the set_BR instruction as an offset, which could be added to the memory address of the set_BR instruction to obtain the address of the respective branch target. The address of the branch target could then be written into the BR to be used by the indirect branch instruction which follows.

In one embodiment, the set_BR instruction further identifies the address of the respective indirect branch instruction. For example, the address may be included as an offset which, once again, can be added to the memory address of the respective set_BR instruction to obtain the address of the indirect branch instruction. In one embodiment, the set_BR instruction includes the "important hint" (ih) field, as described above.

In one embodiment, when the BRU receives a set_BR instruction, the BRU sends to the branch predictor, via data path 1455, at least part of the respective branch instruction's memory address and at least part of the branch instruction's target. In one embodiment, the BRU also sends the ih field of the set_BR instruction. If the ih field is set, the branch predictor will proceed to update an entry of the branch predictor with the data received from the BRU in this example. Otherwise, the branch predictor will disregard the data received from the BRU. Alternatively, the BRU may read the ih field of the set_BR instruction to determine whether to transmit the data to the branch predictor.

In addition to running user applications and an operating system, a processor (e.g., core) may run a virtual machine monitor (VMM) which in turn manages multiple virtual machines (VMs) running on the processor.

Example utilizing an ENCALL instruction:

A side channel may generally refer to an unintentional transfer of information through a hardware or software mechanism not specifically designed to transfer information. For example, logical processor (P0) (e.g., on a core) may evict a cache slot that was occupied by memory being used by another logical processor P1 (e.g., on the same core as P0).
In certain embodiments, P1's next access to the same memory will then be measurably slower, a hint that P0 may have accessed memory corresponding to the evicted cache slot.

In certain embodiments, a side-channel attack occurs when a malicious OS, hypervisor, or user-space software application is able to capture secret information from a victim application through a side channel. The ability of a software application to defend itself against side-channel attacks may be constrained by its limited view of architectural and microarchitectural platform state. In the prior example, the victim application cannot feasibly determine whether or not a cache line is present at a particular cache level before it attempts to access data on that line.

A malicious OS/hypervisor may abuse its privileged responsibilities to mount a controlled-channel attack against a secure enclave. For example, the host can selectively evict a subset of the enclave's pages from memory. When the enclave attempts to access any page in this set, the hardware will signal a page fault exception to the host with the address of the page that faulted, and the enclave will exit. Thus, the host is able to learn which page the enclave attempted to access in this example. By repeatedly forcing page faults in this manner, the host can construct a complete trace of enclave execution at page granularity. In certain embodiments, controlled-channel attacks against secure enclaves include (i) using programmable interrupts (via an advanced programmable interrupt controller (APIC)) to single-step enclave execution, (ii) observing any update to the access/dirty (A/D) bits in the enclave's page tables, then pausing the enclave and resetting the bits, or (iii) launching a cache side-channel attack against the enclave's page tables while the enclave is running to recover the trace of any page walk(s), then pausing the enclave to flush its TLB entries, and thus forcing additional page walks when the enclave resumes.

The embodiments herein describe Instruction Set Architecture (ISA) extensions that allow application software to subscribe to notifications for architectural events (e.g., interrupts/exceptions) and microarchitectural events (e.g., cache eviction). Software can use these notifications to deploy countermeasures against side-channel attacks.

Certain embodiments herein do not require a complete redesign of the cache architecture, which would be expensive in terms of design and validation efforts, and may have negative performance and power impact for non-security-critical applications. Certain embodiments herein can mitigate all of the attacks, e.g., including attacks other than those that exploit secret-dependent control flow instead of secret-dependent data flow, e.g., without degrading performance significantly. Certain embodiments herein mitigate side-channel attacks against secure enclaves (or any shielded execution), e.g., without requiring system software to be in the trusted computing base (e.g., which is incompatible with certain secure enclave threat models).
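The "measurably slower" access underlying these attacks can be observed directly with a short timing sketch. The following C example is a generic illustration of the side-channel principle only, not code from the disclosure; the RDTSC-based timing and flush-based eviction (GCC/Clang intrinsics on x86) are assumptions, and absolute cycle counts vary by microarchitecture.

#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>   /* __rdtscp, _mm_clflush, _mm_lfence (x86, GCC/Clang) */

/* Time a single load of *p in time-stamp-counter cycles. */
static uint64_t time_load(volatile uint8_t *p) {
    unsigned aux;
    uint64_t t0 = __rdtscp(&aux);
    (void)*p;
    uint64_t t1 = __rdtscp(&aux);
    return t1 - t0;
}

int main(void) {
    static uint8_t buf[64];
    volatile uint8_t *line = buf;

    (void)*line;                 /* warm the line: subsequent access is cached */
    uint64_t hot = time_load(line);

    _mm_clflush(buf);            /* evict the line, as an attacker might */
    _mm_lfence();
    uint64_t cold = time_load(line);

    /* The post-eviction access is measurably slower: the side-channel hint. */
    printf("cached: %llu cycles, evicted: %llu cycles\n",
           (unsigned long long)hot, (unsigned long long)cold);
    return 0;
}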
Certain embodiments herein provide a way for an application to protect itself from side channel attacks, e.g., without being implementation specific, while addressing multiple attacks (e.g., hyperthreading attacks, attacks that exploit TLB misses, etc.), without introducing unnecessary overhead, without addressing only a subset of the cache-based attacks, and/or while being applicable to controlled-channel attacks.

Certain embodiments herein address side-channel attacks with minimum software and/or hardware overhead by adding ISA extensions, e.g., as discussed in reference to Figure 5 . In one embodiment, the ISA extensions provide user-space applications with a new execution mode: event-notify mode. Within certain embodiments of event-notify mode, a software thread can subscribe to notifications for architectural and microarchitectural events such as interrupts, exceptions, cache evictions, and TLB evictions. Applications can use these notifications to effectively pin their security-critical code and data into the TLB/cache, thus mitigating the leakage of program secrets through an architectural or microarchitectural side channel.

Certain embodiments herein provide user-space applications with a lightweight mechanism of ISA extensions to protect themselves from cache-based side channel attacks, e.g., allowing secure enclaves to defeat controlled-channel attacks. The ISA extensions may include any of the following instructions: ENBEGIN, ENEND, ENCALL, PRELOAD (e.g., ENPF), or PRELOAD TLB (e.g., ENPFPG). The ISA extensions may utilize FLAG and ERIP registers. The ISA extensions may utilize track bits (t-bits) to mark security-critical entries in the CPU's caching structures, e.g., for each logical processor. In one embodiment, if a tracked resource is evicted, the thread executing on that logical processor will be notified directly by the processor (e.g., ISA instruction(s) executing on the processor), allowing the thread to react to the event (e.g., by taking corrective measures). In one embodiment, threads use these ISA extensions to enforce security invariants to defeat cache-based and controlled-channel attacks.

As one example, the decoding and execution of an ENBEGIN instruction puts the current thread into event-notify mode. In one embodiment, the ENBEGIN instruction takes a single operand: the effective address of an event handler (e.g., the address of an event handler trampoline routine that includes an ENCALL instruction) which is stored in ERIP (e.g., ERIP register 1414 in Figure 14 ). In certain embodiments, the thread will exit event-notify mode if any of the following occurs: (i) a subscribed event is detected on the logical processor on which the thread is executing, (ii) the thread invokes an instruction that causes the ISA to no longer be able to track events for the thread (e.g., a SYSCALL instruction that when decoded and executed switches from user mode to kernel mode), or (iii) the thread explicitly exits event-notify mode by issuing an ENEND instruction (e.g., that is decoded and executed by the processor). In certain embodiments, occurrence of (i), (ii), or (iii) causes all tracking bits (e.g., T-bits) to be cleared for the logical processor. In one embodiment, either of (i) or (ii) additionally causes the processor (e.g., CPU) to jump to the event handler (e.g., event handler trampoline) (e.g., by jumping to the address in the ERIP instead of the address in the RIP) when the thread resumes user-mode execution, e.g., RIP and ERIP are swapped.
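A minimal software model may help fix these state transitions. The sketch below simulates the FLAG/ERIP bookkeeping in plain C; ENBEGIN and ENEND are modeled as functions, and every name is illustrative, since the actual instructions operate on architectural registers rather than program variables.

#include <stdint.h>
#include <stdio.h>

/* Illustrative per-logical-processor state for event-notify mode. */
typedef struct {
    int      en_flag;  /* event-notify flag (models a FLAG register bit) */
    uint64_t rip;      /* current instruction pointer */
    uint64_t erip;     /* user-level event handler trampoline pointer */
} lp_state_t;

/* ENBEGIN: enter event-notify mode, recording the trampoline entry point. */
static void enbegin(lp_state_t *lp, uint64_t trampoline) {
    lp->en_flag = 1;
    lp->erip = trampoline;
}

/* ENEND: explicit exit; tracking bits would also be cleared here. */
static void enend(lp_state_t *lp) { lp->en_flag = 0; }

/* A subscribed event, or an untrackable instruction such as SYSCALL:
 * leave event-notify mode and swap RIP/ERIP so the thread resumes at the
 * trampoline while ERIP holds the IP where the event occurred. */
static void on_event(lp_state_t *lp) {
    if (!lp->en_flag) return;
    lp->en_flag = 0;
    uint64_t t = lp->rip;
    lp->rip = lp->erip;   /* resume at the trampoline */
    lp->erip = t;         /* preserve where the event occurred */
}

int main(void) {
    lp_state_t lp = { .rip = 0x1000 };
    enbegin(&lp, 0x9000);
    on_event(&lp);        /* e.g., a tracked cache line was evicted */
    printf("resume at 0x%llx, event occurred at 0x%llx\n",
           (unsigned long long)lp.rip, (unsigned long long)lp.erip);
    enend(&lp);
    return 0;
}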
In the case of an interrupt, exception, or SYSCALL event, in one embodiment the OS is to perform a state save (e.g., by decoding and executing an XSAVE instruction that saves a processor's extended states) for the ERIP register to preserve its contents. In one embodiment, the event handler trampoline is a single ENCALL instruction which takes the effective address of the event handler and, optionally, a fixed number of bytes to push onto the (e.g., call) stack before jumping to the event handler (e.g., to protect the stack red zone). In one embodiment, for example after decrementing the stack pointer (e.g., return stack pointer (RSP)), decoding and execution of an ENCALL instruction pushes the value from ERIP (e.g., which because of the above-mentioned swap is now the pre-swap value of RIP) onto the (e.g., user-space) stack, and then jumps execution to the event handler (e.g., the address for the event handler being an operand of the ENCALL instruction). An embodiment of the memory layout of an event handler and event handler trampoline code is depicted in Figure 15 .

Figure 15 illustrates an example code flow 1500 for an event-notify mode according to embodiments of the disclosure. Depicted code flow 1500 includes a main program flow 1502, an event handler trampoline 1504, and an event handler 1506. In the depicted embodiment, main program flow 1502 includes an ENBEGIN instruction 1508 before security-critical code 1510 and an ENEND instruction 1512 following the security-critical code 1510, e.g., to turn on and off, respectively, the notifications discussed herein.

In one embodiment, on detection of either of (i) a subscribed event occurring on the logical processor on which the thread is executing or (ii) the thread invoking (e.g., decoding and executing) an instruction that causes the processor (e.g., ISA thereof) to no longer be able to track events for the thread: the logical processor (e.g., logical core) jumps 1509 execution to the event handler trampoline 1504 (e.g., by swapping contents of an ERIP that stored the IP to the event handler trampoline 1504 into RIP with contents of an RIP that stored the IP when the event occurred). Depicted event handler trampoline 1504 includes an ENCALL instruction 1514. Decoding and execution of ENCALL instruction 1514 causes a jump of execution to event handler 1506 (e.g., and a push of a pointer to where (i) or (ii) was detected). Depicted event handler 1506 (e.g., when executed) is to save 1516 the volatile register (e.g., ERIP) and flag (e.g., from flag register) state, reenter event-notify mode by invoking another ENBEGIN instruction 1518, handle the event(s) 1520, restore 1522 the volatile register and flag state, and invoke a return instruction (e.g., a RET N instruction, where N is the number of bytes that was pushed onto the stack by ENCALL instruction 1514). In one embodiment, where an ENCALL instruction has pushed the previous value of RIP (e.g., via pushing the data from the ERIP register storing that IP for when the event occurred) onto the stack as the return address for the call, the return instruction will resume execution at the point where the event was detected. In one embodiment, an ENCALL instruction is to, when decoded and executed, check the flag bit to determine that the logical processor is not already executing in event-notify mode, for example, and cause a general protection fault (GP) to issue if the logical processor is already executing in event-notify mode.
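The shape of the Figure 15 code flow can be sketched in ordinary C, with the trampoline and handler as plain functions and the ENBEGIN/ENEND/ENCALL effects as stand-in calls. Every name here is illustrative, and the return from the handler simply models the RET N back to where the event was detected.

#include <stdio.h>

static int en_mode;                 /* models the event-notify flag */
static unsigned long saved_flags;   /* models volatile flag state */

static void enbegin(void) { en_mode = 1; }
static void enend(void)   { en_mode = 0; }
static void preload_critical_state(void) { /* PRELOAD CACHE / PRELOAD TLB */ }

static void event_handler(void) {
    unsigned long flags = saved_flags;  /* 1516: save volatile state */
    enbegin();                          /* 1518: re-enter event-notify mode */
    preload_critical_state();           /* 1520: handle the event(s) */
    saved_flags = flags;                /* 1522: restore volatile state */
    /* returning models RET N back to where the event was detected */
}

/* 1514: the trampoline is a single ENCALL; here it simply transfers to
 * the handler after the (simulated) stack adjustment. */
static void event_handler_trampoline(void) { event_handler(); }

int main(void) {
    enbegin();                    /* 1508 */
    /* ... security-critical code 1510 ... */
    event_handler_trampoline();   /* 1509: jump taken on a subscribed event */
    /* ... security-critical code resumes ... */
    enend();                      /* 1512 */
    printf("event-notify mode: %d\n", en_mode);   /* prints 0 */
    return 0;
}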
In one embodiment, a CALL instruction pushes the RIP register contents onto the stack and an ENCALL instruction pushes the ERIP register contents onto the stack. In one embodiment (e.g., when in event-notify mode), the ENCALL instruction (e.g., also) pushes the ERIP register contents to the shadow stack.

Figure 16 illustrates a stack 1600 used in an event-notify mode according to embodiments of the disclosure. In one embodiment, decoding and execution of an ENCALL instruction is to push the same data to two instances of stack 1600 (e.g., a call stack and a shadow stack). Depicted stack 1600 shows an example of the data that decoding and execution of an ENCALL instruction pushes onto the stack 1600 and the changing of the pointer (e.g., return stack pointer (RSP)) from a value before an event (e.g., (i) or (ii) discussed in reference to Figure 15 ) is detected 1604, to a value after the event is detected 1606. In certain embodiments, decoding and execution of an ENCALL instruction pushes the RIP 1610 (e.g., the IP when the event occurred) onto stack 1600, but that IP is sourced from ERIP. Optionally, a stack 1600 may include a red zone 1608 (e.g., stack space used as a scratch space without moving the pointer). Thus, the decoding and executing of an ENCALL instruction may include moving the pointer to prevent any overwriting of RIP 1610 (e.g., the "current" IP when the event occurred) as well as the stack red zone 1608. Previous data on the stack may be main program flow stack 1602.

In one embodiment, by default, if a thread executing in event-notify mode is interrupted or if it triggers an exception, execution will resume at the event handler trampoline after the interrupt/exception has been serviced by the OS. Thus, in this embodiment, the thread is notified that it was interrupted. This feature may be essential where the ISA cannot track other events for the thread while it is suspended. Thus, in this embodiment, when a thread is resumed (and its event handler is invoked), the thread should assume that any combination or number of events to which it had subscribed may have occurred while it was suspended.

In certain embodiments, a thread subscribes to cache and TLB eviction event notifications by using the PRELOAD CACHE instruction and a PRELOAD TLB instruction, respectively. In one embodiment, the operand to a PRELOAD CACHE instruction is an effective memory address, whose corresponding cache line is prefetched into all levels of the cache hierarchy. In one embodiment, the operand to a PRELOAD TLB instruction is an effective memory address, whose corresponding address translation is loaded into all levels of the TLB. In certain embodiments, cache lines and TLB entries prefetched in this manner are tracked by having their T bits set. In one embodiment, if any resource with its T bit set is evicted while the thread is executing in event-notify mode, the logical processor will exit event-notify mode, clear the thread's watch set, and jump to the event handler trampoline. Figures 3A-4B depict an example of how the T bits in the cache structures are changed during normal execution (e.g., in Figures 3A-3D ) and when there is a cache eviction event within event-notify mode (e.g., in Figures 4A-4B ).
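A toy model of T-bit tracking may make the eviction-notification path concrete. The sketch below uses a tiny direct-mapped "cache" whose entries carry a track bit: preloading sets the T-bit, and evicting a tracked entry while in event-notify mode triggers the notification path. The cache geometry, addresses, and notification hook are all illustrative assumptions.

#include <stdint.h>
#include <stdio.h>

#define NSETS 4   /* illustrative number of sets */

typedef struct { uint64_t addr; int valid; int tbit; } line_t;

static line_t cache[NSETS];
static int en_mode = 1;   /* thread is in event-notify mode */

/* Models exiting event-notify mode, clearing the watch set, and jumping
 * to the event handler trampoline. */
static void notify(void) {
    en_mode = 0;
    for (int i = 0; i < NSETS; i++) cache[i].tbit = 0;
    puts("tracked eviction: jump to event handler trampoline");
}

static void access_line(uint64_t addr, int track) {
    line_t *l = &cache[(addr >> 6) % NSETS];   /* 64-byte lines assumed */
    if (l->valid && l->addr != addr && l->tbit && en_mode)
        notify();                   /* a tracked resource is being evicted */
    l->addr = addr; l->valid = 1;
    if (track) l->tbit = 1;         /* e.g., set by PRELOAD CACHE */
}

int main(void) {
    access_line(0x1000, 1);  /* preload security-critical line; T-bit set */
    access_line(0x1040, 0);  /* maps to a different set: no conflict */
    access_line(0x2000, 0);  /* conflicts with 0x1000: triggers notify() */
    return 0;
}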
Note that when a tracked resource is evicted, all the threads that have the T bit set for that resource will be signaled in certain embodiments.

Example Mitigation Strategy 1: A program can use event-notify mode and its primitives to proactively protect security-critical code and data from side channel information leakage. In particular, a security-critical code segment can be wrapped within the ENBEGIN and ENEND primitives. Before executing the security-critical code segment, all the security-critical code/data can be preloaded into cache structures (including the TLB and cache) using the prefetching instructions. Whenever a subscribed event (e.g., including an interrupt/exception) is detected, the event handler is invoked. This handler can then re-enter event-notify mode, reload (e.g., prefetch) the security-critical code/data, and then resume execution at the instruction where the event was originally detected. In certain embodiments, this strategy causes security-critical code/data and translations to always be resident in the L1 caches and TLBs while the program is executing in event-notify mode.

Example Mitigation Strategy 2: User-space software can use event-notify mode to detect when it may be under attack. For instance, the event handler can increment a counter each time an event is detected, and the user can define a security policy in terms of this counter (e.g., when it exceeds a threshold). Some example policies include: "terminate after n events are detected while the enclave is running"; "if n events occur during the call to Function(), terminate"; or "if at any point during enclave execution, the ratio of events to memory accesses exceeds 1:1000, terminate." In one embodiment, the last policy uses compiler assistance to count the number of memory accesses made within each basic block, and updates another counter accordingly.

Security Analysis: in some cases, a necessary pre-condition of controlled-channel and cache-based side channel attacks is that the attacker is able to evict/flush the victim's TLB entry/cache line (e.g., either from another thread on the same core, or by interrupting the victim), and then observe a subsequent secret-dependent access to one of those structures. In certain embodiments, if the eviction occurs while the victim is in event-notify mode, the victim will be notified. If the eviction occurs while the victim thread is interrupted (e.g., not in event-notify mode), the victim will also be notified as soon as the thread is resumed in one embodiment. In both cases, the victim will deterministically reload its security-critical code/data before making any secret-dependent accesses to it (e.g., assuming Mitigation Strategy 1) in certain embodiments. Hence the adversary will not be able to observe any secret-dependent accesses made by the victim, since all of those accesses will hit in the L1 caches and TLBs in this embodiment.

Exemplary architectures, systems, etc. that the above may be used in are detailed below.

Exemplary Program Flows

Various program flows can leverage the instructions and associated firmware/hardware features disclosed herein to protect security-critical code and/or data. The following examples illustrate a sequence of operations performed by software programs that include a preamble routine and a security-critical routine in conjunction with hardware and/or firmware to prevent leakage of security-critical code or data.

In a first example, a processor (e.g., the core 290) executes a software program without interruption by a security-critical event.
In a main flow of the software program, the software program calls the preamble routine. The preamble routine includes an ENBEGIN instruction, followed by a PRELOAD CACHE instruction to preload an instruction or data cache and/or a PRELOAD TLB instruction to preload a TLB. When the preamble routine completes (i.e., the cache is loaded), program flow returns to the main flow. In the main flow, the software program performs the security-critical processing (or calls a security-critical routine). Once the security-critical processing is complete, the software program issues the ENEND instruction.

In a second example, a processor (e.g., the core 290) executes a software program and a security-critical event occurs during the security-critical routine of the software program (e.g., during the security-critical routine 115). In this example, the security-critical event is a cache eviction of a tracked cache entry (e.g., with a T-bit). The software program flow calls the preamble routine, which includes the ENBEGIN instruction and performs the cache preloading with the PRELOAD CACHE instruction. The software program flow continues to the security-critical routine, during the processing of which an eviction of a tracked cache or TLB entry occurs. In response, the firmware/hardware clears the event-notify status flag, clears any set T-bits in the cache(s) and/or TLB(s), stores the instruction pointer of the software program in the ERIP register, and loads the instruction pointer for the software program with the entry-point of the software program's user-level event handler to redirect the program flow to the user-level event handler (e.g., by swapping RIP and ERIP). The ENCALL instruction stores the value stored in the ERIP register onto the program stack in certain embodiments. Storing the value (e.g., the RIP value when the event occurred) stored in the ERIP register onto the stack allows the software program to resume its security-critical routine where it left off when the eviction occurred. The user-level event handler then calls the preamble routine to "re-pin" the security-critical code and/or data in the cache(s). Before calling the preamble routine, the user-level event handler may save any flags or other registers on the stack to enable the software program to resume the security-critical routine where it left off after the preamble returns. The preamble routine is executed, including re-initiating the event-notify mode by issuing the ENBEGIN instruction. Once the preamble routine completes, the software program flow returns to the user-level event handler. The user-level event handler restores any flags or registers it preserved on the stack before calling the preamble routine and redirects the program flow to the value of the preserved RIP register. In this manner, the software program resumes executing the security-critical routine having re-loaded the cache(s). Further, by re-loading the cache(s), any observer or attacker cannot ascertain any patterns in the security-critical routine based on cache fills/evictions.

In a third example, a processor (e.g., the core 290) executes a software program and a security-critical event occurs during the preamble routine of the software program (e.g., during the preamble routine 110). In this example, the security-critical event is a cache eviction of a tracked cache entry (e.g., with a T-bit). The software program flow calls the preamble routine, which includes the ENBEGIN instruction.
In this example, the preamble routine begins cache preloading with the PRELOAD CACHE instruction. Prior to completing the cache preloading, an eviction of a tracked cache or TLB entry occurs. In response, the firmware/hardware clears the event-notify status flag, clears any set T-bits in the cache(s) and/or TLB(s), stores the instruction pointer of the software program in the ERIP register, and loads the instruction pointer for the software program with the entry-point of the software program's user-level event handler to redirect the program flow to the user-level event handler (e.g., by swapping RIP and ERIP). The ENCALL instruction stores the value stored in the ERIP register onto the program stack (or another specified location) in certain embodiments. The user-level event handler then calls the preamble routine to "re-pin" the security-critical code and/or data in the cache(s). The preamble routine is executed, including re-initiating the event-notify mode by issuing the ENBEGIN instruction. The preamble routine can checkpoint its first execution and check whether it was previously interrupted based on the existence of a checkpoint. If the preamble routine determines it was interrupted, the preamble routine can revert the program flow to the checkpoint so that the preamble routine is executed from beginning to completion only once. After the preamble routine is executed, the program flow continues to the security-critical routine.

Note that a first security-critical event could occur within the security-critical routine and subsequently a second security-critical event could occur within the preamble routine that was initiated by the user-level event handler that was handling the first security-critical event. In this case, the user-level event handler called in response to the first event would issue the ENCALL instruction to store the instruction pointer of the security-critical program flow and subsequent calls to the user-level event handler (e.g., from the preamble routine) would not. Once the preamble routine has completed once without interruption, the user-level event handler called in response to the first event would issue a RET instruction to allow the software program to resume security-critical routine execution with the re-loaded cache(s). Again, by re-loading the cache(s), any observer or attacker cannot ascertain any patterns in the security-critical routine based on cache fills/evictions.

In a fourth example, a processor (e.g., the core 290) executes a software program and a security-critical event occurs during the security-critical routine of the software program. In this example, the security-critical event is an external interrupt. The software program flow calls the preamble routine, which includes the ENBEGIN instruction and performs the cache preloading with the PRELOAD CACHE instruction. The software program flow continues to the security-critical routine, during the processing of which an external interrupt occurs. In response, the firmware/hardware clears the event-notify status flag, clears any set T-bits in the cache(s) and/or TLB(s), stores the instruction pointer of the software program in the ERIP register, and loads the instruction pointer for the software program with the entry-point of the software program's user-level event handler to redirect the program flow to the user-level event handler. After servicing the external interrupt, the program flow resumes with the user-level event handler.
The user-level event handler stores the value stored in the ERIP register onto the program stack. Storing the value in the ERIP register allows the software program to resume its security-critical routine where it left off when the interrupt occurred. The user-level event handler then calls the preamble routine to "re-pin" the security-critical code and/or data in the cache(s). Before calling the preamble routine, the user-level event handler may save any flags or other registers on the stack to enable the software program to resume the security-critical routine where it left off after the preamble returns. The preamble routine is executed, including re-initiating the event-notify mode by issuing the ENBEGIN instruction. Once the preamble routine completes, the software program flow returns to the user-level event handler. The user-level event handler restores any flags or registers it preserved on the stack before calling the preamble routine and redirects the program flow to the value of the preserved RIP register. In this manner, the software program resumes executing the security-critical routine. Again, by re-loading the cache(s), any observer or attacker cannot ascertain any patterns in the security-critical routine based on cache fills/evictions.

The side-channel protected mode can be implemented across a variety of different core and computer architectures, including in emulation environments, such as those illustrated and described with reference to Figures 17-23 .

An instruction set may include one or more instruction formats. A given instruction format may define various fields (e.g., number of bits, location of bits) to specify, among other things, the operation to be performed (e.g., opcode) and the operand(s) on which that operation is to be performed and/or other data field(s) (e.g., mask). Some instruction formats are further broken down through the definition of instruction templates (or subformats). For example, the instruction templates of a given instruction format may be defined to have different subsets of the instruction format's fields (the included fields are typically in the same order, but at least some have different bit positions because fewer fields are included) and/or defined to have a given field interpreted differently. Thus, each instruction of an ISA is expressed using a given instruction format (and, if defined, in a given one of the instruction templates of that instruction format) and includes fields for specifying the operation and the operands. For example, an exemplary ADD instruction has a specific opcode and an instruction format that includes an opcode field to specify that opcode and operand fields to select operands (source1/destination and source2); and an occurrence of this ADD instruction in an instruction stream will have specific contents in the operand fields that select specific operands. A set of SIMD extensions referred to as the Advanced Vector Extensions (AVX) (AVX1 and AVX2) and using the Vector Extensions (VEX) coding scheme has been released and/or published (e.g., see Intel® 64 and IA-32 Architectures Software Developer's Manual, November 2018 ; and see Intel® Architecture Instruction Set Extensions Programming Reference, October 2018 ).

Exemplary Instruction Formats

Embodiments of the instruction(s) described herein may be embodied in different formats. Additionally, exemplary systems, architectures, and pipelines are detailed below.
Embodiments of the instruction(s) may be executed on such systems, architectures, and pipelines, but are not limited to those detailed.Generic Vector Friendly Instruction FormatA vector friendly instruction format is an instruction format that is suited for vector instructions (e.g., there are certain fields specific to vector operations). While embodiments are described in which both vector and scalar operations are supported through the vector friendly instruction format, alternative embodiments use only vector operations the vector friendly instruction format.Figures 17A-17B are block diagrams illustrating a generic vector friendly instruction format and instruction templates thereof according to embodiments of the disclosure. Figure 17A is a block diagram illustrating a generic vector friendly instruction format and class A instruction templates thereof according to embodiments of the disclosure; while Figure 17B is a block diagram illustrating the generic vector friendly instruction format and class B instruction templates thereof according to embodiments of the disclosure. Specifically, a generic vector friendly instruction format 1700 for which are defined class A and class B instruction templates, both of which include no memory access 1705 instruction templates and memory access 1720 instruction templates. The term generic in the context of the vector friendly instruction format refers to the instruction format not being tied to any specific instruction set.While embodiments of the disclosure will be described in which the vector friendly instruction format supports the following: a 64 byte vector operand length (or size) with 32 bit (4 byte) or 64 bit (8 byte) data element widths (or sizes) (and thus, a 64 byte vector consists of either 16 doubleword-size elements or alternatively, 8 quadword-size elements); a 64 byte vector operand length (or size) with 16 bit (2 byte) or 8 bit (1 byte) data element widths (or sizes); a 32 byte vector operand length (or size) with 32 bit (4 byte), 64 bit (8 byte), 16 bit (2 byte), or 8 bit (1 byte) data element widths (or sizes); and a 16 byte vector operand length (or size) with 32 bit (4 byte), 64 bit (8 byte), 16 bit (2 byte), or 8 bit (1 byte) data element widths (or sizes); alternative embodiments may support more, less and/or different vector operand sizes (e.g., 256 byte vector operands) with more, less, or different data element widths (e.g., 128 bit (16 byte) data element widths).The class A instruction templates in Figure 17A include: 1) within the no memory access 1705 instruction templates there is shown a no memory access, full round control type operation 1710 instruction template and a no memory access, data transform type operation 1715 instruction template; and 2) within the memory access 1720 instruction templates there is shown a memory access, temporal 1725 instruction template and a memory access, non-temporal 1730 instruction template. 
The class B instruction templates in Figure 17B include: 1) within the no memory access 1705 instruction templates there is shown a no memory access, write mask control, partial round control type operation 1712 instruction template and a no memory access, write mask control, vsize type operation 1717 instruction template; and 2) within the memory access 1720 instruction templates there is shown a memory access, write mask control 1727 instruction template.

The generic vector friendly instruction format 1700 includes the following fields listed below in the order illustrated in Figures 17A-17B.

Format field 1740 - a specific value (an instruction format identifier value) in this field uniquely identifies the vector friendly instruction format, and thus occurrences of instructions in the vector friendly instruction format in instruction streams. As such, this field is optional in the sense that it is not needed for an instruction set that has only the generic vector friendly instruction format.

Base operation field 1742 - its content distinguishes different base operations.

Register index field 1744 - its content, directly or through address generation, specifies the locations of the source and destination operands, be they in registers or in memory. These include a sufficient number of bits to select N registers from a PxQ (e.g., 32x512, 16x128, 32x1024, 64x1024) register file. While in one embodiment N may be up to three sources and one destination register, alternative embodiments may support more or fewer sources and destination registers (e.g., may support up to two sources where one of these sources also acts as the destination, may support up to three sources where one of these sources also acts as the destination, may support up to two sources and one destination).

Modifier field 1746 - its content distinguishes occurrences of instructions in the generic vector instruction format that specify memory access from those that do not; that is, between no memory access 1705 instruction templates and memory access 1720 instruction templates. Memory access operations read and/or write to the memory hierarchy (in some cases specifying the source and/or destination addresses using values in registers), while non-memory access operations do not (e.g., the source and destinations are registers). While in one embodiment this field also selects between three different ways to perform memory address calculations, alternative embodiments may support more, fewer, or different ways to perform memory address calculations.

Augmentation operation field 1750 - its content distinguishes which one of a variety of different operations is to be performed in addition to the base operation. This field is context specific. In one embodiment of the disclosure, this field is divided into a class field 1768, an alpha field 1752, and a beta field 1754.
The augmentation operation field 1750 allows common groups of operations to be performed in a single instruction rather than 2, 3, or 4 instructions.

Scale field 1760 - its content allows for the scaling of the index field's content for memory address generation (e.g., for address generation that uses 2^scale * index + base).

Displacement Field 1762A - its content is used as part of memory address generation (e.g., for address generation that uses 2^scale * index + base + displacement).

Displacement Factor Field 1762B (note that the juxtaposition of displacement field 1762A directly over displacement factor field 1762B indicates one or the other is used) - its content is used as part of address generation; it specifies a displacement factor that is to be scaled by the size of a memory access (N) - where N is the number of bytes in the memory access (e.g., for address generation that uses 2^scale * index + base + scaled displacement). Redundant low-order bits are ignored and hence, the displacement factor field's content is multiplied by the memory operand's total size (N) in order to generate the final displacement to be used in calculating an effective address. The value of N is determined by the processor hardware at runtime based on the full opcode field 1774 (described later herein) and the data manipulation field 1754C. The displacement field 1762A and the displacement factor field 1762B are optional in the sense that they are not used for the no memory access 1705 instruction templates and/or different embodiments may implement only one or none of the two.

Data element width field 1764 - its content distinguishes which one of a number of data element widths is to be used (in some embodiments for all instructions; in other embodiments for only some of the instructions). This field is optional in the sense that it is not needed if only one data element width is supported and/or data element widths are supported using some aspect of the opcodes.

Write mask field 1770 - its content controls, on a per data element position basis, whether that data element position in the destination vector operand reflects the result of the base operation and augmentation operation. Class A instruction templates support merging-writemasking, while class B instruction templates support both merging- and zeroing-writemasking. When merging, vector masks allow any set of elements in the destination to be protected from updates during the execution of any operation (specified by the base operation and the augmentation operation); in one embodiment, preserving the old value of each element of the destination where the corresponding mask bit has a 0. In contrast, when zeroing, vector masks allow any set of elements in the destination to be zeroed during the execution of any operation (specified by the base operation and the augmentation operation); in one embodiment, an element of the destination is set to 0 when the corresponding mask bit has a 0 value. A subset of this functionality is the ability to control the vector length of the operation being performed (that is, the span of elements being modified, from the first to the last one); however, it is not necessary that the elements that are modified be consecutive. Thus, the write mask field 1770 allows for partial vector operations, including loads, stores, arithmetic, logical, etc.
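The merging versus zeroing behavior just described can be expressed compactly. The following C sketch applies a per-element write mask under simple assumptions (eight 32-bit elements, one mask bit per element); it illustrates the masking semantics only and is not the full instruction-level behavior.

#include <stdint.h>
#include <stdio.h>

#define NELEM 8  /* e.g., eight 32-bit elements; the width is an assumption */

/* Apply a per-element write mask to the result of an operation:
 * merging preserves the destination's old value where the mask bit is 0,
 * while zeroing sets the element to 0 where the mask bit is 0. */
static void apply_writemask(uint32_t *dst, const uint32_t *result,
                            uint8_t mask, int zeroing) {
    for (int i = 0; i < NELEM; i++) {
        if (mask & (1u << i))
            dst[i] = result[i];   /* element participates in the operation */
        else if (zeroing)
            dst[i] = 0;           /* zeroing-writemasking */
        /* else: merging-writemasking leaves dst[i] unchanged */
    }
}

int main(void) {
    uint32_t dst[NELEM]    = {9, 9, 9, 9, 9, 9, 9, 9};
    uint32_t result[NELEM] = {1, 2, 3, 4, 5, 6, 7, 8};
    apply_writemask(dst, result, 0x0F, /*zeroing=*/1);
    for (int i = 0; i < NELEM; i++) printf("%u ", dst[i]);
    printf("\n");   /* prints: 1 2 3 4 0 0 0 0 */
    return 0;
}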
While embodiments of the disclosure are described in which the write mask field's 1770 content selects one of a number of write mask registers that contains the write mask to be used (and thus the write mask field's 1770 content indirectly identifies that masking to be performed), alternative embodiments instead or in addition allow the mask write field's 1770 content to directly specify the masking to be performed.

Immediate field 1772 - its content allows for the specification of an immediate. This field is optional in the sense that it is not present in an implementation of the generic vector friendly format that does not support immediates and it is not present in instructions that do not use an immediate.

Class field 1768 - its content distinguishes between different classes of instructions. With reference to Figures 17A-B, the contents of this field select between class A and class B instructions. In Figures 17A-B, rounded corner squares are used to indicate a specific value is present in a field (e.g., class A 1768A and class B 1768B for the class field 1768 respectively in Figures 17A-B).

Instruction Templates of Class A

In the case of the non-memory access 1705 instruction templates of class A, the alpha field 1752 is interpreted as an RS field 1752A, whose content distinguishes which one of the different augmentation operation types is to be performed (e.g., round 1752A.1 and data transform 1752A.2 are respectively specified for the no memory access, round type operation 1710 and the no memory access, data transform type operation 1715 instruction templates), while the beta field 1754 distinguishes which of the operations of the specified type is to be performed. In the no memory access 1705 instruction templates, the scale field 1760, the displacement field 1762A, and the displacement scale field 1762B are not present.

No-Memory Access Instruction Templates - Full Round Control Type Operation

In the no memory access full round control type operation 1710 instruction template, the beta field 1754 is interpreted as a round control field 1754A, whose content(s) provide static rounding. While in the described embodiments of the disclosure the round control field 1754A includes a suppress all floating point exceptions (SAE) field 1756 and a round operation control field 1758, alternative embodiments may encode both these concepts into the same field or only have one or the other of these concepts/fields (e.g., may have only the round operation control field 1758).

SAE field 1756 - its content distinguishes whether or not to disable the exception event reporting; when the SAE field's 1756 content indicates suppression is enabled, a given instruction does not report any kind of floating-point exception flag and does not raise any floating point exception handler.

Round operation control field 1758 - its content distinguishes which one of a group of rounding operations to perform (e.g., Round-up, Round-down, Round-towards-zero and Round-to-nearest). Thus, the round operation control field 1758 allows for the changing of the rounding mode on a per instruction basis.
In one embodiment of the disclosure where a processor includes a control register for specifying rounding modes, the round operation control field's 1750 content overrides that register value.

No Memory Access Instruction Templates - Data Transform Type Operation

In the no memory access data transform type operation 1715 instruction template, the beta field 1754 is interpreted as a data transform field 1754B, whose content distinguishes which one of a number of data transforms is to be performed (e.g., no data transform, swizzle, broadcast).

In the case of a memory access 1720 instruction template of class A, the alpha field 1752 is interpreted as an eviction hint field 1752B, whose content distinguishes which one of the eviction hints is to be used (in Figure 17A, temporal 1752B.1 and non-temporal 1752B.2 are respectively specified for the memory access, temporal 1725 instruction template and the memory access, non-temporal 1730 instruction template), while the beta field 1754 is interpreted as a data manipulation field 1754C, whose content distinguishes which one of a number of data manipulation operations (also known as primitives) is to be performed (e.g., no manipulation; broadcast; up conversion of a source; and down conversion of a destination). The memory access 1720 instruction templates include the scale field 1760, and optionally the displacement field 1762A or the displacement scale field 1762B.

Vector memory instructions perform vector loads from and vector stores to memory, with conversion support. As with regular vector instructions, vector memory instructions transfer data from/to memory in a data element-wise fashion, with the elements that are actually transferred dictated by the contents of the vector mask that is selected as the write mask.

Memory Access Instruction Templates - Temporal

Temporal data is data likely to be reused soon enough to benefit from caching. This is, however, a hint, and different processors may implement it in different ways, including ignoring the hint entirely.

Memory Access Instruction Templates - Non-Temporal

Non-temporal data is data unlikely to be reused soon enough to benefit from caching in the 1st-level cache and should be given priority for eviction. This is, however, a hint, and different processors may implement it in different ways, including ignoring the hint entirely.

Instruction Templates of Class B

In the case of the instruction templates of class B, the alpha field 1752 is interpreted as a write mask control (Z) field 1752C, whose content distinguishes whether the write masking controlled by the write mask field 1770 should be a merging or a zeroing.

In the case of the non-memory access 1705 instruction templates of class B, part of the beta field 1754 is interpreted as an RL field 1757A, whose content distinguishes which one of the different augmentation operation types is to be performed (e.g., round 1757A.1 and vector length (VSIZE) 1757A.2 are respectively specified for the no memory access, write mask control, partial round control type operation 1712 instruction template and the no memory access, write mask control, VSIZE type operation 1717 instruction template), while the rest of the beta field 1754 distinguishes which of the operations of the specified type is to be performed.
In the no memory access, write mask control, partial round control type operation 1712 instruction template, the rest of the beta field 1754 is interpreted as a round operation field 1759A and exception event reporting is disabled (a given instruction does not report any kind of floating-point exception flag and does not raise any floating point exception handler).

Round operation control field 1759A - just as with the round operation control field 1758, its content distinguishes which one of a group of rounding operations to perform (e.g., Round-up, Round-down, Round-towards-zero and Round-to-nearest). Thus, the round operation control field 1759A allows for the changing of the rounding mode on a per instruction basis. In one embodiment of the disclosure where a processor includes a control register for specifying rounding modes, the round operation control field's 1759A content overrides that register value.

In the no memory access, write mask control, VSIZE type operation 1717 instruction template, the rest of the beta field 1754 is interpreted as a vector length field 1759B, whose content distinguishes which one of a number of data vector lengths is to be performed on (e.g., 128, 256, or 512 bit).

In the case of a memory access 1720 instruction template of class B, part of the beta field 1754 is interpreted as a broadcast field 1757B, whose content distinguishes whether or not the broadcast type data manipulation operation is to be performed, while the rest of the beta field 1754 is interpreted as the vector length field 1759B. The memory access 1720 instruction templates include the scale field 1760, and optionally the displacement field 1762A or the displacement scale field 1762B.

With regard to the generic vector friendly instruction format 1700, a full opcode field 1774 is shown including the format field 1740, the base operation field 1742, and the data element width field 1764. While one embodiment is shown where the full opcode field 1774 includes all of these fields, the full opcode field 1774 includes fewer than all of these fields in embodiments that do not support all of them. The full opcode field 1774 provides the operation code (opcode).

The augmentation operation field 1750, the data element width field 1764, and the write mask field 1770 allow these features to be specified on a per instruction basis in the generic vector friendly instruction format. The combination of the write mask field and the data element width field creates typed instructions in that they allow the mask to be applied based on different data element widths.

The various instruction templates found within class A and class B are beneficial in different situations. In some embodiments of the disclosure, different processors or different cores within a processor may support only class A, only class B, or both classes. For instance, a high performance general purpose out-of-order core intended for general-purpose computing may support only class B, a core intended primarily for graphics and/or scientific (throughput) computing may support only class A, and a core intended for both may support both (of course, a core that has some mix of templates and instructions from both classes but not all templates and instructions from both classes is within the purview of the disclosure).
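As a brief aside on the vector length field 1759B described above, a minimal sketch of one plausible two-bit encoding follows; the 00/01/10 mapping shown is an assumption consistent with each shorter length being half of the next, and is not asserted to be the claimed encoding.

    /* Hedged sketch: mapping an assumed two-bit vector length encoding to
     * a length in bits (00 -> 128, 01 -> 256, 10 -> 512; 11 treated here
     * as reserved). */
    static int vector_length_bits(unsigned ll)
    {
        static const int bits[3] = { 128, 256, 512 };
        return (ll < 3u) ? bits[ll] : -1;  /* -1 flags the reserved encoding */
    }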
Also, a single processor may include multiple cores, all of which support the same class or in which different cores support different classes. For instance, in a processor with separate graphics and general purpose cores, one of the graphics cores intended primarily for graphics and/or scientific computing may support only class A, while one or more of the general purpose cores may be high performance general purpose cores with out of order execution and register renaming intended for general-purpose computing that support only class B. Another processor that does not have a separate graphics core may include one or more general purpose in-order or out-of-order cores that support both class A and class B. Of course, features from one class may also be implemented in the other class in different embodiments of the disclosure. Programs written in a high level language would be put (e.g., just in time compiled or statically compiled) into a variety of different executable forms, including: 1) a form having only instructions of the class(es) supported by the target processor for execution; or 2) a form having alternative routines written using different combinations of the instructions of all classes and having control flow code that selects the routines to execute based on the instructions supported by the processor which is currently executing the code.

Exemplary Specific Vector Friendly Instruction Format

Figure 18 is a block diagram illustrating an exemplary specific vector friendly instruction format according to embodiments of the disclosure. Figure 18 shows a specific vector friendly instruction format 1800 that is specific in the sense that it specifies the location, size, interpretation, and order of the fields, as well as values for some of those fields. The specific vector friendly instruction format 1800 may be used to extend the x86 instruction set, and thus some of the fields are similar or the same as those used in the existing x86 instruction set and extensions thereof (e.g., AVX). This format remains consistent with the prefix encoding field, real opcode byte field, MOD R/M field, SIB field, displacement field, and immediate fields of the existing x86 instruction set with extensions. The fields from Figure 17 into which the fields from Figure 18 map are illustrated.

It should be understood that, although embodiments of the disclosure are described with reference to the specific vector friendly instruction format 1800 in the context of the generic vector friendly instruction format 1700 for illustrative purposes, the disclosure is not limited to the specific vector friendly instruction format 1800 except where claimed. For example, the generic vector friendly instruction format 1700 contemplates a variety of possible sizes for the various fields, while the specific vector friendly instruction format 1800 is shown as having fields of specific sizes.
By way of specific example, while the data element width field 1764 is illustrated as a one bit field in the specific vector friendly instruction format 1800, the disclosure is not so limited (that is, the generic vector friendly instruction format 1700 contemplates other sizes of the data element width field 1764).

The specific vector friendly instruction format 1800 includes the following fields listed below in the order illustrated in Figure 18A.

EVEX Prefix (Bytes 0-3) 1802 - is encoded in a four-byte form.

Format Field 1740 (EVEX Byte 0, bits [7:0]) - the first byte (EVEX Byte 0) is the format field 1740 and it contains 0x62 (the unique value used for distinguishing the vector friendly instruction format in one embodiment of the disclosure).

The second through fourth bytes (EVEX Bytes 1-3) include a number of bit fields providing specific capability.

REX field 1805 (EVEX Byte 1, bits [7-5]) - consists of an EVEX.R bit field (EVEX Byte 1, bit [7] - R), an EVEX.X bit field (EVEX Byte 1, bit [6] - X), and an EVEX.B bit field (EVEX Byte 1, bit [5] - B). The EVEX.R, EVEX.X, and EVEX.B bit fields provide the same functionality as the corresponding VEX bit fields, and are encoded using 1s complement form, i.e., ZMM0 is encoded as 1111B, ZMM15 is encoded as 0000B. Other fields of the instructions encode the lower three bits of the register indexes as is known in the art (rrr, xxx, and bbb), so that Rrrr, Xxxx, and Bbbb may be formed by adding EVEX.R, EVEX.X, and EVEX.B.

REX' field 1810 - this is the first part of the REX' field 1810 and is the EVEX.R' bit field (EVEX Byte 1, bit [4] - R') that is used to encode either the upper 16 or lower 16 of the extended 32 register set. In one embodiment of the disclosure, this bit, along with others as indicated below, is stored in bit inverted format to distinguish (in the well-known x86 32-bit mode) from the BOUND instruction, whose real opcode byte is 62, but does not accept in the MOD R/M field (described below) the value of 11 in the MOD field; alternative embodiments of the disclosure do not store this and the other indicated bits below in the inverted format. A value of 1 is used to encode the lower 16 registers. In other words, R'Rrrr is formed by combining EVEX.R', EVEX.R, and the other RRR from other fields.

Opcode map field 1815 (EVEX byte 1, bits [3:0] - mmmm) - its content encodes an implied leading opcode byte (0F, 0F 38, or 0F 3A).

Data element width field 1764 (EVEX byte 2, bit [7] - W) - is represented by the notation EVEX.W. EVEX.W is used to define the granularity (size) of the datatype (either 32-bit data elements or 64-bit data elements).

EVEX.vvvv 1820 (EVEX Byte 2, bits [6:3] - vvvv) - the role of EVEX.vvvv may include the following: 1) EVEX.vvvv encodes the first source register operand, specified in inverted (1s complement) form and is valid for instructions with 2 or more source operands; 2) EVEX.vvvv encodes the destination register operand, specified in 1s complement form for certain vector shifts; or 3) EVEX.vvvv does not encode any operand, the field is reserved and should contain 1111b. Thus, the EVEX.vvvv field 1820 encodes the 4 low-order bits of the first source register specifier stored in inverted (1s complement) form. Depending on the instruction, an extra different EVEX bit field is used to extend the specifier size to 32 registers.
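The 1s complement storage of EVEX.vvvv can be made concrete with a short sketch; the helper below assumes the bit positions given above (EVEX byte 2, bits [6:3]) and an inverted extension bit supplied separately, and is illustrative rather than a decoder specification.

    /* Hedged sketch: recovering a register specifier from the inverted
     * EVEX.vvvv field. Because the field is stored in 1s complement form,
     * raw 1111b decodes to register 0 (ZMM0) and raw 0000b to register 15
     * (ZMM15); the inverted extension bit widens the specifier to 32
     * registers. */
    #include <stdint.h>

    static unsigned decode_vvvv(uint8_t evex_byte2, unsigned v_prime_raw)
    {
        unsigned vvvv = (evex_byte2 >> 3) & 0xFu;  /* EVEX byte 2, bits [6:3] */
        unsigned reg  = (~vvvv) & 0xFu;            /* undo the 1s complement  */
        reg |= ((~v_prime_raw) & 1u) << 4;         /* V' (also stored inverted) */
        return reg;                                /* register 0..31          */
    }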
EVEX.U 1768 Class field (EVEX byte 2, bit [2] - U) - If EVEX.U = 0, it indicates class A or EVEX.U0; if EVEX.U = 1, it indicates class B or EVEX.U1.

Prefix encoding field 1825 (EVEX byte 2, bits [1:0] - pp) - provides additional bits for the base operation field. In addition to providing support for the legacy SSE instructions in the EVEX prefix format, this also has the benefit of compacting the SIMD prefix (rather than requiring a byte to express the SIMD prefix, the EVEX prefix requires only 2 bits). In one embodiment, to support legacy SSE instructions that use a SIMD prefix (66H, F2H, F3H) in both the legacy format and in the EVEX prefix format, these legacy SIMD prefixes are encoded into the SIMD prefix encoding field; and at runtime are expanded into the legacy SIMD prefix prior to being provided to the decoder's PLA (so the PLA can execute both the legacy and EVEX format of these legacy instructions without modification). Although newer instructions could use the EVEX prefix encoding field's content directly as an opcode extension, certain embodiments expand in a similar fashion for consistency but allow for different meanings to be specified by these legacy SIMD prefixes. An alternative embodiment may redesign the PLA to support the 2 bit SIMD prefix encodings, and thus not require the expansion.

Alpha field 1752 (EVEX byte 3, bit [7] - EH; also known as EVEX.EH, EVEX.rs, EVEX.RL, EVEX.write mask control, and EVEX.N; also illustrated with α) - as previously described, this field is context specific.

Beta field 1754 (EVEX byte 3, bits [6:4] - SSS, also known as EVEX.s2-0, EVEX.r2-0, EVEX.rr1, EVEX.LL0, EVEX.LLB; also illustrated with βββ) - as previously described, this field is context specific.

REX' field 1810 - this is the remainder of the REX' field and is the EVEX.V' bit field (EVEX Byte 3, bit [3] - V') that may be used to encode either the upper 16 or lower 16 of the extended 32 register set. This bit is stored in bit inverted format. A value of 1 is used to encode the lower 16 registers. In other words, V'VVVV is formed by combining EVEX.V' and EVEX.vvvv.

Write mask field 1770 (EVEX byte 3, bits [2:0] - kkk) - its content specifies the index of a register in the write mask registers as previously described. In one embodiment of the disclosure, the specific value EVEX.kkk = 000 has a special behavior implying no write mask is used for the particular instruction (this may be implemented in a variety of ways including the use of a write mask hardwired to all ones or hardware that bypasses the masking hardware).

Real Opcode Field 1830 (Byte 4) is also known as the opcode byte. Part of the opcode is specified in this field.

MOD R/M Field 1840 (Byte 5) includes MOD field 1842, Reg field 1844, and R/M field 1846. As previously described, the MOD field's 1842 content distinguishes between memory access and non-memory access operations. The role of Reg field 1844 can be summarized in two situations: encoding either the destination register operand or a source register operand, or being treated as an opcode extension and not used to encode any instruction operand. The role of R/M field 1846 may include the following: encoding the instruction operand that references a memory address, or encoding either the destination register operand or a source register operand.
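Returning briefly to EVEX byte 3, whose layout is given above (alpha in bit [7], beta in bits [6:4], V' in bit [3], and kkk in bits [2:0]), the following is a minimal sketch of splitting that byte into its fields; the struct and function names are illustrative assumptions.

    /* Hedged sketch: extracting the context-specific fields of EVEX byte 3. */
    #include <stdint.h>

    struct evex_byte3 {
        unsigned alpha;    /* bit [7]: EH/rs/RL/write mask control/N */
        unsigned beta;     /* bits [6:4]: SSS                        */
        unsigned v_prime;  /* bit [3]: V', stored inverted           */
        unsigned kkk;      /* bits [2:0]: write mask register index  */
    };

    static struct evex_byte3 split_evex_byte3(uint8_t b)
    {
        struct evex_byte3 f;
        f.alpha   = (b >> 7) & 0x1u;
        f.beta    = (b >> 4) & 0x7u;
        f.v_prime = (b >> 3) & 0x1u;
        f.kkk     =  b       & 0x7u;
        return f;
    }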
Scale, Index, Base (SIB) Byte (Byte 6) - As previously described, the scale field's 1760 content is used for memory address generation. SIB.xxx 1854 and SIB.bbb 1856 - the contents of these fields have been previously referred to with regard to the register indexes Xxxx and Bbbb.

Displacement field 1762A (Bytes 7-10) - when MOD field 1842 contains 10, bytes 7-10 are the displacement field 1762A, and it works the same as the legacy 32-bit displacement (disp32) and works at byte granularity.

Displacement factor field 1762B (Byte 7) - when MOD field 1842 contains 01, byte 7 is the displacement factor field 1762B. The location of this field is the same as that of the legacy x86 instruction set 8-bit displacement (disp8), which works at byte granularity. Since disp8 is sign extended, it can only address between -128 and 127 byte offsets; in terms of 64 byte cache lines, disp8 uses 8 bits that can be set to only four really useful values -128, -64, 0, and 64; since a greater range is often needed, disp32 is used; however, disp32 requires 4 bytes. In contrast to disp8 and disp32, the displacement factor field 1762B is a reinterpretation of disp8; when using the displacement factor field 1762B, the actual displacement is determined by the content of the displacement factor field multiplied by the size of the memory operand access (N). This type of displacement is referred to as disp8∗N. This reduces the average instruction length (a single byte is used for the displacement but with a much greater range). Such a compressed displacement is based on the assumption that the effective displacement is a multiple of the granularity of the memory access, and hence, the redundant low-order bits of the address offset do not need to be encoded. In other words, the displacement factor field 1762B substitutes for the legacy x86 instruction set 8-bit displacement. Thus, the displacement factor field 1762B is encoded the same way as an x86 instruction set 8-bit displacement (so no changes in the ModRM/SIB encoding rules) with the only exception that disp8 is overloaded to disp8∗N. In other words, there are no changes in the encoding rules or encoding lengths but only in the interpretation of the displacement value by hardware (which needs to scale the displacement by the size of the memory operand to obtain a byte-wise address offset).
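The compressed displacement can be illustrated with a small worked sketch; the helper below simply applies the sign-extend-then-scale rule described above and is not drawn from the claimed hardware.

    /* Hedged sketch of disp8*N: the stored 8-bit value is sign extended
     * and scaled by the memory operand size N to obtain the byte-wise
     * address offset. */
    #include <stdint.h>

    static int64_t disp8N(int8_t stored_disp8, int64_t n)
    {
        return (int64_t)stored_disp8 * n;
    }

    /* For example, with 64-byte operands (N = 64), a stored value of +2
     * yields an effective displacement of 128 bytes, which a plain disp8
     * (limited to -128..127) could not encode. */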
Immediate field 1772 operates as previously described.

Full Opcode Field

Figure 18B is a block diagram illustrating the fields of the specific vector friendly instruction format 1800 that make up the full opcode field 1774 according to one embodiment of the disclosure. Specifically, the full opcode field 1774 includes the format field 1740, the base operation field 1742, and the data element width (W) field 1764. The base operation field 1742 includes the prefix encoding field 1825, the opcode map field 1815, and the real opcode field 1830.

Register Index Field

Figure 18C is a block diagram illustrating the fields of the specific vector friendly instruction format 1800 that make up the register index field 1744 according to one embodiment of the disclosure. Specifically, the register index field 1744 includes the REX field 1805, the REX' field 1810, the MODR/M.reg field 1844, the MODR/M.r/m field 1846, the VVVV field 1820, the xxx field 1854, and the bbb field 1856.

Augmentation Operation Field

Figure 18D is a block diagram illustrating the fields of the specific vector friendly instruction format 1800 that make up the augmentation operation field 1750 according to one embodiment of the disclosure. When the class (U) field 1768 contains 0, it signifies EVEX.U0 (class A 1768A); when it contains 1, it signifies EVEX.U1 (class B 1768B). When U=0 and the MOD field 1842 contains 11 (signifying a no memory access operation), the alpha field 1752 (EVEX byte 3, bit [7] - EH) is interpreted as the rs field 1752A. When the rs field 1752A contains a 1 (round 1752A.1), the beta field 1754 (EVEX byte 3, bits [6:4] - SSS) is interpreted as the round control field 1754A. The round control field 1754A includes a one bit SAE field 1756 and a two bit round operation field 1758. When the rs field 1752A contains a 0 (data transform 1752A.2), the beta field 1754 (EVEX byte 3, bits [6:4] - SSS) is interpreted as a three bit data transform field 1754B. When U=0 and the MOD field 1842 contains 00, 01, or 10 (signifying a memory access operation), the alpha field 1752 (EVEX byte 3, bit [7] - EH) is interpreted as the eviction hint (EH) field 1752B and the beta field 1754 (EVEX byte 3, bits [6:4] - SSS) is interpreted as a three bit data manipulation field 1754C.

When U=1, the alpha field 1752 (EVEX byte 3, bit [7] - EH) is interpreted as the write mask control (Z) field 1752C. When U=1 and the MOD field 1842 contains 11 (signifying a no memory access operation), part of the beta field 1754 (EVEX byte 3, bit [4] - S0) is interpreted as the RL field 1757A; when it contains a 1 (round 1757A.1) the rest of the beta field 1754 (EVEX byte 3, bits [6-5] - S2-1) is interpreted as the round operation field 1759A, while when the RL field 1757A contains a 0 (VSIZE 1757A.2) the rest of the beta field 1754 (EVEX byte 3, bits [6-5] - S2-1) is interpreted as the vector length field 1759B (EVEX byte 3, bits [6-5] - L1-0). When U=1 and the MOD field 1842 contains 00, 01, or 10 (signifying a memory access operation), the beta field 1754 (EVEX byte 3, bits [6:4] - SSS) is interpreted as the vector length field 1759B (EVEX byte 3, bits [6-5] - L1-0) and the broadcast field 1757B (EVEX byte 3, bit [4] - B).

Exemplary Register Architecture

Figure 19 is a block diagram of a register architecture 1900 according to one embodiment of the disclosure. In the embodiment illustrated, there are 32 vector registers 1910 that are 512 bits wide; these registers are referenced as zmm0 through zmm31. The lower order 256 bits of the lower 16 zmm registers are overlaid on registers ymm0-15. The lower order 128 bits of the lower 16 zmm registers (the lower order 128 bits of the ymm registers) are overlaid on registers xmm0-15.
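The overlay can be pictured with an illustrative-only C union; this models the aliasing relationship described above, not the actual register file implementation.

    /* Hedged sketch: ymm registers alias the low 256 bits of the
     * corresponding zmm registers, and xmm registers alias the low
     * 128 bits. */
    #include <stdint.h>

    typedef union {
        uint8_t zmm[64];  /* full 512-bit register             */
        uint8_t ymm[32];  /* low 256 bits share the same bytes */
        uint8_t xmm[16];  /* low 128 bits share the same bytes */
    } overlaid_vector_reg;

    /* Writing through .xmm therefore changes the low 16 bytes of the
     * same architectural zmm register. */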
The specific vector friendly instruction format 1800 operates on these overlaid register files as illustrated in the table below.

Adjustable Vector Length | Class | Operations | Registers
Instruction templates that do not include the vector length field 1759B | A (Figure 17A; U=0) | 1710, 1715, 1725, 1730 | zmm registers (the vector length is 64 byte)
Instruction templates that do not include the vector length field 1759B | B (Figure 17B; U=1) | 1712 | zmm registers (the vector length is 64 byte)
Instruction templates that do include the vector length field 1759B | B (Figure 17B; U=1) | 1717, 1727 | zmm, ymm, or xmm registers (the vector length is 64 byte, 32 byte, or 16 byte) depending on the vector length field 1759B

In other words, the vector length field 1759B selects between a maximum length and one or more other shorter lengths, where each such shorter length is half the length of the preceding length; and instruction templates without the vector length field 1759B operate on the maximum vector length. Further, in one embodiment, the class B instruction templates of the specific vector friendly instruction format 1800 operate on packed or scalar single/double-precision floating point data and packed or scalar integer data. Scalar operations are operations performed on the lowest order data element position in a zmm/ymm/xmm register; the higher order data element positions are either left the same as they were prior to the instruction or zeroed depending on the embodiment.

Write mask registers 1915 - in the embodiment illustrated, there are 8 write mask registers (k0 through k7), each 64 bits in size. In an alternate embodiment, the write mask registers 1915 are 16 bits in size. As previously described, in one embodiment of the disclosure, the vector mask register k0 cannot be used as a write mask; when the encoding that would normally indicate k0 is used for a write mask, it selects a hardwired write mask of 0xFFFF, effectively disabling write masking for that instruction.

General-purpose registers 1925 - in the embodiment illustrated, there are sixteen 64-bit general-purpose registers that are used along with the existing x86 addressing modes to address memory operands. These registers are referenced by the names RAX, RBX, RCX, RDX, RBP, RSI, RDI, RSP, and R8 through R15.

Scalar floating point stack register file (x87 stack) 1945, on which is aliased the MMX packed integer flat register file 1950 - in the embodiment illustrated, the x87 stack is an eight-element stack used to perform scalar floating-point operations on 32/64/80-bit floating point data using the x87 instruction set extension; while the MMX registers are used to perform operations on 64-bit packed integer data, as well as to hold operands for some operations performed between the MMX and XMM registers.

Alternative embodiments of the disclosure may use wider or narrower registers. Additionally, alternative embodiments of the disclosure may use more, fewer, or different register files and registers.

Exemplary Core Architectures, Processors, and Computer Architectures

Processor cores may be implemented in different ways, for different purposes, and in different processors. For instance, implementations of such cores may include: 1) a general purpose in-order core intended for general-purpose computing; 2) a high performance general purpose out-of-order core intended for general-purpose computing; 3) a special purpose core intended primarily for graphics and/or scientific (throughput) computing.
Implementations of different processors may include: 1) a CPU including one or more general purpose in-order cores intended for general-purpose computing and/or one or more general purpose out-of-order cores intended for general-purpose computing; and 2) a coprocessor including one or more special purpose cores intended primarily for graphics and/or scientific (throughput) computing. Such different processors lead to different computer system architectures, which may include: 1) the coprocessor on a separate chip from the CPU; 2) the coprocessor on a separate die in the same package as a CPU; 3) the coprocessor on the same die as a CPU (in which case, such a coprocessor is sometimes referred to as special purpose logic, such as integrated graphics and/or scientific (throughput) logic, or as special purpose cores); and 4) a system on a chip that may include on the same die the described CPU (sometimes referred to as the application core(s) or application processor(s)), the above described coprocessor, and additional functionality. Exemplary core architectures are described next, followed by descriptions of exemplary processors and computer architectures.

Exemplary Core Architectures

In-order and out-of-order core block diagram

Figure 2A, discussed above, is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the disclosure. Figure 2B, discussed above, is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the disclosure. The solid lined boxes in Figures 2A-B illustrate the in-order pipeline and in-order core, while the optional addition of the dashed lined boxes illustrates the register renaming, out-of-order issue/execution pipeline and core. Given that the in-order aspect is a subset of the out-of-order aspect, the out-of-order aspect will be described.

Specific Exemplary In-Order Core Architecture

Figures 20A-B illustrate a block diagram of a more specific exemplary in-order core architecture, which core would be one of several logic blocks (including other cores of the same type and/or different types) in a chip. The logic blocks communicate through a high-bandwidth interconnect network (e.g., a ring network) with some fixed function logic, memory I/O interfaces, and other necessary I/O logic, depending on the application.

Figure 20A is a block diagram of a single processor core, along with its connection to the on-die interconnect network 2002 and with its local subset of the Level 2 (L2) cache 2004, according to embodiments of the disclosure. In one embodiment, an instruction decode unit 2000 supports the x86 instruction set with a packed data instruction set extension. An L1 cache 2006 allows low-latency accesses to cache memory by the scalar and vector units.
While in one embodiment (to simplify the design), a scalar unit 2008 and a vector unit 2010 use separate register sets (respectively, scalar registers 2012 and vector registers 2014) and data transferred between them is written to memory and then read back in from a level 1 (L1) cache 2006, alternative embodiments of the disclosure may use a different approach (e.g., use a single register set or include a communication path that allows data to be transferred between the two register files without being written and read back).

The local subset of the L2 cache 2004 is part of a global L2 cache that is divided into separate local subsets, one per processor core. Each processor core has a direct access path to its own local subset of the L2 cache 2004. Data read by a processor core is stored in its L2 cache subset 2004 and can be accessed quickly, in parallel with other processor cores accessing their own local L2 cache subsets. Data written by a processor core is stored in its own L2 cache subset 2004 and is flushed from other subsets, if necessary. The ring network ensures coherency for shared data. The ring network is bi-directional to allow agents such as processor cores, L2 caches and other logic blocks to communicate with each other within the chip. Each ring data-path is 1012 bits wide per direction.

Figure 20B is an expanded view of part of the processor core in Figure 20A according to embodiments of the disclosure. Figure 20B includes an L1 data cache 2006A, part of the L1 cache 2006, as well as more detail regarding the vector unit 2010 and the vector registers 2014. Specifically, the vector unit 2010 is a 16-wide vector processing unit (VPU) (see the 16-wide ALU 2028), which executes one or more of integer, single-precision float, and double-precision float instructions. The VPU supports swizzling the register inputs with swizzle unit 2020, numeric conversion with numeric convert units 2022A-B, and replication with replication unit 2024 on the memory input. Write mask registers 2026 allow predicating resulting vector writes.

Figure 21 is a block diagram of a processor 2100 that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to embodiments of the disclosure. The solid lined boxes in Figure 21 illustrate a processor 2100 with a single core 2102A, a system agent 2110, and a set of one or more bus controller units 2116, while the optional addition of the dashed lined boxes illustrates an alternative processor 2100 with multiple cores 2102A-N, a set of one or more integrated memory controller unit(s) 2114 in the system agent unit 2110, and special purpose logic 2108.

Thus, different implementations of the processor 2100 may include: 1) a CPU with the special purpose logic 2108 being integrated graphics and/or scientific (throughput) logic (which may include one or more cores), and the cores 2102A-N being one or more general purpose cores (e.g., general purpose in-order cores, general purpose out-of-order cores, a combination of the two); 2) a coprocessor with the cores 2102A-N being a large number of special purpose cores intended primarily for graphics and/or scientific (throughput) computing; and 3) a coprocessor with the cores 2102A-N being a large number of general purpose in-order cores.
Thus, the processor 2100 may be a general-purpose processor, coprocessor or special-purpose processor, such as, for example, a network or communication processor, compression engine, graphics processor, GPGPU (general purpose graphics processing unit), a high-throughput many integrated core (MIC) coprocessor (including 30 or more cores), embedded processor, or the like. The processor may be implemented on one or more chips. The processor 2100 may be a part of and/or may be implemented on one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS.

The memory hierarchy includes one or more levels of cache within the cores, a set of one or more shared cache units 2106, and external memory (not shown) coupled to the set of integrated memory controller units 2114. The set of shared cache units 2106 may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof. While in one embodiment a ring based interconnect unit 2112 interconnects the integrated graphics logic 2108, the set of shared cache units 2106, and the system agent unit 2110/integrated memory controller unit(s) 2114, alternative embodiments may use any number of well-known techniques for interconnecting such units. In one embodiment, coherency is maintained between one or more cache units 2106 and cores 2102A-N.

In some embodiments, one or more of the cores 2102A-N are capable of multi-threading. The system agent 2110 includes those components coordinating and operating cores 2102A-N. The system agent unit 2110 may include, for example, a power control unit (PCU) and a display unit. The PCU may be or include logic and components needed for regulating the power state of the cores 2102A-N and the integrated graphics logic 2108. The display unit is for driving one or more externally connected displays.

The cores 2102A-N may be homogenous or heterogeneous in terms of architecture instruction set; that is, two or more of the cores 2102A-N may be capable of executing the same instruction set, while others may be capable of executing only a subset of that instruction set or a different instruction set.

Exemplary Computer Architectures

Figures 22-25 are block diagrams of exemplary computer architectures. Other system designs and configurations known in the arts for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, micro controllers, cell phones, portable media players, hand held devices, and various other electronic devices are also suitable. In general, a huge variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are suitable.

Referring now to Figure 22, shown is a block diagram of a system 2200 in accordance with one embodiment of the present disclosure. The system 2200 may include one or more processors 2210, 2215, which are coupled to a controller hub 2220. In one embodiment, the controller hub 2220 includes a graphics memory controller hub (GMCH) 2290 and an Input/Output Hub (IOH) 2250 (which may be on separate chips); the GMCH 2290 includes memory and graphics controllers to which are coupled memory 2240 and a coprocessor 2245; the IOH 2250 couples input/output (I/O) devices 2260 to the GMCH 2290.
Alternatively, one or both of the memory and graphics controllers are integrated within the processor (as described herein), the memory 2240 and the coprocessor 2245 are coupled directly to the processor 2210, and the controller hub 2220 is in a single chip with the IOH 2250. Memory 2240 may include an event-notify mode module 2240A, for example, to store code that when executed causes a processor to perform any method of this disclosure.

The optional nature of additional processors 2215 is denoted in Figure 22 with broken lines. Each processor 2210, 2215 may include one or more of the processing cores described herein and may be some version of the processor 2100.

The memory 2240 may be, for example, dynamic random access memory (DRAM), phase change memory (PCM), or a combination of the two. For at least one embodiment, the controller hub 2220 communicates with the processor(s) 2210, 2215 via a multi-drop bus, such as a frontside bus (FSB), a point-to-point interface such as QuickPath Interconnect (QPI), or a similar connection 2295.

In one embodiment, the coprocessor 2245 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like. In one embodiment, controller hub 2220 may include an integrated graphics accelerator.

There can be a variety of differences between the physical resources 2210, 2215 in terms of a spectrum of metrics of merit including architectural, microarchitectural, thermal, power consumption characteristics, and the like.

In one embodiment, the processor 2210 executes instructions that control data processing operations of a general type. Embedded within the instructions may be coprocessor instructions. The processor 2210 recognizes these coprocessor instructions as being of a type that should be executed by the attached coprocessor 2245. Accordingly, the processor 2210 issues these coprocessor instructions (or control signals representing coprocessor instructions) on a coprocessor bus or other interconnect to coprocessor 2245. Coprocessor(s) 2245 accept and execute the received coprocessor instructions.

Referring now to Figure 23, shown is a block diagram of a first more specific exemplary system 2300 in accordance with an embodiment of the present disclosure. As shown in Figure 23, multiprocessor system 2300 is a point-to-point interconnect system, and includes a first processor 2370 and a second processor 2380 coupled via a point-to-point interconnect 2350. Each of processors 2370 and 2380 may be some version of the processor 2100. In one embodiment of the disclosure, processors 2370 and 2380 are respectively processors 2210 and 2215, while coprocessor 2338 is coprocessor 2245. In another embodiment, processors 2370 and 2380 are respectively processor 2210 and coprocessor 2245.

Processors 2370 and 2380 are shown including integrated memory controller (IMC) units 2372 and 2382, respectively. Processor 2370 also includes, as part of its bus controller units, point-to-point (P-P) interfaces 2376 and 2378; similarly, second processor 2380 includes P-P interfaces 2386 and 2388. Processors 2370, 2380 may exchange information via a point-to-point (P-P) interface 2350 using P-P interface circuits 2378, 2388.
As shown in Figure 23, IMCs 2372 and 2382 couple the processors to respective memories, namely a memory 2332 and a memory 2334, which may be portions of main memory locally attached to the respective processors.

Processors 2370, 2380 may each exchange information with a chipset 2390 via individual P-P interfaces 2352, 2354 using point-to-point interface circuits 2376, 2394, 2386, 2398. Chipset 2390 may optionally exchange information with the coprocessor 2338 via a high-performance interface 2339. In one embodiment, the coprocessor 2338 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like.

A shared cache (not shown) may be included in either processor or outside of both processors, yet connected with the processors via a P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode.

Chipset 2390 may be coupled to a first bus 2316 via an interface 2396. In one embodiment, first bus 2316 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the present disclosure is not so limited.

As shown in Figure 23, various I/O devices 2314 may be coupled to first bus 2316, along with a bus bridge 2318 which couples first bus 2316 to a second bus 2320. In one embodiment, one or more additional processor(s) 2315, such as coprocessors, high-throughput MIC processors, GPGPUs, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processor, are coupled to first bus 2316. In one embodiment, second bus 2320 may be a low pin count (LPC) bus. Various devices may be coupled to the second bus 2320, including, for example, a keyboard and/or mouse 2322, communication devices 2327, and a storage unit 2328 such as a disk drive or other mass storage device which may include instructions/code and data 2330, in one embodiment. Further, an audio I/O 2324 may be coupled to the second bus 2320. Note that other architectures are possible. For example, instead of the point-to-point architecture of Figure 23, a system may implement a multi-drop bus or other such architecture.

Referring now to Figure 24, shown is a block diagram of a second more specific exemplary system 2400 in accordance with an embodiment of the present disclosure. Like elements in Figures 23 and 24 bear like reference numerals, and certain aspects of Figure 23 have been omitted from Figure 24 in order to avoid obscuring other aspects of Figure 24.

Figure 24 illustrates that the processors 2370, 2380 may include integrated memory and I/O control logic ("CL") 2372 and 2382, respectively. Thus, the CL 2372, 2382 include integrated memory controller units and include I/O control logic. Figure 24 illustrates that not only are the memories 2332, 2334 coupled to the CL 2372, 2382, but that I/O devices 2414 are also coupled to the control logic 2372, 2382. Legacy I/O devices 2415 are coupled to the chipset 2390.

Referring now to Figure 25, shown is a block diagram of a SoC 2500 in accordance with an embodiment of the present disclosure. Similar elements in Figure 21 bear like reference numerals. Also, dashed lined boxes are optional features on more advanced SoCs.
In Figure 25, an interconnect unit(s) 2502 is coupled to: an application processor 2510 which includes a set of one or more cores 2102A-N and shared cache unit(s) 2106; a system agent unit 2110; a bus controller unit(s) 2116; an integrated memory controller unit(s) 2114; a set of one or more coprocessors 2520 which may include integrated graphics logic, an image processor, an audio processor, and a video processor; a static random access memory (SRAM) unit 2530; a direct memory access (DMA) unit 2532; and a display unit 2540 for coupling to one or more external displays. In one embodiment, the coprocessor(s) 2520 include a special-purpose processor, such as, for example, a network or communication processor, compression engine, GPGPU, a high-throughput MIC processor, embedded processor, or the like.

Embodiments (e.g., of the mechanisms) disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches. Embodiments of the disclosure may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.

Program code, such as code 2330 illustrated in Figure 23, may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices, in known fashion. For purposes of this application, a processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor.

The program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired. In fact, the mechanisms described herein are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language.

One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein.
Such representations, known as "IP cores," may be stored on a tangible, machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.

Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks, any other type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs), static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), phase change memory (PCM), magnetic or optical cards, or any other type of media suitable for storing electronic instructions.

Accordingly, embodiments of the disclosure also include non-transitory, tangible machine-readable media containing instructions or containing design data, such as Hardware Description Language (HDL), which defines structures, circuits, apparatuses, processors and/or system features described herein. Such embodiments may also be referred to as program products.

Emulation (including binary translation, code morphing, etc.)

In some cases, an instruction converter may be used to convert an instruction from a source instruction set to a target instruction set. For example, the instruction converter may translate (e.g., using static binary translation, dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction to one or more other instructions to be processed by the core. The instruction converter may be implemented in software, hardware, firmware, or a combination thereof. The instruction converter may be on processor, off processor, or part on and part off processor.

Figure 26 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the disclosure. In the illustrated embodiment, the instruction converter is a software instruction converter, although alternatively the instruction converter may be implemented in software, firmware, hardware, or various combinations thereof. Figure 26 shows that a program in a high level language 2602 may be compiled using an x86 compiler 2604 to generate x86 binary code 2606 that may be natively executed by a processor with at least one x86 instruction set core 2616. The processor with at least one x86 instruction set core 2616 represents any processor that can perform substantially the same functions as an Intel® processor with at least one x86 instruction set core by compatibly executing or otherwise processing (1) a substantial portion of the instruction set of the Intel® x86 instruction set core or (2) object code versions of applications or other software targeted to run on an Intel® processor with at least one x86 instruction set core, in order to achieve substantially the same result as an Intel® processor with at least one x86 instruction set core.
The x86 compiler 2604 represents a compiler that is operable to generate x86 binary code 2606 (e.g., object code) that can, with or without additional linkage processing, be executed on the processor with at least one x86 instruction set core 2616. Similarly, Figure 26 shows that the program in the high level language 2602 may be compiled using an alternative instruction set compiler 2608 to generate alternative instruction set binary code 2610 that may be natively executed by a processor without at least one x86 instruction set core 2614 (e.g., a processor with cores that execute the MIPS instruction set of MIPS Technologies of Sunnyvale, CA and/or that execute the ARM instruction set of ARM Holdings of Sunnyvale, CA). The instruction converter 2612 is used to convert the x86 binary code 2606 into code that may be natively executed by the processor without an x86 instruction set core 2614. This converted code is not likely to be the same as the alternative instruction set binary code 2610 because an instruction converter capable of this is difficult to make; however, the converted code will accomplish the general operation and be made up of instructions from the alternative instruction set. Thus, the instruction converter 2612 represents software, firmware, hardware, or a combination thereof that, through emulation, simulation or any other process, allows a processor or other electronic device that does not have an x86 instruction set processor or core to execute the x86 binary code 2606.

Examples

Exemplary embodiments of apparatuses, methods, and non-transitory machine readable media are detailed as follows:

Example 1. An apparatus comprising: a decoder to decode a first instruction, the first instruction having at least a first field for a first opcode to indicate that execution circuitry is to set a first flag in a first register to indicate a mode of operation that is to cause a redirection of program flow to an event handler upon the occurrence of an event; and execution circuitry to execute the decoded first instruction to set the first flag in the first register to indicate the mode of operation and to store an address of an event handler in a second register.

Example 2. The apparatus of example 1, wherein the first instruction has a second field for the address of the event handler.

Example 3. The apparatus of example 1, further comprising: a cache, an entry in the cache including a second flag that, when set, identifies an entry that, upon eviction, causes the first flag in the first register to be cleared and the second flag in the entry to be cleared.

Example 4. The apparatus of example 1, the decoder to decode a second instruction, the second instruction having a second field for a second opcode to indicate that the execution circuitry is to clear the first flag in the first register, and the execution circuitry is to execute the second decoded instruction to clear the first flag in the first register.

Example 5. The apparatus of example 1, the decoder to decode a second instruction, the second instruction having a second field for a second opcode to indicate that the execution circuitry is to store a value stored in a first instruction pointer register to a location in a memory, and the execution circuitry is to execute the second decoded instruction to store the value stored in the first instruction pointer register to the location in the memory.
Example 6. The apparatus of example 1, the decoder to decode a second instruction, the second instruction having a second field for a second opcode to indicate that execution circuitry is to load a cache identified with a value with data at a location in a memory, and the execution circuitry to execute the second decoded instruction to load the cache identified with the value with data at the location in the memory.

Example 7. The apparatus of example 1, the execution circuitry to copy an address in a first instruction pointer register into a second instruction pointer register and to copy the address of the event handler to the first instruction pointer register.

Example 8. A method comprising: decoding a first instruction, the first instruction having a first field for a first opcode that indicates that execution circuitry is to set a first flag in a first register that indicates a mode of operation that redirects program flow to an event handler upon the occurrence of an event; and executing the decoded first instruction to set the first flag in the first register that indicates the mode of operation and to store an address of an event handler in a second register.

Example 9. The method of example 8, wherein the first instruction has a second field for the address of the event handler.

Example 10. The method of example 8, further comprising: setting a second flag in an entry in a cache; and clearing the first flag in the first register and the second flag upon eviction of the entry from the cache.

Example 11. The method of example 8, further comprising: decoding a second instruction, the second instruction having a second field for a second opcode that indicates that execution circuitry is to clear the first flag in the first register; and executing the second decoded instruction to clear the first flag in the first register.

Example 12. The method of example 8, further comprising: decoding a second instruction, the second instruction having a second field for a second opcode that indicates that execution circuitry is to store a value stored in a first instruction pointer register to a location in a memory; and executing the second decoded instruction to store the value stored in the first instruction pointer register to the location in the memory.

Example 13. The method of example 8, further comprising: decoding a second instruction, the second instruction having a second field for a second opcode that indicates that execution circuitry is to load a cache identified with a value with data at a location in a memory; and executing the second decoded instruction to load the cache identified with the value with data at the location in the memory.

Example 14. The method of example 8, further comprising: copying an address in a first instruction pointer register into a second instruction pointer register; and copying the address of the event handler to the first instruction pointer register.
Example 15. A non-transitory machine-readable medium storing at least one instruction, which when executed causes a processor to perform a method, the method comprising: decoding a first instruction, the first instruction having a first field for a first opcode that indicates that execution circuitry is to set a first flag in a first register that indicates a mode of operation that redirects program flow to an event handler upon the occurrence of an event; and executing the decoded first instruction to set the first flag in the first register that indicates the mode of operation and to store an address of an event handler in a second register.

Example 16. The non-transitory machine-readable medium of example 15, wherein the first instruction has a second field for the address of the event handler.

Example 17. The non-transitory machine-readable medium of example 15, further comprising: setting a second flag in an entry in a cache; and clearing the first flag in the first register and the second flag upon eviction of the entry from the cache.

Example 18. The non-transitory machine-readable medium of example 15, further comprising: decoding a second instruction, the second instruction having a second field for a second opcode that indicates that execution circuitry is to clear the first flag in the first register; and executing the second decoded instruction to clear the first flag in the first register.

Example 19. The non-transitory machine-readable medium of example 15, further comprising: decoding a second instruction, the second instruction having a second field for a second opcode that indicates that execution circuitry is to store a value stored in a first instruction pointer register to a location in a memory; and executing the second decoded instruction to store the value stored in the first instruction pointer register to the location in the memory.

Example 20. The non-transitory machine-readable medium of example 15, further comprising: decoding a second instruction, the second instruction having a second field for a second opcode that indicates that execution circuitry is to load a cache identified with a value with data at a location in a memory; and executing the second decoded instruction to load the cache identified with the value with data at the location in the memory.

Example 21. A processor comprising:
a decoder to decode an instruction into a decoded instruction, the instruction comprising a first field that indicates an instruction pointer to a user-level event handler; and
an execution unit to execute the decoded instruction to, after a swap of an instruction pointer that indicates where an event occurred from a current instruction pointer register into a user-level event handler pointer register, push the instruction pointer that indicates where the event occurred onto call stack storage, and change a current instruction pointer in the current instruction pointer register to the instruction pointer to the user-level event handler.

Example 22. The processor of Example 21, wherein the instruction further comprises a second field that indicates a number of bits by which to change a stack pointer to the call stack storage, and the execution unit is to execute the decoded instruction to also change the stack pointer by the number of bits.
Example 23. The processor of Example 21, wherein the execution unit is to execute the decoded instruction to also change a stack pointer to the call stack storage to protect a stack red zone from being overwritten by the instruction pointer that indicates where the event occurred.

Example 24. The processor of Example 21, wherein the execution unit is to execute the decoded instruction only when the processor is not in an event-notify mode.

Example 25. The processor of Example 24, wherein the event-notify mode is set in an event-notify status register.

Example 26. The processor of Example 21, wherein the execution unit is to execute the decoded instruction to also, after the swap of the instruction pointer that indicates where the event occurred from the current instruction pointer register into the user-level event handler pointer register, push the instruction pointer that indicates where the event occurred onto shadow stack storage.

Example 27. The processor of Example 26, wherein the shadow stack storage is not user-level writable.

Example 28. The processor of Example 26, wherein, on completion of execution of the user-level event handler, the processor is to pull a first instruction pointer from the call stack storage and a second instruction pointer from the shadow stack storage, and execute starting from the first instruction pointer only when the first instruction pointer and the second instruction pointer match.

Example 29. A method comprising:
decoding an instruction into a decoded instruction with a decoder of a processor, the instruction comprising a first field that indicates an instruction pointer to a user-level event handler; and
executing the decoded instruction with an execution unit of the processor to, after a swap of an instruction pointer that indicates where an event occurred from a current instruction pointer register into a user-level event handler pointer register, push the instruction pointer that indicates where the event occurred onto call stack storage, and change a current instruction pointer in the current instruction pointer register to the instruction pointer to the user-level event handler.

Example 30. The method of Example 29, wherein the instruction further comprises a second field that indicates a number of bits by which to change a stack pointer to the call stack storage, and the executing the decoded instruction with the execution unit is to also change the stack pointer by the number of bits.

Example 31. The method of Example 29, wherein the executing the decoded instruction with the execution unit is also to change a stack pointer to the call stack storage to protect a stack red zone from being overwritten by the instruction pointer that indicates where the event occurred.

Example 32. The method of Example 29, wherein the executing the decoded instruction with the execution unit is only when the processor is not in an event-notify mode.

Example 33. The method of Example 32, further comprising setting the event-notify mode in an event-notify status register of the processor.

Example 34. The method of Example 29, wherein the executing the decoded instruction with the execution unit is also to, after the swap of the instruction pointer that indicates where the event occurred from the current instruction pointer register into the user-level event handler pointer register, push the instruction pointer that indicates where the event occurred onto shadow stack storage.

Example 35. The method of Example 34, wherein the shadow stack storage is not user-level writable.
The method of Example 34, further comprising, on completion of execution of the user-level event handler, pulling, by the processor, a first instruction pointer from the call stack storage and a second instruction pointer from the shadow stack storage, and executing starting from the first instruction pointer only when the first instruction pointer and the second instruction pointer match.Example 37. A non-transitory machine readable medium that stores code that when executed by a machine causes the machine to perform a method comprising:decoding an instruction into a decoded instruction with a decoder of a processor, the instruction comprising a first field that indicates an instruction pointer to a user-level event handler; andexecuting the decoded instruction with an execution unit of the processor to, after a swap of an instruction pointer that indicates where an event occurred from a current instruction pointer register into a user-level event handler pointer register, push the instruction pointer that indicates where the event occurred onto call stack storage, and change a current instruction pointer in the current instruction pointer register to the instruction pointer to the user-level event handler.Example 38. The non-transitory machine readable medium of Example 37, wherein the instruction further comprises a second field that indicates a number of bits by which to change a stack pointer to the call stack storage, and the executing the decoded instruction with the execution unit is to also change the stack pointer by the number of bits.Example 39. The non-transitory machine readable medium of Example 37, wherein the executing the decoded instruction with the execution unit is also to change a stack pointer to the call stack storage to protect a stack red zone from being overwritten by the instruction pointer that indicates where the event occurred.Example 40. The non-transitory machine readable medium of Example 37, wherein the executing the decoded instruction with the execution unit is only when the processor is not in an event-notify mode.Example 41. The non-transitory machine readable medium of Example 40, further comprising setting the event-notify mode in an event-notify status register of the processor.Example 42. The non-transitory machine readable medium of Example 37, wherein the executing the decoded instruction with the execution unit is also to, after the swap of the instruction pointer that indicates where the event occurred from the current instruction pointer register into the user-level event handler pointer register, push the instruction pointer that indicates where the event occurred onto shadow stack storage.Example 43. The non-transitory machine readable medium of Example 42, wherein the shadow stack storage is not user-level writable.Example 44. The non-transitory machine readable medium of Example 42, further comprising, on completion of execution of the user-level event handler, pulling, by the processor, a first instruction pointer from the call stack storage and a second instruction pointer from the shadow stack storage, and executing starting from the first instruction pointer only when the first instruction pointer and the second instruction pointer match.In yet another embodiment, an apparatus comprises a data storage device that stores code that when executed by a hardware processor causes the hardware processor to perform any method disclosed herein. An apparatus may be as described in the detailed description. 
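The call-stack/shadow-stack matching that Examples 26-28 and 34-36 describe can be sketched in a few lines of C. This is only a software model of the semantics, not the hardware; every identifier below is invented for the sketch and none is taken from the examples.

```c
#include <stdbool.h>
#include <stdint.h>

/* Software model of the delivery/return semantics of Examples 21, 26,
 * and 28; all names are hypothetical. */
struct cpu_model {
    uint64_t rip;              /* current instruction pointer register */
    uint64_t uev_handler_reg;  /* user-level event handler pointer register */
    uint64_t call_stack[64];
    uint64_t shadow_stack[64]; /* not user-level writable (Example 27) */
    int csp, ssp;              /* stack indices */
};

/* Examples 21/26: the event has already swapped the interrupted RIP into
 * the handler pointer register; the instruction pushes that RIP onto both
 * the call stack and the shadow stack, then jumps to the handler. */
static void deliver_event(struct cpu_model *c, uint64_t handler_ip)
{
    uint64_t where_event_occurred = c->uev_handler_reg;
    c->call_stack[c->csp++] = where_event_occurred;
    c->shadow_stack[c->ssp++] = where_event_occurred;
    c->rip = handler_ip;
}

/* Example 28: on handler completion, pull both copies and resume only
 * when they match; a mismatch indicates a tampered call stack. */
static bool event_return(struct cpu_model *c)
{
    uint64_t from_call = c->call_stack[--c->csp];
    uint64_t from_shadow = c->shadow_stack[--c->ssp];
    if (from_call != from_shadow)
        return false;
    c->rip = from_call;
    return true;
}
```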
A method may be as described in the detailed description.In the foregoing specification, embodiments of the invention have been described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.Flow diagrams as illustrated herein provide examples of sequences of various process actions. The flow diagrams can indicate operations to be executed by a software or firmware routine, as well as physical operations. In one embodiment, a flow diagram can illustrate the state of a finite state machine (FSM), which can be implemented in hardware and/or software. Although shown in a particular sequence or order, unless otherwise specified, the order of the actions can be modified. Thus, the illustrated embodiments should be understood only as an example, and the process can be performed in a different order, and some actions can be performed in parallel. Additionally, one or more actions can be omitted in various embodiments; thus, not all actions are required in every embodiment. Other process flows are possible.Embodiments of the invention may include various steps, which have been described above. The steps may be embodied in machine-executable instructions which may be used to cause a general-purpose or special-purpose processor to perform the steps. Alternatively, these steps may be performed by specific hardware components that contain hardwired logic for performing the steps, or by any combination of programmed computer components and custom hardware components.As described herein, instructions may refer to specific configurations of hardware such as application specific integrated circuits (ASICs) configured to perform certain operations or having a predetermined functionality or software instructions stored in memory embodied in a non-transitory computer readable medium. Thus, the techniques shown in the Figures can be implemented using code and data stored and executed on one or more electronic devices (e.g., an end station, a network element, etc.). Such electronic devices store and communicate (internally and/or with other electronic devices over a network) code and data using computer machine-readable media, such as non-transitory computer machine-readable storage media (e.g., magnetic disks; optical disks; random access memory; read only memory; flash memory devices; phase-change memory) and transitory computer machine-readable communication media (e.g., electrical, optical, acoustical or other form of propagated signals - such as carrier waves, infrared signals, digital signals, etc.). In addition, such electronic devices typically include a set of one or more processors coupled to one or more other components, such as one or more storage devices (non-transitory machine-readable storage media), user input/output devices (e.g., a keyboard, a touchscreen, and/or a display), and network connections. The coupling of the set of processors and other components is typically through one or more busses and bridges (also termed bus controllers). The storage device and signals carrying the network traffic respectively represent one or more machine-readable storage media and machine-readable communication media. 
Thus, the storage device of a given electronic device typically stores code and/or data for execution on the set of one or more processors of that electronic device. Of course, one or more parts of an embodiment of the invention may be implemented using different combinations of software, firmware, and/or hardware. Throughout this detailed description, for the purposes of explanation, numerous specific details were set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that the invention may be practiced without some of these specific details. In certain instances, well known structures and functions were not described in elaborate detail in order to avoid obscuring the subject matter of the present invention. Accordingly, the scope and spirit of the invention should be judged in terms of the claims which follow. |
A computer system and a method are provided that reduce the amount of time and computing resources that are required to perform a hardware table walk (HWTW) in the event that a translation lookaside buffer (TLB) miss occurs. If a TLB miss occurs when performing a stage 2 (S2) HWTW to find the physical address (PA) at which a stage 1 (S1) page table is stored, the MMU uses the intermediate physical address (IPA) to predict the corresponding PA, thereby avoiding the need to perform any of the S2 table lookups. This greatly reduces the number of lookups that need to be performed when performing these types of HWTW read transactions, which greatly reduces processing overhead and performance penalties associated with performing these types of transactions. |
CLAIMSWhat is claimed is:1. A computer system that reduces processing overhead associated with performing a hardware table walk (HWTW), the system comprising:at least one central processing unit (CPU), the CPU running a host operating system (OS) and a hypervisor, the hypervisor controlling execution of at least a first guest OS on the CPU, the hypervisor running at least a first virtual machine (VM) associated with the first guest OS;a physical memory in communication with the CPU, the physical memory having physical memory locations that are addressable by physical addresses (PAs), wherein at least one page table is stored at physical memory locations of the physical memory, the page table comprising page table entries corresponding to mappings for mapping an intermediate physical address (IPA) into an actual PA of the physical memory;at least one translation lookaside buffer (TLB) that stores a subset of the page table entries; andat least one memory management unit (MMU) in communication with the CPU, with the physical memory and with the TLB, wherein the MMU determines whether or not page table entries associated with an IPA are stored in the TLB, wherein if page table entries associated with the IPA are not stored in the TLB, then a TLB miss has occurred, and wherein if a TLB miss occurs, the MMU predicts a PA of the physical memory at which data associated with the IPA is stored.2. The computer system of claim 1, wherein the MMU predicts the PA as a function, f, of the IPA as: PA = f(IPA).3. The computer system of claim 2, wherein the function, f, is selected from a plurality of functions, and wherein each function of said plurality of functions provides a one-to-one mapping between the IPA and the predicted PA.4. The computer system of claim 3, wherein the function, f, is a polynomial.5. The computer system of claim 3, wherein the function, f, is a unity function such that PA = IPA.6. The computer system of claim 3, wherein the hypervisor is running at least a second VM associated with a digital rights manager (DRM) computer program, and wherein the function, f, is IPA + Offset_function(VMID), where VMID is a unique identifier across the first and second VMs that identifies the VM associated with the TLB miss, and where Offset_function is a function having an output that is selected based on a particular offset value associated with the VMID of the first or second VM that was using the IPA to access memory when the TLB miss occurred, and wherein the predicted PA is predicted as:PA = IPA + Offset_function(VMID).7. The computer system of claim 3, wherein the hypervisor is running at least a second VM associated with a digital rights manager (DRM) computer program, and wherein the function, f, is IPA XOR Extended_VMID, where XOR represents an exclusive OR operation and Extended_VMID is an extended VMID, and wherein the predicted PA is predicted as:PA = IPA XOR Extended_VMID.8. The computer system of claim 1, wherein the computer system is part of a mobile device.9. The computer system of claim 8, wherein the mobile device is a mobile phone.10. The computer system of claim 9, wherein the mobile phone is a smart phone.11. 
A method of reducing processing overhead associated with performing a hardware table walk (HWTW), the method comprising:providing at least one central processing unit (CPU), at least one physical memory, at least one translation lookaside buffer (TLB), and at least one memory management unit (MMU), the CPU, the physical memory, the TLB, and the MMU being in communication with one another, the CPU running a host operating system (OS) and a hypervisor, the hypervisor controlling execution of at least a first guest OS on the CPU, the hypervisor running at least a first virtual machine (VM) associated with the first guest OS, the physical memory having physical memory locations that are addressable by physical addresses (PAs), wherein at least one page table is stored at physical memory locations of the physical memory, the page table comprising page table entries corresponding to mappings for mapping an intermediate physical address (IPA) into an actual PA of the physical memory, the TLB storing a subset of the page table entries; andin the MMU:determining whether or not page table entries associated with an IPA are stored in the TLB,if a determination is made that page table entries associated with the IPA are not stored in the TLB, then deciding that a TLB miss has occurred, andif a decision was made that a TLB miss has occurred, predicting a PA of the physical memory at which data associated with the IPA is stored.12. The method of claim 11, wherein the MMU predicts the PA as a function, f, of the IPA as: PA = f(IPA).13. The method of claim 12, wherein the function, f, is selected from a plurality of functions, and wherein each function of said plurality of functions provides a one-to-one mapping between the IPA and the predicted PA.14. The method of claim 13, wherein the function, f, is a polynomial.15. The method of claim 13, wherein the function, f, is a unity function such that PA = IPA.16. The method of claim 13, wherein the hypervisor is running at least a second VM associated with a digital rights manager (DRM) computer program, and wherein the function, f, is IPA + Offset_function(VMID), where VMID is a unique identifier across the first and second VMs that identifies the VM associated with the TLB miss, and where Offset_function is a function having an output that is selected based on a particular offset value associated with the VMID of the first or second VM that was using the IPA to access memory when the TLB miss occurred, and wherein the predicted PA is predicted as:PA = IPA + Offset_function(VMID).17. The method of claim 13, wherein the hypervisor is running at least a second VM associated with a digital rights manager (DRM) computer program, and wherein the function, f, is IPA XOR Extended_VMID, where XOR represents an exclusive OR operation and Extended_VMID is an extended VMID, and wherein the predicted PA is predicted as:PA = IPA XOR Extended_VMID.18. The method of claim 13, wherein the hypervisor controls execution of at least first and second guest OSs on the CPU, and wherein the hypervisor is also running at least a second VM associated with the second guest OS, and wherein the function, f, that is used by the MMU to predict PAs predicts PAs that are in a first range of PAs for a miss that is associated with the first VM and predicts PAs that are in a second range of PAs for a miss that is associated with the second VM, and wherein the first and second ranges of PAs are different from one another.19. 
The method of claim 13, wherein the method is performed by the computer system of a mobile device.20. The method of claim 19, wherein the mobile device is a mobile phone.21. The method of claim 20, wherein the mobile phone is a smart phone.22. A non-transitory computer-readable medium (CRM) having a computer code stored thereon for execution by one or more processors for reducing processing overhead associated with performing a hardware table walk (HWTW), the computer code comprising:a first code portion for determining whether or not page table entries associated with an intermediate physical address (IPA) are stored in a translation lookaside buffer (TLB), wherein if a determination is made that page table entries associated with the IPA are not stored in the TLB, then the first code portion decides that a TLB miss has occurred; anda second code portion for predicting a physical address (PA) of a physical memory at which data associated with the IPA is stored if the first code portion decides that a TLB miss has occurred.23. The non-transitory CRM of claim 22, wherein the second code portion predicts the PA as a function, f, of the IPA as: PA = f(IPA).24. The non-transitory CRM of claim 23, wherein the second code portion selects the function, f, from a plurality of functions, and wherein each function of said plurality of functions provides a one-to-one mapping between the IPA and the predicted PA.25. The non-transitory CRM of claim 24, wherein the function, f, is a polynomial.26. The non-transitory CRM of claim 24, wherein the function, f, is a unity function such that PA = IPA.27. The non-transitory CRM of claim 24, wherein the function, f, is IPA + Offset_function(VMID), where VMID is a unique identifier across first and second virtual machines (VMs) that identifies one of the first and second VMs as the VM associated with the TLB miss, and where Offset_function is a function having an output that is selected based on a particular offset value associated with the VMID of the first or second VM that was using the IPA to access memory when the TLB miss occurred, and wherein the predicted PA is predicted as:PA = IPA + Offset_function(VMID).28. The non-transitory CRM of claim 24, wherein the function, f, is IPA XOR Extended_VMID, where XOR represents an exclusive OR operation and Extended_VMID is an extended VMID, and wherein the predicted PA is predicted as: PA = IPA XOR Extended_VMID. |
METHODS AND SYSTEMS FOR REDUCING THE AMOUNT OF TIME AND COMPUTING RESOURCES THAT ARE REQUIRED TO PERFORM A HARDWARE TABLE WALKTECHNICAL FIELD OF THE INVENTION[0001] The invention relates to computer systems, and more particularly, to computer systems and methods for use in a computer system for reducing the amount of time and computing resources that are required to perform a hardware table walk (HWTW).BACKGROUND OF THE INVENTION[0002] Modern computer systems use memory management units (MMUs) to manage writing data to and reading data from one or more physical memory devices, such as solid state memory devices, for example. The MMU of a computer system provides a virtual memory to the central processing unit (CPU) of the computer system that allows the CPU to run each application program in its own dedicated, contiguous virtual memory address space rather than having all of the application programs share the physical memory address space, which is often fragmented, or non-contiguous. The purpose of the MMU is to translate virtual memory addresses (VAs) into physical memory addresses (PAs) for the CPU. The CPU indirectly reads and writes PAs by directly reading and writing VAs to the MMU, which translates them into PAs and then writes or reads the PAs.[0003] In order to perform the translations, the MMU accesses page tables stored in the system main memory. The page tables are made up of page table entries. The page table entries are information that is used by the MMU to map the VAs into PAs. The MMU typically includes a translation lookaside buffer (TLB), which is a cache memory element used to cache recently used mappings. When the MMU needs to translate a VA into a PA, the MMU first checks the TLB to determine whether there is a match for the VA. If so, the MMU uses the mapping found in the TLB to compute the PA and then accesses the PA (i.e., reads or writes the PA). This is known as a TLB "hit." If the MMU does not find a match in the TLB, this is known as a TLB "miss."[0004] In the event of a TLB miss, the MMU performs what is known as a hardware table walk (HWTW). A HWTW is a time-consuming and computationally-expensive process that involves performing a "table walk" to find the corresponding page table in the MMU and then reading multiple locations in the page table to find the corresponding VA-to-PA address mapping. The MMU then uses the mapping to compute the corresponding PA and writes the mapping back to the TLB.[0005] In computer systems that implement operating system (OS) virtualization, a virtual memory monitor (VMM), also commonly referred to as a hypervisor, is interposed between the hardware of the computer system and the system OS of the computer system. The hypervisor executes in privileged mode and is capable of hosting one or more guest high-level OSs. In such systems, application programs running on the OSs use VAs of a first layer of virtual memory to address memory, and the OSs running on the hypervisor use intermediate physical addresses (IPAs) of a second layer of virtual memory to address memory. In the MMU, stage 1 (S1) translations are performed to translate each VA into an IPA and stage 2 (S2) translations are performed to translate each IPA into a PA.[0006] If a TLB miss occurs when performing such translations, a multi-level, two-dimensional (2-D) HWTW is performed to obtain the table entries that are needed to compute the corresponding IPA and PA. 
Performing these multi-level, 2-D HWTWs can result in a significant amount of computational overhead for the MMU, which typically results in performance penalties.[0007] Fig. 1 is a pictorial illustration of a known three-level, 2-D HWTW that is performed when a TLB miss occurs while performing a read transaction. The HWTW shown in Fig. 1 represents a worst case scenario for a three-level, 2-D HWTW that requires the performance of fifteen table lookups to obtain the PA where the data is stored in physical memory. For this example, the MMU of the computer system is running a hypervisor that is hosting at least one guest high-level OS (HLOS), which, in turn, is running at least one application program. In such a configuration, the memory that is being allocated by the guest HLOS is not the actual physical memory of the system, but instead is the aforementioned intermediate physical memory. The hypervisor allocates actual physical memory. Therefore, each VA is translated into an IPA, which is then translated into a PA of the actual physical memory where the data being read is actually stored.[0008] The process begins with the MMU receiving a S1 page global directory (PGD) IPA 2. For this worst case scenario example, it will be assumed that a TLB miss occurs when the MMU checks the TLB for a match. Because of the miss, the MMU must perform a HWTW. The HWTW involves performing three S2 table lookups 3, 4 and 5 to obtain the mapping needed to convert the IPA 2 into a PA and one additional lookup 6 to read the PA. The table lookups 3, 4 and 5 involve reading the S2 PGD, page middle directory (PMD) and page table entry (PTE), respectively. Reading the PA at lookup 6 provides the MMU with a S1 PMD IPA 7. For this worst case scenario example, it will be assumed that a TLB miss occurs when the MMU checks the TLB for a match with the S1 PMD IPA 7. Because of the miss, the MMU must perform another HWTW. The HWTW involves performing three S2 table lookups 8, 9 and 11 to obtain the mapping needed to convert the S1 PMD IPA 7 into a PA and one additional lookup 12 to read the PA. The table lookups 8, 9 and 11 involve reading the S2 PGD, PMD and PTE, respectively. Reading the PA at lookup 12 provides the MMU with a S1 PTE IPA 13.[0009] For this worst case scenario example, it will be assumed that a TLB miss occurs when the MMU checks the TLB for a match with the S1 PTE IPA 13. Because of the miss, the MMU must perform another HWTW. The HWTW involves performing three S2 table lookups 14, 15 and 16 to obtain the mapping needed to convert the S1 PTE IPA 13 into a PA and one additional lookup 17 to read the PA. The table lookups 14, 15 and 16 involve reading the S2 PGD, PMD and PTE, respectively. Reading the PA at lookup 17 provides the MMU with the actual IPA 18. For this worst case scenario example, it will be assumed that a TLB miss occurs when the MMU checks the TLB for a match with the actual IPA 18. Because of the miss, the MMU must perform another HWTW. The HWTW involves performing three S2 table lookups 19, 21 and 22 to obtain the mapping needed to convert the actual IPA 18 into a PA. The table lookups 19, 21 and 22 involve reading the S2 PGD, PMD and PTE, respectively. The PA is then read to obtain the corresponding read data. 
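The lookup arithmetic just traced can be summarized in a short sketch. The sketch is illustrative only, since the patent describes hardware rather than software; the function names are hypothetical, and the formulas simply restate the walkthrough above and the prediction scheme described later with reference to Fig. 4.

```c
#include <stdio.h>

/* Worst-case lookups for an n-level S1 table with an m-level S2 table:
 * (n + 1) S2 walks of m lookups each, plus n S1 table reads. */
static int hwtw_worst_case_lookups(int n, int m)
{
    return (n + 1) * m + n;   /* n = m = 3 gives 12 + 3 = 15 */
}

/* With prediction, each S1 level collapses to a single predicted
 * conversion, and only the final IPA still takes a full S2 walk. */
static int hwtw_predicted_lookups(int n, int m)
{
    return n + m;             /* n = m = 3 gives 6 */
}

int main(void)
{
    printf("worst case: %d, with prediction: %d\n",
           hwtw_worst_case_lookups(3, 3), hwtw_predicted_lookups(3, 3));
    return 0;
}
```

Running the sketch prints 15 and 6, matching the counts stated in the next paragraph and the 60% reduction noted later with reference to Fig. 4.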
[0010] Thus, it can be seen that in the worst case scenario for a three-level, 2-D HWTW, twelve S2 table lookups and three S1 table lookups are performed, which is a large amount of computational overhead that consumes a large amount of time and results in performance penalties. A variety of techniques and architectures have been used to reduce the amount of time and processing overhead that is involved in performing HWTWs, including, for example, increasing the size of the TLB, using multiple TLBs, using flat nested page tables, using shadow paging or speculative shadow paging, and using page walk cache. While all of these techniques and architectures are capable of reducing processing overhead associated with performing HWTWs, they often result in an increase in processing overhead somewhere else in the computer system.[0011] Accordingly, a need exists for computer systems and methods that reduce the amount of time and computing resources that are required to perform a HWTW.SUMMARY OF THE INVENTION[0012] The invention is directed to a computer system and a method for use in a computer system for reducing the amount of time and computing resources that are required to perform a HWTW. The computer system comprises at least one central processing unit (CPU), at least one physical memory, at least one TLB, and at least one MMU. The CPU runs a host OS and a hypervisor. The hypervisor controls execution of at least a first guest OS on the CPU. The hypervisor runs at least a first VM associated with the first guest OS. The physical memory has physical memory locations that are addressable by PAs. At least one page table is stored at physical memory locations of the physical memory. The page table comprises page table entries corresponding to mappings for mapping an IPA into an actual PA of the physical memory. The TLB stores a subset of the page table entries. When a memory access is being performed, the MMU determines whether or not page table entries associated with an IPA are stored in the TLB. If page table entries associated with the IPA are not stored in the TLB, then a TLB miss has occurred. If a TLB miss occurs, the MMU predicts a PA of the physical memory at which data associated with the IPA is stored, thereby obviating the need to perform a HWTW to compute the PA.[0013] The method comprises:in the MMU:determining whether or not page table entries associated with an IPA are stored in the TLB;if a determination is made that page table entries associated with the IPA are not stored in the TLB, then deciding that a TLB miss has occurred; andif a decision was made that a TLB miss has occurred, predicting a PA of the physical memory at which data associated with the IPA is stored.[0014] The invention also provides a computer-readable medium (CRM) that stores computer code for execution by one or more processors for reducing processing overhead associated with performing a HWTW. The computer code comprises first and second code portions. The first code portion determines whether or not page table entries associated with an IPA are stored in the TLB. 
If a determination is made that page table entries associated with the IPA are not stored in the TLB, then the first code portion decides that a TLB miss has occurred. The second code portion predicts a PA of physical memory at which data associated with the IPA is stored if the first code portion decides that a TLB miss has occurred.[0015] These and other features and advantages will become apparent from the following description, drawings and claims.BRIEF DESCRIPTION OF THE DRAWINGS[0016] Fig. 1 is a pictorial illustration of a known three-level, 2-D HWTW that is performed when a TLB miss occurs while performing a read transaction.[0017] Fig. 2 illustrates a block diagram of a computer system in accordance with an illustrative, or exemplary, embodiment configured to perform the method for reducing the amount of time and computing resources that are required to perform a HWTW.[0018] Fig. 3 is a flowchart that represents the method, in accordance with an illustrative embodiment, performed by the hypervisor shown in Fig. 2 to reduce the amount of time and processing overhead that is required to perform a HWTW read transaction.[0019] Fig. 4 is a pictorial diagram that demonstrates the manner in which a HWTW read transaction is performed using the method represented by the flowchart shown in Fig. 3 in accordance with an illustrative embodiment.[0020] Fig. 5 is a block diagram of a hardware predictor in accordance with an illustrative embodiment that performs the method represented by the flowchart shown in Fig. 3.[0021] Fig. 6 illustrates a block diagram of a mobile smartphone in which the computer system shown in Fig. 2 is incorporated.DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS[0022] In accordance with illustrative embodiments described herein, a computer system and a method for use in a computer system are provided for reducing the amount of time and computing resources that are required to perform a HWTW. In accordance with embodiments described herein, when a TLB miss occurs when performing a S2 HWTW to find the PA at which a S1 page table is stored, the MMU uses the IPA to predict the corresponding PA, thereby avoiding the need to perform any of the S2 table lookups. This greatly reduces the number of lookups that need to be performed when performing these types of HWTW read transactions, which greatly reduces processing overhead and performance penalties associated with performing these types of transactions.[0023] Fig. 2 illustrates a block diagram of a computer system 100 in accordance with an illustrative, or exemplary, embodiment configured to perform the method for reducing the amount of time and computing resources that are required to perform a S2 HWTW to find the PA at which a S1 page table is stored. The example of the computer system 100 shown in Fig. 2 includes a CPU cluster 110, a main memory 120, a video camera display 130, a graphical processing unit (GPU) 140, a peripheral component interconnect express (PCIe) input/output (IO) device 150, a plurality of IO TLBs (IOTLBs) 160, and a system bus 170. The CPU cluster 110 has a plurality of CPU cores 110a, each of which has an MMU 110b. Each CPU core 110a may be a microprocessor or any other suitable processor. The video camera display 130 has a system MMU (SMMU) 130a. The GPU 140 has its own SMMU 140a. Likewise, the PCIe IO device 150 has its own SMMU 150a.[0024] The MMUs 110b of the processor cores 110a are configured to perform the tasks of translating VAs into IPAs and translating IPAs into PAs. The page tables are stored in main memory 120. 
Each of the MMUs 110b and the SMMUs 130a, 140a and 150a has its own TLB (not shown for purposes of clarity) that stores subsets of the page tables that are stored in main memory 120. In accordance with this illustrative embodiment, after the occurrence of a TLB miss, the MMUs 110b perform a prediction algorithm that processes an IPA to predict a PA. The prediction algorithm may be mathematically expressed as:PA = f(IPA), (Equation 1) where f represents a mathematical function. The functions f that may be used for this purpose are described below in detail with reference to Fig. 5. The phrase "to predict," as that phrase is used herein, means "to determine," and does not imply a stochastic or probabilistic determination, although stochastic or probabilistic determinations are not necessarily excluded from the scope of the invention. The predictions that are made by the prediction algorithm are typically, but not necessarily, deterministic.[0025] The CPU cluster 110 runs a system OS 200 and a virtual machine monitor (VMM), or hypervisor, 210. The hypervisor 210 manages the translation tasks, which includes, in addition to performing the translations, updating the page tables stored in the MMUs 110b and the SMMUs 130a, 140a and 150a. The hypervisor 210 also runs a guest HLOS 220 and/or a guest digital rights manager (DRM) 230. The HLOS 220 may be associated with the video camera display 130 and the DRM 230 may be associated with the GPU 140. The hypervisor 210 manages the HLOS 220 and the DRM 230.[0026] After a TLB miss occurs, the hypervisor 210 configures the MMUs 110b and the SMMUs 130a, 140a and 150a to perform the prediction algorithm to convert the IPA into a PA. In such cases the starting IPA for the VA associated with the TLB miss is obtained from a hardware base register (not shown for purposes of clarity) of the CPU cluster 110 in the typical manner in which an S1 translation normally begins. The prediction algorithm then predicts the PA in accordance with Equation 1, as will be described below in more detail. To manage and update the SMMUs 130a, 140a and 150a, the CPU MMU 110b sends distributed virtual memory (DVM) messages over the bus 170 to the SMMUs 130a, 140a, and 150a. The MMUs 110b and the SMMUs 130a, 140a and 150a access main memory 120 to perform HWTWs.[0027] In accordance with an illustrative embodiment, the CPU MMU 110b classifies MMU traffic into three transaction classes, namely: (1) S2 HWTW read transactions to find the PA at which a S1 page table is stored; (2) Client transactions; and (3) address fault (AF)/dirty flag write transactions. In accordance with this illustrative embodiment, the prediction algorithm only converts IPAs into PAs for class 1 transactions, i.e., HWTW read transactions. For all other classes of transactions, in accordance with this illustrative embodiment, the MMUs 110b and SMMUs 130a, 140a and 150a perform all other translations (e.g., S1 and client transaction S2 translations) in the typical manner.[0028] Fig. 3 is a flowchart that represents the method, in accordance with an illustrative embodiment, performed by the CPU MMU 110b to reduce the amount of time and processing overhead that is required to perform a HWTW read transaction. Block 301 represents the method starting, which typically occurs when the CPU cluster 110 boots up and begins running the system OS 200 and the hypervisor 210. The MMUs 110b classify traffic into the aforementioned transaction classes (1), (2) and (3), as indicated by block 302. 
The classification process may classify transactions into more or fewer than these three classes, but at least one of the classifications will be class (1) transactions, i.e., S2 HWTW read transactions to find the PA at which a S1 page table is stored. At the step represented by block 303, a determination is made as to whether a TLB miss has occurred when performing a class (1) transaction. If not, the method proceeds to block 306, at which the MMU 110b or SMMU 130a, 140a or 150a performs a HWTW in the normal manner.[0029] If, at the step represented by block 303, the CPU MMU 110b determines that the miss occurred when performing a class (1) transaction, then the method proceeds to the step represented by block 305. At the step represented by block 305, the aforementioned prediction algorithm is performed to convert or translate the IPA into a PA.[0030] Fig. 4 is a pictorial diagram that demonstrates the manner in which a HWTW read transaction is performed in accordance with an illustrative embodiment. For this illustrative embodiment, it is assumed for exemplary purposes that the page tables are three-level page tables and that HWTWs are 2-D HWTWs. The example also assumes a TLB miss worst case scenario. The process begins with the MMU receiving a VA and then retrieving S1 PGD IPA 401 from a control register (not shown for purposes of clarity). The MMU then checks the TLB for a match with S1 PGD IPA 401. For this worst case scenario example, it will be assumed that a TLB miss occurs when the MMU checks the TLB for a match. Because of the miss, the MMU performs the prediction algorithm to convert S1 PGD IPA 401 into a PA 402 at which an S1 PMD IPA 403 is stored. Thus, a single lookup is used to convert S1 PGD IPA 401 into PA 402.[0031] For this worst case scenario example, it will be assumed that a TLB miss occurs when the MMU checks the TLB for a match with the S1 PMD IPA 403. Because of the miss, the MMU performs the prediction algorithm to convert S1 PMD IPA 403 into a PA 404 at which S1 PTE IPA 405 is stored. Thus, a single lookup is used to convert S1 PMD IPA 403 into PA 404. For this worst case scenario example, it will be assumed that a TLB miss occurs when the MMU checks the TLB for a match with the S1 PTE IPA 405. Because of the miss, the MMU performs the prediction algorithm to convert S1 PTE IPA 405 into a PA 406 at which IPA1 407 is stored. Once IPA1 407 has been obtained, three lookups 408, 409 and 411 are performed to obtain the ultimate PA 412 where the data to be read is stored.[0032] Thus, in accordance with this embodiment, it can be seen that the total number of lookups has been reduced from fifteen (Fig. 1) to six, which represents a 60% reduction in processing overhead. Of course, the invention is not limited to MMU configurations that have a particular number of levels or a particular number of HWTW dimensions. Those skilled in the art will understand that the concepts and principles of the invention apply regardless of the configuration of the page tables. Also, although the method and system are being described herein with reference to IPA-to-PA conversion, they are equally applicable to direct VA-to-PA conversions in systems that do not use IPAs.[0033] Fig. 5 is a block diagram of an illustrative embodiment of a predictor 500 that performs the prediction algorithm. The predictor 500 is typically implemented in the MMUs 110b and in the SMMUs 130a, 140a and 150a. 
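The decision flow of blocks 302-306 reduces to a few lines of code. The stubs below are hypothetical stand-ins for the TLB, the normal walker, and the Equation 1 predictor, kept only so that the sketch is self-contained; none of the identifiers come from the patent.

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t ipa_t;
typedef uint64_t pa_t;

enum txn_class {
    TXN_S2_HWTW_READ = 1,  /* class (1): S2 walk to find an S1 table's PA */
    TXN_CLIENT       = 2,  /* class (2): client transactions */
    TXN_AF_DIRTY     = 3   /* class (3): AF/dirty flag write transactions */
};

/* Placeholder stubs standing in for the hardware. */
static bool tlb_lookup(ipa_t ipa, pa_t *pa) { (void)ipa; (void)pa; return false; }
static pa_t hwtw_s2_walk(ipa_t ipa) { return ipa; }   /* normal S2 walk */
static pa_t predict_pa(ipa_t ipa) { return ipa; }     /* Equation 1 with f = unity */

/* Blocks 302-306 of Fig. 3: prediction replaces the S2 walk only for a
 * class (1) miss; any other class walks the tables in the normal manner. */
pa_t resolve_s2(enum txn_class cls, ipa_t ipa)
{
    pa_t pa;
    if (tlb_lookup(ipa, &pa))
        return pa;                   /* TLB hit: no walk needed */
    if (cls == TXN_S2_HWTW_READ)
        return predict_pa(ipa);      /* block 305: predict */
    return hwtw_s2_walk(ipa);        /* block 306: normal HWTW */
}
```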
As indicated above, in accordance with the illustrative embodiment, the prediction algorithm is only performed when performing a class 1 read transaction. The configuration of the predictor 500 shown in Fig. 5 is an example of one configuration that allows the predictor 500 to be enabled for class 1 transactions and to be disabled for all other classes of transactions, including class 2 and 3 transactions.[0034] The configuration of the predictor 500 shown in Fig. 5 also allows the predictor 500 to select the function, f, that is used in Equation 1 above to compute the PA based on the IPA. Each virtual machine (VM) may be using a different set of functions, f, so it is important that the sets of functions that are used ensure that there is a one-to-one mapping between IPA and PA over the range of IPA. The hypervisor 210 may be managing multiple HLOSs or DRMs, each of which will have a corresponding VM running in the hypervisor 210. The sets of functions that are used ensure that the predicted PA does not overlap a predicted PA allocated to another VM.[0035] Examples of the function f are:PA = IPA;PA = IPA + Offset_function(VMID), where VMID is a unique identifier across all VMs that identifies the VM associated with the HWTW read transaction, and Offset_function is a function having an output that is selected based on a particular offset value associated with the VMID; andPA = IPA XOR Extended_VMID, where XOR represents an exclusive OR operation and Extended_VMID is an extended VMID. The hypervisor 210 selects the function f such that collisions between VMs are avoided.[0036] In Fig. 5, it is assumed that the function f is a polynomial and that the hypervisor 210 selects a polynomial to be used as the function f from a plurality of polynomials. The polynomial that is selected may be based on, for example, the VMID of the VM for which the HWTW read transaction is being performed. A configuration register 510 of the predictor 500 holds one or more prediction enable bits 510a and one or more polynomial selection bits 510b. Polynomial calculation hardware 520 of the predictor 500 comprises hardware that selects a polynomial function based on the value of the polynomial selection bits 510b received from register 510. The polynomial calculation hardware 520 also receives an IPA-to-PA translation request and processes the request in accordance with the selected polynomial function to produce a predicted PA.[0037] The prediction enable bit 510a and a class 1 enable bit are received at the inputs of an AND gate 530. The class 1 enable bit is asserted when a miss has occurred when performing a class 1 read transaction. A multiplexer (MUX) 540 of the predictor 500 receives the output of the AND gate 530 at a selector port of the MUX 540 and receives the predicted PA and the IPA-to-PA translation result obtained in the normal manner. When both the prediction enable bit 510a and the class 1 enable bit are asserted, the S2 Walk Control Logic And State Machine 550 is disabled and the MUX 540 selects the predicted PA to be output from the MUX 540.[0038] When the prediction enable bit 510a and/or the class 1 enable bit is deasserted, the S2 Walk Control Logic And State Machine 550 is enabled. When the S2 Walk Control Logic And State Machine 550 is enabled, other types of S2 walks (e.g., class 2 and class 3) may be performed in main memory 120 by the S2 Walk Control Logic And State Machine 550. 
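Two pieces of the predictor just described lend themselves to a compact software model: the example functions f of paragraph [0035], each of which is one-to-one (the per-VM offset and extended VMID are chosen by the hypervisor so that predicted PA ranges do not overlap), and the select logic of the AND gate 530 and MUX 540. Everything below is a behavioral sketch with invented parameter names, not RTL.

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t ipa_t;
typedef uint64_t pa_t;

/* PA = IPA: trivially one-to-one. */
static pa_t f_unity(ipa_t ipa) { return (pa_t)ipa; }

/* PA = IPA + Offset_function(VMID): one-to-one for a fixed per-VM offset. */
static pa_t f_offset(ipa_t ipa, pa_t vm_offset) { return (pa_t)(ipa + vm_offset); }

/* PA = IPA XOR Extended_VMID: XOR with a constant is its own inverse,
 * so this mapping is also one-to-one. */
static pa_t f_xor(ipa_t ipa, pa_t extended_vmid) { return (pa_t)(ipa ^ extended_vmid); }

/* AND gate 530 feeding MUX 540: the predicted PA is selected only when
 * both the prediction enable bit 510a and the class 1 enable bit are
 * asserted; otherwise the S2 walk result passes through. */
static pa_t mux_540(bool prediction_enable, bool class1_enable,
                    pa_t predicted_pa, pa_t walked_pa)
{
    bool sel = prediction_enable && class1_enable;  /* AND gate 530 */
    return sel ? predicted_pa : walked_pa;          /* MUX 540 */
}
```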
Thus, when the S2 Walk Control Logic And State Machine 550 is enabled, the MUX 540 outputs the IPA-to-PA translation result that is output from the S2 Walk Control Logic And State Machine 550.[0039] It should be noted that the predictor 500 may have many different configurations. The configuration of the predictor 500 shown in Fig. 5 is merely one of many suitable configurations for performing the prediction algorithm. Persons of skill in the art will understand that many configurations other than that shown in Fig. 5 may be used to perform the prediction algorithm.[0040] The computer system 100 shown in Fig. 2 may be implemented in any type of system in which memory virtualization is performed, including, for example, desktop computers, servers and mobile smartphones. Fig. 6 illustrates a block diagram of a mobile smartphone 600 in which the computer system 100 is incorporated. The smartphone 600 is not limited to being any particular type of smartphone or having any particular configuration, except that it must be capable of performing methods described herein. Also, the smartphone 600 illustrated in Fig. 6 is intended to be a simplified example of a cellular telephone having context awareness and processing capability for performing methods described herein. One having ordinary skill in the art will understand the operation and construction of a smartphone, and, as such, implementation details have been omitted.[0041] In accordance with this illustrative embodiment, the smartphone 600 includes a baseband subsystem 610 and a radio frequency (RF) subsystem 620 connected together over a system bus 612. The system bus 612 typically comprises physical and logical connections that couple the above-described elements together and enable their interoperability. The RF subsystem 620 may be a wireless transceiver. Although details are not shown for clarity, the RF subsystem 620 generally includes a transmit (Tx) module 630 having modulation, upconversion and amplification circuitry for preparing a baseband information signal for transmission, includes a receive (Rx) module 640 having amplification, filtering and downconversion circuitry for receiving and downconverting an RF signal to a baseband information signal to recover data, and includes a front end module (FEM) 650 that includes diplexer circuitry, duplexer circuitry, or any other circuitry that can separate a transmit signal from a receive signal, as is known to those skilled in the art. An antenna 660 is connected to the FEM 650.[0042] The baseband subsystem 610 generally includes the computer system 100, analog circuit elements 616, and digital circuit elements 618, electrically coupled together via the system bus 612. The system bus 612 typically comprises the physical and logical connections to couple the above-described elements together and enable their interoperability.[0043] An input/output (I/O) element 621 is connected to the baseband subsystem 610 via connection 624. The I/O element 621 typically includes, for example, a microphone, a keypad, a speaker, a pointing device, user interface control elements, and any other devices or systems that allow a user to provide input commands and receive outputs from the smartphone 600. A memory 628 is connected to the baseband subsystem 610 via connection 629. The memory 628 may be any type of volatile or non-volatile memory. 
The memory 628 may be permanently installed in the smartphone 600, or may be a removable memory element, such as a removable memory card.[0044] The analog circuitry 616 and the digital circuitry 618 include the signal processing, signal conversion, and logic that convert an input signal provided by the I/O element 621 to an information signal that is to be transmitted. Similarly, the analog circuitry 616 and the digital circuitry 618 include the signal processing elements used to generate an information signal that contains recovered information from a received signal. The digital circuitry 618 may include, for example, a digital signal processor (DSP), a field programmable gate array (FPGA), or any other processing device. Because the baseband subsystem 610 includes both analog and digital elements, it may be referred to as a mixed signal device (MSD).[0045] The smartphone 600 may include one or more of a variety of sensors such as, for example, a camera 661, a microphone 662, a Global Positioning System (GPS) sensor 663, an accelerometer 665, a gyroscope 667, and a digital compass 668. These sensors communicate with the baseband subsystem 610 via bus 612.[0046] Having the computer system 100 embedded in the smartphone 600 allows multiple OSs and multiple respective VMs to run on the smartphone 600. In this environment, the hypervisor 210 (Fig. 2) of the computer system 100 provides a secure separation between the hardware of the smartphone 600 and the application software being executed by the VMs.[0047] The method described above with reference to Fig. 3 may be implemented solely in hardware or in a combination of hardware and software or hardware and firmware. Likewise, many of the components of the computer system 100 shown in Fig. 2 may be implemented solely in hardware or in a combination of hardware and software or firmware. For example, the hypervisor 210 may be implemented solely in hardware or in a combination of hardware and software or firmware. In cases where the method or a component of the computer system 100 is implemented in software or firmware, the corresponding code is stored in the main memory 120 (Fig. 2), which is a computer-readable medium. The main memory 120 is typically a solid state computer-readable medium, such as a non-volatile random access memory (RAM), read only memory (ROM) device, programmable ROM (PROM), erasable PROM (EPROM), etc. However, other types of computer-readable mediums may be used for storing the code, such as, for example, magnetic and optical storage devices.[0048] It should also be noted that many variations may be made to the methods described above with reference to Figs. 2 - 6 without deviating from the scope of the invention. For example, the configuration of the computer system 100 shown in Fig. 2 may be modified in a number of ways, as will be understood by those of skill in the art. Also, the smartphone 600 shown in Fig. 6 is merely one example of a mobile device that has a suitable configuration and functionality for performing the method. Persons of skill in the art will understand, in view of the description provided herein, that many variations may be made to the smartphone 600 shown in Fig. 6 without deviating from the scope of the invention. These and other variations are within the scope of the invention. 
The illustrative embodiments described herein are intended to demonstrate the principles and concepts of the invention, but the invention is not limited to these embodiments, as will be understood by those of skill in the art. |
Embodiments for dynamically mitigating speculation vulnerabilities are disclosed. In an embodiment, an apparatus includes decode circuitry and execution circuitry coupled to the decode circuitry. The decode circuitry is to decode a register hardening instruction to mitigate vulnerability to a speculative execution attack. The execution circuitry is to be hardened in response to the register hardening instruction. |
An apparatus comprising:decode circuitry to decode a register hardening instruction to mitigate vulnerability to a speculative execution attack; andexecution circuitry, coupled to the decode circuitry, to be hardened in response to the register hardening instruction.The apparatus of claim 1, wherein the execution circuitry is to be hardened to fence a register.The apparatus of claim 1, wherein the execution circuitry is to be hardened to prevent speculative execution of an instruction to load a register or to use content of the register.The apparatus of claim 1, wherein the execution circuitry is to be hardened to prevent a speculative operation from using content of a register.The apparatus of claim 1, wherein the execution circuitry is to be hardened to prevent data forwarding from a register to a dependent operation.The apparatus of claim 1, wherein the execution circuitry is to be hardened to prevent execution of an instruction using content of a register to leave a side channel.The apparatus of claim 1, wherein the execution circuitry is to be hardened to prevent allocation of a cache line based on execution of an instruction using content of a register.The apparatus of claim 1, wherein hardening of the execution circuitry is to be relaxed in response to retirement of an instruction to load a register or to use content of the register.The apparatus of claim 1, wherein hardening of the execution circuitry is to be relaxed in response to a register load operation or an operation to use content of a register becoming non-speculative.The apparatus of claim 1, wherein hardening of the execution circuitry is to be relaxed in response to resolution of a branch or fence condition.The apparatus of claim 1, wherein the execution circuitry is to be hardened to prevent dependence of latency of an operation on data stored in a register.A method comprising:decoding, by a processor, a register hardening instruction to mitigate vulnerability to a speculative execution attack; andhardening, in response to the register hardening instruction, execution circuitry in the processor.The method of claim 12, wherein hardening the execution circuitry includes fencing a register or preventing a speculative operation from using content of a register.A non-transitory machine-readable medium storing a plurality of instructions, including a first instruction and a second instruction, wherein execution of the plurality of instructions by a machine causes the machine to perform a method comprising:hardening execution circuitry in the machine in response to the first instruction to mitigate vulnerability to a speculative execution attack;preventing a speculative operation to be performed in response to the second instruction from using content of a register.The non-transitory machine-readable medium of claim 14, wherein the method comprises relaxing hardening in response to the speculative operation becoming non-speculative. |
FIELD OF INVENTIONThe field of invention relates generally to computers, and, more specifically, to computer system security.BACKGROUNDComputer systems may be vulnerable to attempts by adversaries to obtain confidential, private, or secret information. For example, attacks such as MDS (Microarchitectural Data Sampling), Spectre, and Meltdown exploit speculative and out-of-order execution capabilities of processors to illicitly read data through side-channel analysis.BRIEF DESCRIPTION OF THE DRAWINGSThe present invention is illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements and in which:Figure 1A illustrates a system for mitigation of speculation vulnerabilities according to embodiments;Figure 1B illustrates a method for mitigation of speculation vulnerabilities according to embodiments;Figure 1C illustrates a method for mitigation of speculation vulnerabilities according to embodiments;Figure 1D illustrates a method for mitigation of speculation vulnerabilities according to embodiments;Figure 2A illustrates a memory access topology diagram created according to embodiments;Figure 2B illustrates hardware for access distancing according to embodiments;Figure 2C illustrates a method of access distancing according to embodiments;Figure 3A illustrates a system for hybrid-key-based web browsing according to embodiments;Figure 3B illustrates a method for hybrid-key-based web browsing according to embodiments;Figure 4A is a block diagram illustrating a generic vector friendly instruction format and class A instruction templates thereof according to embodiments;Figure 4B is a block diagram illustrating the generic vector friendly instruction format and class B instruction templates thereof according to embodiments;Figure 5A is a block diagram illustrating an exemplary specific vector friendly instruction format according to embodiments;Figure 5B is a block diagram illustrating the fields of the specific vector friendly instruction format that make up the full opcode field according to embodiments;Figure 5C is a block diagram illustrating the fields of the specific vector friendly instruction format that make up the register index field according to embodiments;Figure 5D is a block diagram illustrating the fields of the specific vector friendly instruction format that make up the augmentation operation field according to embodiments;Figure 6 is a block diagram of a register architecture according to embodiments;Figure 7A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiment;Figure 7B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments;Figure 8A is a block diagram of a single processor core, along with its connection to the on-die interconnect network and with its local subset of the Level 2 (L2) cache, according to embodiments;Figure 8B is an expanded view of part of the processor core in Figure 8A according to embodiments;Figure 9 is a block diagram of a processor that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to embodiments;Figure 10 shows a block diagram of a system according to embodiments;Figure 11 is a block diagram of a first more specific 
exemplary system according to embodiments;Figure 12 is a block diagram of a second more specific exemplary system according to embodiments;Figure 13 is a block diagram of a System-on-a-Chip (SoC) according to embodiments; andFigure 14 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments.DETAILED DESCRIPTIONIn the following description, numerous specific details are set forth. However, it is understood that embodiments may be practiced without these specific details. In other instances, well-known circuits, structures, and techniques have not been shown in detail in order not to obscure the understanding of this description.References in the specification to "one embodiment," "an embodiment," "an example embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.As used in this specification and the claims and unless otherwise specified, the use of the ordinal adjectives "first," "second," "third," etc. to describe an element merely indicates that a particular instance of an element or different instances of like elements are being referred to, and is not intended to imply that the elements so described must be in a particular sequence, either temporally, spatially, in ranking, or in any other manner. Also, as used in descriptions of embodiments, a "/" character between terms may mean that what is described may include or be implemented using, with, and/or according to the first term and/or the second term (and/or any other additional terms).Also, the terms "bit," "flag," "field," "entry," "indicator," etc., may be used to describe any type or content of a storage location in a register, table, database, or other data structure, whether implemented in hardware or software, but are not meant to limit embodiments to any particular type of storage location or number of bits or other elements within any particular storage location. For example, the term "bit" may be used to refer to a bit position within a register and/or data stored or to be stored in that bit position. The term "clear" may be used to indicate storing or otherwise causing the logical value of zero to be stored in a storage location, and the term "set" may be used to indicate storing or otherwise causing the logical value of one, all ones, or some other specified value to be stored in a storage location; however, these terms are not meant to limit embodiments to any particular logical convention, as any logical convention may be used within embodiments. The term "core" may mean any processor or execution core, as described and/or illustrated in this specification and its drawings and/or as known in the art, and the terms "processor core," "execution core," and "core" are meant to be synonymous. The term "uncore" may mean any circuitry, logic, sub-systems, etc. 
The term "uncore" may mean any circuitry, logic, sub-systems, etc. (e.g., an integrated memory controller (iMC), power management unit, performance monitoring unit, system and/or I/O controllers, etc.) in/on a processor or system-on-chip (SoC) but not within a core, as described and/or illustrated in this specification and its drawings and/or as known in the art (e.g., by the name uncore, system agent, etc.). However, use of the terms core and uncore in the description and figures does not limit the location of any circuitry, hardware, structure, etc., as the location of circuitry, hardware, structure, etc. may vary in various embodiments.

For example, the term "MSR" may be used as an acronym for model or machine specific register, but may be used more generally to refer to and/or represent one or more registers or storage locations, one or more of which may be in a core, one or more of which may be in an uncore, etc. MSRs included in embodiments, as described below, may correspond to any one or more model specific registers, machine specific registers, etc. to control and report on processor performance, handle system related functions, etc. Accordingly, descriptions of embodiments including MSRs may not be limited to the use of MSRs as described; embodiments may in addition or instead use any other storage for control, configuration, state, etc. information. In various embodiments, MSRs (or any set or subset of MSRs) may or may not be accessible to application and/or user-level software. In various embodiments, MSRs (or any set or subset of MSRs) may be within and/or accessible by a core (core-scoped) or within an uncore and/or accessible by more than one core (package-scoped).

Many processors and processor cores support capabilities to increase performance, such as caching, multithreading, out-of-order execution, branch prediction, and speculative execution. Adversaries have found ways to exploit capabilities of these processors to illicitly read data. For example, a speculation vulnerability (SV) may arise when different execution paths are taken at a speculation point in the executing code. In particular, two different execution paths may be taken after a speculation point in the process flow. A first path may eventually be determined to be a correct path, so instructions on this path may be retired and allowed to modify the architectural state of the processor. A second path may eventually be determined to be an incorrect path, so instructions on this path would be squashed. However, some changes to the microarchitectural state, such as changes to a cache, may persist and/or be observable.

For example, an adversary might intentionally attempt to read data (e.g., secret data) from a memory location that should not be readable by it (i.e., out-of-bounds). The read might be allowed to proceed speculatively until it is determined whether the access is out-of-bounds. The architectural correctness of the system might be ensured by not committing any results until the determination is made. In such cases, the speculative execution might cause the microarchitectural state of the processor to change before the determination is made, and the adversary might be able to perform side-channel analysis to infer the value of the secret data from differences in the microarchitectural state of the processor. Many variants of this type of speculative attack are possible.
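One such variant infers a secret bit from cache-line granularity, as elaborated in the scenario described next. The following minimal C sketch is illustrative arithmetic only, not an attack; the 64-byte line size and the probe address are assumptions for the example. It shows why, with 64-byte cache lines, the seventh least-significant address bit selects the cache line into which an access falls:

#include <stdint.h>
#include <stdio.h>

/* Illustrative only: with 64-byte cache lines, the six least-significant
 * address bits select a byte within a line, so flipping any of bits 0-5
 * stays in the same line, while flipping bit 6 (the seventh
 * least-significant bit) selects a different line. An adversary who can
 * splice one secret bit into bit 6 of a probe address could therefore
 * learn that bit by observing which line was filled. */
#define LINE_BYTES 64u

static uint64_t line_index(uint64_t addr) {
    return addr / LINE_BYTES;   /* equivalently addr >> 6 */
}

int main(void) {
    uint64_t base = 0x1000;     /* hypothetical known address */
    for (unsigned secret_bit = 0; secret_bit <= 1; secret_bit++) {
        /* Splice the secret bit into bit 6 of the probe address, as a
         * speculative gadget might do with shift instructions. */
        uint64_t probe = (base & ~(uint64_t)LINE_BYTES) |
                         ((uint64_t)secret_bit << 6);
        printf("secret=%u -> probe=%#llx, line=%llu\n",
               secret_bit, (unsigned long long)probe,
               (unsigned long long)line_index(probe));
    }
    return 0;
}

The two probe addresses land in adjacent cache lines, so a timing difference between them reveals the secret bit.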
In one scenario, the adversary might speculatively use the secret data as part of a memory address and, using a timing analysis to determine what memory locations are being loaded into a cache, infer the value.

As a more specific example, with a cache line size of 64 bytes, a change to any of the six least-significant bits of a memory address does not cause the address to refer to a different cache line, but a change to the seventh least-significant bit does cause the address to refer to a different cache line. Therefore, an adversary might repeatedly (e.g., to eliminate noise and/or achieve a statistically significant result) flush and/or fill a cache to a known or predictable state, use a speculative flow to cause a processor to speculatively access secret data, speculatively apply a bit of the secret data to the seventh least-significant bit of a known memory address stored in a register (e.g., using shift and/or other bit manipulation instructions), speculatively access their own memory space with the manipulated memory address, use a timing side-channel analysis to determine whether a new cache line was loaded, and infer whether the value of the secret bit was the same as or different from the value of the seventh least-significant bit of the known memory address.

Embodiments include systems, methods, and apparatuses providing features or characteristics that may be desirable for use in a variety of computer systems for a variety of reasons, including to reduce vulnerability to attacks based on speculation or side-channel analysis; to reduce vulnerability to such analysis with less cost, in performance or otherwise, than an alternative approach; and/or to improve security in general. Embodiments may provide dynamic full-stack security to enhance safe, efficient speculation. For example, a comprehensive hardware and software co-design may include hardware mitigation mechanisms and detection capabilities to help decide how to mitigate, and software may determine when to apply mitigation. That is, software may decline to apply hardware mitigation mechanisms when the software and/or hardware determine(s) that it may be safe to speculate. Embodiments may also include software-visible instructions to allow software to trigger application of hardware mitigation mechanisms (one, all, or in any combination, as may be specified by the instruction(s) and/or programming/configuration by software/firmware/hardware on a per mechanism, per vulnerability/attack type basis, and/or a combined/group basis, as may be further described below). Such an instruction set architecture design may project a new software safety speculation model onto the microarchitecture.

Usages of embodiments may be desired because they may provide dynamic SV mitigation capabilities that may be effective in balancing tradeoffs between security and performance, particularly when observable side effects of speculative execution are transient. Embodiments may provide for varying and/or custom levels of mitigation to increase security when speculation vulnerabilities are present and/or likely to be present and to increase performance when speculation vulnerabilities are not present and/or not likely to be present.

Aspects of some embodiments are illustrated in Figure 1A, which shows system 100 including hardware (HW) 110 and software (SW) 120.
In embodiments, HW 110 and SW 120 may work together to provide for applications on and/or users of system 100 to create their own SV mitigation experience.

Hardware 110 includes SV mitigation HW 130, which represents any one or more hardware mechanisms or switches to mitigate SVs, including known hardware mechanisms and/or novel hardware mechanisms described in this specification. Such hardware mechanisms may include any one or more execution modes that may be referred to as restricted speculative execution (RSE), that may be opted into or out of by software, and that may provide protection against and/or mitigation of persistent side effects left during or following speculative execution.

HW 110 also includes SV detection HW 150, which represents any one or more known or novel hardware mechanisms to dynamically detect SVs and/or the conditions under which they may occur. SV detection HW 150 may detect conditions or anomalies that may be used to predict, with various levels of confidence, speculation vulnerabilities. In embodiments, SV detection HW 150 may use machine learning and/or data analytics techniques, implemented in hardware 152, for SV detection, prediction, and/or prediction confidence level determination.

SW 120 includes system SW 140, such as an operating system (OS), that may use information, such as predictions of SV and corresponding confidence levels of the predictions, from SV detection HW 150 to dynamically decide when to use SV mitigation HW 130 and which of its capabilities to use. System SW 140 may interface with SV detection HW 150 via registers, such as model or machine specific registers (MSRs). System SW 140 may also or instead utilize instruction set architecture (ISA) instructions to invoke capabilities of the hardware 110. Example embodiments of some such instructions are discussed below.

In embodiments, one or more registers (e.g., MSRs) 154 may be used to store information, generated by SV detection HW 150, about the category of an attack and the associated prediction confidence, which system SW 140 could read and use to balance and attempt to optimize a tradeoff between security and performance. For example, system SW 140 might turn on no mitigation based on a low confidence prediction of a first category of attack (e.g., Spectre), but turn on RSE (e.g., using one or more novel instructions as described below) based on a high confidence prediction of a second category of attack.

SW 120 also includes application SW 160. Application SW 160 and/or system 100 may be protected from an attack (e.g., malicious code leveraging application SW 160 through injection, hijacking, etc.).

As shown in Figure 1A, HW 110 also includes processor core 112, including instruction decoder 114, execution circuitry 116, and memory controller 118. Execution circuitry 116 may include load circuitry 132, store circuitry 134, and branch circuitry 136. Execution circuitry 116, load circuitry 132, store circuitry 134, and/or branch circuitry 136 (and/or structures of, micro-architecture within, etc.) may be pre-configured, configured, and/or reconfigured to implement SV mitigation, for example as described above and below, according to embodiments.

Instruction decoder 114 may be implemented in decode circuitry and may be to receive, decode, translate, convert, and/or otherwise process instructions, e.g., from system software 140 and application software 160.
Memory controller 118 may be to couple processor core 112 to a memory, e.g., a system memory to store instructions from system software 140 and application software 160.

In various embodiments, various arrangements and/or integrations of the hardware shown in Figure 1A, in/on one or more substrates, chiplets, multichip modules, packages, etc., are possible. For example, all of the hardware shown may be fabricated on the same substrate (e.g., semiconductor chip or die, SoC, etc.), along with additional hardware not shown (e.g., additional processor cores, which may be additional instances of core 112 or instances of any other core). A system memory may be on one or more separate substrates and/or in one or more packages separate from the package containing HW 110.

Various embodiments may include any or all of the aspects illustrated in Figure 1A, some with additional aspects. For example, aspects of core 112 may be implemented in core 1490 in embodiments as shown in Figure 7B, a core in embodiments as shown in Figures 8A/8B, cores 1602A/1602N in embodiments as shown in Figure 9, processors 1710/1715 in embodiments as shown in Figure 10, processors 1870/1880 in embodiments as shown in Figures 11 and 12, and/or application processor 2010 in embodiments as shown in Figure 13.

Figure 1B illustrates a method 170 according to embodiments. In 172, one or more default mitigation switches in SV mitigation HW 130 are set (e.g., based on defaults configured in SV detection HW 150 by design, a basic input-output system (BIOS), etc.). In 174, vulnerability to a speculative execution attack is detected (e.g., by SV detection HW 150). In 176, an indication of speculative execution attack vulnerability, which may include SV detection information, such as a prediction of an attack, a category of an attack, and/or a confidence level of an attack, is provided by SV detection HW 150 to system SW 140. In 178, system SW 140 determines, based on the SV detection indication/information from SV detection HW 150, a mitigation switch policy and/or settings to apply to SV mitigation HW 130.

In 180, the hardware 110 receives configuration information, as determined by system SW 140, which may be, include, and/or be based on the policy and/or settings determined by system SW 140 in 178. In 182, SV mitigation may be implemented by using the configuration information directly to reconfigure SV mitigation HW 130 and/or indirectly, through an interface (e.g., implemented in SV detection HW 150) such as weights vector 156, which may represent any one or more vectors or other datatypes corresponding to any one or more SV mitigation mechanisms or switches, each with any number of settings to provide a range of mitigation levels.

In embodiments, the configuration information may include one or more weights vectors 156 provided by software (e.g., by programming an SV mitigation weights register).
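As a non-authoritative illustration of such a weights vector (the 4-bit-per-mechanism layout, the mechanism list, and the register-write stand-in below are assumptions for the sketch, not an architectural definition), software might pack a per-mechanism mitigation level into a single register value:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical layout: a 64-bit weights vector carved into 4-bit fields,
 * one per mitigation mechanism, where 0 = mitigation off and 15 =
 * strongest hardening. Field positions are illustrative assumptions. */
enum sv_mechanism { SV_LOAD = 0, SV_STORE = 1, SV_BRANCH = 2, SV_REGISTER = 3 };

static uint64_t set_weight(uint64_t vec, enum sv_mechanism m, unsigned level) {
    unsigned shift = 4u * (unsigned)m;
    vec &= ~(0xFull << shift);                 /* clear the 4-bit field */
    vec |= ((uint64_t)(level & 0xF)) << shift; /* install the new level */
    return vec;
}

/* Stand-in for programming the (hypothetical) SV mitigation weights MSR. */
static void write_weights_register(uint64_t vec) {
    printf("weights vector -> %#018llx\n", (unsigned long long)vec);
}

int main(void) {
    uint64_t weights = 0;
    /* e.g., strong load hardening on a high-confidence prediction,
     * light branch hardening on a low-confidence one */
    weights = set_weight(weights, SV_LOAD, 15);
    weights = set_weight(weights, SV_BRANCH, 3);
    write_weights_register(weights);
    return 0;
}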
In 184, SV mitigation HW 130 may be dynamically reconfigured (e.g., by flipping one or more SV mitigation switches) based on weights vector 156 to provide dynamically varying levels of SV mitigation (e.g., in response to signals from SV detection HW 150).

In embodiments, configuring and/or setting switches in SV mitigation HW 130, directly or indirectly, may be performed by system SW 140 using novel instructions, as further described below.

Thus, a potential attack may be detected and mitigated against dynamically based on the category of attack, the predicted probability of the attack, the level of security needed/desired by application SW 160 and/or its user, the level of performance needed/desired by application SW 160 and/or its user, etc.

In embodiments, one or more instructions added to an ISA or in an extension to an ISA may provide for software (e.g., SW 120) to indicate to hardware (e.g., HW 110) which microarchitectural structures to harden against SVs and under what conditions. In embodiments, such instructions may indicate that any one or more microarchitectural changes may be allowed or not allowed to proceed during speculative execution, including but not limited to: updates to the data cache hierarchy, reads from the data cache hierarchy (including updates to metadata and/or replacement states), updates to the instruction cache and prefetch buffers, changes to metadata and/or replacement states of the instruction cache and prefetch buffers, changes to memory ordering structures (load buffer, store address buffer, store data buffer, etc.), changes to branch predictor state, changes to register state (physical register file, register alias table, etc.), changes to all front-end structures, changes to all back-end structures, changes to all execution resources. In embodiments, each such indication may be used to indicate that the hardware should enforce the hardening (e.g., a hint) or that the hardware must enforce the hardening (e.g., a requirement).

In embodiments, different instructions, different encodings of mode bits within or associated with instructions, different segment selectors associated with instructions, different values in registers associated with instructions, different prefixes or suffixes associated with instructions, etc. may be used to differentiate between which microarchitectural structures to harden (or relax/loosen hardening of) and/or which microarchitectural changes to prevent (or allow) for various instances of speculative execution. An instruction used in this way according to embodiments may be referred to as an SV harden, SV hardening, or SV mitigation instruction.

In various embodiments, SV harden/mitigation instructions may have various formats; be included in an instruction set architecture corresponding to various register architectures; and/or be decoded, translated, converted, etc., according to a variety of approaches.
For example, Figures 4A, 4B, 5A, 5B, 5C, and 5D illustrate embodiments of a format that may be used for an SV harden/mitigation instruction; Figure 6 illustrates embodiments of a register architecture corresponding to an instruction set architecture including one or more SV harden/mitigation instructions; and Figure 14 illustrates embodiments for conversion/translation of harden/mitigation instructions.

In embodiments, instructions following the SV harden instruction may be executed with the microarchitecture configured as specified by the SV harden instruction, until, for example, a subsequent SV harden instruction is received, decoded, and/or executed, or until, for example, a speculation frontier is reached (where a speculation frontier may be defined as a dynamic boundary between instructions that are being executed speculatively (e.g., might be on a wrong path) and instructions that are being executed non-speculatively (e.g., known to be on the correct path)).

In embodiments, software may fine-tune SV mitigation to enable SV mitigation at lower performance cost. In embodiments, program analysis, compiler technology, etc. may be used to determine or suggest which hardware structures should or need to be hardened under which conditions.

In embodiments, a mode bit field may be included in the format of or otherwise associated with an SV harden instruction to indicate which microarchitectural structures to harden (or remove/relax hardening of) and/or which microarchitectural changes to prevent (or allow) for various instances of speculative execution.

In embodiments, mode bits in the mode bit field may specify multiple microarchitectural structures (coarse-grained mode bits). For example, in a mode bit field, a first bit position may correspond to all (or a designated subset of all) front-end structures, a second bit position may correspond to all (or a designated subset of all) back-end structures, a third bit position may correspond to all (or a designated subset of all) memory structures, a fourth bit position may correspond to all (or a designated subset of all) branch-prediction-related structures, a fifth bit position may correspond to all (or a designated subset of all) execution structures, etc.

In embodiments, mode bits in the mode bit field may specify particular changes to microarchitectural structures (fine-grained mode bits). For example, different bit positions may correspond to data cache updates, data cache metadata/replacement updates, data cache reads, instruction cache updates, prefetch buffer updates, instruction cache metadata/replacement updates, decoded instruction buffer updates, prefetcher updates (there may be separate bits per prefetcher), branch history updates, branch target buffer updates, load buffer updates, store address buffer updates, store data buffer updates, physical register file updates, register alias table updates, instruction translation lookaside buffer (TLB) updates, instruction TLB metadata/replacement updates, data TLB updates, data TLB metadata/replacement updates, secondary TLB updates, secondary TLB metadata/replacement updates, etc.

Embodiments may include any combination of coarse-grained and/or fine-grained mode bits associated with any one or more SV harden instructions. Embodiments may include a hardening mode register having any number of bit positions to store information from the mode bit field of an SV harden instruction, for example, one hardening mode register bit per bit of the mode bit field.
A mode bit field and/or a hardening mode register may also include any number of bits to represent groups of any other bits, for example, a single global bit that may be used to enable or disable all hardening mechanisms or all hardening mechanisms for which an individual hardening bit is set (or clear).

In embodiments, setting protections may include hardening (or removing/relaxing hardening of) any one or more microarchitectural structures and/or preventing (or allowing) any number of changes to microarchitectural state, based on values in a mode bit field of one or more SV harden instructions and/or one or more hardening mode registers, examples of which are described below. Removing and/or relaxing the application of hardening mechanisms and/or allowing previously blocked/prevented changes (e.g., specific changes, types of changes, etc.), whether by hardware and/or by software (e.g., using an SV harden instruction), may also be referred to as lifting restrictions.

In embodiments, an SV harden instruction may be a prefix instruction (e.g., a new instruction or a prefix to an existing instruction) to set (or relax) protections for following instructions and/or the instruction(s) to which the prefix is added. For example:

HARDEN_PREFIX <MODE_BITS>

In embodiments, an SV harden instruction may be used as one of a pair of instructions to set and reset protections for instructions between the pair of instructions. For example:

HARDEN_SET <MODE_BITS>   // set specific hardening bits using
                         // logical OR with mode bits register
... Hardened Code
HARDEN_RESET <MODE_BITS> // reset specific hardening bits using
                         // logical AND with mode bits register

In embodiments, a pair of instructions may have opposite syntax to set protections and then reset the protections to the values in place before the most recent corresponding instruction of the pair, thus providing for nested hardening levels. For example:

HARDEN_PUSH <MODE_BITS>
... Hardened Code
HARDEN_POP <MODE_BITS>   // revert hardening bits to their values before
                         // the latest HARDEN_PUSH

In embodiments, a pair of instructions may set some protections at the beginning of a code region and then reset all protections at the end of the code region. For example:

HARDEN_REGION_START <MODE_BITS>
... Hardened Code
HARDEN_REGION_END        // reset all hardening bits
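A brief software model may help make the set/reset/push/pop semantics concrete. The C sketch below is a minimal, non-authoritative model of the hardening mode register described above (the bit assignments, stack depth, and helper names are assumptions; the hardware would implement this in circuitry, not software):

#include <stdint.h>
#include <stdio.h>

/* Software model of hardening mode register semantics (a sketch, not the
 * hardware): SET ORs mode bits in, RESET clears them, and PUSH/POP save
 * and restore the full register to support nested hardening levels. */
#define MAX_NESTING 16

static uint64_t harden_reg = 0;          /* modeled hardening mode register */
static uint64_t saved[MAX_NESTING];
static int depth = 0;

static void harden_set(uint64_t mode_bits)   { harden_reg |= mode_bits; }
static void harden_reset(uint64_t mode_bits) { harden_reg &= ~mode_bits; }

static void harden_push(uint64_t mode_bits) {
    if (depth < MAX_NESTING) saved[depth++] = harden_reg;
    harden_reg |= mode_bits;
}

static void harden_pop(void) {           /* revert to the pre-PUSH value */
    if (depth > 0) harden_reg = saved[--depth];
}

int main(void) {
    harden_set(0x3);    /* assumed bits: harden loads and stores */
    harden_push(0x4);   /* nested region additionally hardens branches */
    printf("inner region: %#llx\n", (unsigned long long)harden_reg);
    harden_pop();       /* back to loads+stores only */
    printf("outer region: %#llx\n", (unsigned long long)harden_reg);
    harden_reset(0x3);
    return 0;
}

Figure 1C illustrates a method 180 of configuring SV mitigation mechanisms (e.g., execution circuitry 116) using one or more instructions (e.g., invoked by system SW 140 and/or received/decoded by instruction decoder 114) according to embodiments. In 181, a first invocation of a single instruction to mitigate vulnerability to a speculative execution attack is decoded. In 182, in response to the first invocation of the single instruction, one or more micro-architectural structures in the processor are hardened.

In 183, another instruction (e.g., a load instruction, store instruction, branch instruction, instruction to use content (e.g., data, flags, etc.) of a register, etc.) may be decoded. The processor may be designed to execute the decoded instruction by performing one or more operations, which may include a first operation that does not leave a side channel (e.g., changes of the state of the microarchitecture that remain after the speculation window closes and are software observable (e.g., effects that can be measured via software methods) or other persistent observable side effects) and/or a second operation that, if performed (e.g., speculatively), would leave a side channel.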
The second operation may be included in the execution of the instruction to improve performance, in some cases only to improve performance.

In 184, in response to the other instruction, the first operation is performed and/or the second operation (because of the hardening applied in 182) is prevented. In some embodiments, the second operation may be delayed until it would no longer leave a side channel.

In 185, a second invocation of the single instruction may be decoded. In 186, in response to the second invocation of the single instruction, the hardening of the one or more micro-architectural structures may be relaxed.

The single instruction may indicate one or more conditions under which the one or more micro-architectural structures are to be hardened, one or more micro-architectural structures to be hardened, and/or a hardening mode vector including a plurality of fields, each field corresponding to one of a plurality of hardening mechanisms. The hardening may include preventing changes to a cache, a buffer, or a register.

In various embodiments, the single instruction and/or an invocation of the single instruction (e.g., as may be indicated by leaves, operands, parameters, etc. of the single instruction) may be or correspond to a load hardening instruction, a store hardening instruction, a branch hardening instruction, or a register hardening instruction, each as described below.

In embodiments, mechanisms to harden microarchitecture against SVs may include any one or more or any combination of any known and/or novel (examples of which may be described below) hardening mechanisms, including but not limited to load hardening, store hardening, branch hardening, and register hardening. The terms "harden" and "hardening" may be used to refer to changing a microarchitectural structure in some way, for example, to prevent it from performing or allowing particular operations, some of which may be associated with instructions. Therefore, for convenience, the terms "harden" and "hardening" may also be used to refer to operations and instructions, to mean that these operations and instructions are impacted by the hardening of a micro-architectural structure.

In embodiments, load hardening may include determining, predicting, specifying, indicating, etc. which loads to harden, under what conditions loads are to be hardened (and/or hardening is to be removed/relaxed), what type/technique of load hardening is to be performed, etc.
For example, loads may be hardened by not allowing speculative load instructions to execute and/or not allowing speculative load operations to proceed; by allowing speculative load instructions to execute and/or allowing speculative load operations to proceed but not allowing the loaded data to be forwarded; by allowing speculative load instructions to execute and/or speculative load operations to proceed but not allowing the loaded data to be forwarded to dependent instructions/operations; etc., until the load is known or presumed to be safe (e.g., known to be on the correct, no longer speculative, execution path).

In embodiments, hardware (e.g., SV detection HW 150 as described above) may determine or predict a type or category of attack, and software (e.g., system SW 140, as described above, using an SV hardening instruction as described above) may choose a type or category of load hardening based on the information from the hardware.

For example, hardware may predict a Spectre v1 attack, and, in response, software may choose one of the following load hardening mechanisms: do not allow loads to execute/proceed; allow loads to execute/proceed but do not allow them to leave a side channel based on the data returned; do not allow instructions dependent on the loaded data to leave a side channel (e.g., by not allocating cache lines or by not executing); etc. Conditions for removing/relaxing the load hardening, by hardware and/or as specified by software, may include any one of or combination of: when the load is no longer speculative due to older branches (conditional, indirect, implicit, etc.), at retirement of the load instruction, when specific older instructions/operations have completed execution or are retired (e.g., only block-listed/non-safe-listed branches or block-listed/non-safe-listed conditional branches), etc.

As another example, in response to hardware predicting a Spectre v2 attack, conditions for removing/relaxing the load hardening may include when indirect branches have completed execution or are retired.

As another example, in response to hardware predicting a Spectre v4 attack, software may choose a load hardening mechanism in which a load is prevented from bypassing an older unknown, incomplete, or unretired store.

As another example, a mechanism for transient load value hardening may include preventing a load from returning speculative data due to a speculative store bypass, memory renaming, and/or other value speculation schemes.

As another example, a mechanism for data oblivious load hardening may include preventing the latency of the load from depending on the value being returned.

In embodiments, store hardening may include determining, predicting, specifying, indicating, etc. which stores to harden, under what conditions stores are to be hardened (and/or hardening is to be removed/relaxed), what type/technique of store hardening is to be performed, etc.
For example, stores may be hardened by not allowing speculative store instructions to execute and/or not allowing speculative store operations to proceed until the store is known or presumed to be safe (e.g., known to be on the correct, no longer speculative, execution path).

In embodiments, hardware (e.g., SV detection HW 150 as described above) may determine or predict a type or category of attack, and software (e.g., system SW 140, as described above, using an SV hardening instruction as described above) may choose a type or category of store hardening based on the information from the hardware.

For example, hardware may predict a Spectre v1 attack, and, in response, software may choose one of the following store hardening mechanisms: do not allow stores to execute; allow stores to execute but do not allow them to leave a side channel based on the data stored; do not allow instructions dependent on data from store-to-load forwarding to leave a side channel (e.g., by not allocating cache lines or by not executing); etc. Conditions for removing/relaxing the store hardening, by hardware and/or as specified by software, may include any one of or combination of: when the store is no longer speculative due to older branches (conditional, indirect, implicit, etc.), at retirement of the store instruction, when specific older operations have completed execution (e.g., only block-listed/non-safe-listed branches or block-listed/non-safe-listed conditional branches), etc.

As another example, in response to hardware predicting a Spectre v4 attack, software may choose a store hardening mechanism in which younger loads are prevented from bypassing a store.

As another example, a mechanism for data oblivious store hardening may include preventing the latency of the store from depending on the value being stored.

In embodiments, branch hardening may include determining, predicting, specifying, indicating, etc. which branches to harden, under what conditions branches are to be hardened (and/or hardening is to be removed/relaxed), what type/technique of branch hardening is to be performed, etc. For example, branches may be hardened by not allowing speculative branch instructions to execute and/or not allowing speculative branch operations to proceed, not allowing branch prediction (e.g., instead, stalling or mispredicting to a known safe location), hardening loads (e.g., as described above) in the shadow of the branch, delaying branch prediction until retirement, checking for a branch termination instruction (e.g., ENDBRANCH), etc., until the branch is known or presumed to be safe (e.g., known to be on the correct, no longer speculative, execution path).

In embodiments, hardware (e.g., SV detection HW 150 as described above) may determine or predict a type or category of attack, and software (e.g., system SW 140, as described above, using an SV hardening instruction as described above) may choose a type or category of branch and/or load hardening based on the information from the hardware.

For example, hardware may predict a Spectre v1 or v2 attack, and, in response, software may choose a load hardening mechanism (e.g., as described above) for all loads in the shadow of the branch and/or not lifting restrictions set by harden operations younger than the branch or branch condition until the branch is determined to be safe/correct.
In embodiments, register hardening may include determining, predicting, specifying, indicating, etc. which registers to harden, under what conditions registers are to be hardened (and/or hardening is to be removed/relaxed), what type/technique of register hardening is to be performed, etc. In embodiments, the hardening may be applied to the output register and/or the flags of an instruction.

For example, registers may be hardened by fencing a register; not allowing speculative instructions that load a register to execute and/or not allowing speculative operations that load a register to proceed; not allowing speculative instructions that use the content of a register to execute and/or not allowing speculative operations that use the content of a register to proceed; not performing or allowing data forwarding from a register to data dependent operations; or not allowing instructions dependent on the register or flags to leave a side channel (e.g., by not allocating cache lines or not executing), etc., until the content of the register is known or presumed to be safe (e.g., known to be based on the correct, no longer speculative, execution path). Conditions for removing/relaxing the register hardening, by hardware and/or as specified by software, may include any one of or combination of: when the corresponding register instruction is no longer speculative due to older branches (conditional, indirect, implicit, etc.) or some other hardware predictor; at retirement of the corresponding register instruction; when specific older instructions/operations have completed execution (e.g., only block-listed/non-safe-listed branches or block-listed/non-safe-listed conditional branches); when a flag or condition specified by the corresponding register instruction evaluates to true (the fence operation may modify the content of the register if a flag and condition are specified and evaluate to false); etc.

In embodiments, hardware (e.g., SV detection HW 150 as described above) may determine or predict a type or category of attack, and software (e.g., system SW 140, as described above, using an SV hardening instruction as described above) may choose a type or category of register hardening based on the information from the hardware.

As an example, a mechanism for data oblivious register hardening may include preventing the latency of operations from depending on a value in a register.

Various embodiments may include other approaches to and/or techniques for SV mitigation, including, but not limited to, the following (each as may be defined/described below): data tainting and tracking, segmentation-based protections, access distancing, and hybrid-key-based web browsing.

In embodiments, data tainting and tracking may include the capability for software (e.g., system SW 140), using one or more instructions, mode bits within or associated with one or more instructions, segment selectors associated with one or more instructions, values in registers associated with one or more instructions, prefixes or suffixes associated with one or more instructions, etc., to mark data that might be (e.g., based on information from SV detection HW 150) controlled by an attacker. Such marking may be referred to as tainting and/or such data may be referred to as tainted (and data not so marked may be referred to as untainted).

In embodiments, tainted data may be tracked by hardware. For example, the data itself may be marked by including in it one or more extra bits to mark it as tainted.
As another example, a record or list may be kept or maintained to indicate registers, memory locations, or other storage locations (e.g., by address or otherwise) into which tainted data has been loaded or stored.

In embodiments, operations using tainted data may be prevented from being performed speculatively, operations using tainted data may be allowed to be performed non-speculatively, and/or operations using untainted data may be allowed to be performed speculatively and non-speculatively. For example, a speculative load from a memory address may be allowed to proceed if the address is an untainted address (i.e., not marked as tainted data) but prevented from proceeding if the address is a tainted address (i.e., marked as tainted data).

Figure 1D illustrates a method 190 of data tainting for SV mitigation according to embodiments. In 191, vulnerability to a speculative execution attack is detected (e.g., by SV detection HW 150). In 192, in connection with detection of vulnerability to a speculative execution attack, an indication that data from a first operation is tainted is provided (e.g., by SV detection HW 150 to system SW 140). In 193, the data is marked as to be tracked (e.g., marked by SV detection HW 150 for tracking by HW 110) and/or as tainted (e.g., in response to decoding an instruction from system SW 140). In 194, performance of a second operation using the data is prevented (e.g., by SV mitigation HW 130) if the second operation is to be performed speculatively and the data is tainted. In 195, the second operation is performed if or when performance is or becomes non-speculative or the data is or becomes untainted.
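As a minimal, non-authoritative sketch of the taint tracking just described (the per-register shadow bits, register count, and helper functions are assumptions for illustration; real tracking would be performed by hardware as described above):

#include <stdbool.h>
#include <stdio.h>

/* Software model: one shadow taint bit per architectural register.
 * Operations propagate taint from sources to destination, and a
 * speculative load is blocked when its address register is tainted. */
#define NUM_REGS 16

static bool tainted[NUM_REGS];

static void load_from_attacker(int dst) { tainted[dst] = true; }

/* dst <- f(src1, src2): taint propagates to the destination. */
static void alu_op(int dst, int src1, int src2) {
    tainted[dst] = tainted[src1] || tainted[src2];
}

/* Returns true if a load addressed by register addr_reg may proceed. */
static bool may_execute_load(int addr_reg, bool speculative) {
    if (speculative && tainted[addr_reg])
        return false;                 /* prevented while speculative */
    return true;                      /* allowed non-speculatively */
}

int main(void) {
    load_from_attacker(1);            /* r1 holds attacker-controlled data */
    alu_op(2, 1, 3);                  /* r2 inherits taint from r1 */
    printf("speculative load via r2 allowed? %d\n", may_execute_load(2, true));
    printf("non-speculative load via r2 allowed? %d\n", may_execute_load(2, false));
    return 0;
}

In embodiments, segmentation-based protections may include a novel programming language construct that provides for particular regions of code to access particular segments (or ranges, regions, etc.) of memory with protection from SVs. In embodiments, the protected segments may be used to store data structures, their fields, program variables, etc. for particular programs. In embodiments, the programming language construct may also allow for specifying access permissions.

In embodiments, the programming language construct may be compiled to use instructions to access the memory in the segment with protection checks in place. These instructions may be novel instructions and/or instructions (e.g., that read, write, or modify the memory segment) with or associated with mode bits, segment selectors, values in registers, prefixes or suffixes, etc. to specify the protections and/or access permissions. In embodiments, these instructions may be instructions that are executed with the specified access checking performed automatically.

In embodiments, the programming language construct and novel instructions may be supported by hardware that executes the code while protecting the segment from intrusion, including speculative side channel attacks (e.g., using any known or novel (as may be described in this specification) SV mitigation techniques).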
In embodiments, implementation of the hardware may provide for the instructions to be performed without explicit loads and checks of the segment bounds.

For example, the programming language construct may be of the form (where "GiveAccess" represents a name/label/mnemonic of/for the instruction/construct, "Base=CodeBegin" is to indicate/specify the start of the code, "CodeLen" is to indicate/specify the length/range of the code, "MemBegin" is to indicate/specify the start (e.g., an address) of the corresponding memory segment, "MemLen" is to indicate/specify the length of the corresponding memory segment, and "AccessType" is to indicate/specify the permissions):

GiveAccess Base=CodeBegin, CodeLen, MemBegin, MemLen, AccessType

In embodiments, the specified code region may include a table having a number of different buffers that it may access. The buffers may be embedded in the table, for example (where "Num of buffs" corresponds to the number of buffers, including a first buffer starting at "Start_1" and having a length/range indicated/specified by "Len_1" and permissions indicated/specified by "AccessType_1," and so on):

Num of buffs
Start_1, Len_1, AccessType_1
Start_2, Len_2, AccessType_2
...
Start_n, Len_n, AccessType_n

In embodiments, the code within the specified region may access the memory buffers with an index to the table and an index within the corresponding buffer.

In embodiments, just-in-time (JIT) compilers may dynamically check for availability of the construct and generate code accordingly, and static compilers may generate a version of code that uses the construct and another version that does not.

In embodiments, access distancing may include refactoring software programs, applications, libraries, modules, components, functions, procedures, blocks, and/or other forms of software and/or program code, etc. (where the term "code" may be used to mean software in any such form) to limit the impact of intrusion by reducing the attack surface. Embodiments may provide for the safety of code to be increased by reducing and/or redirecting one or more interactions and communications by a component (where the term "component" may be used to mean the code or any portion or subset of the code) and/or between/among components so that fewer components are exposed to a vulnerable or faulty component. Embodiments may include automated creation of an access graph of code and automated refactoring to a more restrictive access topology. Embodiments may use hardware- or software-based telemetry to guide the refactoring.

In embodiments, telemetry data may be collected when code is executed to provide a memory access topology diagram of the code, revealing the interactions and communications between different modules and what data is touched by different execution paths. In embodiments, such information and/or related information may also or instead be gathered by profiling the code when it is compiled.

In embodiments, a software development advisor tool may use the memory access topology diagram to reduce the attack surface by refactoring the code. Figure 2A illustrates a simple example.

In Figure 2A, a memory access topology diagram 200 created according to an embodiment may reveal that a module P (210) is used by three functions: F (222), G (224), and H (226). Module P has three data structures: S1 (232), S2 (234), and Sn (236), that are accessed by its code. For providing service to the calls from F, G, and H, the functions f1 (242), f2 (244), and fn (246) are executed correspondingly.
As is, all the mentioned data structures can be accessed and modified by each one of functions f1, f2, and fn. However, in reality, only S1 might be needed for f1, S2 for f2, and Sn for fn. If the code of f2 can be attacked, as is, that can impact S1, S2, and Sn. However, a software development advisor tool according to an embodiment may analyze the access patterns, realize this fact, and transform the code on the left side to the code on the right side. Thus, in this example, an embodiment reduces the attack surface of the code from a size of 3*(S1+S2+Sn) to a size of S1+S2+Sn, which is one third of that of the original code.

In the example of Figure 2A, a full isolation of the functions is performed by closing and specializing the module P for its three different callers (F, G, H), which might not always be possible for other code. However, in embodiments, similar transformations may group different parts of code, including modules and functions, to reduce the attack surface.

Hardware 250 for access distancing, as shown in Figure 2B, according to embodiments, may include one or more processor cores 252 to execute code, and memory access circuitry 254 to access a memory in connection with execution of the code. One or more of the one or more processor cores 252 is also to generate a memory access topology diagram of the code to determine a first attackable surface of the code (e.g., as described above); and refactor the code based on the memory access topology diagram to generate refactored code, the refactored code to have a second attackable surface smaller than the first attackable surface (e.g., as described above).

A method 260 of access distancing according to embodiments is shown in Figure 2C. In 262, code is executed.

In 264, a data access profile of the code is collected (e.g., as described above). Collecting the data access profile may be performed statically or dynamically, by executing the code in its use scenarios (e.g., using telemetry hardware). In various embodiments, collecting the data access profile may be performed/implemented by/in hardware, firmware, software, and/or any combination of hardware, firmware, and software.

In 266, a memory access topology diagram is generated (e.g., as described above) based on the data access profile. In various embodiments, generating the memory access topology diagram may be performed/implemented by/in hardware, firmware, software, and/or any combination of hardware, firmware, and software.

In 268, the code is refactored (e.g., as described above). Refactoring the code may be performed by a software development advisor tool that uses the profile information and the code to create a model for calculating the attack surface, then transforming the code to reduce the attack surface based on the model. In embodiments, the transforming may include cloning and/or specialization of procedures to provide for reducing interactions and communications. In embodiments, the method may be iterative, and the advisor tool may learn from new telemetry data from transformed code. In various embodiments, refactoring, including the advisor tool, may be performed/implemented by/in hardware, firmware, software, and/or any combination of hardware, firmware, and software.

In embodiments, refactoring may be performed statically or dynamically. For example, a JIT or managed runtime could dynamically profile code and then specialize it on the fly to perform fine-grained compartmentalization.
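As a hypothetical sketch of such specialization (the structure and function names are made up; this mirrors the Figure 2A transformation, not any particular tool's output), the per-caller clone can reach only the data it needs:

#include <stdio.h>

/* Before: one module holds S1, S2, and Sn, and every entry point can
 * reach all of them, so compromising f2 exposes every structure. */
struct module_p { int s1, s2, sn; };

static void f2_before(struct module_p *p) { p->s2++; /* could also touch s1, sn */ }

/* After: the module is specialized for caller G, so its clone of f2
 * can reach only S2; a compromise of f2 no longer exposes S1 or Sn. */
struct module_p_for_g { int s2; };

static void f2_after(struct module_p_for_g *p) { p->s2++; }

int main(void) {
    struct module_p before = {0, 0, 0};
    struct module_p_for_g after = {0};
    f2_before(&before);
    f2_after(&after);
    printf("s2 before=%d, s2 after=%d\n", before.s2, after.s2);
    return 0;
}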
An optimizing JIT may have a series of "gears," wherein it shifts to a higher, more aggressively optimized specialization of a function in response to learning that the function has a high (e.g., at or above a fixed or variable threshold) frequency of use and/or many (e.g., at or above a fixed or variable threshold) interactions/communications. Permissions of a function may be locked down (e.g., by or based on information from a profiler) after sufficient (e.g., at or above a fixed or variable threshold) knowledge regarding its use, boundaries, interactions, communication, etc. has been collected and/or analyzed.

In embodiments, the security of and/or the efficiency of securing web browsing, website usage, web application usage, etc. may be increased and SV may be mitigated by protecting memory with hybrid keys based on public keys and process identifiers (IDs). For example, embodiments may be used to protect data, executable content, and code generation such as JIT code/bytecode and its generation, compiled/pre-generated code/bytecode and its generation, web application (e.g., progressive web application or PWA) content, etc.

Usages of embodiments may be desired because they may be more compatible with existing approaches to web security (e.g., public-key/private-key encryption) and more efficient than existing approaches to web security (e.g., process isolation). For example, embodiments may provide for public-key-based web applications to use a combined memory security policy that allows groupings of processes (e.g., based on groups of webpages) to use shared memory, instead of isolating all processes (e.g., each individual webpage) from each other.

Aspects of some embodiments are illustrated in Figure 3A. Figure 3A shows system 300 including and/or capable of receiving a number of public keys 312 and a number of process IDs 314. Each public key 312 may be obtained, for example, from a corresponding website and/or website certificate and/or be used for site isolation and secure internet communication. Each process ID 314 may correspond to (e.g., be generated to identify) a process, such as a website or browser process, where a "process" may include a process, a task, a software thread, an application, a virtual machine, a container, etc.

Any combination of one or more public keys 312 and one or more process IDs 314 may be used by hybrid key generator 310 to generate one or more hybrid keys 316. For example, a first public key from a first website and a first and a second process ID may be used to generate a first hybrid key, a second public key from a second website and a third and a fourth process ID may be used to generate a second hybrid key, and so on.

In embodiments, hybrid key generator 310 may include hardware such as circuitry to generate and/or combine cryptographic keys, such as but not limited to one or more shift registers (e.g., linear feedback shift registers), modular exponentiation circuitry, elliptic curve cryptography circuitry, arithmetic operation circuitry, logic operation circuitry, etc. In embodiments, hybrid key generator 310 may use inputs in addition to public keys and process IDs to generate keys. These inputs may include random numbers, pseudo-random numbers, and/or private keys of system 300 and/or a processor/core in system 300 (e.g., generated by a random number generator, generated by a physical unclonable function, stored in fuses, etc.).

Each such hybrid key 316 may be used by hybrid-key-based memory protection hardware 320 to protect memory 330.
For example, memory protection hardware 320 may protect one or more memory spaces using a single hybrid key 316. Each memory space may include and/or correspond to one or more memory ranges, regions, or portions of memory 330 (e.g., defined by address ranges, where the addresses may be physical addresses, virtual addresses, host addresses, guest addresses, etc.). Memory protection hardware 320 may use a single hybrid key 316 to protect memory spaces according to any memory protection technique, such as using the single hybrid key 316 to encrypt and decrypt data as it is stored in and loaded from memory 330, using the single hybrid key 316 to control access to memory 330 based on range registers, etc. Furthermore, hybrid-key-based memory protection hardware 320 may use multiple hybrid keys 316, each to protect one or more corresponding spaces, ranges, or regions of memory 330.

In embodiments, memory 330 may represent system memory (e.g., dynamic random-access memory), local memory (e.g., static random-access memory on the same substrate, chip, or die, or within the same package as a processor or processor core executing processes using the memory), or a combination of system and local memory. Memory 330 may store/cache content, data, code, etc. from/for any number of processes (e.g., website processes, browser processes, etc.). In embodiments, access to spaces in memory 330 may be provided and/or controlled through a memory access structure 332, which may include hardware, circuitry, and/or storage to generate, store, and/or reference one or more memory pointers, memory addresses, memory address ranges, or memory address translation/page/paging tables or structures, which may prevent, restrict, limit, and/or otherwise control access based on (e.g., access may require) a corresponding hybrid key 316. For example, access to each web/browser process's content, data, code, etc. in memory 330 through a heap memory pointer structure may require a corresponding hybrid key 316.

In embodiments, memory access structure 332 may represent a single structure to control access to a single memory space, a single structure to control access to multiple spaces, multiple structures wherein each structure is to control access to a corresponding one of multiple spaces, a distributed structure including multiple single structures (e.g., one per memory space to provide/perform generation, storage, referencing, etc. unique to each memory space associated with a particular hybrid key 316) and a shared structure (e.g., to provide/perform generation, storage, referencing, etc. common to all of the memory spaces associated with the particular hybrid key 316), etc.

In embodiments, any number of processes may share a hybrid key (e.g., generated based on a single public key and any number of process IDs) and therefore share memory space(s) in memory 330. Furthermore, memory 330 may also be used to store memory spaces protected with process IDs for individual processes (including those based on websites/browsing and those not based on websites/browsing) according to any known approach.

In embodiments, pre-compiled binaries used in JIT code such as built-ins, as well as JIT code compiled at runtime by a virtual machine (VM) that is converted to a bytecode (e.g., abstract syntax tree (AST) bytecode), content used in web applications (e.g., JavaScript text code, WebAssembly bytecode, Cascading Style Sheets (CSS)), and binary images (e.g., an executable file) may be associated with a hybrid key.
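To make the key derivation concrete, a minimal sketch follows (the FNV-1a-style mixer is a stand-in for whatever cryptographic combiner hybrid key generator 310 would actually implement; the key width, inputs, and names are assumptions for illustration only):

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Illustrative hybrid key derivation: bind one public key to a group of
 * process IDs so the group can share protected memory. The mixer below
 * is NOT cryptographic; it only shows the combining of inputs. */
static uint64_t mix(uint64_t h, const void *data, size_t len) {
    const unsigned char *p = data;
    for (size_t i = 0; i < len; i++) {
        h ^= p[i];
        h *= 0x100000001b3ULL;            /* FNV-1a prime */
    }
    return h;
}

static uint64_t hybrid_key(const unsigned char *pub_key, size_t pub_len,
                           const uint32_t *pids, size_t npids) {
    uint64_t h = 0xcbf29ce484222325ULL;   /* FNV offset basis */
    h = mix(h, pub_key, pub_len);
    h = mix(h, pids, npids * sizeof(*pids));
    return h;
}

int main(void) {
    unsigned char pub[] = { 0x04, 0xA1, 0xB2, 0xC3 };  /* truncated example key */
    uint32_t group[] = { 1201, 1202 };    /* two hypothetical browser process IDs */
    printf("hybrid key: %#018llx\n",
           (unsigned long long)hybrid_key(pub, sizeof(pub), group, 2));
    return 0;
}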
Embodiments may provide for grouping rights of applications, functional processes, and content providers and allow grouped processes to share memory.

Figure 3B illustrates method 350 of protecting memory using hybrid keys according to embodiments. In 352, a public key may be received from a website. In 354, a hybrid key based on the public key and one or more process identifiers is generated (e.g., by hybrid key generator 310). Each of the process identifiers may correspond to one or more memory spaces in a memory.

In 356, the hybrid key is associated (e.g., by memory protection hardware 320) with each of multiple memory access structures, each of the memory access structures to control access to a corresponding one of the memory spaces.

In 358, the hybrid key is used (e.g., by memory protection hardware 320 and/or memory access structure(s) 332) to control access to one or more of the memory spaces. For example, the hybrid key may be used to allow a first group of web browser processes to access a first group of memory spaces and to prevent access by a process that is not in the group.

ADDITIONAL DESCRIPTION

Described below are mechanisms, including instruction sets, to support systems, processors, emulation, etc. according to embodiments. For example, what is described below details aspects of instruction formats and instruction execution, including various pipeline stages such as fetch, decode, schedule, execute, retire, etc., that may be used in a core according to embodiments.

Different figures may show corresponding aspects of embodiments. For example, any and/or all of the blocks in Figure 1A may correspond to blocks in other figures. Furthermore, a block representing hardware in Figure 1A may correspond to a block representing hardware in any of the other figures, such as in a block diagram of a system according to an embodiment. As such, an embodiment represented by that system-level block diagram may include any of the blocks shown in other figures as well as any of the details in the descriptions of those other figures. The same is true for figures depicting a core, a multicore processor, a system on a chip (SoC), etc.

INSTRUCTION SETS

An instruction set may include one or more instruction formats. A given instruction format may define various fields (e.g., number of bits, location of bits) to specify, among other things, the operation to be performed (e.g., opcode) and the operand(s) on which that operation is to be performed and/or other data field(s) (e.g., mask). Some instruction formats are further broken down through the definition of instruction templates (or subformats). For example, the instruction templates of a given instruction format may be defined to have different subsets of the instruction format's fields (the included fields are typically in the same order, but at least some have different bit positions because there are fewer fields included) and/or defined to have a given field interpreted differently. Thus, each instruction of an ISA is expressed using a given instruction format (and, if defined, in a given one of the instruction templates of that instruction format) and includes fields for specifying the operation and the operands.
For example, an exemplary ADD instruction has a specific opcode and an instruction format that includes an opcode field to specify that opcode and operand fields to select operands (source1/destination and source2); and an occurrence of this ADD instruction in an instruction stream may have specific contents in the operand fields that select specific operands. A set of single instruction multiple data (SIMD) extensions referred to as the Advanced Vector Extensions (AVX) (AVX1 and AVX2) and using the Vector Extensions (VEX) coding scheme has been released and/or published (e.g., see Intel® 64 and IA-32 Architectures Software Developer's Manual, September 2014; and see Intel® Advanced Vector Extensions Programming Reference, October 2014).

EXEMPLARY INSTRUCTION FORMATS

Embodiments of the instruction(s) described herein may be embodied in different formats. Additionally, exemplary systems, architectures, and pipelines are detailed below. Embodiments of the instruction(s) may be executed on such systems, architectures, and pipelines, but are not limited to those detailed.

Generic Vector Friendly Instruction Format

A vector friendly instruction format is an instruction format that is suited for vector instructions (e.g., there are certain fields specific to vector operations). While embodiments are described in which both vector and scalar operations are supported through the vector friendly instruction format, alternative embodiments use only vector operations with the vector friendly instruction format.

Figures 4A-4B are block diagrams illustrating a generic vector friendly instruction format and instruction templates thereof according to embodiments. Figure 4A is a block diagram illustrating a generic vector friendly instruction format and class A instruction templates thereof according to embodiments; while Figure 4B is a block diagram illustrating the generic vector friendly instruction format and class B instruction templates thereof according to embodiments. Specifically, a generic vector friendly instruction format 1100 is shown, for which are defined class A and class B instruction templates, both of which include no memory access 1105 instruction templates and memory access 1120 instruction templates.
EXEMPLARY INSTRUCTION FORMATS

Embodiments of the instruction(s) described herein may be embodied in different formats. Additionally, exemplary systems, architectures, and pipelines are detailed below. Embodiments of the instruction(s) may be executed on such systems, architectures, and pipelines, but are not limited to those detailed.

Generic Vector Friendly Instruction Format

A vector friendly instruction format is an instruction format that is suited for vector instructions (e.g., there are certain fields specific to vector operations). While embodiments are described in which both vector and scalar operations are supported through the vector friendly instruction format, alternative embodiments use only vector operations with the vector friendly instruction format.

Figures 4A-4B are block diagrams illustrating a generic vector friendly instruction format and instruction templates thereof according to embodiments. Figure 4A is a block diagram illustrating a generic vector friendly instruction format and class A instruction templates thereof according to embodiments, while Figure 4B is a block diagram illustrating the generic vector friendly instruction format and class B instruction templates thereof according to embodiments. Specifically, a generic vector friendly instruction format 1100 is shown, for which are defined class A and class B instruction templates, both of which include no memory access 1105 instruction templates and memory access 1120 instruction templates. The term generic in the context of the vector friendly instruction format refers to the instruction format not being tied to any specific instruction set.

While embodiments will be described in which the vector friendly instruction format supports the following: a 64 byte vector operand length (or size) with 32 bit (4 byte) or 64 bit (8 byte) data element widths (or sizes) (and thus, a 64 byte vector consists of either 16 doubleword-size elements or, alternatively, 8 quadword-size elements); a 64 byte vector operand length (or size) with 16 bit (2 byte) or 8 bit (1 byte) data element widths (or sizes); a 32 byte vector operand length (or size) with 32 bit (4 byte), 64 bit (8 byte), 16 bit (2 byte), or 8 bit (1 byte) data element widths (or sizes); and a 16 byte vector operand length (or size) with 32 bit (4 byte), 64 bit (8 byte), 16 bit (2 byte), or 8 bit (1 byte) data element widths (or sizes); alternative embodiments may support more, fewer, and/or different vector operand sizes (e.g., 256 byte vector operands) with more, fewer, or different data element widths (e.g., 128 bit (16 byte) data element widths).
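The operand-size combinations above reduce to simple arithmetic; the following sketch (illustrative only) computes how many data elements a given vector operand holds:

```python
def element_count(vector_bytes: int, element_bits: int) -> int:
    """Number of data elements in a vector operand of the given size."""
    return vector_bytes * 8 // element_bits

# The combinations described above, e.g. a 64 byte vector holds either
# 16 doubleword (32-bit) elements or 8 quadword (64-bit) elements:
assert element_count(64, 32) == 16
assert element_count(64, 64) == 8
assert element_count(32, 16) == 16
assert element_count(16, 8) == 16
```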
The class A instruction templates in Figure 4A include: 1) within the no memory access 1105 instruction templates there is shown a no memory access, full round control type operation 1110 instruction template and a no memory access, data transform type operation 1115 instruction template; and 2) within the memory access 1120 instruction templates there is shown a memory access, temporal 1125 instruction template and a memory access, non-temporal 1130 instruction template. The class B instruction templates in Figure 4B include: 1) within the no memory access 1105 instruction templates there is shown a no memory access, write mask control, partial round control type operation 1112 instruction template and a no memory access, write mask control, vsize type operation 1117 instruction template; and 2) within the memory access 1120 instruction templates there is shown a memory access, write mask control 1127 instruction template.

The generic vector friendly instruction format 1100 includes the following fields listed below in the order illustrated in Figures 4A-4B.

Format field 1140 - a specific value (an instruction format identifier value) in this field uniquely identifies the vector friendly instruction format, and thus occurrences of instructions in the vector friendly instruction format in instruction streams. As such, this field is optional in the sense that it is not needed for an instruction set that has only the generic vector friendly instruction format.

Base operation field 1142 - its content distinguishes different base operations.

Register index field 1144 - its content, directly or through address generation, specifies the locations of the source and destination operands, be they in registers or in memory. These include a sufficient number of bits to select N registers from a PxQ (e.g., 32x512, 16x128, 32x1024, 64x1024) register file. While in one embodiment N may be up to three sources and one destination register, alternative embodiments may support more or fewer sources and destination registers (e.g., may support up to two sources where one of these sources also acts as the destination, may support up to three sources where one of these sources also acts as the destination, may support up to two sources and one destination).

Modifier field 1146 - its content distinguishes occurrences of instructions in the generic vector instruction format that specify memory access from those that do not; that is, between no memory access 1105 instruction templates and memory access 1120 instruction templates. Memory access operations read from and/or write to the memory hierarchy (in some cases specifying the source and/or destination addresses using values in registers), while non-memory access operations do not (e.g., the source and destinations are registers). While in one embodiment this field also selects between three different ways to perform memory address calculations, alternative embodiments may support more, fewer, or different ways to perform memory address calculations.

Augmentation operation field 1150 - its content distinguishes which one of a variety of different operations is to be performed in addition to the base operation. This field is context specific. In one embodiment, this field is divided into a class field 1168, an alpha field 1152, and a beta field 1154. The augmentation operation field 1150 allows common groups of operations to be performed in a single instruction rather than 2, 3, or 4 instructions.

Scale field 1160 - its content allows for the scaling of the index field's content for memory address generation (e.g., for address generation that uses 2^scale × index + base).

Displacement field 1162A - its content is used as part of memory address generation (e.g., for address generation that uses 2^scale × index + base + displacement).

Displacement factor field 1162B (note that the juxtaposition of displacement field 1162A directly over displacement factor field 1162B indicates one or the other is used) - its content is used as part of address generation; it specifies a displacement factor that is to be scaled by the size of a memory access (N) - where N is the number of bytes in the memory access (e.g., for address generation that uses 2^scale × index + base + scaled displacement). Redundant low-order bits are ignored and hence, the displacement factor field's content is multiplied by the memory operand's total size (N) in order to generate the final displacement to be used in calculating an effective address. The value of N is determined by the processor hardware at runtime based on the full opcode field 1174 (described later herein) and the data manipulation field 1154C. The displacement field 1162A and the displacement factor field 1162B are optional in the sense that they are not used for the no memory access 1105 instruction templates and/or different embodiments may implement only one or none of the two.
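A sketch of the address generation that the scale and displacement fields feed into (a simplified model; real address generation also involves the register and modifier fields described above, segmentation, and address-size truncation):

```python
def effective_address(base: int, index: int, scale: int,
                      displacement: int = 0) -> int:
    """Implements 2^scale * index + base + displacement as described above."""
    return (index << scale) + base + displacement

def scaled_displacement(disp_factor: int, access_bytes: int) -> int:
    """A displacement factor is multiplied by the memory access size N, so
    redundant low-order bits never need to be encoded."""
    return disp_factor * access_bytes

ea = effective_address(base=0x1000, index=3, scale=2,
                       displacement=scaled_displacement(2, 64))
assert ea == 0x1000 + 12 + 128   # 3 << 2 == 12; 2 * 64 == 128
```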
Data element width field 1164 - its content distinguishes which one of a number of data element widths is to be used (in some embodiments for all instructions; in other embodiments for only some of the instructions). This field is optional in the sense that it is not needed if only one data element width is supported and/or data element widths are supported using some aspect of the opcodes.

Write mask field 1170 - its content controls, on a per data element position basis, whether that data element position in the destination vector operand reflects the result of the base operation and augmentation operation. Class A instruction templates support merging-write-masking, while class B instruction templates support both merging- and zeroing-write-masking. When merging, vector masks allow any set of elements in the destination to be protected from updates during the execution of any operation (specified by the base operation and the augmentation operation); in one embodiment, preserving the old value of each element of the destination where the corresponding mask bit has a 0. In contrast, when zeroing, vector masks allow any set of elements in the destination to be zeroed during the execution of any operation (specified by the base operation and the augmentation operation); in one embodiment, an element of the destination is set to 0 when the corresponding mask bit has a 0 value. A subset of this functionality is the ability to control the vector length of the operation being performed (that is, the span of elements being modified, from the first to the last one); however, it is not necessary that the elements that are modified be consecutive. Thus, the write mask field 1170 allows for partial vector operations, including loads, stores, arithmetic, logical, etc. While embodiments are described in which the write mask field's 1170 content selects one of a number of write mask registers that contains the write mask to be used (and thus the write mask field's 1170 content indirectly identifies the masking to be performed), alternative embodiments instead or additionally allow the write mask field's 1170 content to directly specify the masking to be performed.

Immediate field 1172 - its content allows for the specification of an immediate. This field is optional in the sense that it is not present in an implementation of the generic vector friendly format that does not support immediates and it is not present in instructions that do not use an immediate.

Class field 1168 - its content distinguishes between different classes of instructions. With reference to Figures 4A-B, the contents of this field select between class A and class B instructions. In Figures 4A-B, rounded corner squares are used to indicate a specific value is present in a field (e.g., class A 1168A and class B 1168B for the class field 1168, respectively, in Figures 4A-B).
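The merging and zeroing behaviors of the write mask field 1170 described above can be sketched behaviorally (a list-based model, purely illustrative):

```python
def apply_write_mask(result, destination, mask, zeroing: bool):
    """Per element: a mask bit of 1 takes the new result; a mask bit of 0
    either preserves the old destination element (merging) or writes 0
    (zeroing)."""
    return [r if m else (0 if zeroing else d)
            for r, d, m in zip(result, destination, mask)]

old = [10, 20, 30, 40]
new = [1, 2, 3, 4]
assert apply_write_mask(new, old, [1, 0, 1, 0], zeroing=False) == [1, 20, 3, 40]
assert apply_write_mask(new, old, [1, 0, 1, 0], zeroing=True)  == [1, 0, 3, 0]
```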
INSTRUCTION TEMPLATES OF CLASS A

In the case of the non-memory access 1105 instruction templates of class A, the alpha field 1152 is interpreted as an RS field 1152A, whose content distinguishes which one of the different augmentation operation types is to be performed (e.g., round 1152A.1 and data transform 1152A.2 are respectively specified for the no memory access, round type operation 1110 and the no memory access, data transform type operation 1115 instruction templates), while the beta field 1154 distinguishes which of the operations of the specified type is to be performed. In the no memory access 1105 instruction templates, the scale field 1160, the displacement field 1162A, and the displacement scale field 1162B are not present.

NO MEMORY ACCESS INSTRUCTION TEMPLATES - FULL ROUND CONTROL TYPE OPERATION

In the no memory access full round control type operation 1110 instruction template, the beta field 1154 is interpreted as a round control field 1154A, whose content(s) provide static rounding. While in the described embodiments the round control field 1154A includes a suppress all floating-point exceptions (SAE) field 1156 and a round operation control field 1158, alternative embodiments may support (e.g., may encode) both these concepts in the same field or may have only one or the other of these concepts/fields (e.g., may have only the round operation control field 1158).

SAE field 1156 - its content distinguishes whether or not to disable the exception event reporting; when the SAE field's 1156 content indicates suppression is enabled, a given instruction does not report any kind of floating-point exception flag and does not raise any floating-point exception handler.

Round operation control field 1158 - its content distinguishes which one of a group of rounding operations to perform (e.g., Round-up, Round-down, Round-towards-zero, and Round-to-nearest). Thus, the round operation control field 1158 allows for the changing of the rounding mode on a per instruction basis. In one embodiment where a processor includes a control register for specifying rounding modes, the round operation control field's 1158 content overrides that register value.

NO MEMORY ACCESS INSTRUCTION TEMPLATES - DATA TRANSFORM TYPE OPERATION

In the no memory access data transform type operation 1115 instruction template, the beta field 1154 is interpreted as a data transform field 1154B, whose content distinguishes which one of a number of data transforms is to be performed (e.g., no data transform, swizzle, broadcast).

In the case of a memory access 1120 instruction template of class A, the alpha field 1152 is interpreted as an eviction hint field 1152B, whose content distinguishes which one of the eviction hints is to be used (in Figure 4A, temporal 1152B.1 and non-temporal 1152B.2 are respectively specified for the memory access, temporal 1125 instruction template and the memory access, non-temporal 1130 instruction template), while the beta field 1154 is interpreted as a data manipulation field 1154C, whose content distinguishes which one of a number of data manipulation operations (also known as primitives) is to be performed (e.g., no manipulation; broadcast; up conversion of a source; and down conversion of a destination). The memory access 1120 instruction templates include the scale field 1160, and optionally the displacement field 1162A or the displacement scale field 1162B.

Vector memory instructions perform vector loads from and vector stores to memory, with conversion support. As with regular vector instructions, vector memory instructions transfer data from/to memory in a data element-wise fashion, with the elements that are actually transferred dictated by the contents of the vector mask that is selected as the write mask.

MEMORY ACCESS INSTRUCTION TEMPLATES - TEMPORAL
Temporal data is data likely to be reused soon enough to benefit from caching. This is, however, a hint, and different processors may implement it in different ways, including ignoring the hint entirely.

MEMORY ACCESS INSTRUCTION TEMPLATES - NON-TEMPORAL

Non-temporal data is data unlikely to be reused soon enough to benefit from caching in the 1st-level cache and should be given priority for eviction. This is, however, a hint, and different processors may implement it in different ways, including ignoring the hint entirely.

INSTRUCTION TEMPLATES OF CLASS B

In the case of the instruction templates of class B, the alpha field 1152 is interpreted as a write mask control (Z) field 1152C, whose content distinguishes whether the write masking controlled by the write mask field 1170 should be a merging or a zeroing.

In the case of the non-memory access 1105 instruction templates of class B, part of the beta field 1154 is interpreted as an RL field 1157A, whose content distinguishes which one of the different augmentation operation types is to be performed (e.g., round 1157A.1 and vector length (VSIZE) 1157A.2 are respectively specified for the no memory access, write mask control, partial round control type operation 1112 instruction template and the no memory access, write mask control, VSIZE type operation 1117 instruction template), while the rest of the beta field 1154 distinguishes which of the operations of the specified type is to be performed. In the no memory access 1105 instruction templates, the scale field 1160, the displacement field 1162A, and the displacement scale field 1162B are not present.

In the no memory access, write mask control, partial round control type operation 1112 instruction template, the rest of the beta field 1154 is interpreted as a round operation field 1159A and exception event reporting is disabled (a given instruction does not report any kind of floating-point exception flag and does not raise any floating-point exception handler).

Round operation control field 1159A - just as with the round operation control field 1158, its content distinguishes which one of a group of rounding operations to perform (e.g., Round-up, Round-down, Round-towards-zero, and Round-to-nearest). Thus, the round operation control field 1159A allows for the changing of the rounding mode on a per instruction basis. In one embodiment where a processor includes a control register for specifying rounding modes, the round operation control field's 1159A content overrides that register value.

In the no memory access, write mask control, VSIZE type operation 1117 instruction template, the rest of the beta field 1154 is interpreted as a vector length field 1159B, whose content distinguishes which one of a number of data vector lengths is to be operated on (e.g., 128, 256, or 512 byte).

In the case of a memory access 1120 instruction template of class B, part of the beta field 1154 is interpreted as a broadcast field 1157B, whose content distinguishes whether or not the broadcast type data manipulation operation is to be performed, while the rest of the beta field 1154 is interpreted as the vector length field 1159B. The memory access 1120 instruction templates include the scale field 1160, and optionally the displacement field 1162A or the displacement scale field 1162B.

With regard to the generic vector friendly instruction format 1100, a full opcode field 1174 is shown including the format field 1140, the base operation field 1142, and the data element width field 1164. While one embodiment is shown where the full opcode field 1174 includes all of these fields, the full opcode field 1174 includes less than all of these fields in embodiments that do not support all of them. The full opcode field 1174 provides the operation code (opcode).
The augmentation operation field 1150, the data element width field 1164, and the write mask field 1170 allow these features to be specified on a per instruction basis in the generic vector friendly instruction format.

The combination of write mask field and data element width field creates typed instructions in that they allow the mask to be applied based on different data element widths.

The various instruction templates found within class A and class B are beneficial in different situations. In some embodiments, different processors or different cores within a processor may support only class A, only class B, or both classes. For instance, a high-performance general purpose out-of-order core intended for general-purpose computing may support only class B, a core intended primarily for graphics and/or scientific (throughput) computing may support only class A, and a core intended for both may support both (of course, a core that has some mix of templates and instructions from both classes but not all templates and instructions from both classes is within the purview of the invention). Also, a single processor may include multiple cores, all of which support the same class or in which different cores support different classes. For instance, in a processor with separate graphics and general-purpose cores, one of the graphics cores intended primarily for graphics and/or scientific computing may support only class A, while one or more of the general-purpose cores may be high-performance general-purpose cores with out-of-order execution and register renaming intended for general-purpose computing that support only class B. Another processor that does not have a separate graphics core may include one or more general purpose in-order or out-of-order cores that support both class A and class B. Of course, features from one class may also be implemented in the other class in different embodiments. Programs written in a high level language would be put (e.g., JIT compiled or statically compiled) into a variety of different executable forms, including: 1) a form having only instructions of the class(es) supported by the target processor for execution; or 2) a form having alternative routines written using different combinations of the instructions of all classes and having control flow code that selects the routines to execute based on the instructions supported by the processor which is currently executing the code.

EXEMPLARY SPECIFIC VECTOR FRIENDLY INSTRUCTION FORMAT

Figure 5A is a block diagram illustrating an exemplary specific vector friendly instruction format according to embodiments. Figure 5A shows a specific vector friendly instruction format 1200 that is specific in the sense that it specifies the location, size, interpretation, and order of the fields, as well as values for some of those fields. The specific vector friendly instruction format 1200 may be used to extend the x86 instruction set, and thus some of the fields are similar to or the same as those used in the existing x86 instruction set and extension thereof (e.g., AVX). This format remains consistent with the prefix encoding field, real opcode byte field, MOD R/M field, SIB field, displacement field, and immediate fields of the existing x86 instruction set with extensions. The fields from Figure 4 into which the fields from Figure 5A map are illustrated.
It should be understood that, although embodiments are described with reference to the specific vector friendly instruction format 1200 in the context of the generic vector friendly instruction format 1100 for illustrative purposes, the invention is not limited to the specific vector friendly instruction format 1200 except where claimed. For example, the generic vector friendly instruction format 1100 contemplates a variety of possible sizes for the various fields, while the specific vector friendly instruction format 1200 is shown as having fields of specific sizes. By way of specific example, while the data element width field 1164 is illustrated as a one-bit field in the specific vector friendly instruction format 1200, the invention is not so limited (that is, the generic vector friendly instruction format 1100 contemplates other sizes of the data element width field 1164).

The specific vector friendly instruction format 1200 includes the following fields listed below in the order illustrated in Figure 5A.

EVEX Prefix 1202 (Bytes 0-3) - is encoded in a four-byte form.

Format Field 1140 (EVEX Byte 0, bits [7:0]) - the first byte (EVEX Byte 0) is the format field 1140 and it contains 0x62 (the unique value used for distinguishing the vector friendly instruction format) in one embodiment.

The second through fourth bytes (EVEX Bytes 1-3) include a number of bit fields providing specific capability.

REX field 1205 (EVEX Byte 1, bits [7-5]) - consists of an EVEX.R bit field (EVEX Byte 1, bit [7] - R), EVEX.X bit field (EVEX byte 1, bit [6] - X), and EVEX.B bit field (EVEX byte 1, bit [5] - B). The EVEX.R, EVEX.X, and EVEX.B bit fields provide the same functionality as the corresponding VEX bit fields, and are encoded using 1s complement form, i.e., ZMM0 is encoded as 1111B, ZMM15 is encoded as 0000B. Other fields of the instructions encode the lower three bits of the register indexes as is known in the art (rrr, xxx, and bbb), so that Rrrr, Xxxx, and Bbbb may be formed by adding EVEX.R, EVEX.X, and EVEX.B.

REX' field 1210 - this is the first part of the REX' field 1210 and is the EVEX.R' bit field (EVEX Byte 1, bit [4] - R') that is used to encode either the upper 16 or lower 16 of the extended 32 register set. In one embodiment, this bit, along with others as indicated below, is stored in bit inverted format to distinguish (in the well-known x86 32-bit mode) from the BOUND instruction, whose real opcode byte is 62, but which does not accept in the MOD R/M field (described below) the value of 11 in the MOD field; alternative embodiments do not store this and the other indicated bits below in the inverted format. A value of 1 is used to encode the lower 16 registers. In other words, R'Rrrr is formed by combining EVEX.R', EVEX.R, and the other RRR from other fields.

Opcode map field 1215 (EVEX byte 1, bits [3:0] - mmmm) - its content encodes an implied leading opcode byte (0F, 0F 38, or 0F 3A).

Data element width field 1164 (EVEX byte 2, bit [7] - W) - is represented by the notation EVEX.W. EVEX.W is used to define the granularity (size) of the datatype (either 32-bit data elements or 64-bit data elements).

EVEX.vvvv 1220 (EVEX Byte 2, bits [6:3] - vvvv) - the role of EVEX.vvvv may include the following: 1) EVEX.vvvv encodes the first source register operand, specified in inverted (1s complement) form, and is valid for instructions with 2 or more source operands; 2) EVEX.vvvv encodes the destination register operand, specified in 1s complement form for certain vector shifts; or 3) EVEX.vvvv does not encode any operand; the field is reserved and should contain 1111b. Thus, the EVEX.vvvv field 1220 encodes the 4 low-order bits of the first source register specifier stored in inverted (1s complement) form. Depending on the instruction, an extra different EVEX bit field is used to extend the specifier size to 32 registers.
EVEX.U 1168 Class field (EVEX byte 2, bit [2] - U) - If EVEX.U = 0, it indicates class A or EVEX.U0; if EVEX.U = 1, it indicates class B or EVEX.U1.

Prefix encoding field 1225 (EVEX byte 2, bits [1:0] - pp) - provides additional bits for the base operation field. In addition to providing support for the legacy SSE instructions in the EVEX prefix format, this also has the benefit of compacting the SIMD prefix (rather than requiring a byte to express the SIMD prefix, the EVEX prefix requires only 2 bits). In one embodiment, to support legacy SSE instructions that use a SIMD prefix (66H, F2H, F3H) in both the legacy format and in the EVEX prefix format, these legacy SIMD prefixes are encoded into the SIMD prefix encoding field and at runtime are expanded into the legacy SIMD prefix prior to being provided to the decoder's programmable logic array (PLA), so the PLA may execute both the legacy and EVEX format of these legacy instructions without modification. Although newer instructions could use the EVEX prefix encoding field's content directly as an opcode extension, certain embodiments expand in a similar fashion for consistency but allow for different meanings to be specified by these legacy SIMD prefixes. An alternative embodiment may redesign the PLA to support the 2-bit SIMD prefix encodings, and thus not require the expansion.

Alpha field 1152 (EVEX byte 3, bit [7] - EH; also known as EVEX.EH, EVEX.rs, EVEX.RL, EVEX.write mask control, and EVEX.N; also illustrated with α) - as previously described, this field is context specific.

Beta field 1154 (EVEX byte 3, bits [6:4] - SSS; also known as EVEX.s2-0, EVEX.r2-0, EVEX.rr1, EVEX.LL0, EVEX.LLB; also illustrated with βββ) - as previously described, this field is context specific.

REX' field 1210 - this is the remainder of the REX' field and is the EVEX.V' bit field (EVEX Byte 3, bit [3] - V') that may be used to encode either the upper 16 or lower 16 of the extended 32 register set. This bit is stored in bit inverted format. A value of 1 is used to encode the lower 16 registers. In other words, V'VVVV is formed by combining EVEX.V' and EVEX.vvvv.

Write mask field 1170 (EVEX byte 3, bits [2:0] - kkk) - its content specifies the index of a register in the write mask registers as previously described. In one embodiment, the specific value EVEX.kkk=000 has a special behavior implying no write mask is used for the particular instruction (this may be implemented in a variety of ways, including the use of a write mask hardwired to all ones or hardware that bypasses the masking hardware).
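The inverted (1s complement) encodings described above can be illustrated as follows, before moving on to the remaining bytes (field extraction only; the byte and bit positions follow the text, while the function name and everything else are simplifications):

```python
def decode_vvvv(evex_byte2: int, v_prime_bit: int) -> int:
    """Recover the register specifier V'vvvv: both EVEX.vvvv (byte 2,
    bits [6:3]) and EVEX.V' are stored inverted, so stored 1111b selects
    register 0 and a stored V' of 1 selects the lower 16 registers."""
    vvvv = (~(evex_byte2 >> 3)) & 0xF      # invert the 4 stored bits
    v_hi = (~v_prime_bit) & 0x1            # invert the stored V' bit
    return (v_hi << 4) | vvvv              # 5 bits select one of 32 registers

# Stored bits 1111 with stored V' = 1 denote register 0 (e.g., ZMM0):
assert decode_vvvv(0b0_1111_000, 1) == 0
# Stored bits 0000 with stored V' = 1 denote register 15 (e.g., ZMM15):
assert decode_vvvv(0b0_0000_000, 1) == 15
```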
Real Opcode Field 1230 (Byte 4) is also known as the opcode byte. Part of the opcode is specified in this field.

MOD R/M Field 1240 (Byte 5) includes MOD field 1242, Reg field 1244, and R/M field 1246. As previously described, the MOD field's 1242 content distinguishes between memory access and non-memory access operations. The role of Reg field 1244 may be summarized to two situations: encoding either the destination register operand or a source register operand, or being treated as an opcode extension and not used to encode any instruction operand. The role of R/M field 1246 may include the following: encoding the instruction operand that references a memory address, or encoding either the destination register operand or a source register operand.

Scale, Index, Base (SIB) Byte (Byte 6) - As previously described, the content of SIB 1250 is used for memory address generation. SIB.xxx 1254 and SIB.bbb 1256 - the contents of these fields have been previously referred to with regard to the register indexes Xxxx and Bbbb.

Displacement field 1162A (Bytes 7-10) - when MOD field 1242 contains 10, bytes 7-10 are the displacement field 1162A, and it works the same as the legacy 32-bit displacement (disp32) and works at byte granularity.

Displacement factor field 1162B (Byte 7) - when MOD field 1242 contains 01, byte 7 is the displacement factor field 1162B. The location of this field is the same as that of the legacy x86 instruction set 8-bit displacement (disp8), which works at byte granularity. Since disp8 is sign extended, it may only address between -128 and 127 byte offsets; in terms of 64-byte cache lines, disp8 uses 8 bits that may be set to only four really useful values -128, -64, 0, and 64; since a greater range is often needed, disp32 is used; however, disp32 requires 4 bytes. In contrast to disp8 and disp32, the displacement factor field 1162B is a reinterpretation of disp8; when using displacement factor field 1162B, the actual displacement is determined by the content of the displacement factor field multiplied by the size of the memory operand access (N). This type of displacement is referred to as disp8*N. This reduces the average instruction length (a single byte used for the displacement but with a much greater range). Such compressed displacement assumes that the effective displacement is a multiple of the granularity of the memory access, and hence, the redundant low-order bits of the address offset do not need to be encoded. In other words, the displacement factor field 1162B substitutes for the legacy x86 instruction set 8-bit displacement. Thus, the displacement factor field 1162B is encoded the same way as an x86 instruction set 8-bit displacement (so no changes in the ModRM/SIB encoding rules), with the only exception that disp8 is overloaded to disp8*N. In other words, there are no changes in the encoding rules or encoding lengths but only in the interpretation of the displacement value by hardware (which needs to scale the displacement by the size of the memory operand to obtain a byte-wise address offset).

Immediate field 1172 operates as previously described.
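The disp8*N compression just described, as a sketch (in hardware, N is derived at runtime from the full opcode and data manipulation fields; here it is passed in directly):

```python
def disp8N_decode(disp8: int, n: int) -> int:
    """Scale a signed 8-bit displacement factor by the access size N."""
    assert -128 <= disp8 <= 127
    return disp8 * n

def disp8N_encode(displacement: int, n: int):
    """Return the factor to encode, or None if the displacement is not a
    multiple of N or falls outside disp8 range (disp32 is used instead)."""
    factor, rem = divmod(displacement, n)
    if rem != 0 or not -128 <= factor <= 127:
        return None
    return factor

# With 64-byte accesses, one byte now spans -8192..8128 instead of -128..127:
assert disp8N_encode(4096, 64) == 64 and disp8N_decode(64, 64) == 4096
assert disp8N_encode(100, 64) is None   # not a multiple of N: fall back to disp32
```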
FULL OPCODE FIELD

Figure 5B is a block diagram illustrating the fields of the specific vector friendly instruction format 1200 that make up the full opcode field 1174 according to one embodiment. Specifically, the full opcode field 1174 includes the format field 1140, the base operation field 1142, and the data element width (W) field 1164. The base operation field 1142 includes the prefix encoding field 1225, the opcode map field 1215, and the real opcode field 1230.

REGISTER INDEX FIELD

Figure 5C is a block diagram illustrating the fields of the specific vector friendly instruction format 1200 that make up the register index field 1144 according to one embodiment. Specifically, the register index field 1144 includes the REX field 1205, the REX' field 1210, the MODR/M.reg field 1244, the MODR/M.r/m field 1246, the VVVV field 1220, the xxx field 1254, and the bbb field 1256.

AUGMENTATION OPERATION FIELD

Figure 5D is a block diagram illustrating the fields of the specific vector friendly instruction format 1200 that make up the augmentation operation field 1150 according to one embodiment. When the class (U) field 1168 contains 0, it signifies EVEX.U0 (class A 1168A); when it contains 1, it signifies EVEX.U1 (class B 1168B). When U=0 and the MOD field 1242 contains 11 (signifying a no memory access operation), the alpha field 1152 (EVEX byte 3, bit [7] - EH) is interpreted as the rs field 1152A. When the rs field 1152A contains a 1 (round 1152A.1), the beta field 1154 (EVEX byte 3, bits [6:4] - SSS) is interpreted as the round control field 1154A. The round control field 1154A includes a one-bit SAE field 1156 and a two-bit round operation field 1158. When the rs field 1152A contains a 0 (data transform 1152A.2), the beta field 1154 (EVEX byte 3, bits [6:4] - SSS) is interpreted as a three-bit data transform field 1154B. When U=0 and the MOD field 1242 contains 00, 01, or 10 (signifying a memory access operation), the alpha field 1152 (EVEX byte 3, bit [7] - EH) is interpreted as the eviction hint (EH) field 1152B and the beta field 1154 (EVEX byte 3, bits [6:4] - SSS) is interpreted as a three-bit data manipulation field 1154C.

When U=1, the alpha field 1152 (EVEX byte 3, bit [7] - EH) is interpreted as the write mask control (Z) field 1152C. When U=1 and the MOD field 1242 contains 11 (signifying a no memory access operation), part of the beta field 1154 (EVEX byte 3, bit [4] - S0) is interpreted as the RL field 1157A; when it contains a 1 (round 1157A.1) the rest of the beta field 1154 (EVEX byte 3, bits [6-5] - S2-1) is interpreted as the round operation field 1159A, while when the RL field 1157A contains a 0 (VSIZE 1157A.2) the rest of the beta field 1154 (EVEX byte 3, bits [6-5] - S2-1) is interpreted as the vector length field 1159B (EVEX byte 3, bits [6-5] - L1-0). When U=1 and the MOD field 1242 contains 00, 01, or 10 (signifying a memory access operation), the beta field 1154 (EVEX byte 3, bits [6:4] - SSS) is interpreted as the vector length field 1159B (EVEX byte 3, bits [6-5] - L1-0) and the broadcast field 1157B (EVEX byte 3, bit [4] - B).

EXEMPLARY REGISTER ARCHITECTURE

Figure 6 is a block diagram of a register architecture 1300 according to one embodiment. In the embodiment illustrated, there are 32 vector registers 1310 that are 512 bits wide; these registers are referenced as zmm0 through zmm31. The lower order 256 bits of the lower 16 zmm registers are overlaid on registers ymm0-15. The lower order 128 bits of the lower 16 zmm registers (the lower order 128 bits of the ymm registers) are overlaid on registers xmm0-15.
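The register aliasing just described can be modeled with one backing store per zmm register (register count and widths per the text; the little-endian, low-order-bytes layout is an assumption):

```python
zmm = [bytearray(64) for _ in range(32)]   # 32 registers, 512 bits each

def read_reg(index: int, width_bytes: int) -> bytes:
    """The xmm (16-byte) and ymm (32-byte) views of the lower 16 registers
    alias the low-order bytes of the corresponding zmm register."""
    return bytes(zmm[index][:width_bytes])

zmm[0][:4] = (0xDEADBEEF).to_bytes(4, "little")
assert read_reg(0, 16) == read_reg(0, 32)[:16]   # xmm0 is the low half of ymm0
assert read_reg(0, 32) == read_reg(0, 64)[:32]   # ymm0 is the low half of zmm0
```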
The specific vector friendly instruction format 1200 operates on these overlaid register files as illustrated in the table below.

| Adjustable Vector Length | Class | Operations | Registers |
|---|---|---|---|
| Instruction templates that do not include the vector length field 1159B | A (Figure 4A; U=0) | 1110, 1115, 1125, 1130 | zmm registers (the vector length is 64 byte) |
| Instruction templates that do not include the vector length field 1159B | B (Figure 4B; U=1) | 1112 | zmm registers (the vector length is 64 byte) |
| Instruction templates that do include the vector length field 1159B | B (Figure 4B; U=1) | 1117, 1127 | zmm, ymm, or xmm registers (the vector length is 64 byte, 32 byte, or 16 byte) depending on the vector length field 1159B |

In other words, the vector length field 1159B selects between a maximum length and one or more other shorter lengths, where each such shorter length is half the length of the preceding length; and instruction templates without the vector length field 1159B operate on the maximum vector length. Further, in one embodiment, the class B instruction templates of the specific vector friendly instruction format 1200 operate on packed or scalar single/double-precision floating point data and packed or scalar integer data. Scalar operations are operations performed on the lowest order data element position in a zmm/ymm/xmm register; the higher order data element positions are either left the same as they were prior to the instruction or zeroed, depending on the embodiment.

Write mask registers 1315 - in the embodiment illustrated, there are 8 write mask registers (k0 through k7), each 64 bits in size. In an alternate embodiment, the write mask registers 1315 are 16 bits in size. As previously described, in one embodiment, the vector mask register k0 may not be used as a write mask; when the encoding that would normally indicate k0 is used for a write mask, it selects a hardwired write mask of 0xFFFF, effectively disabling write masking for that instruction.

General-purpose registers 1325 - in the embodiment illustrated, there are sixteen 64-bit general-purpose registers that are used along with the existing x86 addressing modes to address memory operands. These registers are referenced by the names RAX, RBX, RCX, RDX, RBP, RSI, RDI, RSP, and R8 through R15.

Scalar floating point stack register file (x87 stack) 1345, on which is aliased the MMX packed integer flat register file 1350 - in the embodiment illustrated, the x87 stack is an eight-element stack used to perform scalar floating-point operations on 32/64/80-bit floating point data using the x87 instruction set extension; while the MMX registers are used to perform operations on 64-bit packed integer data, as well as to hold operands for some operations performed between the MMX and XMM registers.

Alternative embodiments may use wider or narrower registers. Additionally, alternative embodiments may use more, fewer, or different register files and registers.

EXEMPLARY CORE ARCHITECTURES, PROCESSORS, AND COMPUTER ARCHITECTURES

Processor cores may be implemented in different ways, for different purposes, and in different processors. For instance, implementations of such cores may include: 1) a general purpose in-order core intended for general-purpose computing; 2) a high-performance general purpose out-of-order core intended for general-purpose computing; 3) a special purpose core intended primarily for graphics and/or scientific (throughput) computing. Implementations of different processors may include: 1) a CPU including one or more general purpose in-order cores intended for general-purpose computing and/or one or more general purpose out-of-order cores intended for general-purpose computing; and 2) a coprocessor including one or more special purpose cores intended primarily for graphics and/or scientific (throughput) computing. Such different processors lead to different computer system architectures, which may include: 1) the coprocessor on a separate chip from the CPU; 2) the coprocessor on a separate die in the same package as a CPU; 3) the coprocessor on the same die as a CPU (in which case, such a coprocessor is sometimes referred to as special purpose logic, such as integrated graphics and/or scientific (throughput) logic, or as special purpose cores); and 4) a system on a chip that may include on the same die the described CPU (sometimes referred to as the application core(s) or application processor(s)), the above described coprocessor, and additional functionality. Exemplary core architectures are described next, followed by descriptions of exemplary processors and computer architectures.
EXEMPLARY CORE ARCHITECTURES

IN-ORDER AND OUT-OF-ORDER CORE BLOCK DIAGRAM

Figure 7A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments. Figure 7B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments. The solid lined boxes in Figures 7A-B illustrate the in-order pipeline and in-order core, while the optional addition of the dashed lined boxes illustrates the register renaming, out-of-order issue/execution pipeline and core. Given that the in-order aspect is a subset of the out-of-order aspect, the out-of-order aspect will be described.

In Figure 7A, a processor pipeline 1400 includes a fetch stage 1402, a length decode stage 1404, a decode stage 1406, an allocation stage 1408, a renaming stage 1410, a scheduling (also known as a dispatch or issue) stage 1412, a register read/memory read stage 1414, an execute stage 1416, a write back/memory write stage 1418, an exception handling stage 1422, and a commit stage 1424.
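For reference while reading the stage-to-unit mapping later in this section, the stages of pipeline 1400 can be enumerated directly (ordering per the text; this is not a timing model):

```python
from enum import IntEnum

class PipelineStage(IntEnum):
    """Stages of processor pipeline 1400, in program order."""
    FETCH = 1402
    LENGTH_DECODE = 1404
    DECODE = 1406
    ALLOCATION = 1408
    RENAMING = 1410
    SCHEDULING = 1412            # also known as dispatch or issue
    REGISTER_READ_MEMORY_READ = 1414
    EXECUTE = 1416
    WRITE_BACK_MEMORY_WRITE = 1418
    EXCEPTION_HANDLING = 1422
    COMMIT = 1424

# An in-order walk simply visits the stages in ascending order:
for stage in sorted(PipelineStage):
    pass  # e.g., advance an instruction through `stage`
```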
Figure 7B shows processor core 1490 including a front-end unit 1430 coupled to an execution engine unit 1450, and both are coupled to a memory unit 1470. The core 1490 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As yet another option, the core 1490 may be a special-purpose core, such as, for example, a network or communication core, compression engine, coprocessor core, general purpose computing graphics processing unit (GPGPU) core, graphics core, or the like.

The front-end unit 1430 includes a branch prediction unit 1432 coupled to an instruction cache unit 1434, which is coupled to an instruction translation lookaside buffer (TLB) unit 1436, which is coupled to an instruction fetch unit 1438, which is coupled to a decode unit 1440. The decode unit 1440 (or decoder) may decode instructions, and generate as an output one or more micro-operations, micro-code entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions. The decode unit 1440 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, PLAs, microcode read only memories (ROMs), etc. In one embodiment, the core 1490 includes a microcode ROM or other medium that stores microcode for certain macroinstructions (e.g., in decode unit 1440 or otherwise within the front-end unit 1430). The decode unit 1440 is coupled to a rename/allocator unit 1452 in the execution engine unit 1450.

The execution engine unit 1450 includes the rename/allocator unit 1452 coupled to a retirement unit 1454 and a set of one or more scheduler unit(s) 1456. The scheduler unit(s) 1456 represents any number of different schedulers, including reservation stations, central instruction window, etc. The scheduler unit(s) 1456 is coupled to the physical register file(s) unit(s) 1458. Each of the physical register file(s) units 1458 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, status (e.g., an instruction pointer that is the address of the next instruction to be executed), etc. In one embodiment, the physical register file(s) unit 1458 comprises a vector registers unit, a write mask registers unit, and a scalar registers unit. These register units may provide architectural vector registers, vector mask registers, and general-purpose registers. The physical register file(s) unit(s) 1458 is overlapped by the retirement unit 1454 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using a reorder buffer(s) and a retirement register file(s); using a future file(s), a history buffer(s), and a retirement register file(s); using register maps and a pool of registers; etc.). The retirement unit 1454 and the physical register file(s) unit(s) 1458 are coupled to the execution cluster(s) 1460. The execution cluster(s) 1460 includes a set of one or more execution units 1462 and a set of one or more memory access units 1464. The execution units 1462 may perform various operations (e.g., shifts, addition, subtraction, multiplication) and on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point). While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions. The scheduler unit(s) 1456, physical register file(s) unit(s) 1458, and execution cluster(s) 1460 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline that each have their own scheduler unit, physical register file(s) unit, and/or execution cluster - and in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s) 1464). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order.
The set of memory access units 1464 is coupled to the memory unit 1470, which includes a data TLB unit 1472 coupled to a data cache unit 1474 coupled to a level 2 (L2) cache unit 1476. In one exemplary embodiment, the memory access units 1464 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 1472 in the memory unit 1470. The instruction cache unit 1434 is further coupled to the level 2 (L2) cache unit 1476 in the memory unit 1470. The L2 cache unit 1476 is coupled to one or more other levels of cache and eventually to a main memory.

By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement the pipeline 1400 as follows: 1) the instruction fetch 1438 performs the fetch and length decoding stages 1402 and 1404; 2) the decode unit 1440 performs the decode stage 1406; 3) the rename/allocator unit 1452 performs the allocation stage 1408 and renaming stage 1410; 4) the scheduler unit(s) 1456 performs the schedule stage 1412; 5) the physical register file(s) unit(s) 1458 and the memory unit 1470 perform the register read/memory read stage 1414, and the execution cluster 1460 performs the execute stage 1416; 6) the memory unit 1470 and the physical register file(s) unit(s) 1458 perform the write back/memory write stage 1418; 7) various units may be involved in the exception handling stage 1422; and 8) the retirement unit 1454 and the physical register file(s) unit(s) 1458 perform the commit stage 1424.

The core 1490 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, CA; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, CA), including the instruction(s) described herein. In one embodiment, the core 1490 includes logic to support a packed data instruction set extension (e.g., AVX1, AVX2), thereby allowing the operations used by many multimedia applications to be performed using packed data.
It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways, including time sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that physical core is simultaneously multithreading), or a combination thereof (e.g., time sliced fetching and decoding and simultaneous multithreading thereafter, such as in the Intel® Hyperthreading technology).

While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor also includes separate instruction and data cache units 1434/1474 and a shared L2 cache unit 1476, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a Level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor.

SPECIFIC EXEMPLARY IN-ORDER CORE ARCHITECTURE

Figures 8A-B illustrate a block diagram of a more specific exemplary in-order core architecture, which core would be one of several logic blocks (including other cores of the same type and/or different types) in a chip. The logic blocks communicate through a high-bandwidth interconnect network (e.g., a ring network) with some fixed function logic, memory I/O interfaces, and other necessary I/O logic, depending on the application.

Figure 8A is a block diagram of a single processor core, along with its connection to the on-die interconnect network 1502 and with its local subset of the Level 2 (L2) cache 1504, according to embodiments. In one embodiment, an instruction decoder 1500 supports the x86 instruction set with a packed data instruction set extension. An L1 cache 1506 allows low-latency accesses to cache memory into the scalar and vector units. While in one embodiment (to simplify the design), a scalar unit 1508 and a vector unit 1510 use separate register sets (respectively, scalar registers 1512 and vector registers 1514) and data transferred between them is written to memory and then read back in from a level 1 (L1) cache 1506, alternative embodiments may use a different approach (e.g., use a single register set or include a communication path that allows data to be transferred between the two register files without being written and read back).

The local subset of the L2 cache 1504 is part of a global L2 cache that is divided into separate local subsets, one per processor core. Each processor core has a direct access path to its own local subset of the L2 cache 1504. Data read by a processor core is stored in its L2 cache subset 1504 and may be accessed quickly, in parallel with other processor cores accessing their own local L2 cache subsets. Data written by a processor core is stored in its own L2 cache subset 1504 and is flushed from other subsets, if necessary. The ring network ensures coherency for shared data. The ring network is bi-directional to allow agents such as processor cores, L2 caches, and other logic blocks to communicate with each other within the chip. Each ring data-path is 1012-bits wide per direction.
Figure 8B is an expanded view of part of the processor core in Figure 8A according to embodiments. Figure 8B includes an L1 data cache 1506A as part of the L1 cache 1506, as well as more detail regarding the vector unit 1510 and the vector registers 1514. Specifically, the vector unit 1510 is a 16-wide vector processing unit (VPU) (see the 16-wide ALU 1528), which executes one or more of integer, single-precision float, and double-precision float instructions. The VPU supports swizzling the register inputs with swizzle unit 1520, numeric conversion with numeric convert units 1522A-B, and replication with replication unit 1524 on the memory input. Write mask registers 1526 allow predicating resulting vector writes.

Figure 9 is a block diagram of a processor 1600 that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to embodiments. The solid lined boxes in Figure 9 illustrate a processor 1600 with a single core 1602A, a system agent unit 1610, and a set of one or more bus controller units 1616, while the optional addition of the dashed lined boxes illustrates an alternative processor 1600 with multiple cores 1602A-N, a set of one or more integrated memory controller unit(s) 1614 in the system agent unit 1610, and special purpose logic 1608.

Thus, different implementations of the processor 1600 may include: 1) a CPU with the special purpose logic 1608 being integrated graphics and/or scientific (throughput) logic (which may include one or more cores), and the cores 1602A-N being one or more general purpose cores (e.g., general purpose in-order cores, general purpose out-of-order cores, a combination of the two); 2) a coprocessor with the cores 1602A-N being a large number of special purpose cores intended primarily for graphics and/or scientific (throughput) computing; and 3) a coprocessor with the cores 1602A-N being a large number of general purpose in-order cores. Thus, the processor 1600 may be a general-purpose processor, coprocessor, or special-purpose processor, such as, for example, a network or communication processor, compression engine, graphics processor, GPGPU (general purpose graphics processing unit), a high-throughput many integrated core (MIC) coprocessor (including 30 or more cores), embedded processor, or the like. The processor may be implemented on one or more chips. The processor 1600 may be a part of and/or may be implemented on one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS.

The memory hierarchy includes one or more levels of cache within the cores, a set of one or more shared cache units 1606, and external memory (not shown) coupled to the set of integrated memory controller units 1614. The set of shared cache units 1606 may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof. While in one embodiment a ring-based interconnect unit 1612 interconnects the special purpose logic 1608 (integrated graphics logic is an example of and is also referred to herein as special purpose logic), the set of shared cache units 1606, and the system agent unit 1610/integrated memory controller unit(s) 1614, alternative embodiments may use any number of well-known techniques for interconnecting such units. In one embodiment, coherency is maintained between one or more cache units 1606 and cores 1602A-N.
In some embodiments, one or more of the cores 1602A-N are capable of multithreading. The system agent 1610 includes those components coordinating and operating cores 1602A-N. The system agent unit 1610 may include, for example, a power control unit (PCU) and a display unit. The PCU may be or include logic and components needed for regulating the power state of the cores 1602A-N and the special purpose logic 1608. The display unit is for driving one or more externally connected displays.

The cores 1602A-N may be homogenous or heterogeneous in terms of architecture instruction set; that is, two or more of the cores 1602A-N may be capable of executing the same instruction set, while others may be capable of executing only a subset of that instruction set or a different instruction set.

EXEMPLARY COMPUTER ARCHITECTURES

Figures 10-13 are block diagrams of exemplary computer architectures. Other system designs and configurations known in the arts for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, micro controllers, cell phones, portable media players, handheld devices, and various other electronic devices are also suitable. In general, a huge variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are generally suitable.

Referring now to Figure 10, shown is a block diagram of a system 1700 in accordance with one embodiment. The system 1700 may include one or more processors 1710, 1715, which are coupled to a controller hub 1720. In one embodiment, the controller hub 1720 includes a graphics memory controller hub (GMCH) 1790 and an Input/Output Hub (IOH) 1750 (which may be on separate chips); the GMCH 1790 includes memory and graphics controllers to which are coupled memory 1740 and a coprocessor 1745; the IOH 1750 couples input/output (I/O) devices 1760 to the GMCH 1790. Alternatively, one or both of the memory and graphics controllers are integrated within the processor (as described herein), the memory 1740 and the coprocessor 1745 are coupled directly to the processor 1710, and the controller hub 1720 is in a single chip with the IOH 1750.

The optional nature of additional processors 1715 is denoted in Figure 10 with broken lines. Each processor 1710, 1715 may include one or more of the processing cores described herein and may be some version of the processor 1600.

The memory 1740 may be, for example, dynamic random-access memory (DRAM), phase change memory (PCM), or a combination of the two. For at least one embodiment, the controller hub 1720 communicates with the processor(s) 1710, 1715 via a multi-drop bus, such as a frontside bus (FSB), a point-to-point interface such as QuickPath Interconnect (QPI), or similar connection 1795.
In one embodiment, the coprocessor 1745 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like. In one embodiment, controller hub 1720 may include an integrated graphics accelerator.

There may be a variety of differences between the physical resources 1710, 1715 in terms of a spectrum of metrics of merit, including architectural, microarchitectural, thermal, power consumption characteristics, and the like.

In one embodiment, the processor 1710 executes instructions that control data processing operations of a general type. Embedded within the instructions may be coprocessor instructions. The processor 1710 recognizes these coprocessor instructions as being of a type that should be executed by the attached coprocessor 1745. Accordingly, the processor 1710 issues these coprocessor instructions (or control signals representing coprocessor instructions) on a coprocessor bus or other interconnect to coprocessor 1745. Coprocessor(s) 1745 accept and execute the received coprocessor instructions.

Referring now to Figure 11, shown is a block diagram of a first more specific exemplary system 1800 in accordance with an embodiment. As shown in Figure 11, multiprocessor system 1800 is a point-to-point interconnect system, and includes a first processor 1870 and a second processor 1880 coupled via a point-to-point interconnect 1850. Each of processors 1870 and 1880 may be some version of the processor 1600. In one embodiment, processors 1870 and 1880 are respectively processors 1710 and 1715, while coprocessor 1838 is coprocessor 1745. In another embodiment, processors 1870 and 1880 are respectively processor 1710 and coprocessor 1745.

Processors 1870 and 1880 are shown including integrated memory controller (IMC) units 1872 and 1882, respectively. Processor 1870 also includes, as part of its bus controller units, point-to-point (P-P) interfaces 1876 and 1878; similarly, second processor 1880 includes P-P interfaces 1886 and 1888. Processors 1870, 1880 may exchange information via a point-to-point (P-P) interface 1850 using P-P interface circuits 1878, 1888. As shown in Figure 11, IMCs 1872 and 1882 couple the processors to respective memories, namely a memory 1832 and a memory 1834, which may be portions of main memory locally attached to the respective processors.

Processors 1870, 1880 may each exchange information with a chipset 1890 via individual P-P interfaces 1852, 1854 using point to point interface circuits 1876, 1894, 1886, 1898. Chipset 1890 may optionally exchange information with the coprocessor 1838 via a high-performance interface 1892. In one embodiment, the coprocessor 1838 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like.
In one embodiment, the coprocessor 1838 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like.
A shared cache (not shown) may be included in either processor or outside of both processors yet connected with the processors via a P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode.
Chipset 1890 may be coupled to a first bus 1816 via an interface 1896. In one embodiment, first bus 1816 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the present invention is not so limited.
As shown in Figure 11, various I/O devices 1814 may be coupled to first bus 1816, along with a bus bridge 1818 which couples first bus 1816 to a second bus 1820. In one embodiment, one or more additional processor(s) 1815, such as coprocessors, high-throughput MIC processors, GPGPUs, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processor, are coupled to first bus 1816. In one embodiment, second bus 1820 may be a low pin count (LPC) bus. Various devices may be coupled to the second bus 1820 including, for example, a keyboard and/or mouse 1822, communication devices 1827, and a storage unit 1828 such as a disk drive or other mass storage device which may include instructions/code and data 1830, in one embodiment. Further, an audio I/O 1824 may be coupled to the second bus 1820. Note that other architectures are possible. For example, instead of the point-to-point architecture of Figure 11, a system may implement a multi-drop bus or other such architecture.
Referring now to Figure 12, shown is a block diagram of a second more specific exemplary system 1900 in accordance with an embodiment. Like elements in Figures 11 and 12 bear like reference numerals, and certain aspects of Figure 11 have been omitted from Figure 12 in order to avoid obscuring other aspects of Figure 12.
Figure 12 illustrates that the processors 1870, 1880 may include integrated memory and I/O control logic ("CL") 1972 and 1982, respectively. Thus, the CL 1972, 1982 include integrated memory controller units and include I/O control logic. Figure 12 illustrates that not only are the memories 1832, 1834 coupled to the CL 1972, 1982, but also that I/O devices 1914 are coupled to the control logic 1972, 1982. Legacy I/O devices 1915 are coupled to the chipset 1890.
Referring now to Figure 13, shown is a block diagram of a SoC 2000 in accordance with an embodiment. Similar elements in Figure 13 bear like reference numerals. Also, dashed lined boxes are optional features on more advanced SoCs. In Figure 13, an interconnect unit(s) 2002 is coupled to: an application processor 2010 which includes a set of one or more cores 1602A-N, which include cache units 1604A-N, and shared cache unit(s) 1606; a system agent unit 1610; a bus controller unit(s) 1616; an integrated memory controller unit(s) 1614; a set of one or more coprocessors 2020 which may include integrated graphics logic, an image processor, an audio processor, and a video processor; a static random access memory (SRAM) unit 2030; a direct memory access (DMA) unit 2032; and a display unit 2040 for coupling to one or more external displays.
In one embodiment, the coprocessor(s) 2020 include a special-purpose processor, such as, for example, a network or communication processor, compression engine, GPGPU, a high-throughput MIC processor, embedded processor, or the like.
Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches. Embodiments may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.
Program code, such as code 1830 illustrated in Figure 11, may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices, in known fashion. For purposes of this application, a processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor.
The program code may be implemented in a high-level procedural or object-oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired. In fact, the mechanisms described herein are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language.
One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein. Such representations, known as "IP cores," may be stored on a tangible, machine-readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.
Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks, any other type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs), static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), phase change memory (PCM), magnetic or optical cards, or any other type of media suitable for storing electronic instructions.
Accordingly, embodiments also include non-transitory, tangible machine-readable media containing instructions or containing design data, such as Hardware Description Language (HDL), which defines structures, circuits, apparatuses, processors and/or system features described herein. Such embodiments may also be referred to as program products.
EMULATION (INCLUDING BINARY TRANSLATION, CODE MORPHING, ETC.)
In some cases, an instruction converter may be used to convert an instruction from a source instruction set to a target instruction set.
For example, the instruction converter may translate (e.g., using static binary translation, or dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction to one or more other instructions to be processed by the core. The instruction converter may be implemented in software, hardware, firmware, or a combination thereof. The instruction converter may be on processor, off processor, or part on and part off processor.
Figure 14 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments. In the illustrated embodiment, the instruction converter is a software instruction converter, although alternatively the instruction converter may be implemented in software, firmware, hardware, or various combinations thereof. Figure 14 shows that a program in a high-level language 2102 may be compiled using an x86 compiler 2104 to generate x86 binary code 2106 that may be natively executed by a processor with at least one x86 instruction set core 2116. The processor with at least one x86 instruction set core 2116 represents any processor that may perform substantially the same functions as an Intel processor with at least one x86 instruction set core by compatibly executing or otherwise processing (1) a substantial portion of the instruction set of the Intel x86 instruction set core or (2) object code versions of applications or other software targeted to run on an Intel processor with at least one x86 instruction set core, in order to achieve substantially the same result as an Intel processor with at least one x86 instruction set core. The x86 compiler 2104 represents a compiler that is operable to generate x86 binary code 2106 (e.g., object code) that may, with or without additional linkage processing, be executed on the processor with at least one x86 instruction set core 2116. Similarly, Figure 14 shows that the program in the high-level language 2102 may be compiled using an alternative instruction set compiler 2108 to generate alternative instruction set binary code 2110 that may be natively executed by a processor without at least one x86 instruction set core 2114 (e.g., a processor with cores that execute the MIPS instruction set of MIPS Technologies of Sunnyvale, CA and/or that execute the ARM instruction set of ARM Holdings of Sunnyvale, CA). The instruction converter 2112 is used to convert the x86 binary code 2106 into code that may be natively executed by the processor without an x86 instruction set core 2114. This converted code is not likely to be the same as the alternative instruction set binary code 2110, because an instruction converter capable of this is difficult to make; however, the converted code will accomplish the general operation and be made up of instructions from the alternative instruction set.
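To make the conversion concrete, the following toy sketch models one-to-many instruction mapping, the core idea behind such converters. It is purely illustrative: the mnemonics and the RISC-like target "ISA" are invented here, and a production converter operates on binary encodings and must handle control flow, flags, and memory ordering, none of which this sketch attempts.

# Toy model of one-to-many instruction mapping, the idea behind a
# binary translator. The target instruction set below is invented.

X86_TO_TARGET = {
    "mov eax, ebx":   ["mov r0, r1"],
    "add eax, [mem]": ["ld  t0, [mem]",      # one source instruction may
                       "add r0, r0, t0"],    # become several target ones
    "push eax":       ["sub sp, sp, 4",
                       "st  r0, [sp]"],
}

def translate(x86_program):
    """Map each source instruction to its target-instruction sequence."""
    target_program = []
    for insn in x86_program:
        target_program.extend(X86_TO_TARGET[insn])
    return target_program

print(translate(["mov eax, ebx", "add eax, [mem]", "push eax"]))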
Thus, the instruction converter 2112 represents software, firmware, hardware, or a combination thereof that, through emulation, simulation, or any other process, allows a processor or other electronic device that does not have an x86 instruction set processor or core to execute the x86 binary code 2106.
EXAMPLES
In embodiments, an apparatus includes speculation vulnerability mitigation hardware to implement one or more of a plurality of speculation vulnerability mitigation mechanisms; and speculation vulnerability detection hardware to detect vulnerability to a speculative execution attack and to provide to software an indication of speculative execution attack vulnerability.
Any such embodiments may include any of the following aspects. Detection is based on conditions indicative of a speculative execution attack. The indication includes a prediction. The indication includes a confidence level for the prediction. The indication includes a category of speculative execution attack. The apparatus includes one or more registers to provide the indication to software. At least one of the one or more of the plurality of speculation vulnerability mitigation mechanisms is configurable by the software. The apparatus includes one or more registers to provide for the software to configure the at least one of the one or more of the plurality of speculation vulnerability mitigation mechanisms. At least one of the one or more registers is to store a weights vector including a plurality of elements, each element to indicate one of a plurality of weights to apply to a corresponding one of the plurality of speculation vulnerability mitigation mechanisms. The apparatus includes an instruction decoder to decode one or more instructions to configure the at least one of the one or more of the plurality of speculation vulnerability mitigation mechanisms. The plurality of speculation vulnerability mitigation mechanisms includes a restricted speculative execution mode.
In embodiments, a method includes detecting, by speculation vulnerability detection hardware in a processor, vulnerability of the processor to a speculative execution attack; providing, to software, an indication of speculative execution attack vulnerability; and implementing, by speculation vulnerability mitigation hardware in the processor, one or more of a plurality of speculation vulnerability mitigation mechanisms.
Any such embodiments may include any of the following aspects. At least one of the one or more of a plurality of speculation vulnerability mitigation mechanisms is pre-configured by default. The method includes receiving, from the software, configuration information to reconfigure the at least one of the one or more of the plurality of speculation vulnerability mitigation mechanisms. Receiving the configuration information includes executing one or more instructions to reconfigure the at least one of the one or more of the plurality of speculation vulnerability mitigation mechanisms. Executing the one or more instructions includes loading the configuration information into one or more registers. At least one of the one or more registers is to store a weights vector including a plurality of elements, each element to indicate one of a plurality of weights to apply to a corresponding one of the plurality of speculation vulnerability mitigation mechanisms.
The method includes dynamically reconfiguring the corresponding one of the plurality of speculation vulnerability mitigation mechanisms, based on the weights vector.
In embodiments, a system includes a memory controller to couple a processor core to a memory; and the processor core to execute instructions to be fetched by the memory controller from application software in the memory, the processor core including speculation vulnerability mitigation hardware to implement one or more of a plurality of speculation vulnerability mitigation mechanisms; and speculation vulnerability detection hardware to detect vulnerability to a speculative execution attack in connection with execution of the instructions and to provide to system software an indication of speculative execution attack vulnerability.
Any such embodiments may include any of the following aspects. The system software is to configure the speculation vulnerability mitigation hardware in response to the indication and based on a speculation vulnerability mitigation policy.
In embodiments, an apparatus includes decode circuitry to decode a single instruction to mitigate vulnerability to a speculative execution attack; and execution circuitry, coupled to the decode circuitry, to be hardened in response to the single instruction.
Any such embodiments may include any of the following aspects. The single instruction is to indicate one or more micro-architectural structures of the execution circuitry to be hardened. The single instruction is to indicate one or more conditions under which the execution circuitry is to be hardened. The single instruction is to indicate one or more micro-architectural changes to be prevented. The single instruction is to indicate a hardening mode vector including a plurality of fields, each field corresponding to one of a plurality of hardening mechanisms. The apparatus includes a hardening mode register to store a hardening mode vector including a plurality of fields, each field corresponding to one of a plurality of hardening mechanisms. The single instruction is to indicate that one or more front-end structures are to be hardened. The single instruction is to indicate that one or more back-end structures of the execution circuitry are to be hardened. The single instruction is to indicate that one or more memory structures of the execution circuitry are to be hardened. The single instruction is to indicate that one or more branch prediction structures of the execution circuitry are to be hardened. The single instruction is to indicate that changes to a cache, a buffer, or a register are to be prevented. The single instruction is to indicate that changes to branch prediction state are to be prevented.
In embodiments, a method includes decoding, by a processor, a first invocation of a single instruction to mitigate vulnerability to a speculative execution attack; and hardening, in response to the first invocation of the single instruction, one or more micro-architectural structures in the processor.
Any such embodiments may include any of the following aspects. The single instruction is to indicate one or more conditions under which the one or more of the micro-architectural structures are to be hardened. The single instruction is to indicate one or more micro-architectural changes to be prevented. The single instruction is to indicate a hardening mode vector including a plurality of fields, each field corresponding to one of a plurality of hardening mechanisms.
Hardening includes preventing changes to a cache, a buffer, or a register. The method includes decoding, by the processor, a second invocation of the single instruction; and relaxing, in response to the second invocation of the single instruction, the hardening of the one or more micro-architectural structures.
In embodiments, a non-transitory machine-readable medium stores a plurality of instructions, including a single instruction which, when executed by a machine, causes the machine to perform a method including storing a hardening mode vector indicated by the single instruction, the hardening mode vector including a plurality of fields, each field corresponding to one of a plurality of hardening mechanisms; and hardening, based on the hardening mode vector, one or more micro-architectural structures in the machine.
Any such embodiments may include any of the following aspects. The method includes preventing changes to a cache, a buffer, or a register.
In embodiments, an apparatus includes decode circuitry to decode a load hardening instruction to mitigate vulnerability to a speculative execution attack; and load circuitry, coupled to the decode circuitry, to be hardened in response to the load hardening instruction.
Any such embodiments may include any of the following aspects. The load circuitry is to be hardened to prevent a load operation from being performed. The load circuitry is to be hardened to prevent a load operation from leaving a side channel based on data to be loaded by the load operation. The load circuitry is to be hardened to prevent execution of a dependent instruction, wherein the dependent instruction is dependent on data to be loaded by a load operation. The load circuitry is to be hardened to prevent execution of a dependent instruction from leaving a side channel, wherein the dependent instruction is dependent on data to be loaded by a load operation. The load circuitry is to be hardened to prevent allocation of a cache line for data to be loaded by a load operation. Hardening of the load circuitry is to be relaxed in response to retirement of a speculative load instruction. Hardening of the load circuitry is to be relaxed in response to a speculative load operation becoming non-speculative. Hardening of the load circuitry is to be relaxed in response to a speculative load operation becoming non-speculative based on resolution of a branch condition. Hardening of the load circuitry is to be relaxed in response to a speculative load operation becoming non-speculative based on retirement of a branch instruction. The load circuitry is to be hardened to prevent a load operation from bypassing a store operation. The load circuitry is to be hardened to prevent speculative data from being loaded. The load circuitry is to be hardened to prevent a speculative store bypass. The load circuitry is to be hardened to prevent dependence of load latency on data to be loaded.
In embodiments, a method includes decoding, by a processor, a load hardening instruction to mitigate vulnerability to a speculative execution attack; and hardening, in response to the load hardening instruction, load circuitry in the processor.
Any such embodiments may include any of the following aspects. Hardening the load circuitry includes preventing a load operation from being performed.
The method includes decoding a load instruction; performing a first operation in response to the load instruction; and preventing a second operation in response to the load instruction, wherein preventing the second operation prevents the load instruction from leaving a side channel. The method includes decoding a load instruction; and relaxing hardening of the load circuitry in response to retirement of the load instruction.
In embodiments, a non-transitory machine-readable medium stores a plurality of instructions, including a load hardening instruction and a load instruction, wherein execution of the plurality of instructions by a machine causes the machine to perform a method including hardening load circuitry in the machine in response to the load hardening instruction; performing a hardened load operation speculatively in response to the load instruction; retiring the load instruction; and relaxing hardening of the load circuitry in response to retiring the load instruction.
Any such embodiments may include any of the following aspects. The plurality of instructions includes a dependent instruction, the dependent instruction is dependent on data to be loaded by the load instruction, and hardening the load circuitry includes preventing execution of the dependent instruction.
In embodiments, an apparatus includes decode circuitry to decode a store hardening instruction to mitigate vulnerability to a speculative execution attack; and store circuitry, coupled to the decode circuitry, to be hardened in response to the store hardening instruction.
Any such embodiments may include any of the following aspects. The store circuitry is to be hardened to prevent a store operation from being performed. The store circuitry is to be hardened to prevent a store operation from leaving a side channel based on data to be stored by the store operation. The store circuitry is to be hardened to prevent execution of a dependent instruction, wherein the dependent instruction is dependent on data to be stored by a store operation. The store circuitry is to be hardened to prevent execution of a dependent instruction from leaving a side channel, wherein the dependent instruction is dependent on store-to-load forwarded data from a store operation. The store circuitry is to be hardened to prevent allocation of a cache line for data to be stored by a store operation. Hardening of the store circuitry is to be relaxed in response to retirement of a store instruction. Hardening of the store circuitry is to be relaxed in response to a store operation becoming non-speculative. Hardening of the store circuitry is to be relaxed in response to a store operation becoming non-speculative based on resolution of a branch condition. Hardening of the store circuitry is to be relaxed in response to a store operation becoming non-speculative based on retirement of a branch instruction. The store circuitry is to be hardened to prevent a load operation from bypassing a store operation. The store circuitry is to be hardened to prevent speculative data from being stored. The store circuitry is to be hardened to prevent a speculative store bypass.
The store circuitry is to be hardened to prevent dependence of store latency on data to be stored.
In embodiments, a method includes decoding, by a processor, a store hardening instruction to mitigate vulnerability to a speculative execution attack; and hardening, in response to the store hardening instruction, store circuitry in the processor.
Any such embodiments may include any of the following aspects. Hardening the store circuitry includes preventing a store operation from being performed. The method includes decoding a store instruction; performing a first operation in response to the store instruction; and preventing a second operation in response to the store instruction, wherein preventing the second operation prevents the store instruction from leaving a side channel. The method includes decoding a store instruction; and relaxing hardening of the store circuitry in response to retirement of the store instruction.
In embodiments, a non-transitory machine-readable medium stores a plurality of instructions, including a store hardening instruction and a store instruction, wherein execution of the plurality of instructions by a machine causes the machine to perform a method including hardening store circuitry in the machine in response to the store hardening instruction; performing a hardened store operation speculatively in response to the store instruction; retiring the store instruction; and relaxing hardening of the store circuitry in response to retiring the store instruction.
Any such embodiments may include any of the following aspects. The plurality of instructions includes a dependent instruction, the dependent instruction is dependent on data to be stored by the store instruction, and hardening the store circuitry includes preventing execution of the dependent instruction.
In embodiments, an apparatus includes decode circuitry to decode a branch hardening instruction to mitigate vulnerability to a speculative execution attack; and branch circuitry, coupled to the decode circuitry, to be hardened in response to the branch hardening instruction.
Any such embodiments may include any of the following aspects. The branch circuitry is to be hardened to prevent a speculative branch from being taken. The branch circuitry is to be hardened to prevent branch prediction. The branch circuitry is to be hardened to mispredict a branch to a safe location. The branch circuitry is to be hardened to harden a load operation in the shadow of a branch. The branch circuitry is to be hardened to delay a branch. The branch is to be delayed until a branch condition is resolved. The branch is to be delayed until a corresponding branch instruction is retired. The branch is to be delayed until a branch termination instruction is received. The branch is to be delayed until the branch is known to be safe.
In embodiments, a method includes decoding, by a processor, a branch hardening instruction to mitigate vulnerability to a speculative execution attack; and hardening, in response to the branch hardening instruction, branch circuitry in the processor.
Any such embodiments may include any of the following aspects. Hardening the branch circuitry includes preventing a speculative branch from being taken. Hardening the branch circuitry includes preventing branch prediction. Hardening the branch circuitry includes mispredicting a branch to a safe location. Hardening the branch circuitry includes hardening a load operation in the shadow of a branch. Hardening the branch circuitry includes delaying a branch.
The branch is delayed until a branch condition is resolved. The branch is delayed until a corresponding branch instruction is retired.
In embodiments, a non-transitory machine-readable medium stores a plurality of instructions, including a branch hardening instruction and a branch instruction, wherein execution of the plurality of instructions by a machine causes the machine to perform a method including hardening branch circuitry in the machine in response to the branch hardening instruction; delaying a branch to be taken in response to the branch instruction; retiring the branch instruction; and relaxing hardening of the branch circuitry in response to retiring the branch instruction.
Any such embodiments may include any of the following aspects. The plurality of instructions includes a branch condition resolution instruction, the branch condition resolution instruction is to resolve a branch condition, and delaying the branch continues until the branch condition is resolved.
In embodiments, an apparatus includes decode circuitry to decode a register hardening instruction to mitigate vulnerability to a speculative execution attack; and execution circuitry, coupled to the decode circuitry, to be hardened in response to the register hardening instruction.
Any such embodiments may include any of the following aspects. The execution circuitry is to be hardened to fence a register. The execution circuitry is to be hardened to prevent speculative execution of an instruction to load a register. The execution circuitry is to be hardened to prevent speculative execution of an instruction to use content of a register. The execution circuitry is to be hardened to prevent a speculative operation from using content of a register. The execution circuitry is to be hardened to prevent data forwarding from a register to a dependent operation. The execution circuitry is to be hardened to prevent execution of an instruction using content of a register from leaving a side channel. The execution circuitry is to be hardened to prevent allocation of a cache line based on execution of an instruction using content of a register. Hardening of the execution circuitry is to be relaxed in response to retirement of an instruction to load a register. Hardening of the execution circuitry is to be relaxed in response to retirement of an instruction to use content of a register. Hardening of the execution circuitry is to be relaxed in response to a register load operation becoming non-speculative. Hardening of the execution circuitry is to be relaxed in response to an operation to use content of a register becoming non-speculative. Hardening of the execution circuitry is to be relaxed in response to resolution of a branch condition. Hardening of the execution circuitry is to be relaxed in response to resolution of a fence condition. The execution circuitry is to be hardened to prevent dependence of latency of an operation on data stored in a register.
In embodiments, a method includes decoding, by a processor, a register hardening instruction to mitigate vulnerability to a speculative execution attack; and hardening, in response to the register hardening instruction, execution circuitry in the processor.
Any such embodiments may include any of the following aspects. Hardening the execution circuitry includes fencing a register.
Hardening the execution circuitry includes preventing a speculative operation from using content of a register.
In embodiments, a non-transitory machine-readable medium stores a plurality of instructions, including a first instruction and a second instruction, wherein execution of the plurality of instructions by a machine causes the machine to perform a method including hardening execution circuitry in the machine in response to the first instruction to mitigate vulnerability to a speculative execution attack; and preventing a speculative operation, to be performed in response to the second instruction, from using content of a register.
Any such embodiments may include any of the following aspects. The method includes relaxing hardening in response to the speculative operation becoming non-speculative.
In embodiments, an apparatus includes speculation vulnerability detection hardware to detect vulnerability to a speculative execution attack and, in connection with a detection of vulnerability to a speculative execution attack, to provide an indication that data from a first operation is tainted; and execution hardware to perform a second operation using the data if the second operation is to be performed non-speculatively and to prevent performance of the second operation if the second operation is to be performed speculatively and the data is tainted.
Any such embodiments may include any of the following aspects. The execution hardware is also to perform the second operation if the data is untainted. The speculation vulnerability detection hardware is to mark the data as tainted. The speculation vulnerability detection hardware is to mark the data as to be tracked. The indication is to be provided to software. The apparatus is to mark the data as tainted in response to a request from the software. The apparatus also includes an instruction decoder to decode an instruction to mark the data as tainted. The data is to be tracked by adding a bit to the data. The apparatus includes tracking hardware to maintain a list of locations where tainted data is stored. The second operation is a load operation, and the data is to be used as an address for the load operation.
In embodiments, a method includes detecting, by speculation vulnerability detection hardware, a vulnerability to a speculative execution attack; providing, in connection with a detection of vulnerability to a speculative execution attack, an indication that data from a first operation is tainted; and preventing performance of a second operation using the data if the second operation is to be performed speculatively and the data is tainted.
Any of such embodiments may include any of the following aspects. The method includes performing the second operation if the second operation is to be performed non-speculatively or the data is untainted. The method includes marking the data as tainted. The method includes marking the data as to be tracked. The indication is provided to software. The method includes marking the data as tainted in response to a request from the software. The method includes decoding an instruction to mark the data as tainted.
The second operation is a load operation, and the data is to be used as an address for the load operation.
In embodiments, a system includes a memory controller to couple a processor core to a memory; and the processor core to execute instructions to be fetched by the memory controller from application software in the memory, the processor core including speculation vulnerability detection hardware to detect a vulnerability to a speculative execution attack and, in connection with a detection of vulnerability to a speculative execution attack during execution of the instructions, to provide an indication that data from a first operation is tainted; and execution hardware to perform a second operation using the data if the second operation is to be performed non-speculatively and to prevent performance of the second operation if the second operation is to be performed speculatively and the data is tainted.
Any of such embodiments may include any of the following aspects. The indication is to be provided to system software in the memory, and the processor core is to mark the data as tainted in response to a request from the system software.
In embodiments, an apparatus includes a hybrid key generator and memory protection hardware. The hybrid key generator is to generate a first hybrid key based on a first public key and a first plurality of process identifiers. Each of the first plurality of process identifiers corresponds to one or more of a first plurality of memory spaces in a memory. The memory protection hardware is to use the first hybrid key to protect the first plurality of memory spaces.
Any such embodiments may include any of the following aspects. The first public key is to be obtained from a first website. The first public key is to be obtained from a first certificate for the first website. At least one of the first plurality of process identifiers is to identify a first web browser process, wherein the first website is accessible through the first web browser process. Each of the first plurality of process identifiers is to identify one of a plurality of web browser processes, wherein the first website is accessible through all of the plurality of web browser processes. At least one of the first plurality of memory spaces is accessible through a first memory access structure, the first memory access structure to control access based on the first hybrid key. Use of the first hybrid key by the memory protection hardware is to include associating the first hybrid key with each of a first plurality of memory access structures. Use of the first hybrid key by the memory protection hardware is to include allowing access from a first plurality of processes, including the first web browser process, to the first plurality of memory spaces and preventing access from a second process to the first plurality of memory spaces. The second process is a second web browser process to access a second website. The memory protection hardware is also to use a second hybrid key to protect a second memory space corresponding to the second web browser process. The second memory space is accessible through a second memory access structure, the second memory access structure to control access based on the second hybrid key. Protection of the first plurality of memory spaces and the second memory space by the memory protection hardware is to include associating the second hybrid key with the second memory access structure.
The hybrid key generator is also to generate the second hybrid key based on a second public key and a second plurality of process identifiers, each of the second plurality of process identifiers corresponding to one of a second plurality of memory spaces including the second memory space. The second public key is to be obtained from a second website. A first of the first plurality of process identifiers is to identify a process to store web content in a corresponding one of the first plurality of memory spaces. The web content is to include one or more of just-in-time code, compiled code, and web application content.
In embodiments, a method includes generating a first hybrid key based on a first public key and a first plurality of process identifiers, each of the first plurality of process identifiers corresponding to one or more of a first plurality of memory spaces in a memory; and using the first hybrid key to control access to the first plurality of memory spaces.
Any such embodiments may include any of the following aspects. The method includes receiving the first public key from a first website. The method includes associating the first hybrid key with each of a first plurality of memory access structures, each of the first plurality of memory access structures to control access to a corresponding one of the first plurality of memory spaces. Using the first hybrid key to control access to the first plurality of memory spaces includes allowing access from a first plurality of web browser processes to the first plurality of memory spaces and preventing access from a second process to the first plurality of memory spaces.
In embodiments, an apparatus includes one or more processor cores to execute code; and memory access circuitry to access a memory in connection with execution of the code; wherein one or more of the one or more processor cores is also to generate a memory access topology diagram of the code to determine a first attackable surface of the code; and refactor the code based on the memory access topology diagram to generate refactored code, the refactored code to have a second attackable surface smaller than the first attackable surface.
Any such embodiments may include any of the following aspects. The memory access topology diagram is to reveal interactions between components of the code. Refactoring of the code is to include transformation of a first component into at least a second component and a third component. The first component is accessible by a fourth component and a fifth component, the second component is accessible by the fourth component and not accessible by the fifth component, and the third component is accessible by the fifth component and not accessible by the fourth component. The second component is a specialization of the first component. The second component is a clone of the first component. Access to the first component includes access to a first data structure and a second data structure. The first component includes a first function and a second function, wherein the first data structure is accessible through the first function and the second function, and the second data structure is accessible through the first function and the second function. The memory access topology diagram is to reveal that execution of the fourth component accesses the first data structure and not the second data structure, and that execution of the fifth component accesses the second data structure and not the first data structure.
The refactoring of the code is to transform the first function to provide access to the first data structure and not to the second data structure, and the second function to provide access to the second data structure and not to the first data structure. Access to the second component includes access to the first data structure and not the second data structure; and access to the third component includes access to the second data structure and not the first data structure. The second component includes the first function and not the second function, and the third component includes the second function and not the first function.
In embodiments, a method includes executing code by a processor; generating, by the processor in response to execution of the code, a memory access topology diagram of the code; and refactoring, by the processor based on the memory access topology diagram, the code to reduce an attack surface of the code.
Any such embodiments may include any of the following aspects. The memory access topology diagram is to reveal interactions between components of the code. The refactoring is to reduce the attack surface by transforming a first component into at least a second component and a third component. Executing the code includes accessing the first component by a fourth component and by a fifth component, and refactoring includes making the second component accessible only by the fourth component and making the third component accessible only by the fifth component. Accessing the first component includes accessing a first data structure and a second data structure. The first component includes a first function and a second function, wherein the first data structure is accessible through the first function and the second function, and the second data structure is accessible through the first function and the second function. The memory access topology diagram is to reveal that execution of the fourth component accesses the first data structure and not the second data structure, and that execution of the fifth component accesses the second data structure and not the first data structure. The refactoring is to include transforming the first function to provide access to the first data structure and not to the second data structure and transforming the second function to provide access to the second data structure and not to the first data structure. (A code sketch of this component split appears at the end of this section.)
An apparatus may include means for performing any function disclosed herein. In embodiments, an apparatus may include a data storage device that stores code that when executed by a hardware processor causes the hardware processor to perform any method disclosed herein. An apparatus may be as described in the detailed description. A method may be as described in the detailed description. In embodiments, a non-transitory machine-readable medium may store code that when executed by a machine causes the machine to perform a method including any method disclosed herein.
Method embodiments may include any details, features, etc. or combinations of details, features, etc. described in this specification.
While the invention has been described in terms of several embodiments, those skilled in the art will recognize that the invention is not limited to the embodiments described and may be practiced with modification and alteration within the spirit and scope of the appended claims. The description is thus to be regarded as illustrative instead of limiting.
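The first-/second-/third-component transformation referenced above can be pictured with a small sketch. It is purely illustrative: the class and attribute names are invented for exposition, and the sketch assumes the topology showed one caller using only the first data structure and the other using only the second.

# BEFORE: one component exposes both data structures to every caller,
# so each caller can reach data it never legitimately uses.
class SharedComponent:                 # the "first component"
    def __init__(self):
        self.audit_log = []            # first data structure
        self.billing_table = {}        # second data structure

    def record(self, entry):           # first function: touches audit_log
        self.audit_log.append(entry)

    def charge(self, user, amount):    # second function: touches billing_table
        self.billing_table[user] = self.billing_table.get(user, 0) + amount

# AFTER: the topology shows audit callers only reach audit_log and
# billing callers only reach billing_table, so the component is split
# into two specializations, each with the smaller surface.
class AuditComponent:                  # the "second component"
    def __init__(self):
        self.audit_log = []

    def record(self, entry):
        self.audit_log.append(entry)

class BillingComponent:                # the "third component"
    def __init__(self):
        self.billing_table = {}

    def charge(self, user, amount):
        self.billing_table[user] = self.billing_table.get(user, 0) + amount

Splitting the shared component means a compromise of one caller's path no longer exposes the other caller's data, which is the attack-surface reduction the topology-driven refactoring targets.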
Embodiments of techniques and systems for facilitation of performance of predicted actions based on application-provided contexts are described. In embodiments, applications may include a context component that is configured to provide context information, such as in the form of one or more tags, to a prediction engine. In embodiments, the prediction engine may tag one or more observed actions and/or resource utilizations with the received tag, allowing for increased knowledge of application status when making predictions from the observed actions. In embodiments, the tag may also be applied to a current action being used to determine potential actions for early performance. Other embodiments may be described and claimed. |
1. A computer-implemented method for predicting potential actions of a first computing device, the method comprising: determining, by a first computing device, contextual information for an application currently executing on the computing device, wherein determining contextual information for the application comprises determining a tag describing a current status of the application; providing, by the first computing device, the determined contextual information, including the tag, to an observation engine operated on a second computing device to be analyzed by the observation engine in determining potential actions or resource utilizations of the first computing device, prior to performance of actions or resource utilizations to assist performance of the determined potential actions or resource utilizations.
2. The method of claim 1, wherein said determining contextual information for the application is performed by the application.
3. The method of claim 1 or 2, wherein the tag comprises an indication of a file being accessed by the application.
4. The method of claim 1 or 2, wherein the tag comprises data representative of a data type for data used by the application.
5. The method of claim 1 or 2, wherein the tag comprises an indication of information received by the application via a network.
6. The method of any preceding claim, wherein the first and second computing devices are the same computing device.
7. The method of claim 6, further comprising monitoring one or more actions or resource utilizations of the computing device.
8. The method of claim 7, further comprising determining, based at least in part on the one or more monitored actions or resource utilizations and on the contextual information for the application, for a received action, one or more probabilities for one or more potential actions or resource utilizations of the computing device.
9. The method of any preceding claim, further comprising: providing, by the first computing device, an indication to the observation engine that a context associated with the determined contextual information has ended; wherein: the observation engine is to store an indication, in a data structure, of a context associated with the determined contextual information, in response to receipt of the tag, and the observation engine is to modify the data structure in response to receipt of the indication that the context has ended.
10. The method of claim 9, wherein the data structure is a stack data structure, and storing the indication in the data structure comprises pushing the tag onto the stack data structure.
11. The method of claim 9 or 10, wherein modifying the data structure comprises removing the tag from the stack data structure.
12. At least one machine-readable medium comprising a plurality of instructions that, in response to being executed on a computing device, cause the computing device to carry out a computer-implemented method according to any one of claims 1-11.
13. The at least one machine-readable medium of claim 12, wherein the instructions further comprise instructions for the application.
14. An apparatus comprising means for performing the computer-implemented method of any one of claims 1-11.
15. An apparatus for predicting activities of an application executing on the apparatus, the apparatus comprising: one or more computer processors; a context component operated by the one or more computer processors to: determine contextual information for the application during execution of the application, wherein determining contextual information for the application comprises determining a tag describing a current status of the application; and provide the determined contextual information, including the tag, to an observation engine to be analyzed in determining potential actions or resource utilizations of the application, prior to performance of actions or resource utilizations to assist performance of the determined potential actions or resource utilizations.
16. The apparatus of claim 15, further comprising circuitry to execute the application.
17. The apparatus of claim 15 or 16, wherein the tag comprises an indication of a file being accessed by the application.
18. The apparatus of claim 15 or 16, wherein the tag comprises data representative of a data type for data used by the application.
19. The apparatus of claim 15 or 16, wherein the tag comprises an indication of information received by the application via a network.
20. The apparatus of any one of claims 15 to 19, further comprising the observation engine configured to be operated by the one or more computer processors to monitor one or more actions or resource utilizations of the computing device.
21. The apparatus of claim 20, further comprising an analysis engine configured to be operated by the one or more computer processors to determine, based at least in part on the one or more monitored actions or resource utilizations and on the contextual information for the application, for a received action, one or more probabilities for one or more potential actions or resource utilizations of the computing device.
22. The apparatus of any one of claims 15 to 21, wherein the context component is further operated by the one or more computer processors to provide an indication to the observation engine that a context associated with the determined contextual information has ended; wherein: the observation engine is to store an indication, in a data structure, of a context associated with the determined contextual information, in response to receipt of the tag, and the observation engine is to modify the data structure in response to receipt of the indication that the context has ended.
23. The apparatus of claim 22, wherein the data structure is a stack data structure, and storing the indication in the data structure comprises pushing the tag onto the stack data structure.
24. The apparatus of claim 22 or 23, wherein modifying the data structure comprises removing the tag from the stack data structure. |
APPLICATION-PROVIDED CONTEXT FOR POTENTIAL ACTION PREDICTION
Cross Reference to Related Application
The present application claims priority to U.S. Patent Application No. 13/539,157, filed June 29, 2012, the entire content of which is hereby incorporated by reference in its entirety for all purposes.
Background
Many users experience slower-than-expected performance when using computing devices. In particular, many new computers and devices are often perceived as only marginally faster than their predecessors because response time of the system to user input may remain similar to older systems. Similarly, common applications may be perceived to take about the same amount of time to start or to complete. For example, clicking on a button in a user interface or starting a new command often tends to result in a largely constant response time from system to system. This performance may appear to be almost independent from the real performance and capabilities of the underlying system. While use of solid state drives and smarter caching mechanisms may help in some circumstances, they have not solved this issue.
Brief Description of the Drawings
Embodiments will be readily understood by the following detailed description in conjunction with the accompanying drawings. To facilitate this description, like reference numerals designate like structural elements. Embodiments are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings.
Figure 1 is a block diagram illustrating an example predicted action performance system, in accordance with various embodiments.
Figure 2 is a block diagram illustrating an example probabilities engine, in accordance with various embodiments.
Figure 3 illustrates an example action prediction and performance process, in accordance with various embodiments.
Figure 4 illustrates an example probability generation process, in accordance with various embodiments.
Figure 5 illustrates an example flow structure generation process, in accordance with various embodiments.
Figure 6 illustrates an example observation collection process, in accordance with various embodiments.
Figure 7 illustrates an example flow structure, in accordance with various embodiments.
Figure 8 illustrates an example process for generating probabilities from a flow structure, in accordance with various embodiments.
Figure 9 illustrates an example expected value structure, in accordance with various embodiments.
Figure 10 illustrates an example predicted action performance process, in accordance with various embodiments.
Figure 11 illustrates an example computing environment suitable for practicing the disclosure, in accordance with various embodiments.
Detailed Description
In the following detailed description, reference is made to the accompanying drawings which form a part hereof, wherein like numerals designate like parts throughout, and in which is shown by way of illustration embodiments that may be practiced. It is to be understood that other embodiments may be utilized and structural or logical changes may be made without departing from the scope of the present disclosure. Therefore, the following detailed description is not to be taken in a limiting sense, and the scope of embodiments is defined by the appended claims and their equivalents. Various operations may be described as multiple discrete actions or operations in turn, in a manner that is most helpful in understanding the claimed subject matter.
However, the order of description should not be construed to imply that these operations are necessarily order dependent. In particular, these operations may not be performed in the order of presentation. Operations described may be performed in a different order than the described embodiment. Various additional operations may be performed and/or described operations may be omitted in additional embodiments.
For the purposes of the present disclosure, the phrase "A and/or B" means (A), (B), or (A and B). For the purposes of the present disclosure, the phrase "A, B, and/or C" means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B and C). The description may use the phrases "in an embodiment," or "in embodiments," which may each refer to one or more of the same or different embodiments. Furthermore, the terms "comprising," "including," "having," and the like, as used with respect to embodiments of the present disclosure, are synonymous and are intended to mean a non-exclusive inclusion, such that a system, method, or apparatus that comprises a list of elements does not include those elements solely, but may well include other elements not listed.
As used herein, the term "module" may refer to, be part of, or include an Application Specific Integrated Circuit ("ASIC"), an electronic circuit, a processor (shared, dedicated, or group) and/or memory (shared, dedicated, or group) that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality.
Referring now to Figure 1, a block diagram is shown illustrating embodiments of an example predicted action performance system. In various embodiments, the predicted action performance system may include a predicted action engine 100 ("PAE 100") and a probabilities engine 110 ("PE 110"). In various embodiments, the PAE 100 may be configured to receive information about the historical and/or current operation of a computing device. The PAE 100 may be configured to, based in part on this information, select one or more actions to support potential actions and/or resource utilizations that are predicted as likely to occur on the computing device. In various embodiments, actions may include such things as starting of processes, opening a window or dialog box, incoming network events, or user interaction. For example, the PAE 100 may be configured to select to pre-load code for an application that is predicted to be executed soon, or may read data into a cache.
As illustrated in the example of Figure 1, in various embodiments, the PAE 100 may be configured to select actions to support potential actions and/or resource utilizations of an executing process, such as process 150. In various embodiments, the process 150 may include a subprocess 160. In various embodiments, the PAE 100 may be configured to predict that a second subprocess 170 is likely to be executed in the near future. Thus, in various embodiments, the PAE 100 may be configured to facilitate pre-fetching of (and/or early execution of) code for the subprocess 170. In other embodiments, the PAE 100 may be configured to cause pre-fetching and/or early execution of executable code that is outside of a currently-executing process. For example, if an email is received with an attachment of a particular document type, the PAE 100 may select to pre-fetch code for an application or process that is configured to read that document type.
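As a concrete illustration of the attachment example above, consider the following minimal sketch. The tag format, the CONTEXT_HANDLERS table, and prefetch_application are all hypothetical names invented here for exposition, not an API of the described system.

# Hypothetical sketch of the PAE's selection step: when context
# indicates a received attachment of a known type, pre-fetch code for
# the application that reads it.

CONTEXT_HANDLERS = {
    ".ppt": "presentation_viewer",   # illustrative application names
    ".pdf": "document_reader",
}

def prefetch_application(app_name):
    # Stand-in for loading the application's code pages into memory.
    print(f"pre-fetching code for {app_name}")

def on_context_tag(tag):
    # tag example: ("email_attachment", ".ppt")
    kind, value = tag
    if kind == "email_attachment" and value in CONTEXT_HANDLERS:
        prefetch_application(CONTEXT_HANDLERS[value])

on_context_tag(("email_attachment", ".ppt"))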
Similarly, in some embodiments, the PAE 100 may be configured to predict that an external resource 175 (for example, a network card) is likely to be used in the near future (for example, to perform a domain name system search). Thus, in various embodiments, the PAE 100 may be configured to facilitate the making of an early request of the external resource 175. Recognizing that the foregoing example was merely indicative of potential actions and capabilities of the PAE 100, in other embodiments, different processes or external resources may be involved.

In the examples of Figure 1, aspects of the predicted action performance system may be illustrated on the left side of the dashed line, while aspects of the computing device for which the predicted action performance system is predicting action may be illustrated on the right side of the dashed line. Thus, in some embodiments, the predicted action performance system may be configured to operate on a device or apparatus that is separate from the computing device for which actions are being predicted. However, in various embodiments, one or more aspects of the predicted action performance system may be operated on the same computing device that actions are being predicted for.

In various embodiments, the PAE 100 may be configured to receive one or more probabilities of potential actions to be performed on a computing device. In various embodiments, the PAE 100 may receive these probabilities from the PE 110. Particular embodiments of the PE 110 are discussed below. In various embodiments, the PAE 100 may also be configured to receive (or otherwise obtain) a current system context 120 for the computing device. In various embodiments, the system context may include a state of the computing device (e.g., power, performance, memory, storage, load, battery state, and/or thermal data), logical environment (e.g., network connectivity, data received over a network), and/or physical location of the computing device (e.g., whether the computing device is mobile, at home, at an office, on a flight, in a foreign country, etc.). In various embodiments, the context may include other information, both outside and inside the computing device, data, and/or conclusions that may be drawn from that information and data. In various embodiments, the current system context may be received passively by the PAE 100, such as by applications or system processes reporting system context information to the PAE 100. In other embodiments, the PAE 100 may be configured to actively request and/or otherwise obtain the current system context 120 from the computing device. In various embodiments, the PAE 100 may be configured to select actions for performance based on available system resources, such as those identified in the current system context.

Referring now to Figure 2, a block diagram is shown illustrating an example PE 110, in accordance with various embodiments. In various embodiments, the PE 110 may include an observation engine 250 ("OE 250") and an analysis engine 260 ("AE 260"). In various embodiments, the OE 250 may be configured to receive actions and resource utilizations 210 of the computing device. As described herein, the OE 250 may generate a flow structure 250 describing steady states and transitions of the computing device based on the historical data received by the OE 250. This flow structure may be used by the AE 260, along with an indication of a current action 205 that is being performed by the computing device, to determine one or more probabilities for potential actions that may follow the received current action 205. These probabilities may be used by the PAE 100 to select an action for performance, as described herein.

In various embodiments, the actions/resource utilizations 210 may be received passively by the OE 250, such as by applications or system processes reporting indications of actions and/or resource utilizations that have been performed to the OE 250. In other embodiments, the OE 250 may be configured to actively request and/or otherwise obtain the actions and/or resource utilizations 210 from the computing device.

In various embodiments, the OE 250 may also be configured to receive application context information from one or more applications 220 executing on the computing device. In various embodiments, the application 220 may include a context component 230 which may be in communication with the OE 250 in order to provide the context information. The application 220 may be so configured in order to provide the OE 250, and therefore the PE 110, with more information than would otherwise be available to the PE 110 without direct assistance from applications executing on the computing device. For example, a coding environment application 220 may provide, such as through its context component 230, tags that describe the type of code being written in the application. In another example, an email application 220 may provide a tag that an email has been received, a tag identifying the sender of the email, and a tag describing that a .ppt file is attached. This information may be used by the PE 110 to determine that every time an email with a .ppt file is received from a certain person, PowerPoint™ is likely to be executed. The PAE 100 may thus facilitate the loading of code for the PowerPoint™ application.

In various embodiments, the context component 230 may provide information such as, but not limited to, application state, information describing one or more files accessed by the application 220, messages received by the application 220, the identity of one or more recipients or senders of information to the application, etc. In various embodiments, the context component 230 may provide application context information to the OE 250 in the form of one or more tags. As described below, these tags may be appended to actions and/or resource utilizations 210 received by the OE 250 in order to provide additional context for these received actions and/or resource utilizations 210; this, in turn, may allow the OE 250 to generate more accurate and/or detailed flow structures 250. Similarly, the OE 250 may, in various embodiments, provide one or more context tags 255 to the AE 260, which may be used to provide context to one or more current actions 205. This provision of the context tags 255 may, in various embodiments, facilitate the AE 260 in producing more accurate probabilities 270. Particular uses of application context information and tags are described herein.

Figure 3 illustrates an example action prediction and performance process 300, in accordance with various embodiments. The process may begin at operation 320, where, in various embodiments, the PE 110 may generate one or more probabilities for use by the PAE 100. Particular embodiments of operation 320 are discussed below. Next, at operation 340, the PAE 100 may perform one or more predicted actions based on the probabilities generated by the PE 110 at operation 320. In embodiments, the performance of predicted actions at operation 340 may also be based in part on the current system context 120. Particular embodiments of operation 340 are discussed below. In various embodiments, the process may then repeat at operation 320 for additional probabilities and predicted actions. In some embodiments, the process may instead end.

Figure 4 illustrates an example probability generation process 400, in accordance with various embodiments. In various embodiments, process 400 may be performed by the PE 110 to implement one or more embodiments of operation 320 of process 300. The process may begin at operation 410, where the OE 250 may generate a flow structure 250. Particular embodiments of operation 410 are discussed below. Next, at operation 420, the AE 260 may generate probabilities based on the generated flow structure 250 and a current action 205. Particular embodiments of operation 420 are discussed below. Next, at operation 430, the probabilities may be output from the AE 260. In various embodiments, the output probabilities may be ordered for ease of use by the PAE 100. Thus, in some embodiments, the probabilities may be ordered by likelihood. In other embodiments, the probabilities output by the AE 260 may be ordered by assumed distance in time from the current action 205. The process may then end.

Figure 5 illustrates an example flow structure generation process 500, in accordance with various embodiments. In various embodiments, process 500 may be performed by the OE 250 to implement one or more embodiments of operation 410 of process 400. The process may begin at operation 520, where the OE 250 may collect information about actions and/or resource utilizations from the computing device. In various embodiments, these observations may also be acquired from one or more applications. Particular embodiments of operation 520 are described below with reference to process 600 of Figure 6.

Referring now to Figure 6, that figure illustrates an example observation collection process 600, in accordance with various embodiments. In various embodiments, process 600 may be performed by the OE 250 to implement one or more embodiments of operation 520 of process 500. The process may begin at operation 610, where the OE 250 may receive application context information from an application 220. In various embodiments, the application context information may be received from a context component 230 of the application 220. In some embodiments, the application context information may be received in the form of a tag. The following descriptions of operations of process 600 thus may make specific reference to a tag; however, it may be recognized that, in other embodiments, the received application context information may take other forms.

At operation 620, the OE 250 may push the recently-received tag onto a stack data structure. In various embodiments, a stack is used in order to allow for easy removal of the context, as well as to allow for nesting of contexts as they are applied to received actions and resource utilizations; in other embodiments, other data structures may be used to store the tags. Next, at operation 630, the OE 250 may obtain one or more actions and/or resource utilizations. As discussed above, in various embodiments, these actions and/or resource utilizations may be received passively, while in others, the OE 250 may actively seek out action and/or resource utilization information. Next, at operation 640, the OE 250 may tag the received action/resource utilization with the recently-received tag. This tagging may, in various embodiments, facilitate the OE 250 in providing application context information to accompany received actions and/or resource utilizations, providing improved probability generation.

In various embodiments, the OE 250 may repeat operations 630 and 640 in order to receive (and tag) additional actions and/or resource utilizations. However, the OE 250 may also receive an indication that an application context associated with the application context information has changed, such as at operation 650. Thus, for example, an application 220 may receive a user interaction where a user may select a menu. The application 220 may, such as using its context component 230, then send a tag indicating this menu selection to the OE 250. Later, if the user ends selection of the menu, the context component 230 of the application 220 may indicate to the OE 250 that the relevant context has ended. Then, at operation 660, the OE 250 may remove the tag from the stack structure. This may effectively end the tagging of future received actions with the received tag. The process may then end.
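The stack discipline of operations 610 through 660 can be sketched compactly. The following C fragment is a minimal illustration written for this edit, not the disclosed implementation; the fixed capacity, the names, and printing the tags as the annotation are all assumptions.

    #include <stdio.h>

    #define MAX_TAGS 16

    static const char *tag_stack[MAX_TAGS];  /* application context tags */
    static int tag_top;

    /* Operation 620: a context begins, so push its tag. */
    static void push_tag(const char *tag)
    {
        if (tag_top < MAX_TAGS)
            tag_stack[tag_top++] = tag;
    }

    /* Operation 660: the context ended, so remove the tag; future
     * actions are no longer annotated with it. */
    static void pop_tag(void)
    {
        if (tag_top > 0)
            tag_top--;
    }

    /* Operations 630/640: record an observed action, annotated with
     * every tag currently on the stack (which is what allows nesting). */
    static void observe_action(const char *action)
    {
        printf("action=%s tags=[", action);
        for (int i = 0; i < tag_top; i++)
            printf("%s%s", i ? "," : "", tag_stack[i]);
        printf("]\n");
    }

    int main(void)
    {
        push_tag("email:received");
        push_tag("attachment:.ppt");
        observe_action("open_attachment");  /* carries both context tags */
        pop_tag();
        pop_tag();
        observe_action("idle");             /* carries no context tags   */
        return 0;
    }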
Returning to process 500 of Figure 5, after collecting information about actions and/or resource utilizations, process 500 may continue to operation 530, where the OE 250 may identify one or more steady states of the computing device. In various embodiments, as illustrated below, these steady states may represent states in which the computing device is in a consistent state at a particular time. A steady state may, in various embodiments, include a consistent state of the context of the computing device. In some embodiments, a steady state may include a consistent state of one or more internal variables of the computing device, such as, for example, a current working directory, a current IP address of a network device, a current running state of one or more applications, etc. For example, in one embodiment, an example steady state may be described at a high level as "email program is running in the foreground, displaying an editor window, waiting for user input."

Next, at operation 540, the OE 250 may identify one or more transitional actions and/or resource utilizations that may be performed by the computing device. For example, at operation 540, the OE 250 may identify that a directory change command causes the computing device to change between directory steady states. In another example, at operation 540, the OE 250 may identify that a command to execute an application may cause the computing device to change to a steady state where the application is executing. In another example, a transitional action may include receipt of a command from a user (such as a "send" command in an email application).

Next, at operation 550, the OE 250 may generate frequencies of each of the steady states based on its received information about actions and resource utilizations. Particular examples of these frequencies may be seen below at Figure 7. At operation 560, these frequencies may be provided to the AE 260 for use in determining probabilities to be used by the PAE 100. The process may then end.
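A flow structure of the kind process 500 produces can be represented as steady states (nodes) joined by counted transitions (edges). The C sketch below is an illustration under assumptions made for this edit — the fixed capacities, field names, and record helper are invented — and is not the structure the disclosure itself defines.

    #define MAX_STATES 64
    #define MAX_EDGES  256

    /* One observed transition between steady states (operation 540),
     * with the frequency count built up at operation 550. */
    struct transition {
        int from;        /* index of the source steady state      */
        int to;          /* index of the destination steady state */
        unsigned count;  /* times the transition was observed     */
    };

    struct flow_structure {
        const char *state_name[MAX_STATES];  /* e.g. "/usr/bin/make::make" */
        int nstates;
        struct transition edge[MAX_EDGES];
        int nedges;
    };

    /* Bump the frequency of an observed transition, adding the edge
     * the first time it is seen. */
    static void record_transition(struct flow_structure *f, int from, int to)
    {
        for (int i = 0; i < f->nedges; i++) {
            if (f->edge[i].from == from && f->edge[i].to == to) {
                f->edge[i].count++;
                return;
            }
        }
        if (f->nedges < MAX_EDGES)
            f->edge[f->nedges++] = (struct transition){ from, to, 1u };
    }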
Figure 7 illustrates an example flow structure with steady states and frequencies, in accordance with various embodiments. In the illustrated example, steady states are illustrated as graph nodes, while the graph transitions show frequencies of how often the OE 250 observed that particular transition between the two steady states during a given period of observation. As the illustrated flow structure 700 shows, steady states may, in various embodiments, include receipt of a command to execute an application (e.g., "/usr/bin/bash", "/usr/bin/make/", "/bin/rm") or may include execution of a process based on that command (e.g., "/usr/bin/bash::bash", "/usr/bin/make::make"). It may be noted that, while the example flow structure of Figure 7 does not show steady states tagged with application context information, in various embodiments, the flow structure may additionally include application context information. Thus, in various embodiments, more than one steady state may exist for a given directory or process, but with different tags.

Figure 8 illustrates an example process 800 for generating probabilities from a flow structure, in accordance with various embodiments. In various embodiments, process 800 may be performed by the AE 260 to implement operation 420 of process 400. The process may begin at operation 810, where the AE 260 may receive the flow structure generated by the OE 250. Next, at operation 820, the AE 260 may receive an indication of a current action 205. At operation 830, the AE 260 may receive application context tags 255 from the OE 250; these tags may be used to better identify relevant steady states and transitions in the flow structure.

Next, at operation 840, the AE 260 may compute expected values that follow the received action. In various embodiments, the expected values may be computed based on direct frequencies from each steady state to the next, and may not include frequencies that are not related to the transition for which the expected value is being computed. In various embodiments, the AE 260 may utilize a sub-structure of the received flow structure that only includes steady states that may be reached after performance of the current action 205. In various embodiments, the AE 260 may then compute the expected values for how often each subsequent steady state may be reached after the current action 205.

Referring now to Figure 9, Figure 9 illustrates an example expected value structure 900, in accordance with various embodiments. As illustrated in the example of Figure 9, in various embodiments, the AE 260 may compute expected values in the form of a number of times the transition may be performed out of 100. For example, if, based on a current action, a given application is expected to be run 50% of the time, the expected value of a transition to that application may be 50 (out of 100). In another example, if an application is expected to be run, on average, twice, the expected value may be 200 out of 100. In some embodiments, the expected value may be capped at a maximum value.

Returning to Figure 8, at operations 850 and 860, the AE 260 may compute, from the computed expected values, effective probabilities of steady states (850) and of resource utilizations (860). In various embodiments, the AE 260 may compute the effective probabilities by directly multiplying the expected values in probabilistic form.
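The multiplication just described can be carried out directly on the per-100 representation of Figure 9. The helper below is a sketch for this edit; the cap value is an assumption, since the text allows a maximum but does not fix one.

    #define EV_CAP 10000u  /* assumed cap on an expected value */

    /* Combine two expected values, each expressed as occurrences per
     * 100 (so 50 means 50%, and 200 means "expected to run twice").
     * Chaining two transitions multiplies them in probabilistic form:
     * 50 per 100 followed by 200 per 100 yields 100 per 100. */
    static unsigned combine_expected(unsigned ev_a, unsigned ev_b)
    {
        unsigned ev = ev_a * ev_b / 100u;
        return ev > EV_CAP ? EV_CAP : ev;
    }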
In other embodiments, the AE 260 may utilize other methods of computing the probabilities, such as using artificial intelligence-based techniques or by including other information. Finally, at operation 870, the AE 260 may order the computed probabilities, such as by likelihood or distance (e.g., distance in the flow structure) from the current action 205. The process may then end.

Figure 10 illustrates an example predicted action performance process 1000, in accordance with various embodiments. In various embodiments, the PAE 100 may perform process 1000 to implement operation 340 of process 300 of Figure 3. The process may begin at operation 1010, where the PAE 100 may obtain a system context from the computing device. As discussed above, in various embodiments, the system context may include resource availability, such as memory or storage capability, current workload, location of execution, and/or environmental information, such as a temperature of the computing device. Next, at operation 1020, the PAE 100 may obtain one or more probabilities for actions and/or resources, such as from the PE 110. As discussed above, in various embodiments, these probabilities may be ordered for use by the PAE 100.

Next, at operation 1030, the PAE 100 may select actions and/or resource utilizations that support potential actions and/or resource allocations and which may be performed given the current system context for the computing device. Thus, in various embodiments, the PAE 100 may determine, for the potential actions and/or resource utilizations for which probabilities were received, which support actions and/or resource utilizations may be performed, given the capabilities indicated by the system context. In various embodiments, the PAE 100, at operation 1030, may determine which of these support actions and/or resource utilizations may be performed without causing a noticeable slowdown to a user of the computing device. Finally, at operation 1040, the PAE 100 may facilitate performance of the selected actions and/or resource utilizations. In various embodiments, the PAE 100 may itself direct performance of the actions and/or resource utilizations. In other embodiments, the PAE 100 may request performance of the actions and/or resource utilizations from other entities. The process may then end.

Figure 11 illustrates, for one embodiment, an example computer system 1100 suitable for practicing embodiments of the present disclosure. As illustrated, example computer system 1100 may include system control logic 1108 coupled to at least one of the processor(s) 1104, system memory 1112 coupled to system control logic 1108, non-volatile memory (NVM)/storage 1116 coupled to system control logic 1108, and one or more communications interface(s) 1120 coupled to system control logic 1108. In various embodiments, the one or more processors 1104 may be a processor core.

System control logic 1108 for one embodiment may include any suitable interface controllers to provide for any suitable interface to at least one of the processor(s) 1104 and/or to any suitable device or component in communication with system control logic 1108. System control logic 1108 for one embodiment may include one or more memory controller(s) to provide an interface to system memory 1112. System memory 1112 may be used to load and store data and/or instructions, for example, for system 1100. In one embodiment, system memory 1112 may include any suitable volatile memory, such as suitable dynamic random access memory ("DRAM"), for example.

System control logic 1108, in one embodiment, may include one or more input/output ("I/O") controller(s) to provide an interface to NVM/storage 1116 and communications interface(s) 1120. NVM/storage 1116 may be used to store data and/or instructions, for example. NVM/storage 1116 may include any suitable non-volatile memory, such as flash memory, for example, and/or may include any suitable non-volatile storage device(s), such as one or more hard disk drive(s) ("HDD(s)"), one or more solid-state drive(s), one or more compact disc ("CD") drive(s), and/or one or more digital versatile disc ("DVD") drive(s), for example. The NVM/storage 1116 may include a storage resource physically part of a device on which the system 1100 is installed or it may be accessible by, but not necessarily a part of, the device. For example, the NVM/storage 1116 may be accessed over a network via the communications interface(s) 1120.

System memory 1112 and NVM/storage 1116 may include, in particular, temporal and persistent copies of predicted action performance logic 1124. The predicted action performance logic 1124 may include instructions that, when executed by at least one of the processor(s) 1104, result in the system 1100 practicing one or more of the predicted action performance operations described above. In some embodiments, the predicted action performance logic 1124 may additionally/alternatively be located in the system control logic 1108.

Communications interface(s) 1120 may provide an interface for system 1100 to communicate over one or more network(s) and/or with any other suitable device. Communications interface(s) 1120 may include any suitable hardware and/or firmware, such as a network adapter, one or more antennas, a wireless interface, and so forth. In various embodiments, communications interface(s) 1120 may include an interface for system 1100 to use NFC, optical communications (e.g., barcodes), Bluetooth or other similar technologies to communicate directly (e.g., without an intermediary) with another device.

For one embodiment, at least one of the processor(s) 1104 may be packaged together with system control logic 1108 and/or predicted action performance logic 1124. For one embodiment, at least one of the processor(s) 1104 may be packaged together with system control logic 1108 and/or predicted action performance logic 1124 to form a System in Package ("SiP"). For one embodiment, at least one of the processor(s) 1104 may be integrated on the same die with system control logic 1108 and/or predicted action performance logic 1124. For one embodiment, at least one of the processor(s) 1104 may be integrated on the same die with system control logic 1108 and/or predicted action performance logic 1124 to form a System on Chip ("SoC").

The following paragraphs describe examples of various embodiments. In various embodiments, an apparatus for predicting activities of the apparatus may include one or more computer processors. The apparatus may include a context component operated by the one or more computer processors. The context component may be operated to determine contextual information for an application during execution of the application and to provide the determined contextual information to an observation engine to be analyzed in determining potential actions or resource utilizations of the application.

In various embodiments, the apparatus may further include the application, wherein the application comprises the context component. In various embodiments, the context component may be configured to determine contextual information for an application via determination of a tag describing a status of the application. In various embodiments, the tag may include an indication of a file being accessed by the application. In various embodiments, the tag may include an indication of a data type for data being used by the application. In various embodiments, the tag may include an indication of information received by the application via a network.

In various embodiments, the apparatus may further include an observation engine configured to be operated by the one or more computer processors to monitor one or more actions or resource utilizations of the computing device. In various embodiments, the observation engine may be further configured to be operated by the one or more computer processors to receive the determined contextual information, to tag one or more of the monitored actions or resource utilizations with the contextual information for the application, and to provide the determined contextual information to the analysis engine. In various embodiments, the apparatus may further include an analysis engine configured to be operated by the one or more computer processors to determine, based at least in part on the one or more monitored actions or resource utilizations and on the contextual information for the application, for a received action, one or more probabilities for one or more potential actions or resource utilizations of the computing device.

Computer-readable media (including non-transitory computer-readable media), methods, systems and devices for performing the above-described techniques are illustrative examples of embodiments disclosed herein. Additionally, other devices in the above-described interactions may be configured to perform various disclosed techniques.

Although certain embodiments have been illustrated and described herein for purposes of description, a wide variety of alternate and/or equivalent embodiments or implementations calculated to achieve the same purposes may be substituted for the embodiments shown and described without departing from the scope of the present disclosure. This application is intended to cover any adaptations or variations of the embodiments discussed herein. Therefore, it is manifestly intended that embodiments described herein be limited only by the claims.

Where the disclosure recites "a" or "a first" element or the equivalent thereof, such disclosure includes one or more such elements, neither requiring nor excluding two or more such elements. Further, ordinal indicators (e.g., first, second or third) for identified elements are used to distinguish between the elements, and do not indicate or imply a required or limited number of such elements, nor do they indicate a particular position or order of such elements unless otherwise specifically stated. |
Techniques are described for a multi-processor having two or more processors; the techniques increase the opportunity for a load-exclusive command to take a cache line in an exclusive state, which results in increased performance when a store-exclusive is executed. A new bus operation, read prefer exclusive, is used as a hint to other caches that a requesting master is likely to store to the cache line and that, if possible, the other cache should give the line up. In most cases, this will result in the other master giving the line up and the requesting master taking the line exclusive. In most cases, two or more processors are not performing a semaphore management sequence to the same address at the same time. Thus, a requesting master's load-exclusive is able to take a cache line in the exclusive state an increased number of times. |
CLAIMS

WHAT IS CLAIMED IS:

1. A method for semaphore management across a coherent bus in a multiprocessor, the method comprising:
determining that a first cache local to a first processor missed at a target address in response to a load exclusive instruction issued from the first processor;
issuing a read prefer exclusive command on a coherent bus from the first cache to a second cache local to a second processor;
determining in response to the read prefer exclusive command that a reservation granule in the second cache is in a not tagged state; and
invalidating the cache line in the second cache in response to the determination that the reservation granule in the second cache for this address is in the not tagged state.

2. The method of claim 1, further comprising:
providing to the first cache data requested at the target address in response to the read prefer exclusive command snooped by the second cache from the coherent bus.

3. The method of claim 1, further comprising:
providing to the first cache data requested at the target address from a next level memory above the first cache in the memory hierarchy in response to the read prefer exclusive command detected by snooping the coherent bus.

4. The method of claim 1, wherein a reservation granule in the first cache is tagged with the target address after the read prefer exclusive command has completed execution.

5. The method of claim 1, wherein the read prefer exclusive command operates as a hint to other caches in the multi-processor to give up the cache line if it is tagged as containing shared data.

6. The method of claim 1, further comprising:
determining a state of the cache line in the second processor; and
changing a state of the cache line received in the first processor to an exclusive state in response to the state of the cache line in the second processor being in a shared or exclusive state.

7. The method of claim 1, further comprising:
determining a state of the cache line in the second processor; and
changing a state of the cache line received in the first processor to a modified state in response to the state of the cache line in the second processor being in an owned or modified state.

8. An apparatus for semaphore management in a multi-processing system, the apparatus comprising:
a first cache controller configured to issue a read prefer exclusive command on a coherent bus from a first cache to a second cache in response to the first cache having a miss for data at a target address provided by a load exclusive instruction, wherein the first cache is coupled to a first processing agent that issued the load exclusive instruction and the second cache is coupled to a second processing agent; and
a second cache controller configured to snoop the coherent bus and, in response to a snooped read prefer exclusive command and a reservation granule in the second cache being tagged for this target address, ensure a state of the line in the second cache is in a valid and a shared state.

9. The apparatus of claim 8, wherein data at the requested address is provided to the first cache in response to the read prefer exclusive command snooped by the second cache from the coherent bus.

10. The apparatus of claim 8, wherein data at the requested address is provided to the first cache from a next level memory above the first cache in the memory hierarchy in response to the read prefer exclusive command detected by snooping the coherent bus.

11. The apparatus of claim 8, further comprising:
a reservation granule in the first cache tagged with the target address after the read prefer exclusive command has completed execution.

12. The apparatus of claim 8, wherein the read prefer exclusive command operates as a hint to other caches in the multi-processing system to give up the cache line if it is tagged as containing shared data.

13. The apparatus of claim 8, wherein the first processing agent receives the data provided by the second cache controller and changes the first processing agent's cache line state to shared.

14. A method for semaphore management across a coherent bus in a multiprocessor, the method comprising:
determining that a first cache local to a first processor hit at a target address in response to a load exclusive instruction issued from the first processor, wherein the accessed first cache line is in a shared or owned state;
issuing an upgrade prefer exclusive command on a coherent bus from the first cache to a second cache local to a second processor;
determining that the second cache hit at the target address in response to the upgrade prefer exclusive command, wherein a reservation granule of the accessed second cache line is in a not tagged state; and
upgrading the line requested by the first processor to an exclusive state in response to the second cache line being in a shared state.

15. The method of claim 14, further comprising:
invalidating the cache line in the second cache in response to the second cache line initially being in the shared state.

16. The method of claim 14, further comprising:
upgrading the line requested by the first processor to a modified state in response to the second cache line being in an owned state.

17. The method of claim 14, further comprising:
determining that the second cache hit at the target address in response to the upgrade prefer exclusive command and that a reservation granule of the hit second cache line is tagged with the same address as the line requested by the first processor; and
returning to monitor for further coherent bus commands without modifying the cache line state for the first cache and for the second cache.

18. The method of claim 14, wherein the upgrade prefer exclusive command is identified by a snoop unit in a cache controller for the second cache.

19. A computer readable non-transitory medium encoded with computer readable program data and code, the program data and code when executed operable to:
determine that a first cache local to a first processor missed at a target address in response to a load exclusive instruction issued from the first processor;
issue a read prefer exclusive command on a coherent bus from the first cache to a second cache local to a second processor; and
invalidate the cache line in the second cache in response to a reservation granule in the second cache for this address being in a not tagged state.

20. An apparatus for semaphore management in a multi-processing system, the apparatus comprising:
means for issuing a read prefer exclusive command on a coherent bus from a first cache to a second cache in response to the first cache having a miss for data at a target address provided by a load exclusive instruction, wherein the first cache is coupled to a first processing agent that issued the load exclusive instruction and the second cache is coupled to a second processing agent; and
means for snooping the coherent bus and, in response to a snooped read prefer exclusive command and a reservation granule in the second cache being tagged for this target address, ensuring a state of the line in the second cache is in a valid and shared state. |
METHODS AND APPARATUS FOR IMPROVING PERFORMANCE OF SEMAPHORE MANAGEMENT SEQUENCES ACROSS A COHERENT BUS

PRIORITY APPLICATIONS

[0001] The present application claims priority to U.S. Patent Application Serial Number 13/933,337, filed July 2, 2013, entitled "METHODS AND APPARATUS FOR IMPROVING PERFORMANCE OF SEMAPHORE MANAGEMENT SEQUENCES ACROSS A COHERENT BUS," which further claims priority to U.S. Provisional Patent Application Serial Number 61/810,889, filed April 11, 2013, entitled "METHODS AND APPARATUS FOR IMPROVING PERFORMANCE OF SEMAPHORE MANAGEMENT SEQUENCES ACROSS A COHERENT BUS," both of which are incorporated herein by reference in their entireties.

Field of the Disclosure

[0002] Embodiments of the present invention relate generally to aspects of semaphore management, and more specifically to semaphore management across a coherent bus.

Background

[0003] Many portable products, such as cell phones, laptop computers, personal data assistants (PDAs) and the like, utilize a processing system that executes programs, such as communication and multimedia programs. A processing system for such products may include multiple processors, complex memory systems including multiple levels of caches and memory for storing instructions and data, controllers, peripheral devices such as communication interfaces, and fixed function logic blocks configured, for example, on a single chip.

[0004] Multiple processors (MPs), such as a dual processor or a quad processor, are generally designed as a shared memory system utilizing a multi-level memory hierarchy. In such a shared-memory MP, data may be organized as private data and shared data. The private data is further organized for use locally by each processor in the MP. The shared data requires a mechanism to efficiently communicate data among the processors and to efficiently maintain coherence of the data between the processors. One mechanism to efficiently communicate data among the processors is to use a coherent bus, within the multi-level memory hierarchy, which supports a coherent protocol to ensure data that is shared is consistent between each of the processors.

[0005] For example, a bus may be used at a cache level that requires coherence of the shared data, such as at a level 2 cache position in the shared memory hierarchy. The coherent bus is utilized between each level 2 cache associated with each processor in the MP. Various protocols have been developed to maintain consistency of data that is shared, such as the modified owned exclusive shared invalid (MOESI) protocol. In the MOESI protocol, each cache line is tagged in such a way as to indicate whether the cache line is present only in the current cache and is dirty (modified); the cache line is present only in the current cache and is clean (exclusive); the cache line may be stored in other caches in the MP and is dirty in the present cache (owned); the cache line may be stored in other caches in the MP and is clean in the present cache (shared); or the cache line is invalid in the present cache (invalid). The MOESI states are checked whenever a cache line is written to in order to determine the effect of that write on the corresponding data shared in the multiple caches.
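For reference, the five MOESI states just listed map naturally onto a small enumeration. The C sketch below is written for this edit (the names and the helper are chosen for illustration, not taken from the disclosure); it also encodes the rule that matters for the rest of this description — a write can complete locally only when the line is modified or exclusive.

    /* The five MOESI cache-line states described in the preceding
     * paragraph; names are illustrative. */
    enum moesi_state {
        MOESI_MODIFIED,   /* only copy, dirty                       */
        MOESI_OWNED,      /* dirty here, possibly shared elsewhere  */
        MOESI_EXCLUSIVE,  /* only copy, clean                       */
        MOESI_SHARED,     /* clean here, possibly shared elsewhere  */
        MOESI_INVALID     /* not valid in this cache                */
    };

    /* A store may complete without a bus request only in these
     * states; any other state requires obtaining the line first. */
    static inline int can_store_locally(enum moesi_state s)
    {
        return s == MOESI_MODIFIED || s == MOESI_EXCLUSIVE;
    }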
[0006] In a multi-processor, specialized instructions are used by each processing agent for semaphore management. Semaphore management often involves a pair of specialized load and store instructions to read a memory location, set a reservation granule, and conditionally write the memory location based on the state of the reservation granule. Systems that maintain cache coherence across a bus have the potential for these semaphore management instructions to result in live-lock or poor performance if two or more processors are competing for the same semaphore.

SUMMARY

[0007] Among its several aspects, the present disclosure recognizes that it is desirable to provide more efficient methods and apparatuses for semaphore management across a coherent bus. To such ends, an embodiment of the invention addresses a method for semaphore management across a coherent bus in a multi-processor. A first cache local to a first processor is determined to have missed at a target address in response to a load exclusive instruction issued from the first processor. A read prefer exclusive command is issued on a coherent bus from the first cache to a second cache local to a second processor. In response to the read prefer exclusive command, a reservation granule in the second cache is determined to be in a not tagged state. The cache line in the second cache is invalidated in response to the determination that the reservation granule in the second cache for this address is in the not tagged state.

[0008] Another embodiment addresses an apparatus for semaphore management across a coherent bus in a multi-processing system. A first cache controller is configured to issue a read prefer exclusive command on a coherent bus from a first cache to a second cache in response to the first cache having a miss for data at a target address provided by a load exclusive instruction, wherein the first cache is coupled to a first processing agent that issued the load exclusive instruction and the second cache is coupled to a second processing agent. A second cache controller is configured to snoop the coherent bus and, in response to a snooped read prefer exclusive command and a reservation granule in the second cache being tagged for this target address, ensure a state of the line in the second cache is in a valid and shared state.

[0009] Another embodiment addresses a method for semaphore management across a coherent bus in a multi-processor. A first cache local to a first processor is determined to have hit at a target address in response to a load exclusive instruction issued from the first processor, wherein the accessed first cache line is in a shared or owned state. An upgrade prefer exclusive command is issued on a coherent bus from the first cache to a second cache local to a second processor. The second cache is determined to have hit at the target address in response to the upgrade prefer exclusive command, wherein a reservation granule of the accessed second cache line is in a not tagged state. The line requested by the first processor is upgraded to an exclusive state in response to the second cache line being in a shared state.

[0010] Another embodiment addresses a computer readable non-transitory medium encoded with computer readable program data and code. A first cache local to a first processor is determined to have missed at a target address in response to a load exclusive instruction issued from the first processor. A read prefer exclusive command is issued on a coherent bus from the first cache to a second cache local to a second processor. The cache line in the second cache is invalidated in response to a reservation granule in the second cache for this address being in a not tagged state.

[0011] A further embodiment addresses an apparatus for semaphore management across a coherent bus in a multi-processing system. Means is utilized to issue a read prefer exclusive command on a coherent bus from a first cache to a second cache in response to the first cache having a miss for data at a target address provided by a load exclusive instruction, wherein the first cache is coupled to a first processing agent that issued the load exclusive instruction and the second cache is coupled to a second processing agent. Means is utilized to snoop the coherent bus and, in response to a snooped read prefer exclusive command and a reservation granule in the second cache being tagged for this target address, ensure a state of the line in the second cache is in a valid and shared state.

[0012] It is understood that other embodiments of the present invention will become readily apparent to those skilled in the art from the following detailed description, wherein various embodiments of the invention are shown and described by way of illustration. As will be realized, the invention is capable of other and different embodiments and its several details are capable of modification in various other respects, all without departing from the spirit and scope of the present invention. Accordingly, the drawings and detailed description are to be regarded as illustrative in nature and not as restrictive.

BRIEF DESCRIPTION OF THE DRAWINGS

[0013] Various aspects of the present invention are illustrated by way of example, and not by way of limitation, in the accompanying drawings, wherein:

[0014] FIG. 1 illustrates a dual core multi-processor (MP) system;

[0015] FIG. 2A illustrates a basic example of semaphore management between two processing agents;

[0016] FIG. 2B illustrates a first scenario of semaphore management across a coherent bus;

[0017] FIG. 2C illustrates a second scenario of semaphore management across a coherent bus;

[0018] FIG. 2D illustrates a third scenario of semaphore management across a coherent bus illustrating a live-lock situation;

[0019] FIG. 3A illustrates a semaphore management process having support for efficient live-lock avoidance;

[0020] FIG. 3B illustrates a read prefer exclusive semaphore management process;

[0021] FIG. 3C illustrates an upgrade prefer exclusive semaphore management process;

[0022] FIG. 4A illustrates an exemplary first semaphore management technique using a read prefer exclusive bus command;

[0023] FIG. 4B illustrates an exemplary second semaphore management technique using the read prefer exclusive bus command;

[0024] FIG. 4C illustrates an exemplary third semaphore management technique using an upgrade prefer exclusive bus command; and

[0025] FIG. 5 illustrates a particular embodiment of a portable device that utilizes an exemplary semaphore management with efficient live-lock avoidance in accordance with embodiments of the invention.

DETAILED DESCRIPTION

[0026] The detailed description set forth below in connection with the appended drawings is intended as a description of various exemplary embodiments of the present invention and is not intended to represent the only embodiments in which the present invention may be practiced. The detailed description includes specific details for the purpose of providing a thorough understanding of the present invention. However, it will be apparent to those skilled in the art that the present invention may be practiced without these specific details. In some instances, well known structures and components are shown in block diagram form in order to avoid obscuring the concepts of the present invention.

[0027] FIG. 1 illustrates a multi-processor (MP) system 100. The MP system 100 comprises a dual core system 102 having a first processing agent (PX1) 104, such as a first core processor, a level 1 data cache-1 (L1Dcache-1) 105, an L2 cache and controller-1 106, a second processing agent (PX2) 108, such as a second core processor, a level 1 data cache-2 (L1Dcache-2) 109, an L2 cache and controller-2 110, a coherent bus 114, and main memory 116. The L2 cache and controller-1 106 includes an L2 cache1 120, an L2 controller-1 121, a reservation granule-1 (RG-1) 122, and a snoop1 unit 123. The L2 cache and controller-2 110 includes an L2 cache2 126, an L2 controller-2 127, a reservation granule-2 (RG-2) 128, and a snoop2 unit 129. A coherent bus may also be used between caches at other levels in the memory hierarchy using similar techniques as described herein. It is noted that the system 100 is not limited to a homogeneous machine, as other types of processing agents, such as processors or hardware accelerators, in a heterogeneous machine organization may execute specialized instructions for semaphore management. A reservation granule (RG), such as RG-1 122 and RG-2 128, comprises a program accessible storage location having a valid indication, such as a valid bit, and a tag field for storage of an address. While the RG-1 122 and RG-2 128 are shown in the associated L2 cache controller, the location of the reservation granule is not so limited and may be located elsewhere in the dual core system 102, such as a controller for a different level of the memory hierarchy, for example.

[0028] In the dual core MP system 100, specialized instructions are used by each processing agent, such as PX1 104 and PX2 108, for semaphore management. Semaphore management often involves a pair of specialized load and store instructions to read a memory location, set a reservation granule, and conditionally write the memory location based on the state of the reservation granule. These specialized instructions are referred to as load-exclusive (LDEX) and store-exclusive (STEX). The reservation granule (RG) is used to determine if a data value returned for the LDEX has been changed by another processing agent between the execution of the LDEX and the STEX. In other words, the RG is used to allow two discrete instructions to behave together as if they are atomic even though they are individually executed. Specialized commands for efficient semaphore management, including a read prefer exclusive command and an upgrade prefer exclusive command, are also described in further detail with regard to FIGs. 3A-3C and FIGs. 4A-4C.

[0029] The MP system 100 provides for semaphore management across a coherent bus 114. Means, such as the L2 cache and controller-1 106, is utilized to issue a read prefer exclusive command on a coherent bus from a first cache to a second cache in response to the first cache having a miss for data at a target address provided by a load exclusive instruction, wherein the first cache is coupled to a first processing agent that issued the load exclusive instruction and the second cache is coupled to a second processing agent. Means, such as the L2 cache and controller-2 110, is utilized to snoop the coherent bus and respond to a snooped read prefer exclusive command by providing data to the first cache at the target address. In response to a reservation granule in the second cache being tagged for this target address, a state of the line in the second cache ends in a valid state.

[0030] For example, the L2 cache and controller-1 106, associated with the first processing agent (PX1) 104 that executes a load exclusive (LDEX) or a store exclusive (STEX), may be configured with decoders for identifying commands on the coherent bus 114. The L2 cache and controller-1 106 is also configured with hardware for identifying a state of an accessed cache line and with comparators for determining whether a current cache line state or current reservation granule (RG) state has changed from a corresponding previous state. The cache line state, such as the state of an accessed cache line in the L2 cache1 120, and the state of the RG-1 122 are determined by separate mechanisms that access stored state values in parallel. The determined state values are then combined by logical means to identify whether a bus command needs to be issued. If a bus command needs to be issued, an appropriate bus command is selected to issue. While the cache state and the RG state may be checked serially, such an approach may not be as efficient as checking the states in parallel. For example, snoopers, such as snoop1 123 and snoop2 129, operate separately and in parallel by decoding bus commands on the coherent bus 114. The particular operation detected and the selected bus command follow the operations shown in FIGs. 3A-3C and 4A-4C, which include changing the state of a cache line, changing an RG state, and providing a response to the processing agent that executes an LDEX or STEX that causes commands on the coherent bus 114. A further response to the processing agent executing the LDEX may include providing data, such as shown below with regard to blocks 348, 350, 356, and 358 of FIG. 3B, for example.

[0031] FIG. 2A illustrates a basic example of semaphore management 200 between two processing agents, PX1 202 and PX2 203. In a first operation 204, PX1 executes an LDEX A which causes the reservation granule (RG) associated with PX1 to be tagged with the memory address A of the LDEX. In a second operation 205, PX1 takes no action and PX2 also executes an LDEX A, which results in the RG associated with PX2 being tagged with the memory address A. In the multiprocessor (MP) environment, a STEX from one processing agent is required to untag another processing agent's RG if tagged with the STEX address. Thus, in a third operation 206, PX1 executes a STEX A which completes execution since PX1's RG is tagged, and a bus command is issued to remove the tag from PX2's RG. In response to the bus command, PX2's RG is untagged. In a fourth operation 207, PX2 attempts to execute a STEX which fails, since PX2's RG is no longer tagged. In a different scenario, if PX2 updated the memory addressed by PX1's RG before PX1 executes the STEX, the tag in PX1's RG is cleared, and PX1's STEX would then not update memory because the RG is no longer tagged. The clearing of the tag indicates to PX1 that the value its LDEX returned is now old and not valid.
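The LDEX/STEX pairing walked through in FIG. 2A is the same pattern that, for example, ARM exposes as LDREX/STREX and that compilers wrap in atomic builtins. The C sketch below uses the GCC/Clang __atomic_compare_exchange_n builtin to show the retry shape only; it illustrates the instruction semantics, not the bus commands of this disclosure, and the semaphore encoding (0 = free, 1 = taken) is an assumption for the example.

    #include <stdbool.h>

    /* Try to take a semaphore word. On ARM this typically compiles to
     * an LDREX/STREX pair: the load tags the reservation granule, and
     * the store succeeds only if the granule is still tagged, i.e. no
     * other agent wrote the location in between. */
    static bool try_take_semaphore(unsigned *sem)
    {
        unsigned expected = 0u;
        return __atomic_compare_exchange_n(sem, &expected, 1u,
                                           false /* strong */,
                                           __ATOMIC_ACQUIRE,
                                           __ATOMIC_RELAXED);
    }

    /* The retry behavior seen in FIG. 2A: a failed STEX simply repeats
     * the LDEX/STEX sequence until the reservation survives. */
    static void take_semaphore(unsigned *sem)
    {
        while (!try_take_semaphore(sem))
            ;  /* spin; each iteration is a fresh LDEX/STEX attempt */
    }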
The best performance occurs when the cache associated with the processor executing the semaphore management instruction contains the cache line in either the modified or exclusive states when the STEX executes. If the cache line addressed by the STEX is in any other state, then a bus request must be made to obtain the line in the modified or exclusive state prior to the STEX being allowed to complete.[0033] FIG. 2B illustrates a first scenario of semaphore management 220 across a coherent bus. For this first scenario, processing agents PXl 202 and PX2 203 are not trying to access the same semaphore at the same time and PX2 holds the cache line including data at address A. Also, PX2's RG is not tagged. In a first operation 224, PXl executes an LDEX A and obtains the line in a shared state. PXl 's RG is then tagged with address A. Since the line is in a shared state, PX2's copy of the line must be invalidated in order to change PXl's line to an exclusive state. In a second operation 225, PXl issues a STEX A which causes a bus command to be issued to invalidate the PX2's line. The command issued to invalidate PX2's line with address A causes additional latency when PXl is trying to acquire the semaphore.[0034] FIG. 2C illustrates a second scenario of semaphore management 240 across a coherent bus. For this second scenario, PXl 202, PX2 203, and a coherent bus are configured to obtain a line exclusive upon executing an LDEX. PX2 holds the cache line including data at address A. Also, PX2's RG is not tagged prior to PXl's LDEX A being issued. In a first operation 244, PXl executes an LDEX A which causes a read exclusive bus command to be issued and obtains the line in an exclusive state. Since the line is in an exclusive state, PX2's copy of the line is invalidated and PXl's RG is then tagged with address A. In a second operation 245, PXl issues a STEX A which completes execution since PXl's RG is tagged, and does not need to issue a bus command because PXl's LDEX already obtained line A in the exclusive state. While this semaphore management technique 240 helps performance by not requiring PXl 's STEX A to make an additional bus command before completing execution, it can lead to a live-lock situation as described with regard to FIG. 2D.[0035] FIG. 2D illustrates a third scenario of semaphore management 260 across a coherent bus illustrating a live-lock situation. For this third scenario, PXl 202, PX2 203, and a coherent bus are configured to obtain a line exclusive upon executing an LDEX. PXl and PX2 both hold copies of the cache line including data at address A. In a first operation 264, PXl executes an LDEX A which causes a read exclusive bus command to be issued and obtains the line in an exclusive state. Since the line is in an exclusive state, PX2's copy of the line is invalidated and PXl's RG is then tagged with address A. If PX2's RG was tagged, it is untagged. In a second operation 265, PX2 executes an LDEX A which causes a read exclusive bus command to be issued and obtains the line in an exclusive state. Since the line is in an exclusive state, PXl's copy of the line is invalidated, PXl's RG is untagged, and PX2's RG is then tagged with address A. In a third operation 266, PXl issues a STEX which fails since its RG is no longer tagged causing the LDEX/STEX process to be repeated. In a fourth operation 267, PXl executes an LDEX A which causes a read exclusive bus command to be issued and PXl obtains the line in an exclusive state. 
Since the line is in an exclusive state, PX2's copy of the line is invalidated, PXl's RG is then tagged with address A, and PX2's RG is untagged. In a fifth operation 268, PX2 issues a STEX which fails since its RG is no longer tagged causing the LDEX/STEX process to be repeated. In a sixth operation 269 and continuing, the LDEX/STEX process in both PXl and PX2 are repeated due to the live-lock situation.[0036] To ensure a live-lock situation does not occur, it is noted that the STEX operation is always preceded by an LDEX operation and thus, the LDEX can be used as a hint to obtain the line in a modified or exclusive state in anticipation of the STEX executing. However, an implementation cannot demand the line in a modified or exclusive state upon execution of the LDEX as indicated by the operations of FIGs. 2C and 2D. Such an approach could cause a live-lock if two processors were competing for the same semaphore, as shown in FIG. 2D. To avoid this live-lock situation, previous implementations have allowed the LDEX to obtain the line in the exclusive state if all other caches have the line invalid, but require the line to be taken shared if any other cache has the line in a state other than invalid. In the case where the line is taken shared by the LDEX executing processing agent, the STEX must then make an additional bus request to invalidate the other cache's shared copy of the line as shown in FIG. 2B, which causes a loss in performance.[0037] The dual core system 102 is configured to execute software instructions that are stored in a non-transitory computer-readable medium, such as associated with the system memory 116, and that are executable to cause a computer, such as the first processing agent (PX1) 104 and the second processing agent (PX2) 108, to execute a program to operate as illustrated in FIGs. 3A - 3C and 4A - 4C. The PX1 104 and PX2 108 are configured to execute the software instructions that are accessed from the different levels of cache memories, 105, 120, 109, and 126, and the system memory 116.[0038] FIG. 3A illustrates a semaphore management process 300 having support for efficient live-lock avoidance. To avoid live-lock and the associated loss in performance, the coherent bus protocol is modified to include new commands that are issued, such as at block 308, to cause the read prefer exclusive command to issue and at block 312 to cause the upgrade prefer exclusive command to issue. The commands at blocks 308 and 312 operate in response to a cache access associated with block 306, a cache line state associated with block 310, and states of reservation granules (RGs) built into the cache controller, such as L2 cache and controller- 1 106 and L2 cache and controller-2 110.[0039] The process 300 begins at block 304, where a requesting core processing agent, such as PX1 104, issues an LDEX A instruction. At block 306, the cache, such as the L2 cachel 120 of FIG. 1, associated with the requesting PX1 determines if the line having data at address A is in the cache. If the line is not present in the cache, the process 300 proceeds to block 308. At block 308, a coherent bus controller, such as the L2 cache controller- 1 121 associated with the PX1, issues a read prefer exclusive command on the coherent bus 114. Returning to block 306, if the line is present in the cache, such as indicated by a hit in the cache, the process 300 proceeds to block 310. At block 310, a determination is made whether PXl's cache line state indicates shared, owned, exclusive, or modified. 
If the state is shared or owned, the process 300 proceeds to block 312. At block 312, the coherent bus controller, such as the L2 cache controller-1 121 associated with the PX1, issues an upgrade prefer exclusive command on the coherent bus 114. Returning to block 310, if the state is exclusive or modified, the process 300 proceeds to block 314 where the process associated with PX1's issue of the LDEX A instruction is completed.[0040] FIG. 3B illustrates a read prefer exclusive semaphore management process 330. An RG is tagged with the target cache line address in response to an LDEX instruction. A STEX instruction only updates memory if the RG, for the STEX-executing processing agent, is tagged at the time that the STEX instruction was issued for execution. A read prefer exclusive operation is used as a hint to other caches in the MP that the requesting master is likely to store to the cache line, and, if possible, each of the other caches should invalidate the line to allow the requesting master to transition to the exclusive state. In most cases, this will result in the other masters invalidating the line, which may also be referred to as giving up the line, and the requesting master taking the line exclusive. It is noted that a cache line may be marked shared in a first cache even if the line is not valid in a second cache. The cache line in the first cache is marked as shared because the second cache still has its RG tagged with the target address. Although the second cache does not have the line valid in its cache, the second cache's RG is still valid and the first cache takes the line shared to remember that there is an RG that is tagged in the second cache. The STEX instruction must still broadcast to un-tag the RG in the second cache even though the second cache does not have the line valid in its cache.[0041] The only time another cache is unable to give up the line is when the other cache is itself performing a semaphore management sequence and the other cache's RG is tagged with the same address, as this could lead to a live-lock. In most cases, multiple processors are not performing a semaphore management sequence for the same address at the same time. As a result, this embodiment could significantly increase the number of times that a requesting master's LDEX is able to take the line in the exclusive state, which increases performance in each processor of the MP.[0042] The process 330 begins at block 332 from a monitor that determines whether a read prefer exclusive command was detected on a coherent bus. Upon the command being detected, also referred to as snooped, from the coherent bus 114 by a snooper operating for PX2, the process 330 proceeds to block 334. At block 334, a determination is made whether the line associated with the LDEX instruction issued at block 304 is in the PX2's cache. If the determination is that the line is not in the PX2's cache, such as indicated by a miss in the cache, the process 330 proceeds to block 336. At block 336, a determination is made whether the cache line reservation granule (RG) associated with PX2 is tagged with the same address A as the LDEX instruction, is not tagged, or is tagged with an address different than address A. If the line is not tagged or is tagged with an address different than A, the process 330 proceeds to block 338. At block 338, the requester, in this case PX1, takes the line exclusive and the data is fetched from the next level in the memory hierarchy, such as from an L3 cache.
The process 330 then returns to block 332. Returning to block 336, if the line is tagged with the same address, the process 330 proceeds to block 340. At block 340, the requester, in this case PX1, takes the line shared and the data is fetched from the next level in the memory hierarchy. The process 330 then returns to block 332.[0043] Returning to block 334, if the determination is that the line is in PX2's cache, such as indicated by a hit in the cache, the process 330 proceeds to block 344. At block 344, a determination is made whether the cache line reservation granule (RG) associated with PX2 is tagged with the same address A as the LDEX instruction, is not tagged, or is tagged with an address different than address A. If the line is not tagged or is tagged with an address different than A, the process 330 proceeds to block 346. At block 346, a determination is made whether the PX2 cache line state is shared or exclusive or whether it is owned or modified. If the PX2 cache line state is shared or exclusive, the process 330 proceeds to block 348. At block 348, the requester PX1 takes the line exclusive, the line in the PX2's cache is invalidated, and the accessed data is provided to PX1's cache. The process 330 then returns to block 332. Returning to block 346, if the PX2 cache line state is owned or modified, the process 330 proceeds to block 350. At block 350, the requester PX1 takes the line modified, the line in the PX2's cache is invalidated, and the accessed data is provided to PX1's cache. The process 330 then returns to block 332.[0044] Returning to block 344, if the line is tagged with the same address, the process 330 proceeds to block 354. At block 354, a determination is made whether the PX2 cache line state is shared or exclusive or whether it is owned or modified. If the PX2 cache line state is shared or exclusive, the process 330 proceeds to block 356. At block 356, the requester PX1 takes the line shared; the PX2's cache, if in the exclusive state, transitions to the shared state and, if in the shared state, remains in the shared state; and the accessed data is provided to PX1's cache. The process 330 then returns to block 332. Returning to block 354, if the PX2 cache line state is owned or modified, the process 330 proceeds to block 358. At block 358, the requester PX1 takes the line shared; the PX2's cache, if in the modified state, transitions to the owned state and, if in the owned state, remains in the owned state; and the accessed data is provided to PX1's cache. The process 330 then returns to block 332. The snoop responses of process 330 are also summarized in the code sketch below.[0045] FIG. 3C illustrates an upgrade prefer exclusive semaphore management process 360. The process 360 begins at block 362 from a monitor that determines whether an upgrade prefer exclusive command was detected on a coherent bus. Upon the command being detected, also referred to as snooped, from the coherent bus 114 by a snoop unit or snooper operating for PX2, the process 360 proceeds to block 364. At block 364, a determination is made whether the line associated with the LDEX instruction issued at block 304 is in the PX2's cache. If the determination is that the line is not in the PX2's cache, such as indicated by a miss in the cache, the process 360 proceeds to block 366. At block 366, a determination is made whether the cache line reservation granule (RG) associated with PX2 is tagged with the same address A as the LDEX instruction, is not tagged, or is tagged with an address different than address A.
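The snoop-response decision tree of process 330 (blocks 334 through 358) can be expressed compactly in code. The following is a minimal sketch in C under stated assumptions: the MOESI-style state enumeration, type names, and function name are illustrative and do not come from the embodiments themselves.

    /* Cache line states of the MOESI-style protocol assumed by FIG. 3B. */
    typedef enum { INVALID, SHARED, EXCLUSIVE, OWNED, MODIFIED } line_state_t;

    typedef struct {
        line_state_t requester_state; /* state the requester (PX1) takes the line in */
        line_state_t snooper_state;   /* state the snooper's (PX2's) copy becomes    */
    } snoop_result_t;

    /* Decision tree of the read prefer exclusive snoop, blocks 334-358.
     * hit: the line is valid in the snooper's cache (block 334).
     * rg_match: the snooper's RG is tagged with the same address (336/344). */
    snoop_result_t snoop_read_prefer_exclusive(int hit, int rg_match,
                                               line_state_t snooper_state)
    {
        snoop_result_t r;
        if (!hit) {                                            /* block 336 */
            r.snooper_state = INVALID;
            r.requester_state = rg_match ? SHARED : EXCLUSIVE; /* 340 / 338 */
        } else if (!rg_match) {                                /* block 346 */
            r.snooper_state = INVALID;                         /* line given up */
            r.requester_state = (snooper_state == OWNED ||
                                 snooper_state == MODIFIED)
                                ? MODIFIED : EXCLUSIVE;        /* 350 / 348 */
        } else {                                               /* block 354 */
            r.requester_state = SHARED;
            if (snooper_state == EXCLUSIVE)
                r.snooper_state = SHARED;                      /* block 356 */
            else if (snooper_state == MODIFIED)
                r.snooper_state = OWNED;                       /* block 358 */
            else
                r.snooper_state = snooper_state;  /* shared/owned unchanged */
        }
        return r;
    }

The upgrade prefer exclusive process 360 follows the same pattern; the outcomes of the block 366 determination are described next.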
If the line is not tagged or is tagged with an address different than A, the process 360 proceeds to block 368. At block 368, the requester, in this case PX1, upgrades the line state to exclusive. The process 360 then returns to block 362. Returning to block 366, if the line is tagged with the same address, the process 360 proceeds to block 370. At block 370, no action is taken and neither PX1 nor PX2 changes cache state. The process 360 then returns to block 362.[0046] Returning to block 364, if the determination is that the line is in the PX2's cache, such as indicated by a hit in the cache, the process 360 proceeds to block 372. At block 372, a determination is made whether the cache line reservation granule (RG) associated with PX2 is tagged with the same address A as the LDEX instruction, is not tagged, or is tagged with an address different than address A. If the line is not tagged or is tagged with an address different than A, the process 360 proceeds to block 374. At block 374, a determination is made whether the PX2 cache line state is shared or whether it is owned. If the PX2 cache line state is shared, the process 360 proceeds to block 376. At block 376, the requester PX1 upgrades the line to an exclusive state and invalidates the line in the PX2's cache. The process 360 then returns to block 362. Returning to block 374, if the PX2 cache line state is owned, the process 360 proceeds to block 378. At block 378, the requester PX1 upgrades the line to a modified state and invalidates the line in PX2's cache. The process 360 then returns to block 362.[0047] Returning to block 372, if the line is tagged with the same address, the process 360 proceeds to block 380. At block 380, no action is taken and neither PX1 nor PX2 changes cache state. The process 360 then returns to block 362.[0048] In an alternative embodiment, the read prefer exclusive command and the upgrade prefer exclusive command may be implemented by including an appropriate attribute in an existing bus command. For example, to provide the function of the read prefer exclusive command, an attribute may be added to a read command that indicates a requesting processor might require the line exclusive. In most cases, the other processing agents would release the cache line in response to the attribute, allowing the requesting agent to take the line exclusive. It is noted that a read command with an attribute set to indicate a requesting processor might require the line in an exclusive state may also be referred to as a read prefer exclusive command. Also, the function of the upgrade prefer exclusive command may be implemented by including an attribute in an upgrade command that indicates a requesting processor might require the line exclusive.[0049] FIG. 4A illustrates an exemplary first semaphore management technique 400 using a read prefer exclusive bus command. For this first technique, PX1 402, PX2 403, and a coherent bus are configured to opportunistically obtain a line exclusive upon executing an LDEX, when the LDEX by itself cannot demand the line exclusive. PX2 holds the cache line including data at address A. Also, PX2's RG is not tagged prior to PX1's LDEX A being issued. In a first operation 404, PX1 executes an LDEX A which causes a read prefer exclusive bus command to be issued and obtains the line in an exclusive state. Since the line is in an exclusive state, PX2's copy of the line is invalidated and PX1's RG is then tagged with address A.
In a second operation 405, PX1 issues a STEX A which completes execution since PX1's RG is tagged and no additional bus command is issued. No further action is required by PX2. The semaphore management technique 400 helps performance by not requiring PX1's STEX A to make an additional bus command before completing execution.[0050] FIG. 4B illustrates an exemplary second semaphore management technique 420 using the read prefer exclusive bus command. For this second semaphore management technique 420, PX1 402, PX2 403, and a coherent bus are configured to obtain a line exclusive upon executing an LDEX. PX1 and PX2 both hold copies of the cache line including data at address A in their associated caches. PX2's RG is not tagged prior to PX1 issuing an LDEX A. In a first operation 424, PX1 executes an LDEX A which causes a read prefer exclusive bus command to be issued and obtains the line in an exclusive state. Since the line is in an exclusive state, PX2's copy of the line is invalidated and PX1's RG is then tagged with address A. In a second operation 425, PX2 executes an LDEX A which causes a read prefer exclusive bus command to be issued and obtains the line in a shared state. Since the line is in a shared state, PX1's copy of the line is changed to a shared state, PX1's RG remains tagged with address A, and PX2's RG is tagged with address A. In a third operation 426, PX1 issues a STEX A which causes an additional bus command to be issued to invalidate PX2's line and untag PX2's RG. The STEX completes execution since PX2's line is invalidated and PX1's RG is tagged. The command issued to invalidate PX2's line with address A causes additional latency when PX1 is trying to acquire the semaphore. However, there is no live-lock. The additional bus command is required only for the relatively uncommon case where two processing agents are trying to obtain the same semaphore at the same time.[0051] FIG. 4C illustrates an exemplary third semaphore management technique 440 using an upgrade prefer exclusive bus command. For this third technique, PX1 402, PX2 403, and a coherent bus are configured to opportunistically obtain a line exclusive upon executing an LDEX, when the LDEX by itself cannot demand the line exclusive. PX2 holds the cache line including data at address A. Also, PX2's RG is not tagged prior to PX1's LDEX A being issued. In a first operation 444, PX1 executes an LDEX A which hits in its L2 cache, causes an upgrade prefer exclusive bus command to be issued, and obtains the line in an exclusive state. Since the line is in an exclusive state, PX2's copy of the line is invalidated and PX1's RG is then tagged with address A. In a second operation 445, PX1 issues a STEX A which completes execution since PX1 holds the line in an exclusive state and PX1's RG is tagged. No further action is required by PX2. The semaphore management technique 440 helps performance by not requiring PX1's STEX A to make an additional bus command before completing execution.[0052] In an alternative embodiment for blocks 348, 350, 356, and 358, rather than have the data provided by the level 2 cache associated with PX2, data could be provided by the next level cache in the memory hierarchy or from the main system memory. In such a case, the cache line invalidation indicated in blocks 348 and 350 would occur regardless of where the data came from.[0053] While FIG.
1 illustrates a system with two processing agents, PX1 104 and PX2 108, the embodiments described herein are also applicable to systems having three or more processing agents. In this further embodiment, each additional L2 cache and controller snoops a coherent bus between the plurality of cache controllers and, upon detecting a bus command, independently responds in the manner described with regard to FIGs. 3A-3C and 4A-4C. For example, in a system having three processing agents PX1, PX2, and PX3, if PX1 issues an LDEX, such as at block 304 of FIG. 3A, that also misses in PX1's cache, such as at block 306, the L2 cache and controller-1 106 would issue a read prefer exclusive command on the coherent bus. Both the PX2 and PX3 subsystems would snoop the coherent bus and identify the read prefer exclusive command, and if the requested line is in both PX2's cache and PX3's cache, both PX2's and PX3's L2 cache and controller would check the state of the associated reservation granule and, based on determined state values of the RG and cache line state, would proceed to an appropriate block 348, 350, 356, or 358. In such a case, with each cache controller having data to provide to the PX1 subsystem, appropriate means are provided by a protocol on the coherent bus to choose whether the controller for PX2 or the controller for PX3 provides the data to the PX1 subsystem.[0054] FIG. 5 illustrates a particular embodiment of a portable device 500 that utilizes an exemplary semaphore management with efficient live-lock avoidance in accordance with embodiments of the invention. The portable device 500 may be a wireless electronic device and include a system core 504 which includes a processor complex 506 coupled to a system memory 508 having software instructions 510. The portable device 500 comprises a power supply 514, an antenna 516, an input device 518, such as a keyboard, a display 520, such as a liquid crystal display (LCD), one or two cameras 522 with video capability, a speaker 524 and a microphone 526. The system core 504 also includes a wireless interface 528, a display controller 530, a camera interface 532, and a codec 534. The processor complex 506 may include a multi-processor (MP) system 554 which includes two core processing units, PX1 536 having local level 1 instruction and data (L1 I & D) caches 549 and PX2 538 having local level 1 instruction and data (L1 I & D) caches 550. The MP system 554 may correspond to the dual core system 102 of FIG. 1. The processor complex 506 may also include a modem subsystem (MSS) 540, a flash controller 544, a flash device 546, a multimedia subsystem 548, a level 2 cache0 and controller0 551, a level 2 cache1 and controller1 552, and a coherent bus 553. The flash device 546 may include a removable flash memory or may also be an embedded memory.[0055] In an illustrative example, the system core 504 operates in accordance with any of the embodiments illustrated in or associated with FIGS. 1, 3A-3C, and 4A-4C. For example, as shown in FIG. 5, the MP system 554 dual core processors are configured to access data or program instructions stored in the memories of the L1 I & D caches 549 and 550 of their associated dual core processor, the L2 caches 551 and 552, and in the system memory 508 to provide operations as illustrated in FIGs.
3A - 3C.[0056] The wireless interface 528 may be coupled to the processor complex 506 and to the wireless antenna 516 such that wireless data received via the antenna 516 and wireless interface 528 can be provided to the MSS 540 and shared with the MP system 554. The camera interface 532 is coupled to the processor complex 506 and also coupled to one or more cameras, such as a camera 522 with video capability. The display controller 530 is coupled to the processor complex 506 and to the display device 520. The coder/decoder (codec) 534 is also coupled to the processor complex 506. The speaker 524, which may comprise a pair of stereo speakers, and the microphone 526 are coupled to the codec 534. The peripheral devices and their associated interfaces are exemplary and not limited in quantity or in capacity. For example, the input device 518 may include a universal serial bus (USB) interface or the like, a QWERTY style keyboard, an alphanumeric keyboard, and a numeric pad which may be implemented individually in a particular device or in combination in a different device.[0057] The MP system 554 dual core processors are configured to execute software instructions 510 that are stored in a non-transitory computer-readable medium, such as associated with the system memory 508, and that are executable to cause a computer, such as the dual core processors 536 and 538, to execute a program to provide operations as illustrated in FIGs. 3A - 3C and 4A - 4C. The PX1 536 and PX2 538 are configured to execute the software instructions 510 that are accessed from the different levels of cache memories and the system memory 508.[0058] In a particular embodiment, the system core 504 is physically organized in a system-in-package or as a system-on-chip device. In a particular embodiment, the system core 504, organized as a system-on-chip device, is physically coupled, as illustrated in FIG. 5, to the power supply 514, the wireless antenna 516, the input device 518, the display device 520, the camera or cameras 522, the speaker 524, the microphone 526, and may be coupled to a removable flash device 546.[0059] The portable device 500 in accordance with embodiments described herein may be incorporated in a variety of electronic devices, such as a set top box, an entertainment unit, a navigation device, a communications device, a personal digital assistant (PDA), a fixed location data unit, a mobile location data unit, a mobile phone, a cellular phone, a computer, a portable computer, a tablet, a monitor, a computer monitor, a television, a tuner, a radio, a satellite radio, a music player, a digital music player, a portable music player, a video player, a digital video player, a digital video disc (DVD) player, a portable digital video player, any other device that stores or retrieves data or computer instructions, or any combination thereof.[0060] The various illustrative logical blocks, modules, circuits, elements, or components described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic components, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein.
A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing components, for example, a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration appropriate for a desired application.[0061] The dual core processors 536 and 538 of FIG. 5 may be configured to execute instructions to service a real time task under control of a program. The program is stored on a computer readable non-transitory storage medium that is either directly associated locally with the processor complex 506, such as may be available through the instruction and data caches 549-552, or accessible through a particular input device 518 or the wireless interface 528. The input device 518 or the wireless interface 528, for example, also may access data residing in a memory device either directly associated locally with the processors, such as the processor local data caches, or accessible from the system memory 508. The methods described in connection with various embodiments disclosed herein may be embodied directly in hardware, in a software module having one or more programs executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), flash memory, read only memory (ROM), erasable programmable read only memory (EPROM), electrically erasable programmable read only memory (EEPROM), hard disk, a removable disk, a compact disk (CD)-ROM, a digital video disk (DVD) or any other form of non-transitory storage medium known in the art. A non-transitory storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor.[0062] While the invention is disclosed in the context of illustrative embodiments for use in processor systems, it will be recognized that a wide variety of implementations may be employed by persons of ordinary skill in the art consistent with the above discussion and the claims which follow below. For example, a fixed function implementation may also utilize various embodiments of the present invention. |
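The load-exclusive/store-exclusive pattern underlying FIGs. 2B through 4C can be summarized in software. The following minimal sketch is in C, with hypothetical ldex() and stex() intrinsics standing in for the load-exclusive and store-exclusive instructions; it is illustrative only, and its comments note why forcing every LDEX to take the line exclusive can live-lock two contending agents, as in FIG. 2D.

    #include <stdint.h>

    /* Hypothetical intrinsics standing in for LDEX and STEX: ldex() loads a
     * word and tags the local reservation granule (RG); stex() stores only
     * if the RG is still tagged, returning 0 on success, nonzero on failure. */
    extern uint32_t ldex(volatile uint32_t *addr);
    extern int stex(volatile uint32_t *addr, uint32_t value);

    #define SEM_FREE  0u
    #define SEM_TAKEN 1u

    /* Classic semaphore-acquire loop. If every LDEX forcibly takes the line
     * exclusive, two agents spinning here on the same address can invalidate
     * each other's RG in lock-step so that neither STEX ever succeeds: the
     * live-lock of FIG. 2D. The prefer exclusive commands avoid this by
     * taking the line shared whenever the other agent's RG is tagged. */
    void semaphore_acquire(volatile uint32_t *sem)
    {
        for (;;) {
            if (ldex(sem) != SEM_FREE)
                continue;                 /* semaphore held; retry the load */
            if (stex(sem, SEM_TAKEN) == 0)
                return;                   /* RG still tagged: acquired */
            /* RG was untagged by another agent; repeat the LDEX/STEX pair */
        }
    }

A production implementation would normally add a wait-for-event or back-off step in the retry paths; it is omitted here for brevity.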
A first set of instructions and incoming data are provided to a first processing unit of a data driven processor, to operate upon the incoming data. The first processing unit, in response to recognizing that the first set of instructions will require either reading from or writing to external memory, sets up a logical channel between a second processing unit of the processor and the external memory, to transfer additional data between the external memory and the second processing unit. This capability may be implemented by the addition of a control port, separate from data ports, to the first processing unit, where the control port allows the first processing unit to write addressing information and mode information (including the location of the additional data) for reading or writing the additional data via a memory access unit data channel of the processor. |
CLAIMS What is claimed is:1. A data driven processing method, comprising: providing a first set of instructions and incoming data to a first processing unit, of a data driven processor, to operate upon said incoming data; configuring a data path for transferring data between a second processing unit of the data driven processor and external memory; and the first processing unit, in response to recognizing that the first set of instructions will require one of reading from and writing to external memory, provides addressing information to a memory access unit of the processor to enable the transfer of additional data between the external memory and the second processing unit via said data path.2. The method of claim 1 wherein the first processing unit recognizes an image processing motion vector in said first set of instructions, and said additional data is to be written to the external memory and includes a macro block generated by the second processing unit based on the motion vector.3. The method of claim 1 wherein the data path is configured by an external host controller.4. The method of claim 1 further comprising: the first processing unit providing an indication to the memory access unit of whether the transfer is one of a read and a write.5. A data processor comprising: a first direct memory access (DMA) unit; and a plurality of processing units each having a plurality of data ports, the data ports being coupled to each other and programmable to allow data flow from any one of the processing units to another and from any one of the processing units to the DMA unit, wherein one of the processing units has a control port from which it is to send information to the DMA unit about setting up a DMA channel through which one of data to be consumed and result data by one of the processing units is transferred. 6. The processor of claim 5 further comprising: memory interface circuitry, wherein the DMA unit is to access external memory via the memory interface circuitry.7. The processor of claim 6 wherein the memory interface circuitry is on-chip with the DMA unit, the plurality of processing units, and the host interface.8. The processor of claim 6 wherein the memory interface circuitry is designed to interface with external memory that is dynamic random access memory.9. The processor of claim 5 wherein the plurality of processing units are essentially identical units each having a plurality of sides, each side having a plurality of unidirectional data ports being an input port and an output port.10. The processor of claim 9 wherein the input port is programmable to route incoming data to any one of the output ports.11. The processor of claim 10 wherein each of the plurality of processing units has a plurality of control ports on each side including an input control port and an output control port, and wherein the input control port of a processing unit is programmable to route incoming command information to any one of the output control ports of said processing unit.12. The processor of claim 9 further comprising an interface to an external device, and wherein the output ports of one of said processing units are coupled to the input ports of an adjacent one of said processing units except that some of the output ports of an outlying one of said processing units are coupled to the external device interface.13.
The processor of claim 9 further comprising: a second DMA unit, wherein there are at least four of said plurality of processing units, the data ports on a north side of first and second ones of said four processing units are coupled to the first DMA unit, the data ports on a south side of third and fourth ones of said four processing units are coupled to the second DMA unit, and the data ports of a south side of the first and second processing units are coupled to the data ports of a north side of the third and fourth processing units.14. The processor of claim 13 further comprising an interface to an external device, wherein some of the data ports of east and west sides of the processing units are coupled to the external device interface. 15. The processor of claim 5 further comprising a central processing unit to read and execute instructions that configure the data ports and the DMA unit to create a data channel from one of the processing units to external memory.16. The processor of claim 5 further comprising a host interface unit to receive instructions, from an external host controller, that configure the data ports and the DMA unit to create a data path from one of the processing units to external memory.17. A system comprising: a host controller; external memory; a data driven processor having a memory access unit to interface the external memory, a plurality of processing units each having a plurality of data ports, the data ports being coupled to each other and programmable to allow data flow from any one of the processing units to another and from any one of the processing units to the memory access unit, and a host interface unit to receive instructions from the external host controller that configure the data ports and the memory unit to create a data path from one of the processing units through a data channel to the external memory, wherein one of the processing units has a control port which it uses to write data location information to the memory access unit; and one of a rechargeable battery and a fuel cell coupled to power the external memory, the host controller, and the data driven processor.18. The system of claim 17 wherein the host controller includes an embedded processor and its associated main memory.19. The system of claim 17 wherein the coupling of each pair of data ports from adjacent processing units is a point-to-point, unidirectional connection.20. The system of claim 19 wherein each of the processing units has a core programming element (PE) that can be programmed to execute instructions that operate on incoming data received via an input data port of that processing unit, an input PE that can read data from any one of a plurality of input data ports of that processing unit, and an output PE that can write data to any one of a plurality of output data ports of that processing unit.21. The system of claim 20 wherein the core PE of each processing unit can execute its instructions independently of a data path that is operating through a pair of said input and output data ports of that processing unit.22. The system of claim 17 wherein the data location information that is sent through the control port includes information about the size and display location of a block of image data. 23.
A system comprising: external memory; a data driven processor having a memory access unit to interface the external memory, a plurality of processing units each having a plurality of data ports, the data ports being coupled to each other and programmable to allow data flow from any one of the processing units to another and from any one of the processing units to the memory access unit, and a central processing unit to receive and execute instructions that configure the data ports and the memory unit to create a data path from one of the processing units through a data channel to the external memory, wherein one of the processing units has a control port which it uses to write data channel information to the memory access unit; and one of a rechargeable battery and a fuel cell coupled to power the external memory and the data driven processor.24. The system of claim 17 wherein each of the processing units has a plurality of control ports that are connected to each other in a mesh arrangement so that the data channel information, including one of a read and write command, address, and memory access unit channel identifier, can originate from any one of the processing units and be routed to the memory access unit via a logical control channel programmed in the mesh arrangement.25. The system of claim 23 wherein the coupling of each pair of data ports from adjacent processing units is a point-to-point, unidirectional connection. 26. The system of claim 23 wherein each of the processing units has a plurality of control ports that are coupled to each other and are programmable to allow data channel information to be sent from any one of the processing units to the memory access unit.27. A data processor comprising: means for translating higher level read and write commands into lower level memory access commands; a plurality of means for consuming data; means for implementing programmable data paths to supply data to and accept data from any one of said plurality of data consumption means; means for receiving instructions, from other than said plurality of data consumption means, to configure the programmable data path implementation means, the plurality of data consumption means, and the higher level read and write translation means; and means for implementing a programmable control path to transfer higher level read and write commands from one of said plurality of data consumption means to the higher level read and write translation means. 28. The processor of claim 27 further comprising means for ensuring that said lower level memory accesses meet signal level and timing requirements of external memory.29. The processor of claim 27 further comprising means for expanding the data processor. |
CONTROLLING MEMORY ACCESS DEVICES IN A DATA DRIVEN ARCHITECTURE MESH ARRAY Background[0001] The embodiments of the invention described below are related to a practical implementation of a data driven processor, i.e., one that gives good performance over a wider range of applications but at a relatively low cost.[0002] The data driven architecture for a processor was developed to provide a better solution than the von Neumann architecture, to address the particular problem of processing a large amount of data using relatively few instructions. The von Neumann type processor is controlled by a clocked addressing scheme that can pull instructions and data from almost anywhere in memory. With little restriction on the type of instructions or the locations in memory that can be accessed, the von Neumann processor has the flexibility to run a wide range of different programs. In contrast, a data driven processor ("DDP") is designed to be fed blocks of data that are typically consecutively stored in memory (or arrive as a stream) and are to be processed according to a program that has only a small number of instructions that operate on the data. These types of programs can be found in applications such as digital encoding and filtering of documents (used in reprographics copiers, for example) and of audio and video data. Examples of audio and video data applications include compression and decompression in portable, consumer information products such as digital cameras, mobile general purpose computers, and small media devices such as MP3 players. The DDP may be particularly suited for such battery-powered products due to its inherent power efficiency, as its power consumption quickly drops to essentially zero when there is no more input data for it to consume.[0003] In most consumer products that have a DDP, a built-in host controller ("HC") assists the DDP by orchestrating the feeding of instructions and incoming data to the individual processing elements of the DDP. For example, a primary, general purpose processor or embedded processor of a consumer product can be programmed to act as the HC. The HC instructs each of the processing elements of a DDP as to the task to be performed. The HC also controls the formation of data paths between the DDP and external memory, to receive the outgoing data, i.e., the results of consumption by the processing elements. The DDP can be equipped with a direct memory access ("DMA") unit that delivers a stream of outgoing data, that originates from the individual processing elements of the DDP, to sequentially addressed memory locations that have been identified by the HC. A processing element of a typical DDP is not aware of the source of the incoming data; nor does it know where its result data is ultimately destined. That information is only known to the HC.BRIEF DESCRIPTION OF THE DRAWINGS[0004] The embodiments of the invention are illustrated by way of example and not by way of limitation in the figures of the accompanying drawings in which like references indicate similar elements. It should be noted that references to "an" embodiment of the invention in this disclosure are not necessarily to the same embodiment, and they mean at least one.[0005] Fig. 1 shows a block diagram of a processor having a data driven architecture.[0006] Fig. 2 illustrates example data paths that can be created in another embodiment of the processor.[0007] Fig. 3 depicts a block diagram of an electronic system featuring an embodiment of the processor.
[0008] Fig. 4 shows a block diagram of the components of an embodiment of the processor.[0009] Fig. 5 illustrates a block diagram of part of an input data port used in the processor.[0010] Fig. 6 illustrates part of an output data port used in the processor.[0011] Fig. 7 shows the input and output signals to first-in first-out logic used in the input and output data ports.[0012] Fig. 8 depicts a flow diagram of a method for processing data according to an embodiment of the invention.[0013] Fig. 9 illustrates a high level block diagram of a control port mesh arrangement for the data driven processor.[0014] Fig. 10 shows an example of a control word, a control port data word, and status indications that are used by the control ports to communicate with each other.[0015] Fig. 11 depicts a simplified block diagram of an arbiter that can be used in a control port transmitter.[0016] Fig. 12 gives an example of select registers that may be provided for a particular control port, for configuring the transmitter and receiver portions of that control port. DETAILED DESCRIPTION[0017] Beginning with Fig. 1, this figure shows a block diagram of a processor having a data driven architecture that is expected to provide good performance over a wider range of applications but at a relatively low cost. The processor is composed of a number of processing units (PUs) 104 (in this case, there are 6). Each PU 104 has a number of data ports (in this example, 16) that are coupled to each other as shown in a mesh arrangement. The data ports are programmable to allow data flow from any one of the processing units to another, and from any one of the processing units to a memory access unit 108.[0018] According to an embodiment of the invention, one or more of the PUs 104 are provided with a control port from which the PU can send information to the memory access unit 108 about the location of data to be read from or written to an external memory 120. The data and control ports are not explicitly labeled in Fig. 1, but are implied by virtue of the data lines 112 and the control lines 116, respectively. Addressing information and mode information, regarding a memory access channel, that is sent through a control port is destined for a memory access control register 127. The control register 127, for example, determines the settings for a DMA channel regarding the mode of operation of the DMA channel as well as location identifiers for the data that is to be transferred through the channel.[0019] A PU 104 is normally not aware of the particular location in external memory 120 from which data is read, or to which data is written. The PU 104 merely consumes data that comes in through its input port, based on instructions which have been programmed for it, and provides result data through its designated output port. The output port is part of the data path that has been created to deliver the result data to a particular location in external memory 120. A host controller (not shown) may normally be used to instruct or program a PU to read from and write data to one or more of its data ports which is logically connected, via a programmed data path, to the memory access unit 108. In the embodiment of Fig. 1, however, one or more of the PUs 104 is also capable of sending a read or write request to the memory access unit 108, thereby freeing the host controller from such tasks.
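One way to picture the control port traffic just described is as a small structured write into the memory access control register 127. The sketch below is illustrative C under stated assumptions: every type name, field, and the write_control_port() helper are hypothetical, since the text specifies only that addressing information and mode information for a memory access channel are sent.

    #include <stdint.h>

    /* Hypothetical layout of the addressing and mode information a PU
     * writes toward the memory access control register 127. */
    typedef enum { CH_MODE_READ, CH_MODE_WRITE } channel_mode_t;

    typedef struct {
        uint8_t        channel;       /* DMA channel to configure            */
        channel_mode_t mode;          /* direction of the transfer           */
        uint16_t       x, y;          /* e.g., display coordinates of a block */
        uint16_t       width, height; /* block dimensions in pixels          */
    } dma_channel_cfg_t;

    /* Assumed helper that performs the actual control port write. */
    extern void write_control_port(const dma_channel_cfg_t *cfg);

    /* Request that channel z fetch the image block at (x, y), so the data
     * arrives on a previously programmed data path to a consuming PU. */
    void request_block_read(uint8_t z, uint16_t x, uint16_t y,
                            uint16_t w, uint16_t h)
    {
        dma_channel_cfg_t cfg = { z, CH_MODE_READ, x, y, w, h };
        write_control_port(&cfg);
    }

The example anticipates the video decoding scenario described later, where a PU requests that an image block at given display coordinates be fetched on a particular DMA channel.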
This memory addressing capability in one or more PUs 104 allows the processor as a whole to be better suited to process certain applications where the next block of incoming data to be processed is not consecutively stored in memory. Following a more detailed description of the processor, an example video decoding application will be described to illustrate some benefits of the added memory addressing capability of a PU 104.[0020] The memory access unit 108 of the processor may be a direct memory access (DMA) unit that can read and write the external memory 120 without intervention by a central processing unit (CPU) 124. The memory access unit 108 serves to translate higher level read and write commands, received from a control port of one or more of the PUs 104, the CPU 124, or an external host controller (not shown), into lower level memory access commands. The higher level read and write commands may be application specific, for example, specific to a video, document, or audio application. As an example, the PU 104 may generate a read request to the memory access unit 108 for accessing a given frame of image data in which certain pixels are to be skipped. As another example, the read request may be for just a particular block of an entire image frame. Such high level requests may simply refer to pixel locations given by the Cartesian coordinates on a display, for example. In response, the memory access commands that are generated by the memory access unit 108 would refer to specific addresses in the external memory 120 at which the pixel values are stored. In some cases, memory interface circuitry 128 as shown in Fig. 1 would also be needed, to ensure that these lower level memory accesses meet the signal level and timing requirements of the external memory 120. A number of DMA channels are available to request read and write transactions, and to transfer data from and to the external memory.[0021] The processor shown in Fig. 1 also has an I/O interface 132 to external devices (not shown). It can be seen that some of the output and input data ports of the PUs 104, and in particular those of the outlying PU1 and PU4, are coupled to the I/O interface 132. The interface 132 allows incoming data and result data to be transferred between the PUs 104 and external devices such as hard disk drives, CD-ROM drives, and video displays. The interface 132 may thus be designed to translate between the signaling needed by the data ports of the PUs 104 and the signaling of a parallel computer peripheral bus or a high speed serial bus. The interface 132 may be used for incoming and result streams of audio or video data. As an alternative, all of the incoming and result data, for example, entire image frames, may be stored in the external memory 120 and only after all processing has been completed will the result data be transferred to a mass storage device or other peripheral.[0022] To further improve the performance of the processor with respect to external memory, an additional memory access unit 136 may be provided that, via a separate memory interface 138, will allow the PUs 104 to also use the storage available in an additional external memory 140. In such an embodiment, the data ports on a north side of PU1 - PU3 are coupled to the first memory access unit 108, while the data ports on a south side of PU4 - PU6 are coupled to the second memory access unit 136.
To allow PU1 - PU3 access to the external memory 140 (on the south side), the south side data ports of PU1 - PU3 are coupled to the north side data ports of PU4 - PU6.[0023] The embodiment of the processor shown in Fig. 1 also has a CPU 124. The CPU 124, which may be provided on chip with the PUs 104, is to read and execute instructions that configure the data ports of the PUs 104 and the memory access units 108, 136, to create data channels from any one of the PUs 104 to the external memory 120, 140. This functionality of creating data paths through the mesh arrangement of data ports, and instructing the PUs 104 with their individual tasks, may instead be delegated to an external host controller (not shown). Fig. 2 will be used to illustrate the flexibility of the data ports in creating multiple data paths between PUs.[0024] Fig. 2 illustrates example data paths that can be created between two points in another embodiment of the processor. In this embodiment, there are eight PUs 104 and five data paths are shown that link PU1 with PU8. Each PU 104 has data ports that connect with four sets of data lines 112 on each side (the control ports and corresponding control lines 116 are not shown, but see Fig. 1). The data port mesh supports data flow from any one of the PUs 104 to another, and through a data channel to the external memory 120, 140. The external memory 120, 140 may be a dedicated, solid state memory of the buffer-type, suitable for relatively high speed access and relatively large storage, used in applications such as document processing, and video and audio processing. Alternatively, the external memory 120, 140 may be part of the main memory of the host controller (not shown) or other main memory in the electronic system. The external memory 120, 140 may be composed of double data rate or synchronous dynamic random access memory; other types of solid state memory that are suitable for the particular application of the processor may alternatively be used.[0025] As mentioned above, the data ports can be configured to establish a logical connection between any two points in the processor. The configuration of the data port mesh in this embodiment is controlled by a host processor (not shown) that is connected by way of a computer peripheral bus (not shown), through host interface 139. A relatively low speed, global bus (not shown) connects the host interface 139 to all of the PUs 104, the memory access units 108, as well as other components of the processor. This global bus is used to configure or program the processor by loading instructions or microcode for the PUs 104, as well as reading status registers (not shown). In addition, each of the outlying PUs, namely PU1, PU5, PU4, and PU8, has a pair of data ports coupled to a respective expansion interface (EI) unit 141. The EI units 141 permit the data port communications mesh to be extended over multiple processors, and allow the connection of external peripheral devices as mentioned above, such as video displays, hard disk drives and printers, to the processor.[0026] The PUs 104 may be essentially identical units each having a number of sides, where each side has multiple, unidirectional data ports of which at least one is an input port and at least one is an output port. In the embodiment of Fig. 2, the type of coupling of each pair of data ports, from adjacent PUs 104, is a point-to-point, unidirectional connection.
The design of the data ports and their connections to programming elements of each PU (to be further described below in connection with Figs. 4-7) is such that the programming elements within each PU need not be involved in any data that is being transferred through the data ports of that PU. Thus, in the example shown in Fig. 2, the only PUs 104 that are actually reading or writing the data ports involved are PU1 and PU8. Note the five different possible data paths that can be configured between those two PUs, showing the flexibility of the data port mesh architecture. For the sake of clarity, not all possible data paths between PU1 and PU8 are shown in Fig. 2; in addition, it should now be clear that similar programmable data paths may be created between any one of the PUs 104 and the external memory 120 or 140, via any desired data channel through memory access unit 108 or 136.[0027] Allowing the control registers 127 of the memory access unit 108 (see Fig. 1) to be written via control ports of one or more PUs 104 is expected to make the processor as a whole better suited for a video processing application, in which the location of incoming data that needs to be processed does not always change progressively or sequentially on a block-by-block basis. As an example, consider video compression in which frame-to-frame temporal redundancies are reduced, in the compression stage, by generating a motion vector, for a current frame, that points to a change in the location of an image block from a previous frame, due to motion in the scene. Now consider the decoding or decompression stage, where PU1 has been instructed to decode incoming video data to reconstruct a "current" frame. This incoming video data may also include a motion vector. Because a motion vector points to a block of image data that is in a previous frame, a separate access to external memory will be needed to fetch that block. In other words, the motion vector points to an image block of a prior frame that is stored in the external memory 120 and that needs to be copied to decode the current frame. Note that the exact display location (x,y) of this block is not known to the host controller in advance of the start of the decoding process. However, once that information becomes available to a processing unit (e.g., PU1) an access to external memory may be expeditiously performed as follows. First, PU1 sends a command through its control port to read from location (x,y) via a certain data channel Z of the memory access unit. This results in configuring the control registers 127 of the memory access unit 108 with addressing and mode information relevant to the motion vector (in this case, read from location (x,y) via data channel Z). The read data is then fetched by the memory access unit 108 and made available through its channel Z. Since a logical data path had been previously programmed between channel Z of the memory access unit and PU5 (e.g., by the external host controller), the image block pointed to by the motion vector will be routed through that path and into PU5.
PU5, as previously instructed, then consumes this data, and writes the result data, including in this case a decoded macro block that is generated by PU5 based on the motion vector and according to some previously programmed algorithm, back to the external memory for the current frame.[0028] In general, giving one or more PUs 104 control of what addressing information can be sent to external memory may yield greater programming freedom and flexibility for the processor as a whole. The provision of the control ports in one or more PUs allows more of the logical complexity of an algorithm to be contained in the PUs 104, and accordingly makes the CPU 124 (see Fig. 1) or host controller (not shown) more available to handle the more complex tasks of running the application. The addition of the control ports does not adversely affect the benefits of the data port mesh arrangement, which retains the advantages of a data driven architecture (including reduced timing issues and improved power efficiency), the parallel processing ability of multiple PUs, and the scalability and modularity of the PU design.[0029] Still referring to Fig. 2, the memory interface circuitry 128 may be on-chip with the memory access unit 108, all of the PUs 104, and the host interface 139. As an alternative, the components may be part of a multi-chip package, where for example each PU 104 is on a separate chip. [0030] Turning now to Fig. 3, a block diagram of an electronic system that contains a data driven processor 304 as described above is shown. The system may be any form of computing or communication device that can manipulate image, audio, or other media in preparation for being either displayed or audibilized (e.g. decompressing a video or audio file), stored (e.g. compressed prior to storage), or printed. The system has a connector 308 that allows the processor 304 to provide its result data directly to the peripherals (not shown) via for example the I/O interface 132 (see Fig. 1). The system also has a host controller 310 that is coupled to communicate with the processor 304 via a bus 314 which may be a serial or parallel computer peripheral bus. The host controller 310 is configured to execute an application program, such as the video decoding example given above, that contains tasks which are particularly suited for execution by a data driven architecture as provided by the processor 304. The host controller 310 may include an embedded processor and its associated main memory (not shown).[0031] The content data to be consumed by the processor 304 may be stored in an external memory 316 (e.g. entire image frames) and can be accessed by the individual processing units of the processor 304 via the data port mesh connection described above. A host interface unit (not shown) in the processor 304 is to receive instructions from the host controller 310 that instruct the individual processing units with their tasks and create data paths from the processing units through a data channel to the external memory 316. The processor 304 is also enhanced with control ports in one or more of its processing units, used for writing data channel addressing information to a memory access unit of the processor 304. The system shown in Fig. 3 also has a fuel cell or rechargeable battery 330 that is coupled to power the external memory 316, the host controller 310, and the processor 304 by way of a voltage regulator (VR) module 334.
Of course, if the output voltage of the fuel cell or rechargeable battery 330 is sufficiently stable to meet the requirements of the external memory, processor, and host controller, then the VR module 334 may not be needed. [0032] Referring now to Fig. 4, a block diagram of the PU 104 is shown. In this embodiment, the PU 104 has one or more core programming elements (PEs) that can be programmed to execute instructions that operate on incoming data received via any one of eight input data ports 408. Each PE has instruction memory as well as an arithmetic and logic unit (ALU), which together implement a baseline instruction set architecture. In addition, there can be a multiply and accumulate function (MAC) unit that is added to one or more PEs 416. Additional PEs that may be provided include one or more accelerator units 420 (e.g. for performing special operations such as two's-complement multiplication and application-specific digital filtering) and a memory command handler (MCH) 424 with integrated data RAM for local storage of data, constants, and instructions within the PU 104. An input PE 428 can read data from any one of the input data ports 408 of the PU 104, while an output PE 432 can write the result data to any one of multiple output data ports 436. A set of general purpose registers 440 allows data to be exchanged between the PEs, according to a predefined semaphore protocol. See also U.S. Patent Application Publication No. US 2002/0147768 of Vavro for a further example of a data driven digital signal processor having multiple processing elements that are coupled together via a number of general purpose registers. Note that in the embodiment of Fig. 4, each core PE of the PU 104 can execute its instructions independently of a data path that is operating through a pair of the input and output data ports 408, 436 of that PU. In other words, there is a data path between an input data port 408, the input PE 428, the general purpose registers 440, the output PE 432, and the output data port 436, independent of the operations of the PEs 412, 416, accelerator unit 420, and MCH 424. Additional details regarding the input and output data ports 408 and 436 are given below in connection with Figs. 5-7.[0033] Turning now to Fig. 5, what is shown is a block diagram of part of an input data port 408 (see Fig. 4). The input data port 408 is to receive data from other PUs. The input data port communicates by way of a request/grant protocol where a Request is presented from outside the PU along with Data. The input data port returns Grant when the data is accepted. In this embodiment, the request/grant protocol requires that data has been transferred whenever the request and grant signals are active on the active edge of the input Clock signal. This input data is temporarily stored in a first-in first-out (FIFO) buffer 510. The data is thus stored in the FIFO 510 until a Grant signal from one of a number of so-called transmitters P0-P7 is received. A multiplexer 514 is provided to select one of these, in this case eight, grant signals from the device to which this input data port 408 is connected. The input data port 408 may be programmed by a register setting that controls the select input to the multiplexer 514, to determine which of the eight transmitters is to receive the data from the FIFO 510.
This register setting may change either before or after a data transfer into the FIFO 510 has occurred.[0034] The eight possible grant signals in this instance refer to seven (out of a total of eight) output data ports of this PU, plus the input PE 428 of this PU 104 (see Fig. 4). Note that there are eight possible grant signals in this case because only seven output data ports can receive the forwarded data, the eighth output data port being the one associated with this input data port 408. As shown in Fig. 5, there are paths for transferring the received Data, Request, and Initialize signals from the FIFO 510 to all, in this case eight, other devices in the PU 104.[0035] The Initialize (Init) signal is passed through each data port that makes up a data path. Thus, if a data port is initialized at the source point of the data, the Init signal will propagate through the entire logical connection that is the data path, and thereby initialize the whole data path. The Init signal is registered and passed through the data port as if it were data, to prevent propagation delays from accumulating through long logical connections in the processor. All of the data port interface signals may be handled in this manner, including the data and request signals, to prevent large combinatorial delays through the logical connections. Other implementations of the input data port 408 that allow a logical connection to be established between the package pins of an input data port and those of an output data port are also possible. Fig. 6 illustrates a block diagram of part of an output data port 436.[0036] The output data port 436 shown in Fig. 6 may also be referred to as a "transmitter" port, because it transmits data to other PUs 104. A set of three multiplexers 614 are provided, to select the Request, Data, and Init signals that will be transmitted out of the PU. Note that each multiplexer 614 has, in this embodiment, eight inputs, which correspond to seven input data ports of the PU, plus one output PE 432 (see Fig. 4). Once again, the select inputs to the multiplexers 614 may be controlled by a register setting that can change before or after a data transfer has occurred.[0037] The selected Request, Data, and Init information are placed in temporary storage in a FIFO 620. As in the case of the input data port 408 described in connection with Fig. 5 above, the FIFO 620 of the output data port provides the buffered Request, Data, and Init information in response to a received Grant from a device external to the PU.[0038] The combination of FIFO 510 and FIFO 620 can be illustrated as a 2-deep FIFO 720 as in Fig. 7. This 2-deep FIFO 720 is part of a logical connection, through a given PU, that is made of an input data port and an output data port. On the input side, data_in, request_in, and init_in are received and stored in the FIFO 720, and a grant_in is signaled when this set of input information has been accepted. The FIFO 720 is part of a programmed logical connection that transmits data_out, request_out, and init_out, in response to receiving a grant_out signal from a device external to the PU. As mentioned above, these interface signals are handled in a manner to prevent large combinatorial delays through the logical connection. This facilitates system on chip designs, without special concerns about data path routing. All routing between the data ports may be simple, point-to-point connections that are registered in each data port as described above. The logical connection is programmed by the simple register setting for the multiplexers 514 (Fig. 5) and 614 (Fig. 6). However, other implementations for a logical connection through a PU 104 may be possible.
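The registered handshake just described can be modeled in a few lines of software. The following Python sketch is illustrative only (the class and method names are ours, not the patent's); it shows how Data/Init entries are accepted with a Grant and then advance one registered stage per clock, so that no combinatorial path spans more than one port.

    from collections import deque

    class DataPortFifo:
        """Toy model of the registered 2-deep FIFO (cf. FIFO 720)."""
        DEPTH = 2

        def __init__(self):
            self.entries = deque()  # each entry is a (data, init) tuple

        def offer(self, data, init=False):
            """Input side: Request/Data presented from outside; the return
            value models the Grant signal (True only while there is room)."""
            if len(self.entries) < self.DEPTH:
                self.entries.append((data, init))
                return True
            return False  # no grant; the sender must keep its request asserted

        @property
        def request_out(self):
            """Output-side Request: active whenever the FIFO is non-empty."""
            return bool(self.entries)

        def take(self, grant_out):
            """Pop an entry when the downstream device returns Grant."""
            if grant_out and self.entries:
                return self.entries.popleft()
            return None

    def clock(upstream, downstream):
        """One clock of a point-to-point link: move at most one entry."""
        if upstream.request_out:
            data, init = upstream.entries[0]
            if downstream.offer(data, init):  # downstream asserts grant
                upstream.take(grant_out=True)

    a, b = DataPortFifo(), DataPortFifo()
    assert a.offer(0x1234)  # Request/Data presented; Grant returned
    clock(a, b)             # the entry advances one registered stage
    assert b.request_out and b.take(grant_out=True) == (0x1234, False)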
[0039] Turning now to Fig. 8, this figure shows a flow diagram of a generalized data processing method suitable for a flexible, data driven architecture. Operation begins with block 804 in which a first set of instructions and incoming data are provided to a first processing unit (PU) of a data driven processor. As suggested above, these instructions and incoming data (as well as the data path for the incoming data to reach the first PU) may be orchestrated and configured by an on-chip CPU or an external host controller, via a relatively low speed, global control bus of the processor. Additionally, one or more control paths are configured, for the first PU to send memory channel addressing information to a memory access unit of the processor.[0040] As the first PU operates upon the incoming data, it recognizes that this first set of instructions requires either reading from or writing to external memory. Accordingly, operation then continues with block 808 in which the first PU requests the memory access unit (via the control path) to fetch or expect data on a given memory channel. A logical data path between a second PU of the processor and the external memory (via the given memory channel) has been previously created to transfer additional data between the external memory and the second PU. According to an embodiment of the invention, the added control port structure described above is used for allowing the first PU to specify the location of the data to be transferred, through a previously programmed data path between the external memory and the second PU. This logical data path may be routed through a data port mesh arrangement that is independent of the individual programming elements within each processing unit.[0041] As an example, the first PU may recognize an image processing motion vector in the first set of instructions. In that case, the additional data is to be written to the external memory, and includes a macro block that will be generated by the second PU based on the motion vector. Many other types of data driven applications, such as audio compression, may also benefit from this added capability.[0042] Fig. 9 illustrates a high level block diagram of a control port mesh arrangement for the data driven processor. In this figure, only the control port mesh arrangement is illustrated, for easier understanding. In this embodiment of the invention, the DMA units are slaves to the PUs in that the DMA channels cannot initiate commands; rather, all commands to the DMA channels are initiated by either the north or south control ports of the PUs. Each PU in this example contains four control port sets labeled north, east, south, and west. Each set has a port 0 and a port 1. For example, the E1 port of PU1 receives commands while E0 transmits commands (when configured to be a part of a control path in the mesh arrangement). A point-to-point bus connection is supported by a pair of control ports from adjacent PUs (or from a PU and a DMA unit).
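A control path through the mesh of Fig. 9 can be thought of as a chain of registered, point-to-point hops whose routing is fixed by per-port select registers. The Python below is a toy model under that reading; every class, method, and port label is our own naming, not an API defined by the patent. It shows a command hopping from one PU, through a pass-through PU, to a slave DMA unit.

    class DmaUnit:
        """Slave endpoint: a DMA channel only acts on commands it receives."""
        def __init__(self, name):
            self.name, self.commands = name, []

        def deliver(self, command, path):
            self.commands.append((command, path))

    class ProcessingUnit:
        def __init__(self, name):
            self.name = name
            self.links = {}           # out_port -> neighbouring PU or DMA unit
            self.forward_port = None  # select-register value for pass-through

        def connect(self, out_port, neighbour):
            self.links[out_port] = neighbour

        def send(self, out_port, command, path):
            # One registered, point-to-point hop out of this PU.
            path = path + [(self.name, out_port)]
            self.links[out_port].deliver(command, path)

        def deliver(self, command, path):
            # A command arriving on a receiver port is forwarded out of the
            # transmitter port chosen by this PU's select register.
            self.send(self.forward_port, command, path)

    # Program a control path: PU1 --E0--> PU2 --N0--> DMA0
    pu1, pu2, dma = ProcessingUnit("PU1"), ProcessingUnit("PU2"), DmaUnit("DMA0")
    pu1.connect("E0", pu2)
    pu2.connect("N0", dma)
    pu2.forward_port = "N0"
    pu1.send("E0", {"rw": "read", "reg_sel": 0b0000}, path=[])
    print(dma.commands)  # the command, plus the hop-by-hop route it took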
[0043] The point-to-point connection between a transmitter port and a receiver port (external to a PU) may be built using a parallel bus having a command portion and a status portion. The command portion may be a 16-bit bus and is used to transmit configuration instructions destined for the DMA unit, for a particular data channel of the DMA unit. A status bus may be provided with three bits that are used to send the status of the data channel back to the command initiator. A request and grant signaling protocol may be used, in addition to the use of Initialize (Init) signals, similar to the data port links described above. The transmission of a complete command may thus take two clock cycles. Referring now to Fig. 10, in the first cycle, a control word 1004 may be transmitted followed by, in the second cycle, a control port_data word 1008, to complete a command. As can be seen in Fig. 10, the control word includes routing information (router identification, RID) that helps the control ports in routing the commands to their correct destination. More particularly, a receiver port determines the destination of a command based on the RID bits, which specify either port 0 or port 1 in this example, as well as whether the command specifies a read or a write channel. Recall that the ultimate destination of this command will be a DMA unit, which will set up a data channel (available through a data port, as described above) for servicing the requested read or write command. The port bit (RID#1) may be hard-wired in a transmitter port, depending on whether it is a 0 or a 1 port. The R/W bit is driven by the command initiator and will be passed through unchanged by all the control ports in the control path.[0044] The control word 1004 also includes REG_SEL bits, in this case four, for defining the configuration of the data channel. For example, the control port_data word 1008 that follows a REG_SEL value of 0000 may be used to signal the start of the memory transfer. Another command may refer to the (x,y) data location for the read or write (CH_ADDRX and CH_ADDRY). Various other types of commands have been defined in Fig. 10, where these are particularly suited for still and video image processing applications. Other applications may have a different set of commands, although most will include at least some form of data location or addressing information that allows the DMA unit to read or write content data (using a separate data path associated with a given data channel) from and to external memory. Note that this data path may have been previously configured in the data port mesh arrangement, by for example the host controller.
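Because Fig. 10 itself is not reproduced here, the exact bit positions within the control word are not known from the text; the Python sketch below therefore assumes a field layout (a port-select bit, an R/W bit, and a 4-bit REG_SEL field within a 16-bit word) purely to illustrate the two-cycle command format.

    # Assumed bit positions -- illustrative only, not taken from Fig. 10.
    RID_PORT_BIT  = 15  # selects port 0 or port 1
    RW_BIT        = 14  # 1 = write channel, 0 = read channel
    REG_SEL_SHIFT = 10  # start of the 4-bit REG_SEL field

    def encode_command(port, write, reg_sel, data):
        """Return the two 16-bit words sent over two clock cycles."""
        assert port in (0, 1) and 0 <= reg_sel < 16 and 0 <= data < 1 << 16
        control = ((port << RID_PORT_BIT) | (int(write) << RW_BIT)
                   | (reg_sel << REG_SEL_SHIFT))
        return control, data  # cycle 1: control word, cycle 2: data word

    def decode_control(control):
        return {
            "port":    (control >> RID_PORT_BIT) & 1,
            "write":   bool((control >> RW_BIT) & 1),
            "reg_sel": (control >> REG_SEL_SHIFT) & 0xF,
        }

    # e.g. start-of-transfer (REG_SEL 0000) to a write channel on port 1:
    ctrl, data = encode_command(port=1, write=True, reg_sel=0b0000, data=0x00A0)
    assert decode_control(ctrl) == {"port": 1, "write": True, "reg_sel": 0}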
[0045] Still referring to Fig. 10, the receiver port may send a status indicator 1012 in response to receiving a control word 1004 and control port_data word 1008. In this example, the status indicator is sent when one of two conditions occurs: either an idle timer has expired, or an end of swath (EOS) has been reached. If neither of these two conditions has occurred, the command initiator will not receive any status indication. For example, an idle timer may expire if there is no EOS condition and there is no memory read or memory write activity for a certain number of clock cycles after the last read or write. In a write situation, the idle timer may expire if the data channel has waited more than a certain number of clocks, after the previous read or write by that channel, for additional content data to become available at a data port, that is, to be received from a PU over a given data path. In a read situation, the idle timer may expire if the last content data word moved out of the data channel (on its way to a PU) more than a certain number of clock cycles ago. Other ways of defining the request and grant protocol for signaling between control ports are possible.[0046] Note that if the command initiator receives an idle timer expired condition, it can program an Init bit in a control port control register (not shown) to a predefined value, thereby clearing all commands that are in the control path as well as in the particular DMA channel associated with the control path. In addition, if an idle timer expired condition has been detected for a write data channel, the command initiator may program the Init bit in a data port control register to a predefined value, thereby clearing all the content data in the data port path as well as allowing the write data channel to be reconfigured by commands in the DMA unit's queue.[0047] Referring now to Fig. 11, what is shown is a simplified diagram of an arbiter that may be used for a control port, in this case one of the north ports N0 and N1 (see Fig. 9). The arbiter 1104 is in the transmitter and thus arbitrates requests from, in this example, eight possible connection paths of the associated PU (because there are six control ports from which a request to transmit may be received, an input programming element (IPE) which can source a read command, and an output programming element (OPE) which can source a write command). In other embodiments, there may be fewer than six or greater than six control ports that can source a request.[0048] Turning now to Fig. 12, an example select register for control port North0 is illustrated. In this example, there are three bits that are used for the control port receiver selection, and three bits for the transmitter selection. The receiver select bits program the control port by indicating which device of the associated PU should receive commands that have arrived through the north port. Similarly, the transmitter select bits determine which device (from the possible eight mentioned above) will be able to transmit commands through the north port. Note that a command from the OPE is generally directed to a DMA write channel, whereas a command from the IPE is directed to a DMA read channel. Other ways for programming a control port to act as a transmitter (send commands out of the PU) and as a receiver (route commands into a device of the PU) may be possible.[0049] To summarize, various embodiments of a data driven processor that may be more effective for running a wider range of applications have been described. In the foregoing specification, the invention has been described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. For example, although the processors depicted in the figures have either six or eight constituent PUs, an architecture with as few as two PUs or more than eight PUs, connected to each other in a mesh arrangement, can also benefit from the addition of control ports to some or all of the PUs. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
By filling an air gap between tiers (31, 32) of a stacked IC device with a thermally conductive material (320), heat generated at one or more locations within one of the tiers can be laterally displaced. The lateral displacement of the heat can be along the full length of the tier, and the thermal material can be electrically insulating. Through-silicon vias (331) can be constructed at certain locations to assist in heat dissipation away from thermally troubled locations (310).
1. A three-dimensional integrated circuit, comprising:
a first die stacked on a second die, the first die and the second die each including an active surface and a substrate, the active surfaces of the first die and the second die being coupled together by multiple interlayer electrical paths that form a gap between the active surfaces of the first die and the second die; and
a first thermally conductive material disposed in the gap, the first thermally conductive material having a thermal conductivity higher than that of the first die and the second die.
2. The three-dimensional integrated circuit according to claim 1, wherein the thermal conductivity of the first thermally conductive material is at least 10 W/m/K.
3. The three-dimensional integrated circuit of claim 2, wherein the first thermally conductive material is electrically insulating.
4. The three-dimensional integrated circuit of claim 1, wherein the first thermally conductive material is a patterned film.
5. The three-dimensional integrated circuit according to claim 1, further comprising:
a second thermally conductive material disposed at one end of at least one of the first die and the second die, the second thermally conductive material being thermally coupled to the first thermally conductive material between the first die and the second die.
6. The three-dimensional integrated circuit of claim 1, wherein the thermally conductive material is selected from the group consisting of: a diamond substrate; and a diamond film pattern.
7. The three-dimensional integrated circuit of claim 1, further comprising at least one thermally conductive through-via positioned through at least a portion of a die at a location laterally displaced from a thermally troubled area within the three-dimensional integrated circuit.
8. The three-dimensional integrated circuit of claim 7, wherein the through-via is at least partially filled with carbon nanotubes.
Three-Dimensional Integrated Circuit Lateral Heat Dissipation

TECHNICAL FIELD

The present invention relates to integrated circuits (ICs), more specifically to multi-layer (3-D) ICs, and still more specifically to systems and methods for enhancing heat dissipation in 3-D ICs.

BACKGROUND

In IC technology, there is a need to stack chips (dies) together to form a multilayer or three-dimensional (3-D) IC device. One result of such 3-D IC stacking is a reduction in signal propagation time during signal processing, attributable to the reduced distance a signal must travel while it remains within a single package.

One method for layer bonding is to bring two (or more) dies together and then encapsulate the dies into a single structure. Electrical conductors and/or electrical contacts on the surfaces of the corresponding dies are used to carry electrical signals between components on different dies.

One problem when positioning the dies next to each other is the increase in thermal density. In addition, as the size of the stacked ICs is reduced (substrate thicknesses shrinking from 700-100 microns down to 20 microns or less), lateral thermal conductivity is reduced. Therefore, there may be hot spots with minimal ability to remove heat from the heat source.

One method for increasing lateral thermal conductivity is to increase the substrate thickness of at least one of the layers. Another method is to add metal layers in the chip to dissipate heat. Both approaches negatively affect the desired aspect ratio of the package and degrade signal transmission speed.

When joining more than one layer, there are additional problems. In that case, the stacked IC device contains multiple oxide layers between the tiers. Oxide is a poor thermal conductor, which worsens the heat dissipation problem.

Several methods may be employed to address thermal conductivity issues. One such method uses through-silicon vias (TSVs) to move heat from the inner portion to a surface layer, where the heat is then removed by traditional means, for example, a high-thermal-conductivity material positioned on the surface of the IC package. The problem with this solution is that, since devices are constructed in the various layers close to the hot spot where heat is generated, the circuit layout can prevent a TSV from being positioned where needed.

Another method is to circulate a cooling material through the stacked IC device to cool various hot spots. This is costly to manufacture, because moving the liquid requires a pumping mechanism and tight tolerances for the liquid channels. Also, it may not be possible to route a cooling channel to the necessary location. By forcing the cooling liquid through the substrate itself, the channeling problem can be overcome to some extent, but this method carries another set of problems and costs.

SUMMARY OF THE INVENTION

Embodiments of the invention fill the air gap between stacked dies with a thermally conductive material, which allows lateral transfer of heat generated at one or more locations within each die. The lateral transfer of heat may be along the entire length of the die or along a portion of the length. In one embodiment, the thermal material is electrically insulating.
In one embodiment, TSVs (possibly using carbon nanotubes) may be constructed at certain locations to assist in heat dissipation away from thermally troubled locations.

In one embodiment, a multi-tiered semiconductor has a thermally conductive material disposed between a first tier and a second tier, wherein the material has a thermal conductivity higher than the thermal conductivity of the first and second tiers.

In another embodiment, a method of manufacturing a layered semiconductor is disclosed, wherein a thermally conductive material is applied to at least one mating surface of a first die, and the mating surface of the first die is brought into contact with the mating surface of a second die.

In yet another embodiment, a method for heat dissipation in a stacked IC device is disclosed, in which heat from a thermally troubled area of one tier of a multi-tier IC device is allowed to flow into an inter-tier region between adjacent tiers of the device, such that lateral heat flow is promoted in the inter-tier region toward at least one heat dissipation location in thermal communication with that region. In one embodiment, the heat dissipation location is a via configured to pass through at least one tier of the device. In another embodiment, the heat dissipation area is a gap between adjacent dies in the same tier.

The foregoing has outlined rather broadly the features and technical advantages of the present invention in order that the detailed description that follows may be better understood. Additional features and advantages, which form the subject of the claims of the invention, are described below. Those skilled in the art should appreciate that the conception and the specific embodiments disclosed may be readily utilized as a basis for modifying or designing other structures for carrying out the same purposes of the present invention. Those skilled in the art should also realize that such equivalent constructions do not depart from the spirit and scope of the invention as set forth in the appended claims. The novel features believed to be characteristic of the invention, both as to its organization and method of operation, together with further objects and advantages, will be better understood from the following description when considered in connection with the accompanying figures. It is to be expressly understood, however, that each of the figures is provided for the purpose of illustration and description only and is not intended as a definition of the limits of the present invention.

BRIEF DESCRIPTION OF THE DRAWINGS

For a more complete understanding of the present invention, reference is now made to the following descriptions taken in conjunction with the accompanying drawings.

FIG. 1 is a cross-sectional side view illustrating one aspect of the thermally troubled conditions that may exist in a 3-D integrated circuit.

FIG. 2 is a cross-sectional side view illustrating an exemplary solution to the heat removal problem.

FIG. 3 is a cross-sectional side view showing one embodiment of the concepts of the present invention.

FIG. 4 shows one embodiment of a method for constructing stacked IC devices according to the teachings of the present invention.

DETAILED DESCRIPTION

FIG. 1 illustrates one aspect of the thermally troubled conditions that can exist in a 3-D integrated circuit. As shown in FIG. 1, die 11 and die 12 are stacked. The active layer of die 11 is layer 102, and the active layer of die 12 is layer 103.
This is an exemplary arrangement, because the active layer of a die can be in either orientation (facing up or down).

A through-via 105 extends through the substrate layer 101 of die 11. Vias may be constructed in layers 102, 103, and/or 104 as needed. Electrical paths 107 and 108 form the interconnection between the dies. A seal 109 prevents unwanted contaminants from entering the region 120 between the respective dies 11, 12.

Elements 108 are usually 30 microns or less, and are usually formed of copper or tin-copper metal for metal bonding. Region 120 is generally an air gap. The gap 120 may be less than 10 microns.

The hot spot 110 is on die 12, and the challenge is to move heat from this relatively small area 110 to the outer portion of the die stack. Note that element 111 is directly above the hot spot 110 and will be affected by the heat moving upward from the hot spot 110 through layers 103, 102, 101.

FIG. 2 illustrates one previously discussed solution to the heat removal problem. In this solution, a TSV array 200 with individual TSVs 201, 202, and 203 is positioned to conduct heat away from the hot spot 110. Heat passes through layer 103 (the active area of the bottom die 12), then through the active layer 102 of die 11, and is dissipated to the outside through the TSV array 200. The vias 201, 202, 203 may be lined with copper or tungsten to increase thermal conductivity, but any thermally conductive material will work. In one embodiment, carbon nanotubes (CNTs) may be used to fill the vias 201, 202, 203. In another embodiment, CNTs partially fill the vias 201, 202, 203 and metal fills the remaining portions. The advantages of CNTs are improved electrical and thermal conductivity, and improved current density.

FIG. 3 shows an embodiment 30 that utilizes the concepts of the present invention. A thermally conductive material 320 is positioned within the gap between dies 31 and 32. In another embodiment, the thermally conductive material 320 is between the metal layers (not shown) of one of the active layers 302, 303 of tiers 31, 32. The thermally conductive material 320 ideally has a thermal conductivity greater than 10 W/m/K in order to promote lateral heat transfer. Material 320 is thermally conductive and, in one embodiment, electrically insulating so that it does not short-circuit the electrical connections to dies 31, 32; shorting those connections would interfere with the operation of the elements contained in dies 31, 32. The material 320 may be positioned by various methods (e.g., spin coating) or deposited by chemical vapor deposition (CVD) and/or physical vapor deposition (PVD). The material 320 may be a diamond substrate or a diamond film pattern.

Although shown on only one layer 302 of one of the dies 31, the material 320 can be positioned on the surface of each of the two mating layers 302, 303 so that, when the dies 31, 32 are stacked, the material 320 on each layer 302, 303 actually touches the other. Alternatively, the material 320 may be placed on only one of the mating layers 302, 303.
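To see why a thermal conductivity above roughly 10 W/m/K matters, a one-dimensional estimate of the lateral path's thermal resistance, R = L / (k · A), is instructive. The Python sketch below is illustrative only: the film thickness, cross-section width, and spreading length are our assumed numbers, not values taken from this disclosure.

    # Rough 1-D thermal resistance R = L / (k * A) of a lateral spreading
    # path through a thin interlayer film. All geometry here is assumed.
    def thermal_resistance(length_m, k_w_per_m_k, area_m2):
        return length_m / (k_w_per_m_k * area_m2)

    film_thickness = 5e-6    # assumed ~5 um fill (gap 120 is under 10 microns)
    strip_width    = 100e-6  # assumed 100 um wide heat-flow cross-section
    lateral_length = 1e-3    # spread heat 1 mm sideways to a via or edge
    area = film_thickness * strip_width

    for name, k in [("air", 0.026), ("silicon dioxide", 1.4),
                    ("10 W/m/K film", 10.0), ("diamond film", 1000.0)]:
        print(f"{name:>16}: {thermal_resistance(lateral_length, k, area):,.0f} K/W")

    # Air (~8e7 K/W) or oxide (~1.4e6 K/W) makes this lateral path nearly
    # useless; a diamond-like film (~2e3 K/W) lets heat actually reach a
    # laterally displaced via 331 or an overhanging edge 330.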
In operation, heat from the hot spot 310 passes upward through layer 303 of die 32 and into the material 320. The heat then travels laterally along the material 320 to the edge of the device (e.g., to the overhanging edge 330) or, more typically, upward through one or more heat dissipation vias, e.g., via 331 through layer 301 of die 31. Because of the lateral movement of heat, there is better temperature uniformity across the device 30. This benefit allows heat to spread relatively quickly throughout the device 30, causing the device 30 to heat evenly. Removing heat from a larger area (e.g., the entire device 30 or the package of the device) is easier to accomplish than removing heat from a small internal area.

Note that heat dissipation vias 331 may pass upward through die 31 or downward through die 32 (or both). One advantage of the thermally conductive material 320 is that the heat dissipation vias 331 can be laterally offset from the thermally troubled area 310, thereby freeing the area directly above the troubled area for circuits or other components constructed in the various layers 301, 302, 303. Also, note that heat need not flow directly upward through the layers 301, 302, 303; the vias 331 may, for example, be angled and/or curved. Another advantage of lateral heat dissipation is that fewer TSVs are required.

For a multi-tier device with more than two tiers, heat dissipation material structures between multiple tiers can be used. Heat can therefore move laterally a first distance from the heat source in a first inter-tier region, then pass upward through a tier via a through-via, and then move laterally again in a second inter-tier region (in either direction), provided that the thermally conductive material is positioned in both the first and second inter-tier regions.

A system that allows even greater heat removal from the material 320 makes one of the tiers (e.g., die 31) larger in circumference than the other die 32, so that the protruding lip on the larger die creates additional surface area, for example, surface area 330. Note that this same technique will work for several tiers, which can be staggered with respect to diameter if necessary. The composition of the material 320 need not be the same across the entire surface, and differences in the material 320 may be used to assist thermal conduction away from the hot spot 310.

In one embodiment, the bottom die is larger than the top die. There will therefore be a gap between two top dies (of a single tier) resting on the bottom die. According to the invention, a gap-fill material can be provided in this gap between the top dies. The gap-fill material may be thermally conductive, and may be any material with good thermal conductivity, for example, a diamond film. In one embodiment, the thermally conductive gap-fill material is thermally coupled to the material 320 to help transfer heat out of the stacked IC device.

FIG. 4 shows an embodiment 40 of a method for constructing stacked IC devices according to the teachings of the present invention. Process 401 determines whether a die for constructing stacked IC devices has been selected. If not, process 402 controls the waiting time. After a die has been selected, process 403 determines whether thermally conductive material should be added to at least one lateral surface of the die.
The thermally conductive material may be deposited by any of the methods discussed above (e.g., CVD or PVD processing) under the control of process 404, or the material may be spin-coated or applied as a film.

Processes 405 and 406 wait for the next die to be selected for mating with the previously selected die. Processes 407 and 408 add thermally conductive material to this next die (if appropriate), and process 409 then bonds the dies together. Process 410 determines whether more dies are to be added. When all the dies have been selected and coated with thermally conductive material (if appropriate), process 411 completes the IC package, which can then be used for testing and/or use.

Although the present invention and its advantages have been described in detail, it should be understood that various changes, substitutions, and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims. For example, although the material 320 has been described as non-conductive, the material may be made conductive. In such an embodiment, the conductive material should be patternable (i.e., able to be patterned) so that it can be kept separate from certain vias to prevent electrical connection while still conducting heat.

Moreover, the scope of the present application is not intended to be limited to the particular embodiments of the process, machine, manufacture, composition of matter, means, methods, and steps described in the specification. As one of ordinary skill in the art will readily appreciate from the disclosure of the present invention, processes, machines, manufacture, compositions of matter, means, methods, or steps, presently existing or later to be developed, that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized according to the present invention. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps.
The present invention is directed to a transistor having an enhanced width dimension and a method of making same. In one illustrative embodiment, the transistor comprises a semiconducting substrate, a recessed isolation structure formed in the substrate, the isolation structure defining a recess thereabove, a gate electrode and a gate insulation layer positioned above the substrate, a portion of the gate electrode and the gate insulation layer extending into the recess above the recessed isolation structure, and a source region and a drain region formed in the substrate. In another illustrative embodiment, the transistor comprises a semiconducting substrate, a recessed isolation structure that defines an active area having an upper surface and an exposed sidewall surface, a gate insulation layer and a gate electrode positioned above a portion of the upper surface and a portion of the exposed sidewall surface of the active area, and a source region and a drain region formed in the active area. |
What is claimed: 1. A transistor, comprising:a semiconducting substrate; a recessed isolation structure formed in said substrate, said isolation structure defining a recess thereabove; a gate electrode and a gate insulation layer formed above said substrate, a portion of said gate electrode and said gate insulation layer extending into said recess above said recessed isolation structure; and a source region and a drain region formed in said substrate. 2. The transistor of claim 1, wherein said semiconducting substrate is comprised of silicon.3. The transistor of claim 1, wherein said recessed isolation structure is comprised of at least one of silicon dioxide and silicon oxynitride.4. The transistor of claim 1, wherein said recessed isolation structure has a surface that is positioned approximately 1000-1500 Å below a surface of said substrate.5. The transistor of claim 1, wherein said recessed isolation structure is formed in a trench having a width ranging from approximately 2000-3000 Å and a depth ranging from approximately 4000-5000 Å.6. The transistor of claim 1, wherein said gate electrode is comprised of polysilicon.7. The transistor of claim 1, wherein said gate insulation layer is comprised of at least one of silicon dioxide and silicon oxynitride.8. The transistor of claim 1, wherein said gate electrode has a thickness ranging from approximately 1000-2000 Å.9. The transistor of claim 1, wherein said gate insulation layer has a thickness ranging from approximately 20-50 Å.10. The transistor of claim 1, wherein said source region and said drain region are each comprised of an extension implant region and a source/drain implant region.11. The transistor of claim 1, wherein said gate electrode has first and second end portions that extend into said recess.12. The transistor of claim 11, wherein said first and second end portions extending into said recess are positioned above a portion of a sidewall of an active region of said substrate defined by said recessed isolation structure.13. A transistor, comprising:a semiconducting substrate comprised of silicon; a recessed isolation structure formed in said substrate, said isolation structure defining a recess thereabove; a gate electrode comprised of polysilicon and a gate insulation layer formed above said substrate, a portion of said gate electrode and said gate insulation layer extending into said recess above said recessed isolation structure; and a source region and a drain region formed in said substrate. 14. The transistor of claim 13, wherein said recessed isolation structure is comprised of at least one of silicon dioxide and silicon oxynitride.15. The transistor of claim 13, wherein said recessed isolation structure has a surface that is positioned approximately 1000-1500 Å below a surface of said substrate.16. The transistor of claim 13, wherein said recessed isolation structure is formed in a trench having a width ranging from approximately 2000-3000 Å and a depth ranging from approximately 4000-5000 Å.17. The transistor of claim 13, wherein said gate insulation layer is comprised of at least one of silicon dioxide and silicon oxynitride.18. The transistor of claim 13, wherein said gate electrode has a thickness ranging from approximately 1000-2000 Å.19. The transistor of claim 13, wherein said gate insulation layer has a thickness ranging from approximately 20-50 Å.20. 
The transistor of claim 13, wherein said source region and said drain region are each comprised of an extension implant region and a source/drain implant region.21. The transistor of claim 13, wherein said gate electrode has first and second end portions that extend into said recess in said substrate.22. The transistor of claim 21, wherein said first and second end portions extending into said recess in said substrate are positioned above a portion of a sidewall of an active region of said substrate defined by said recessed isolation structure.23. A transistor, comprising:a semiconducting substrate; a recessed isolation structure defining an active area having an upper surface and an exposed sidewall surface; a gate insulation layer and a gate electrode positioned above a portion of said upper surface and a portion of said exposed sidewall surface of said active area; and a source region and a drain region formed in said active area. 24. The transistor of claim 23, wherein said semiconducting substrate is comprised of silicon.25. The transistor of claim 23, wherein said recessed isolation structure is comprised of at least one of silicon dioxide and silicon oxynitride.26. The transistor of claim 23, wherein said recessed isolation structure has a surface that is positioned approximately 1000-1500 Å below a surface of said substrate.27. The transistor of claim 23, wherein said recessed isolation structure is formed in a trench having a width ranging from approximately 2000-3000 Å and a depth ranging from approximately 4000-5000 Å.28. The transistor of claim 23, wherein said gate electrode is comprised of polysilicon.29. The transistor of claim 23, wherein said gate insulation layer is comprised of at least one of silicon dioxide and silicon oxynitride.30. The transistor of claim 23, wherein said gate electrode has a thickness ranging from approximately 1000-2000 Å.31. The transistor of claim 23, wherein said gate insulation layer has a thickness ranging from approximately 20-50 Å.32. The transistor of claim 23, wherein said source region and said drain region are each comprised of an extension implant region and a source/drain implant region.33. The transistor of claim 23, wherein said gate electrode has first and second end portions that extend into a recess in said substrate defined by said recessed isolation structure.34. The transistor of claim 33, wherein said first and second end portions extending into said recess in said substrate are positioned above a portion of a sidewall of an active region of said substrate defined by said recessed isolation structure. |
BACKGROUND OF THE INVENTION

1. FIELD OF THE INVENTION

The present invention is generally directed to semiconductor devices and processing, and, more particularly, to a novel semiconductor device having an enhanced width dimension and a method of making same.

2. DESCRIPTION OF THE RELATED ART

There is a constant drive within the semiconductor industry to increase the operating speed of integrated circuit devices, e.g., microprocessors, memory devices, etc. This drive is fueled by consumer demands for computers and electronic devices that operate at increasingly greater speeds. This demand for increased speed has resulted in a continual reduction in the size of semiconductor devices, e.g., transistors. That is, the sizes of many components of a typical field effect transistor, e.g., channel length, source/drain junction depths, gate dielectric thickness, etc., are reduced. For example, all other things being equal, the smaller the channel length of the transistor, the faster the transistor will operate. Thus, there is a constant drive to reduce the size, or scale, of the components of a typical transistor to increase the overall speed of the transistor, as well as of integrated circuit devices incorporating such transistors.

By way of background, FIG. 1 and FIG. 2 depict an illustrative transistor 10 for purposes of explaining one or more problems that may be solved or reduced by the present invention. FIG. 1 is a cross-sectional front view of the transistor 10 showing the channel length or transistor length "L." FIG. 2 is a cross-sectional side view of the transistor 10 shown in FIG. 1 taken along the line "2-2," i.e., showing the transistor width "W." As shown in FIG. 1, the transistor 10 is formed in an active area 12 that is defined in a semiconducting substrate 14 by an isolation structure 16 formed therein. The transistor 10 is comprised of a gate insulation layer 18, a gate electrode 20, a sidewall spacer 24, and a plurality of source/drain regions 28. The transistor 10 is also comprised of metal silicide layers 29 formed above the source/drain regions 28 and the gate electrode 20.

All of the various components of the transistor 10 depicted in FIG. 1 may be formed using a variety of known processing techniques, and they may be comprised of a variety of materials. For example, the gate insulation layer 18 may be comprised of a thermally grown layer of silicon dioxide, the gate electrode 20 may be comprised of polysilicon, the sidewall spacer 24 may be comprised of silicon dioxide, and the metal silicide regions 29 may be comprised of, for example, cobalt silicide or titanium silicide. The isolation structure 16 is typically comprised of an insulating material, such as silicon dioxide, or other like material. The isolation structure 16 may be constructed by forming a trench 17 in the substrate 14, filling the trench with an appropriate insulating material, e.g., silicon dioxide, and, thereafter, performing a chemical mechanical polishing operation to remove any excess material.

In designing modern integrated circuit devices, one parameter of a transistor that is of particular importance is its drive current. Stated simply, the drive current of a transistor is the amount of current flowing from the drain region to the source region of the transistor.
All other things being equal, it is desirable that transistors have as large a drive current as possible without otherwise adversely impacting the performance of the transistor, i.e., without generating excessive heat or excessive off-state leakage currents, etc.

The drive current of the device may be increased by reducing the channel length of the transistor. However, all other things being equal, the smaller the channel length of the transistor, the greater the off-state leakage current. Moreover, the off-state leakage current increases exponentially as the channel length of the device decreases. Off-state leakage currents also increase as the transistor width increases, but at a rate that is less than the exponential rate associated with reductions in the channel length of a device. Thus, in attempting to increase the drive current of a transistor, increasing the width of the transistor results in lower off-state leakage currents, as compared to increasing the drive current the same amount by reducing the channel length. Moreover, to a great extent, reducing the channel length of the device is limited by available photolithography and etching processes.

Typically, the amount of drive current that can be generated per unit width ("w") of the transistor is a known value. Thus, when a total drive current is desired or required for a particular circuit application, the required width of the transistor to accomplish this purpose may be readily determined. Thus, for a given type of transistor, an application requiring a transistor having a width of 30 w may be satisfied by a single transistor having a width of approximately 30 w, or by six transistors, arranged in parallel, each having a width of approximately 5 w. Using this process, the layout of integrated circuit devices across the surface of a portion of a semiconducting substrate is accomplished, with the ultimate goal being to minimize consumption of wafer plot space, i.e., to maximize the use of the available substrate. Thus, it would be desirable to have a transistor in which the width dimension can be maximized in a given plot space of semiconducting substrate.

The present invention is directed to a method that solves or reduces some or all of the aforementioned problems.
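The width-budgeting arithmetic described above is easy to make concrete. In the short Python sketch below, the 30 w requirement comes from the example in the text, while the function name and the rounding up to whole fingers are our own illustrative additions.

    import math

    def fingers_needed(total_width_w, finger_width_w):
        """Parallel transistors of a given width needed to reach a total
        required width (drive current scales with total width)."""
        return math.ceil(total_width_w / finger_width_w)

    # The application from the text, needing a total width of 30 w:
    print(fingers_needed(30.0, 30.0))  # 1 transistor of width ~30 w
    print(fingers_needed(30.0, 5.0))   # 6 parallel transistors of ~5 w each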
SUMMARY OF THE INVENTION

The present invention is directed to a transistor having an enhanced width dimension and a method of making same. In one illustrative embodiment, the transistor comprises a semiconducting substrate, a recessed isolation structure formed in the substrate, the isolation structure defining a recess thereabove, a gate electrode and a gate insulation layer positioned above the substrate, a portion of the gate electrode and the gate insulation layer extending into the recess above the recessed isolation structure, and a source region and a drain region formed in the substrate. In another illustrative embodiment, the transistor comprises a semiconducting substrate, a recessed isolation structure that defines an active area having an upper surface and an exposed sidewall surface, a gate insulation layer and a gate electrode positioned above a portion of the upper surface and a portion of the exposed sidewall surface of the active area, and a source region and a drain region formed in the active area.

In one illustrative embodiment, the method of making a transistor comprises providing a semiconducting substrate, forming a recessed isolation structure in a trench formed in the substrate, the recessed isolation structure thereby defining a recess in the substrate, forming a gate insulation layer and a gate electrode above the substrate, a portion of the gate insulation layer and gate electrode extending into the recess in the substrate and above the recessed isolation structure, and forming a plurality of source/drain regions in the substrate adjacent the gate electrode.

BRIEF DESCRIPTION OF THE DRAWINGS

The invention may be understood by reference to the following description taken in conjunction with the accompanying drawings, in which like reference numerals identify like elements, and in which:

FIG. 1 is a cross-sectional front view of an illustrative prior art transistor in the transistor length direction;

FIG. 2 is a cross-sectional side view of the prior art transistor shown in FIG. 1 along the line "2-2";

FIG. 3 is a cross-sectional front view of a partially formed transistor in accordance with one illustrative embodiment of the present invention;

FIG. 4 is a cross-sectional front view of the device shown in FIG. 3 after a gate insulation layer has been formed thereabove;

FIG. 5 is a cross-sectional front view of the device shown in FIG. 4 after a layer of polysilicon has been formed thereabove;

FIG. 6A is a cross-sectional front view of the device shown in FIG. 5 after a gate electrode has been patterned from the layer of polysilicon;

FIG. 6B is a cross-sectional side view, in the transistor width direction, of the device shown in FIG. 6A taken along the line "6B-6B";

FIG. 7A is a cross-sectional front view of the device shown in FIG. 6A after source/drain regions have been formed on the device;

FIG. 7B is a cross-sectional side view, in the transistor width direction, of the device shown in FIG. 7A taken along the line "7B-7B"; and

FIG. 7C is a plan view of the device shown in FIG. 7A.

While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and are herein described in detail. It should be understood, however, that the description herein of specific embodiments is not intended to limit the invention to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the appended claims.

DETAILED DESCRIPTION OF THE INVENTION

Illustrative embodiments of the invention are described below. In the interest of clarity, not all features of an actual implementation are described in this specification. It will of course be appreciated that in the development of any such actual embodiment, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which will vary from one implementation to another.
Moreover, it will be appreciated that such a development effort might be complex and time-consuming, but would nevertheless be a routine undertaking for those of ordinary skill in the art having the benefit of this disclosure.

The present invention will now be described with reference to the attached figures. Although the various regions and structures of a semiconductor device are depicted in the drawings as having very precise, sharp configurations and profiles, those skilled in the art recognize that, in reality, these regions and structures are not as precise as indicated in the drawings. Additionally, the relative sizes of the various features depicted in the drawings may be exaggerated or reduced as compared to the size of those features or regions on fabricated devices. Nevertheless, the attached drawings are included to describe and explain illustrative examples of the present invention. As will be readily apparent to those skilled in the art upon a complete reading of the present application, the present method is applicable to a variety of technologies, e.g., NMOS, PMOS, CMOS, etc., and is readily applicable to a variety of devices, including, but not limited to, logic devices, memory devices, etc.

In general, the present invention is directed to a transistor device having an enhanced width dimension and a method of making same. As will be recognized after a complete reading of the present application, the present invention provides a method to maximize the width of a transistor in a given plot space of semiconducting substrate. In turn, integrated circuit devices incorporating such transistors may be more efficient in terms of plot space consumption.

As shown in FIG. 3, a trench 32 is formed in a semiconducting substrate 30. For reference, the transistor length direction "L" is also shown in FIG. 1. A recessed isolation material 34, comprised of materials such as silicon dioxide, silicon oxynitride, etc., is formed in the trench 32. The width 33 and the depth 31 of the trench 32 may be varied as a matter of design choice. In one embodiment, the depth 31 of the trench 32 ranges from approximately 3000-6000 Å. The width 33 of the trench 32 may vary from a minimum of the smallest feature size that can be patterned using existing photolithography and etching techniques, up to any desired width. In one illustrative embodiment, the width 33 of the trench 32 ranges from approximately 2000-3000 Å.

The recessed isolation material 34 may be formed in the trench 32 by a variety of techniques. In one illustrative embodiment, a layer of insulating material (not shown) is blanket deposited above the surface 35 of the substrate 30 and in the trench 32. Thereafter, a chemical mechanical polishing operation is performed on the layer of insulating material such that the insulating material in the trench 32 is approximately planar (not shown) with the surface 35 of the substrate 30. An etching process, such as a wet etching process, may then be performed to reduce the level of insulating material 34 in the trench 32 to the level depicted in FIG. 3. In one illustrative embodiment, the insulating material 34 is removed until such time as a surface 36 of the isolation material 34 in the trench 32 is positioned approximately 1000-1500 Å beneath the surface 35 of the substrate 30. This process results in the definition of an active island 37 of substrate material having an exposed upper surface 35A and an exposed sidewall surface 35B.
The island 37 of substrate material may be of any desired shape, i.e., circular, oval, rectangular, etc. This process also results in the definition of a recess 49 in the substrate 30 above the recessed isolation material 34. In the disclosed embodiment, the recess 49 has a depth of approximately 1000-1500 Å.

Thereafter, one or more channel doping operations are performed at this stage of manufacturing. For example, in one illustrative embodiment, a threshold voltage implant process is performed on the device. This may be accomplished by performing a four-way angled channel implant process wherein each of the angled implants is spaced approximately 90 degrees apart with respect to the others. In one illustrative embodiment, this threshold voltage implant may be accomplished by a four-way angled implant process performed at an implant angle of approximately 30-50 degrees (with respect to a line perpendicular to the surface 35 of the substrate 30) using dopant atoms at a concentration ranging from approximately 1-3×10¹² ions/cm² per rotation. For an NMOS device, boron may be implanted during the threshold voltage implant step at an energy level ranging from approximately 10-15 keV. For a PMOS device, such a threshold voltage implant may be performed using phosphorous at an energy level ranging from approximately 90-110 keV. Additional channel doping implantation processes may be performed at this time if desired or required by the device under construction.

Thereafter, as shown in FIG. 4, a gate insulation layer 40 for the transistor 10 is formed for the device. The gate insulation layer 40 may be formed from a variety of materials, such as silicon dioxide, silicon oxynitride, silicon nitride, or any dielectric material having a dielectric constant greater than approximately four. The gate insulation layer 40 may be formed by a variety of processes, e.g., chemical vapor deposition (CVD), thermal growth, etc. In one illustrative embodiment, as shown in FIG. 4, the gate insulation layer 40 is comprised of a thermally grown layer of silicon dioxide having a thickness ranging from approximately 20-50 Å.

Next, as shown in FIG. 5, a layer of polysilicon 41 is blanket deposited above the gate insulation layer 40 and above the recessed isolation material 34. The layer of polysilicon 41 may be formed by a variety of techniques, e.g., chemical vapor deposition, low pressure chemical vapor deposition, etc. In one illustrative embodiment, the layer of polysilicon 41 has a thickness ranging from approximately 1000-2000 Å, and it is formed by conformally depositing the layer of polysilicon 41 using a chemical vapor deposition process.

Next, as shown in FIG. 6A, using traditional photolithography and one or more etching processes, a gate electrode 42 is patterned from the layer of polysilicon 41. The gate insulation layer 40 may also be patterned at this time, as shown in FIG. 6A. FIG. 6B is a side view of the structure depicted in FIG. 6A taken along the line "6B-6B." That is, FIG. 6B is a cross-sectional view of the device in the transistor width dimension "W." As shown therein, a portion 43 of the gate electrode 42 extends beyond the isolation structure 34. The portion 43 of the gate electrode 42 is coupled to a power supply (not shown) so that a voltage may be applied to the gate electrode 42.

Then, halo implant regions (not shown) are formed in the device by performing a four-way angled halo implant process.
Each of the angled implant processes is performed at an angle ranging from approximately 30-45 degrees with respect to a line generally perpendicular to the surface 35 of the substrate 30, and at a concentration level ranging from approximately 1-6×10¹³ ions/cm² total (for all four rotations). In an illustrative NMOS device, boron atoms may be implanted at an energy level ranging from approximately 7-15 keV. In an illustrative PMOS device, arsenic may be implanted during this halo implant process at an energy level ranging from approximately 40-65 keV.

Thereafter, source and drain regions 50 for the device may be formed using a variety of process flows. FIGS. 7A and 7B (side view taken along the line "7B-7B" in FIG. 7A) depict the transistor after the source/drain regions 50 have been formed. FIG. 7C is a plan view of the device shown in FIG. 7A. In one illustrative embodiment, a source/drain extension implant process is performed to form source/drain extensions 51 of the completed device. This extension implantation process may be performed at a relatively low energy level and at a relatively high concentration of dopant atoms. For example, the concentration of dopant atoms in the extension implant process may vary from approximately 1×10¹⁴ to 2×10¹⁵ ions/cm² of the appropriate dopant atoms, e.g., arsenic (As) or phosphorous (P) for NMOS technology, boron (B) or boron difluoride (BF2) for PMOS technology, etc. The energy level for the extension implant process will vary depending upon the dopant material used in the process. For example, in one illustrative embodiment for forming the source/drain extension implants in an NMOS device, the extension implantation process is performed using arsenic as the dopant atoms at a concentration ranging from approximately 6×10¹⁴ to 2×10¹⁵ ions/cm² and at an energy level ranging from approximately 3-15 keV. Typically, the extension implant process will result in implant regions in the substrate that are generally self-aligned with respect to the gate electrode 42 (as implanted). However, for a PMOS device, a small sidewall spacer (not shown) may be formed adjacent the gate electrode 42 prior to performing the extension implant step. This spacer is used in PMOS devices due to the increased mobility of the dopant atoms that may be implanted, e.g., boron.

Next, as indicated in FIGS. 7A-7C, a sidewall spacer 44 is formed adjacent the gate electrode 42. The sidewall spacer 44 is formed by depositing a layer (not shown) of spacer material above the surface of the device and thereafter performing an anisotropic etching process to define the spacer 44. The layer of spacer material may be comprised of a variety of materials, such as silicon dioxide, silicon oxynitride, or other like materials. Moreover, it may be formed by any of a variety of techniques for forming such layers, such as chemical vapor deposition (CVD), physical vapor deposition (PVD), etc.

Next, a source/drain implant process is performed on the device. Note that the source/drain implant process is self-aligned with respect to the sidewall spacer 44. The dopant concentration levels and implant energy for the source/drain implant process may vary. For example, the concentration of dopant atoms in the source/drain implantation process may vary from approximately 5×10¹⁴ to 5×10¹⁵ ions/cm² of the appropriate dopant atoms, e.g., arsenic or phosphorous for NMOS technology, boron for PMOS technology, etc.
Thereafter, one or more thermal anneal processes are performed to activate the dopant atoms introduced into the substrate 30 during the various ion implant processes described above, and to repair the damage to the lattice structure of the semiconducting substrate 30 resulting from the ion implantation processes. In one embodiment, this anneal process may be performed at a temperature ranging from approximately 1000-1050° C. for a duration of approximately 5-20 seconds in a rapid thermal anneal chamber. Note that during this process, the previously implanted dopant atoms migrate, or move, from their original implanted position. This migration of the dopant atoms is generally isotropic in direction.

As shown in FIGS. 6A, 6B and 7A-7C, the present invention is directed to a novel transistor structure. The gate electrode 42 has first and second ends 42A, 42B, respectively. The first and second ends 42A, 42B, as well as first and second portions 40A, 40B of the gate insulation layer 40 thereunder, are positioned above a portion of the exposed sidewall 35B of the active area 37. The remainder of the gate electrode 42 is positioned above the surface 35A of the active area 37. In the embodiment depicted in the attached drawings, the first end 42A of the gate electrode 42 is shown as being patterned such that it does not extend beyond the recessed isolation structure 34. Of course, if desired, the first end 42A of the gate electrode 42 may be patterned so as to extend completely beyond the recessed isolation structure 34 in a manner similar to the configuration depicted for the portion 43 of the gate electrode 42 in FIGS. 6B and 7B. Stated another way, the novel transistor structure disclosed herein is comprised of a gate electrode 42 and a gate insulation layer 40 wherein a portion of the gate electrode 42 and the gate insulation layer 40 extends downwardly into a recess 49 in the substrate above a recessed isolation structure 34. In the disclosed embodiment, both ends 42A, 42B of the gate electrode 42 and both portions 40A, 40B of the gate insulation layer 40 extend into the recess 49 above the recessed isolation structure 34.

After the formation of the gate electrode 42, additional insulating material (not shown), e.g., silicon dioxide, BPSG, etc., may be blanket deposited above the device shown in FIGS. 7A and 7B to fill the portions of the recess 49 not filled, or only partially filled, by the downwardly extending portions of the gate electrode 42 and gate insulation layer 40.

The present invention is directed to a transistor having an enhanced width dimension and a method of making same.
In one illustrative embodiment, the transistor comprises a semiconducting substrate 30, a recessed isolation structure 34 formed in the substrate 30, the isolation structure 34 defining a recess 49 thereabove, a gate electrode 42 and a gate insulation layer 40 positioned above the substrate 30, a portion of the gate electrode 42 and the gate insulation layer 40 extending into the recess 49 above the recessed isolation structure 34, and a source region 51A and a drain region 51B formed in the substrate 30. In another illustrative embodiment, the transistor comprises a semiconducting substrate 30, a recessed isolation structure 34 that defines an active area 37 having an upper surface 35A and an exposed sidewall surface 35B, a gate insulation layer 40 and a gate electrode 42 positioned above a portion of the upper surface 35A and a portion of the exposed sidewall surface 35B of the active area 37, and a source region 51A and a drain region 51B formed in the active area 37.

In one illustrative embodiment, the method of making a transistor comprises providing a semiconducting substrate 30, forming a recessed isolation structure 34 in a trench 32 formed in the substrate 30, the recessed isolation structure 34 thereby defining a recess 49 in the substrate 30, forming a gate insulation layer 40 and a gate electrode 42 above the substrate 30, a portion of the gate insulation layer 40 and gate electrode 42 extending into the recess 49 above the recessed isolation structure 34, and forming a plurality of source/drain regions 51A, 51B in the substrate 30 adjacent the gate electrode 42.

The particular embodiments disclosed above are illustrative only, as the invention may be modified and practiced in different but equivalent manners apparent to those skilled in the art having the benefit of the teachings herein. For example, the process steps set forth above may be performed in a different order. Furthermore, no limitations are intended to the details of construction or design herein shown, other than as described in the claims below. It is therefore evident that the particular embodiments disclosed above may be altered or modified and all such variations are considered within the scope and spirit of the invention. Accordingly, the protection sought herein is as set forth in the claims below. |
An integrated circuit including a performance circuit occupying a first area of an integrated circuit substrate and a protection circuit coupled to the performance circuit and occupying a second area of the integrated circuit substrate separate from the first area. Also, a method of forming an integrated circuit including the steps of: forming a performance circuit occupying a first area of an integrated circuit substrate, forming a protection circuit occupying a second area of the integrated circuit substrate separate from the first area, and coupling the protection circuit to the performance circuit. |
What is claimed is:1. A method of forming an integrated circuit comprising:forming a performance circuit occupying a first well of an integrated circuit substrate;forming a protection circuit occupying a second well of the integrated circuit substrate separate from the first well, wherein forming the protection circuit includes:forming a plurality of unit cells, the plurality of unit cells separated from each other to form a plurality of islands in the second well surrounded by the second well, each of the plurality of unit cells comprised of:a block of a first doped region of a first dopant in the second well of the integrated circuit substrate occupying an area of the substrate sufficient to support a contact to the doped region, the first doped region forming an anode of a diode, a junction region completely surrounding the first doped region, and a contact to the doped region, wherein the second well is doped with a first concentration of a second dopant, forming a third doped region in the second well adjacent the junction region, the third doped region surrounding the plurality of cells and doped with a greater concentration of the second dopant, the third doped region forming a cathode of the diode; and coupling the protection circuit to the performance circuit.2. The method of claim 1, wherein forming a performance circuit includes forming a CMOS configuration.3. The method of claim 2, wherein coupling the protection circuit to the performance circuit includes coupling the protection circuit to a p-channel device of the CMOS configuration.4. The method of claim 2, wherein forming a protection circuit includes forming the diode and coupling the protection circuit to the performance circuit includes coupling the diode to a p-channel device of the CMOS configuration.5. A method of forming an integrated circuit comprising:forming a performance circuit occupying a first well of an integrated circuit substrate;forming a protection circuit occupying a second well of the integrated circuit substrate separate from the first well, wherein forming the protection circuit includes:forming a plurality of unit cells, the plurality of unit cells separated from each other to form a plurality of islands in the second well surrounded by the second well, each of the plurality of unit cells comprised of:a block of a first doped region of a first dopant in the second well of the integrated circuit substrate occupying an area of the substrate sufficient to support a contact to the doped region, a junction region completely surrounding the first doped region, and a contact to the doped region, wherein the second well is doped with a first concentration of a second dopant, forming a third doped region in the second well adjacent the junction region, the third doped region surrounding the plurality of cells and doped with a greater concentration of the second dopant; and coupling the protection circuit to the performance circuit.6. The method of claim 5, wherein forming a performance circuit includes forming a CMOS configuration.7. The method of claim 6, wherein coupling the protection circuit to the performance circuit includes coupling the protection circuit to a p-channel device of the CMOS configuration.8. The method of claim 6, wherein forming a protection circuit includes forming the diode and coupling the protection circuit to the performance circuit includes coupling the diode to a p-channel device of the CMOS configuration.9.
The method of claim 5, wherein the first concentration of a second dopant forms an N-type material in the second well.10. The method of claim 5, wherein the second concentration of the second dopant forms an N+ material in the third doped region. |
This is a divisional of application Ser. No. 09/107,351, filed Jun. 30, 1998, now U.S. Pat. No. 6,137,143.

BACKGROUND OF THE INVENTION

1. Field of the Invention

The invention relates to integrated circuit devices and more particularly to layout techniques for such devices.

2. Description of Related Art

One area where parasitic capacitance is noted is in input/output (I/O) buffer circuits. For high speed I/O circuits, the parasitic capacitance is one limiter to the fast transitioning edges of the circuit. The larger the capacitance, the slower the charging or discharging, resulting in degraded bus performance. Thus, many efforts have been put forth to reduce the capacitive load and create faster transitions, which in turn leads to faster I/O circuits.

The input signals to an integrated circuit, for example, a metal oxide semiconductor (MOS) integrated circuit, are generally fed to transistors. If the voltage applied to the transistor becomes excessive, the gate oxide can break down, the junctions can be destroyed, and the metal to the transistor can be destroyed. Excessive voltages are voltages in excess of the normal operating voltages of the circuit. For example, voltages far in excess of the nominal operating voltage of an integrated circuit may be impressed upon the inputs to the circuit during either human-operator or mechanical handling operations.

The main source of excessively high voltages applied to integrated circuits is triboelectricity. Triboelectricity is caused when two materials are rubbed together. A common situation is a person developing a very high static voltage (i.e., a few hundred to a few thousand volts) simply by walking across a room or by removing an integrated circuit from its plastic package, even when careful handling procedures are followed. If such a high voltage is applied to the pins of an integrated circuit package, its discharge, referred to as ElectroStatic Discharge (ESD), can cause breakdown of the devices to which the voltage is applied. The breakdown event may cause sufficient damage to produce immediate destruction of the integrated circuit, or it may weaken the device enough that it will fail early in the operating life of the integrated circuit.

In general, all inputs (e.g., pins) of MOS integrated circuits are provided with protection circuits to prevent excessive voltages from damaging the MOS transistors. These protection circuits are normally placed between the input and output pads on a chip and the transistor gates to which the pads are connected. The protection circuits are designed to begin conducting or to undergo breakdown, thereby providing an electrical path to ground (or to the power-supply rail), in the presence of excessive voltages, generally ESD. Since the breakdown mechanism is designed to be non-destructive, the circuits provide a normally open path that closes only when a high voltage appears at the input or output terminals, harmlessly discharging the node to which it is connected.

Typically, two types of protection circuits are used to provide protection against ESD damage: diode breakdown and diode conduction. Diode protection is obtained by using the diode-breakdown or diode-conduction phenomenon to provide an electrical path in the semiconductor, e.g., silicon, substrate that consists of a diffused diode region of a doping type opposite to that of the substrate (for example, p-type and n-type doping, respectively). This diffused region is connected between the input pad and substrate.
If a reverse-bias voltage greater than the breakdown voltage of the resultant pn junction is applied, the diffusion region (which otherwise works as a diode) undergoes breakdown. Furthermore, the diffused region will also clamp a negative-going ESD transition at the chip input to one diode drop below the substrate voltage. In CMOS technologies, an additional protection diode can be added by utilizing the pn junction that exists between a p-type region and the body region of the PMOS device (an n-type region that is connected to VCC). This diode is utilized as a protection device when a connection is made between the pad and a p-type region. This diode will generally clamp positive-going transitions to one diode drop above VCC (VCC is generally 0V during ESD).

FIG. 1 shows a known input/output (I/O) buffer circuit 10 having ESD protection components, diodes D1 and D2. CMOS I/O buffer circuit 10 includes PMOS device 20 coupled to NMOS device 30. The devices of circuit 10 are connected to I/O pad 40. Between pad 40 and the devices is a negative zap protection diode D1. Between I/O pad 40 and PMOS device 20 is forward-biased protection diode D2.

FIG. 2 shows a prior art layout of a portion of I/O buffer circuit 10 of FIG. 1, specifically illustrating the layout of PMOS device 20 and ESD protection diode D2. FIG. 2 shows PMOS field effect transistor (MOSFET) device 20 made up of polysilicon gate 60 separating source region 65 and drain region 70, with individual contacts 72 and 75 to source and drain regions 65 and 70, respectively. In this embodiment, PMOS device 20 is in an n-well with p-type (p+-doped) source and drain regions 65 and 70, respectively. FIG. 2 also shows conventional PMOS diode D2 adjacent drain 70 of PMOS device 20. In FIG. 2, p-type area 70 acts as both the MOSFET drain and the D2 diode anode. Adjacent drain/anode 70 is an n-type (n+-doped) cathode region 80 in the n-well.

The critical size of a protection circuit and of a performance circuit are independent of one another. For example, protection diodes D1 and D2 are sized (i.e., a specific volume of semiconductor material is allocated) in accordance with the amount of charge that is contemplated to be dissipated. If the power is dissipated into too small a volume of silicon, the silicon can be heated beyond its melting point and the device destroyed. Transistor devices 20 and 30 are likewise sized, for example, in accordance with the voltage drive capabilities of the output driver.

In typical prior art structures, such as the I/O scheme illustrated in FIGS. 1 and 2, the size of the PMOS protection diode D2 corresponds to the size of the PMOS device because they share a common junction (drain or anode). To accommodate layout concerns and processing conveniences, the D2 diode is integrated with PMOS device 20. In other words, the critical size of either the D2 diode or PMOS device 20 determines the size of the corresponding device. If the D2 diode size is critical and controls, PMOS device 20 size is enlarged to accommodate the large diode. If, on the other hand, PMOS device 20 is critical and controls, D2 diode size is enlarged beyond what is necessary for an ESD protection circuit. It is to be appreciated that techniques for determining a critical diode size for addressing ESD concerns are well known and, so as not to obscure the invention, will not be discussed herein.
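As an illustrative aside (not from the patent), the clamping action of D1 and D2 described above can be modeled to first order; the function name, the 0.7 V diode drop, and the supply values are assumptions for the sketch.

```python
# First-order model of the protection diodes: D2 (pad to VCC) clamps
# positive-going transients to one diode drop above VCC, and D1 (VSS to pad)
# clamps negative-going transients to one diode drop below the substrate.
def clamp_pad_voltage(v_pad: float, vcc: float = 0.0, vss: float = 0.0,
                      v_diode: float = 0.7) -> float:
    """Return the pad voltage after ideal-diode clamping (VCC ~ 0 V during ESD)."""
    if v_pad > vcc + v_diode:   # D2 forward conducts
        return vcc + v_diode
    if v_pad < vss - v_diode:   # D1 forward conducts
        return vss - v_diode
    return v_pad                # neither diode conducts

print(clamp_pad_voltage(2000.0))   # a +2000 V transient clamps to ~0.7 V
print(clamp_pad_voltage(-500.0))   # a -500 V transient clamps to ~-0.7 V
```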
For purposes of the invention, it is necessary to understand only that there is a critical, scaleable minimum size, for example, a minimum sized D2 diode, that will protect a performance circuit, such as a PMOS device or NMOS device, from ESD damage. Similarly, it is well known in the art how to size performance circuits, such as PMOS drivers. Accordingly, techniques for sizing performance circuits will not be presented herein.

I/O circuit 10 pad capacitance has several elements, including the NMOS device, the PMOS device, the wire bond or C4 pad, the pad to VCCP diode (D2) and the VSS to pad diode (D1). The diffusion capacitance is high because it is a p+-type diffusion in an n-type well. As noted above, the typical PMOS device 20 of an I/O circuit includes a D2 diode, where one edge of the p-type drain serves as the drain and the other as the diode anode edge. This sharing makes the diode scale up or down with PMOS device 20 size. For example, in a mixed voltage environment, when a high voltage technology wants to drive a low voltage I/O, PMOS device 20 size can be very large. Therefore, D2 diode size is much larger than required, resulting in extra capacitive loading.

On the other hand, there are also performance circuits that do not need a large PMOS pull-up device. One example is an open drain buffer. To meet the minimum diode size requirement, however, the PMOS size is increased (and generally tied off).

Increasing the size of either the MOSFET device or the ESD protection circuit, e.g., diode, directly leads to increased capacitance. In general, the size of a device (e.g., area, volume, etc.) is directly related to its parasitic capacitance. Thus, what is needed is a layout, particularly an I/O layout, that minimizes the parasitic capacitance contributed by the performance and protection circuits without sacrificing the required actions of either circuit.

SUMMARY OF THE INVENTION

An integrated circuit is disclosed. The integrated circuit includes a performance circuit occupying a first area of an integrated circuit substrate and a protection circuit coupled to the performance circuit, sized commensurate with dissipating a predetermined amount of charge incident on the performance circuit, and occupying a second area of the integrated circuit substrate separate from the first area.

Additional features and benefits of the invention will become apparent from the detailed description, figures, and claims set forth below.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a typical I/O scheme having ESD protection components.

FIG. 2 shows a circuit layout of an ESD diode integrated with a PMOS device with a shared p-type area acting as a MOSFET drain and a diode anode.

FIG. 3 schematically illustrates the layout of a PMOS device partitioned from its ESD protection diode in accordance with an embodiment of the invention.

FIG. 4 schematically illustrates the layout of one embodiment of the diode portion of a partitioned integrated circuit in accordance with the invention.

FIG. 5 schematically illustrates a cross-sectional side view of the diode through line A-A of FIG. 4 in accordance with an embodiment of the invention.

FIG. 6 schematically illustrates a layout of a second embodiment of the diode portion of a partitioned integrated circuit in accordance with the invention wherein the ratio of the periphery to the area of the anode is increased.

FIG. 7 schematically illustrates an individual p-type anode of FIG. 6 and shows the distribution of current through the anode in accordance with the embodiment of the invention.
FIG. 8 schematically illustrates a layout of a third embodiment of the diode portion of a partitioned integrated circuit in accordance with an embodiment of the invention wherein an n-type area faces each facet of individual p-type anodes.

FIG. 9 schematically illustrates a single p-type anode of FIG. 8 and illustrates the current spreading for this type of structure in accordance with an embodiment of the invention.

FIG. 10(a) schematically illustrates a layout of a prior art anode stripe having contacts removed from viable areas for unit cells of the anode.

FIG. 10(b) schematically illustrates a layout of an improved anode design in accordance with an embodiment of the invention showing a maximization of periphery in a given area.

FIG. 11 schematically illustrates a layout of an embodiment of a performance circuit of a partitioned integrated circuit in accordance with the invention.

FIG. 12 schematically illustrates a layout of a second embodiment of a performance circuit of a partitioned integrated circuit in accordance with the invention.

FIG. 13 schematically illustrates a top view of a layout of a portion of an integrated circuit chip showing ladder type I/O circuits at a corner.

FIG. 14 schematically illustrates a top view of a layout of a portion of an integrated circuit chip showing waffle type I/O circuits at a corner.

DETAILED DESCRIPTION OF THE INVENTION

The invention relates to an integrated circuit and a method of forming an integrated circuit having a performance circuit occupying a first area of an integrated circuit substrate, and a protection circuit coupled to the performance circuit and occupying a second area of the integrated circuit substrate separate from the first area. The partitioning of the performance circuit and protection circuit is scaleable to different device circuit requirements and may be utilized wherever a protection circuit is used to prevent ESD from causing breakdown of integrated circuit devices. The partitioned performance circuit and protection circuit can be utilized in I/O circuits with the objective of maximizing the protection circuit current capability and minimizing the total capacitance at the I/O circuit pad.

The following detailed description describes an improved circuit and a method of forming an improved circuit, such as an I/O unit similar to the circuits described with reference to FIGS. 1 and 2 and the accompanying text. More particularly, the following description relates to a PMOS device and the D2 ESD diode protection circuit for the PMOS device. It is to be appreciated, however, that the invention is not to be limited to I/O circuits or, more specifically, to PMOS/ESD circuits or CMOS performance circuits and diode protection circuits. Instead, the invention will apply anywhere ESD protection circuits are implemented and the objective is to increase the current capability of the protection circuit and decrease the capacitance of the performance circuit.

FIG. 3 illustrates an embodiment of the invention where the ESD protection circuit is partitioned from the performance circuit. In this example, the performance circuit is, for example, PMOS device 110 of a CMOS I/O circuit in an n-well. The ESD protection circuit is, for example, D2 diode 115.
As illustrated in FIG. 3, the invention contemplates that the protection circuit, such as, for example, ESD diode 115, is separate from the performance circuit, in this case PMOS device 110, in terms of area or volume utilization of a semiconductor substrate. In the case of D2 diode 115, D2 diode 115 is partitioned from PMOS device 110. The drain region of PMOS device 110 and the anode of D2 diode 115 are not formed of a common doped area of the substrate, such as was described in FIG. 2 and the accompanying text.

The partitioning of D2 diode 115 from PMOS device 110 ensures the best utilization of integrated circuit space. The partitioning allows PMOS device 110 to be scaled up or down while maintaining D2 diode 115 at, for example, the ESD critical size. The partitioning reduces the capacitance (due to the reduction in excess area of either D2 diode 115 or PMOS device 110) while retaining the ESD current handling capability over a standard PMOS driver and ESD protection D2 diode 115. The reduction in capacitance leads to faster transition times, enhancing bus speed. In addition, correctly sized and improved protection circuits (e.g., ESD diodes) and performance circuits (e.g., PMOS drivers) result in an on-chip area reduction when compared to prior art devices.

Comparisons have been made between the partitioned performance/protection circuits and prior art coupled circuits. In one embodiment, a partitioned bimodal driver having a large PMOS device yields an estimated 22% gain in capacitance reduction over a prior art coupled bimodal driver. In that same embodiment, the partitioned driver decreases the area for both the performance circuit and protection circuit by an estimated 22%. If a D2 diode size equal to that of an input buffer (e.g., input D2 diode) is used in the output (PMOS) section, the capacitance will see an estimated 36% reduction, and a 37% area reduction.

The above discussion illustrates how the capacitance and required area of an I/O driver with protection circuits are reduced by de-coupling or partitioning the protection circuit from the performance circuit. In addition to this reduction, the invention also contemplates that, in the case of a D2 diode in particular, the current discharge capability of a diode can be enhanced. This allows a smaller diode to be used while maintaining the critical current discharging requirements necessary for an ESD protection circuit.
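The capacitance argument can be replayed with a toy model (an editorial illustration, not the patent's data): junction capacitance is assumed to scale linearly with junction area, and the area figures are invented for the example.

```python
# Toy model: pad capacitance scales with the total junction area of the
# driver plus the D2 diode. In the integrated layout the shared drain/anode
# forces the diode to scale with the driver; in the partitioned layout the
# diode is held at its own critical size.
def pad_capacitance(pmos_area: float, diode_area: float,
                    c_per_area: float = 1.0) -> float:
    return c_per_area * (pmos_area + diode_area)

critical_diode_area = 1.0   # minimum area that survives the target ESD charge
pmos_area = 4.0             # a large driver, e.g., for a mixed-voltage buffer

c_integrated = pad_capacitance(pmos_area, diode_area=pmos_area)
c_partitioned = pad_capacitance(pmos_area, diode_area=critical_diode_area)

saving = 1 - c_partitioned / c_integrated
print(f"capacitance saving from partitioning: {saving:.0%}")  # 38% here
```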
FIG. 4 illustrates a layout of partitioned D2 diode 115. Partitioned D2 diode 115 is formed, for example, in an n-type well 120 with n-type doped area regions 125 and 145 serving as cathodes adjacent a p-type doped region 135 anode. Contacts 130 are made to n-type area regions 125 and 145 to, for example, dissipate any charge to a suitable power supply. Similarly, contacts 140 are made to p-type region 135 to, for example, link the diode to the performance circuit. FIG. 4 shows p-type area 135 illustratively represented as a plurality of unit cells, each unit cell represented by a contact 140 to the p-type area 135. A unit cell is the minimum area necessary to place a contact, e.g., a minimum p-type area. FIG. 5 shows a cross-sectional side view taken through line A-A of FIG. 4. This formation of unit cells in a lateral stripe 135 adjacent lateral stripes 125 and 145 is referred to herein as a "striped" design.

As illustrated by the arrows in FIG. 4, when ESD diode 115 discharges an ESD current, the current travels laterally (as indicated by the arrows) toward two edges of each p-type unit cell of area 135. In diode conduction, the periphery or edges of the unit cell contribute more to current dissipation than the area. Thus, the ability of each p-type unit cell of area 135 to dissipate charge toward the cathode is limited to two edges of each unit cell, i.e., 2/4 or 1/2 of the periphery of each unit cell is available.

The invention contemplates that, in addition to the structure shown in FIGS. 4 and 5, the p-type unit cells of the diode may be made as islands. FIG. 6 shows one such island configuration. In FIG. 6, unit cells 122 of p-type doped area region 135 are formed in n-well 120 and are located adjacent stripes of n-type regions 125 and 145. Each unit cell 122 contains a contact 140. This formation of unit cells 122 adjacent lateral stripes 125 and 145 is referred to herein as an "island" design. FIG. 7 illustrates a top view of a single unit cell 122 taken from the diode layout of FIG. 6. Unit cell 122 sits in n-well region 120, separated from the stripes of n-type regions 125 and 145 adjacent to the opposing sides of unit cell 122. In this manner, current paths forming along the unit cell 122 sides substantially parallel to n-type stripes 125 and 145 have a direct path toward the n-type stripes, much like the prior art diode structures. In addition, since the edges of unit cell 122 that are orthogonal or substantially perpendicular to stripes 125 and 145 are adjacent n-well 120, current 150 can travel through these edges toward stripes 125 and 145, improving the discharge capability of unit cell 122 over the unit cells described with reference to FIGS. 4 and 5. As can be seen in FIG. 7, by creating unit cell 122 as an island, the p-type area region can dissipate charge in four directions.

In the condition where the p-type/n-well diode is strongly forward biased, on the order of 0.8V, a conductivity modulation occurs in n-well 120. During conductivity modulation, there is sufficient hole injection into n-well 120 that even the electron concentration in n-well 120 exceeds the doping density (electrons increase to maintain charge neutrality). Thus, the resistivity of n-well 120 falls dramatically at high conduction, thereby allowing all sides of unit cell 122 to conduct almost uniformly. In such cases, from a geometrical consideration, each unit cell 122 has at least four times the advantage over a diode shared as a drain as in prior art structures (FIG. 2 and the accompanying text), or twice the advantage over a striped diode using both edges as described with reference to the embodiment of the invention of FIGS. 4 and 5 and the accompanying text.

If higher current uniformity is desired, FIGS. 8 and 9 illustrate a third embodiment of the partitioned protection circuit of the invention. In FIG. 8, unit cells 133 of p-type doped material are formed in n-well 120. N-type regions 160 are formed adjacent each edge of p-type unit cells 133. In this manner, each unit cell 133 becomes an island in n-well 120 surrounded by n-type region 160. This surrounding of unit cell 133 with n-type region 160 is referred to herein as a "waffle" design.

FIG. 9 shows the current paths 165 from an edge of one unit cell 133 of FIG. 8. The current spreading improves the diode resistance over prior art diode structures. Resistance can be estimated and compared based on the length of the current path.
Current path 165 has a trapezoidal shape, and the effective width of the path can be estimated as the average of the widths of the current source and sink. In FIG. 9, the current source has width "3S" and the sink width "5S." The diode resistance is reduced by the current spreading over a distance "4S." Therefore, the resistive improvement with respect to a linear diode stripe implementation is about "1S/3S", or 33%.

A comparison between a prior art coupled bimodal driver and an embodiment of a decoupled bimodal driver with the improved unit cell diode design of the invention has been made. The de-coupling and improved unit cell diode reduce the capacitance of the bimodal driver by an estimated 34% and reduce the area by an estimated 27% for the waffle diode configuration of the invention compared to the integrated diode of the prior art.

Comparing the island diode presented in FIGS. 6 and 7 to the waffle diode presented in FIGS. 8 and 9, one estimate is that the waffle diode occupies approximately half the area of the striped diode with other factors remaining the same. The capacitance savings is calculated at about 34%.

The prior art has reported enhanced conduction at the corners of a unit cell of, for example, an anode area stripe such as described with reference to FIG. 2 and the accompanying text. In S. H. Voldman, V. P. Gross, M. J. Hargrove, J. M. Never, J. A. Slinkman, M. P. O'Boyle, T. S. Scott, J. D. Deleckl, "Shallow Trench Isolation Double Diode Electrostatic Discharge Circuit and Interaction With DRAM Output Circuits," Proc. EOS/ESD Symp., 1992, at page 277, and S. H. Voldman, "ESD Protection In A Mixed Voltage Interface and Multirail Disconnected Power Grid Environment in 0.5 μm and 0.25 μm Channel Length CMOS Technology," Proc. EOS/ESD Symp., 1994, at page 253, the authors report that up to 56% higher currents were observed at the ends or corners compared to a length edge of a diode. In those cases, the enhanced conduction at the corner led to diode destruction, because uneven current sharing between the length edge and the corners led to higher temperature at the corners. This enhanced conduction was explained as a three-dimensional implant effect where the junction at the corner becomes cylindrical, as opposed to planar over a straight edge, for a trench isolated technology. (For a Local Oxidation of Silicon (LOCOS) technology, the junction shapes are spherical at the corner and cylindrical at the straight edge.)

The solution to the problem proposed by the prior art was to eliminate the unit cell at or near the corners, thus reducing the current conduction at the corners. This could be done, for example, by removing the contacts near the ends of, for example, an anode area stripe, as shown in FIG. 10(a). FIG. 10(a) shows anode stripe 190 having unit cells 190-2, 190-3, and 190-4. In areas 190-1 and 190-5, contacts are not placed, viable areas for unit cells are not utilized, and a capacitance penalty is paid.

In contrast to the prior art teachings, particularly the teachings of Voldman, et al. noted above, the invention contemplates that the diode consists entirely of corners, with very short straight segments. This is shown in FIG. 10(b) in the contrasting structure of anode area 195 in accordance with an embodiment of the invention. In FIG. 10(b), each unit cell 195-1 through 195-5 is a diode made up of a multitude (4) of only corners. Therefore, uneven current distribution will not occur. The overall diode performance will be biased toward the enhanced conduction mode, and the "problem" recognized by the prior art of enhanced conduction is turned into a beneficial gain.
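The edge-counting and current-spreading arithmetic above can be restated in a few lines; this sketch is an editorial illustration with hypothetical names, and the numbers simply repeat those in the text.

```python
# How much of each anode unit cell's periphery conducts, by layout.
CONDUCTING_EDGES = {
    "shared drain/anode (prior art)": 1,  # one edge serves as the diode
    "striped": 2,                         # two edges face the cathode stripes
    "island": 4,                          # all four edges conduct under modulation
    "waffle": 4,                          # every edge faces an n-type region
}
for design, edges in CONDUCTING_EDGES.items():
    print(f"{design}: {edges}/4 of the periphery available")

# Trapezoidal current spreading: effective width is the average of the
# source and sink widths (3S and 5S in FIG. 9).
S = 1.0
w_source, w_sink = 3 * S, 5 * S
w_eff = (w_source + w_sink) / 2                  # 4S
gain = (w_eff - w_source) / w_source             # extra width over a stripe
print(f"width gain over a linear stripe: {gain:.0%}")  # 33%, i.e., 1S/3S
```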
The p-type to n-well diode is a common ESD protection device employed in many input and input/output pads, including CMOS, mixed voltage, etc. By partitioning the diode and the I/O circuit, and also enhancing the current capability of the diode itself, the area of the semiconductor substrate is significantly reduced and the capacitive load on an I/O pad and on a bus is significantly reduced. The reduction in the capacitive load enhances speed and, to a smaller degree, saves system power. The enhanced current capability of the island and waffle unit cell diodes also reduces the resistance, which helps to protect the I/O circuit during an ESD occurrence. Similar area, capacitance, and resistance improvements can be achieved by applying similar principles to other performance/protection circuits, including, in this case, the VSS to pad D1 diode. With regard to the D1 diode, for example, the partitioning and unit cell designs apply equally as well. In the case of implementing the D1 diode in a p-type epitaxial substrate, the D1 diode need not be in a well, but can be made simply by placing an n-type tap or region in the substrate and forming a contact to the tap. It is also to be appreciated that, although logic families conventionally use p-type epitaxial substrates, if another type of substrate, e.g., an n-type substrate, is used, the construction of the D1 and D2 diodes can be suitably adjusted.

Much of the above discussion has focused on optimizing the partitioned diode portion of an I/O circuit. In much the same way, the performance portion of the I/O circuit can similarly be enhanced. As shown in FIG. 2, a prior art PMOS driver 20 utilizes one edge of drain 70 of a MOSFET for transistor action and the other for creating an ESD diode. Each contact 75 to drain region 70 defines a unit cell having a width W and a capacitance C.

FIG. 11 shows a top view of a portion of the partitioned performance circuit in accordance with one embodiment of the invention. FIG. 11 shows a PMOS device 250 in an n-well 220. PMOS device 250 includes polysilicon gate 260 between source region 235 and drain region 225. Adjacent drain region 225, opposite the edge adjacent source region 235, is second source region 260. Similarly, polysilicon gate 270 overlies a semiconductor substrate having p-type doped source region 235 and drain region 245. Adjacent the edge of drain region 245 opposite source region 235 is a second source region 255. Drain regions 225 and 245 are each divided into a plurality of unit cells, each unit cell having a contact 230 to drain region 225 and 245, respectively.

In the structure shown in FIG. 11, the absence of an integrated protection circuit allows the transistor devices to be scaled independent of the protection circuits, e.g., independent of the ESD protection diode. The absence of an integrated protection circuit such as a diode also allows both sides of drains 225 and 245 of the respective PMOS transistor devices to be exploited. Thus, for each contact 230 to a unit cell of drain region 225 and 245, respectively, there is twice the width (2W) for a given capacitance C. Thus, a doubling of the width to capacitance ratio is obtained over the prior art structure shown in FIG. 2.
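As an editorial sketch (assumptions: each drain contact contributes a fixed capacitance C, and drive width scales with the number of drain edges facing a gate), the width-to-capacitance comparison runs as follows; the waffle cell described next reaches four edges per contact.

```python
# Width-to-capacitance ratio as a function of gate edges per drain contact:
# the prior-art shared-drain ladder uses one edge (W/C), the partitioned
# ladder of FIG. 11 uses two (2W/C), and the waffle cell of FIG. 12 uses
# four (4W/C).
def width_to_cap_ratio(edges_per_contact: int, w: float = 1.0,
                       c: float = 1.0) -> float:
    return edges_per_contact * w / c

for layout, edges in [("shared-drain ladder (prior art)", 1),
                      ("partitioned ladder (FIG. 11)", 2),
                      ("waffle (FIG. 12)", 4)]:
    print(f"{layout}: {width_to_cap_ratio(edges):.0f} W/C")
```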
FIG. 12 shows a layout of a second embodiment of the performance portion of the partitioned integrated circuit of the invention. In FIG. 12, individual unit cells 280 include a p-type doped region 285 and contact 290 in n-well 220. Here again, a unit cell is that minimum amount of p-type doped area that will support a contact. Overlying and surrounding the periphery of unit cell 280 is polysilicon gate 295. In this case, p-type doped regions 285 of unit cells 280 serve as drain regions for the PMOS FET device. Surrounding drain region 285 of unit cells 280 is p-type source region 310. Summarizing the unit cell 280 structure as a waffle structure, one drain contact 290 serves four sides. Accordingly, the width to capacitance ratio is 4W/C, a gain of four times the width to capacitance ratio over prior art structures such as described in FIG. 2 and the accompanying text.

The waffle transistors described above can be analytically or empirically modeled similar to prior art "ladder" transistors such as shown in FIG. 2 and the accompanying text. For a right angle edged waffle MOSFET, the width is four times the inner width (assuming small gate lengths). If the corners are not sharp and a right triangle is placed at each corner, the effective width of the corners is diminished, such that the effective width is estimated by the known relationship: [mathematical formula - see original document], where Ws is the length of the triangle's side. Straight gate edges should be added to this number.

Another advantage of the waffle design of transistors is that asymmetries arising in I/O circuits due to chip layout are avoided. This occurs generally at the corners of a chip, where ladder type devices of the prior art that were laid out in one direction changed direction at the corner, for example, going from vertical to horizontal. FIG. 13 shows the example of a ladder type transistor layout in the corner, showing horizontal ladder I/O device 310 and vertical ladder I/O device 320. The changing of direction can lead to small asymmetries in the device that in turn lead to skews in timing. FIG. 14 shows the waffle transistor design of the invention wherein symmetric I/O devices are used in I/O circuits, eliminating asymmetries in corners of the chip and therefore benefiting the timing margin.

By improving the drain width to capacitance ratio in accordance with the embodiments described above, the capacitance on, for example, an I/O pad is reduced for the same current drive capability. This reduction in capacitance leads to faster transition times and enhances bus performance (e.g., bus speed). Further, the four-fold symmetry of the waffle FET design, in particular, reduces effects due to orientation, leading to less skew in the timing of the circuit performance.

In the preceding detailed description, the invention is described with reference to specific embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. |
The present disclosure includes methods for operating a memory system, and memory systems. One such method includes updating transaction log information in a transaction log using write look ahead information; and updating a logical address (LA) table using the transaction log. |
What is Claimed is: 1. A method for power interrupt management in memory, comprising: updating transaction log information in a transaction log using write look ahead information; and updating a logical address (LA) table using the transaction log information. 2. The method of claim 1, wherein the write look ahead information includes information about the location where data would have next been written in a memory system. 3. The method of claim 1, wherein the write look ahead information includes information about the location where data had most recently been written in a memory system. 4. The method of claim 1, further comprising creating write look ahead information using information from a wear leveling algorithm about the location where data had most recently been written and would have next been written. 5. The method of claim 1, further comprising creating write look ahead information using information from a garbage collection algorithm about the location where data had most recently been written and would have next been written. 6. The method of claim 1, further including periodically storing the LA table in non-volatile memory by copying a LA table in volatile memory. 7. The method of claim 6, further including recording the transaction log information in the transaction log, wherein the transaction log information includes information about writes that occurred in a memory system including the non-volatile memory after the last time the LA table in volatile memory was copied in non-volatile memory. 8. The method of any one of claims 1-7, wherein updating the transaction log comprises recreating a page of transaction log information. 9. The method of claim 8, wherein the method includes using the write look ahead information to find a location in memory and verifying a valid write occurred at the location by locating a revision number associated with the data at the location. 10. A method for operating a memory system, comprising: creating write look ahead information; updating a transaction log using the write look ahead information; and updating a logical address (LA) table in non-volatile memory using the updated transaction log. 11. The method of claim 10, further comprising, prior to a power interrupt, storing the LA table in non-volatile memory on a periodic basis, the stored LA table being a copy of a LA table in volatile memory. 12. The method of claim 11, wherein the LA table in volatile memory is updated after each write operation in the memory system. 13. The method of claim 11, wherein, prior to updating the LA table, the LA table in non-volatile memory does not include information about write operations that occurred between a last time the LA table was stored in non-volatile memory and a power interruption. 14. The method of claim 10, further comprising copying the updated LA table to volatile memory upon power up of a memory system after a power interruption. 15. The method of any one of claims 10-14, wherein creating write look ahead information comprises using a wear leveling algorithm to create the write look ahead information. 16. The method of claim 15, wherein the write look ahead information includes information about the location where data would have been written next in the memory system. 17. The method of claim 16, wherein the method includes verifying a valid write occurred at the location where data would have been written next in the solid state drive by identifying a revision number associated with the data at the location. 18.
A method for power interrupt management in a memory system, comprising: finding information about a write operation that was not in a transaction log or a logical address (LA) table in non-volatile memory using write look ahead information; verifying the write operation information is valid by identifying a revision number associated with the write operation information; updating a transaction log using the found write operation information; updating the LA table in non-volatile memory using the updated transaction log; and storing the LA table in volatile memory after a power interrupt. 19. The method of claim 18, wherein finding information about the write operation includes determining the location where data would have been written next in the memory system from the write look ahead information. 20. The method of claim 19, wherein determining the location where the data would have been written next comprises using a wear leveling algorithm. 21. The method of claim 18, wherein updating the transaction log comprises updating a last page of the transaction log that became corrupt after the power interrupt. 22. The method of any one of claims 18-21, wherein the method includes storing a copy of the LA table in volatile memory in the non-volatile memory on a periodic basis. 23. The method of any one of claims 18-21, wherein the method includes recording information about read and write operations in the transaction log for operations that occur after storing a copy of the LA table in the non-volatile memory. 24. The method of any one of claims 18-21, wherein updating the LA table in non-volatile memory using the updated transaction log comprises adding information about a write that occurred between a last time the LA table was copied into the non-volatile memory and the power interrupt. 25. A memory system, comprising: solid state non-volatile memory configured to store a logical address (LA) table and a transaction log; and a controller configured to: update the transaction log information in the transaction log using write look ahead information; and update the LA table using the transaction log information. 26. The memory system of claim 25, wherein the transaction log is configured to record information about writes occurring in the memory system after storing the LA table in non-volatile memory. 27. The memory system of claim 25, wherein the controller is configured to use the write look ahead information to recreate a last page of transaction log information in the transaction log. 28. The memory system of claim 25, wherein the controller is configured to use a wear leveling algorithm to create the write look ahead information. 29. The memory system of any one of claims 25-28, wherein a capacitor is coupled to the non-volatile memory to provide power temporarily to the memory system after a power interruption. 30. A memory system, comprising: a solid state non-volatile memory, wherein the non-volatile memory is configured to store a logical address (LA) table and a transaction log; and a controller configured to: recreate transaction log information using write look ahead information; and update the LA table using the transaction log information to rebuild the LA table with information missing from the LA table after a power interruption. 31. The memory system of claim 30, wherein the LA table is in non-volatile memory and wherein the controller is further configured to store a copy of the updated LA table in volatile memory. 32.
The memory system of claim 31, wherein the controller is configured to store a copy of the updated LA table in volatile memory in the non-volatile memory at least once every 300 seconds. 33. The memory system of any one of claims 30-32, wherein the controller is configured to create the write look ahead information using a wear leveling algorithm. 34. The memory system of claim 33, wherein the controller is configured to determine where data would have been written next in the memory system using the wear leveling algorithm. 35. The memory system of any one of claims 30-32, wherein the LA table is a logical block address (LBA) table. |
POWER INTERRUPT MANAGEMENT

Technical Field

[0001] The present disclosure relates generally to semiconductor memory devices, methods, and systems, and more particularly, to power interrupt management.

Background

[0002] Memory devices are typically provided as internal, semiconductor, integrated circuits in computers or other electronic devices. There are many different types of memory including volatile and non-volatile memory. Volatile memory can require power to maintain its data and includes random-access memory (RAM), dynamic random access memory (DRAM), and synchronous dynamic random access memory (SDRAM), among others. Non-volatile memory can provide persistent data by retaining stored information when not powered and can include NAND flash memory, NOR flash memory, read only memory (ROM), Electrically Erasable Programmable ROM (EEPROM), Erasable Programmable ROM (EPROM), and phase change random access memory (PCRAM), among others.

[0003] Memory devices can be combined together to form a solid state drive (SSD). A solid state drive can include non-volatile memory, e.g., NAND flash memory and NOR flash memory, and/or can include volatile memory, e.g., DRAM and SRAM, among various other types of non-volatile and volatile memory. Flash memory devices, including floating gate flash devices and charge trap flash (CTF) devices using semiconductor-oxide-nitride-oxide-semiconductor and metal-oxide-nitride-oxide-semiconductor capacitor structures that store information in charge traps in the nitride layer, may be utilized as non-volatile memory for a wide range of electronic applications. Flash memory devices typically use a one-transistor memory cell that allows for high memory densities, high reliability, and low power consumption.

[0004] An SSD can be used to replace hard disk drives as the main storage device for a computer, as the solid state drive can have advantages over hard drives in terms of performance, size, weight, ruggedness, operating temperature range, and power consumption. For example, SSDs can have superior performance when compared to magnetic disk drives due to their lack of moving parts, which may avoid seek time, latency, and other electromechanical delays associated with magnetic disk drives. SSD manufacturers can use non-volatile flash memory to create flash SSDs that may not use an internal battery supply, thus allowing the drive to be more versatile and compact.

[0005] An SSD can include a number of memory devices, e.g., a number of memory chips (as used herein, "a number of" something can refer to one or more of such things, e.g., a number of memory devices can refer to one or more memory devices). As one of ordinary skill in the art will appreciate, a memory chip can include a number of dies and/or logical units (LUNs). Each die can include a number of memory arrays and peripheral circuitry thereon. The memory arrays can include a number of memory cells organized into a number of physical pages, and the physical pages can be organized into a number of blocks.

[0006] Solid state drives can include a logical address (LA) table, such as a logical block address (LBA) table. An LBA table can be used to record the information that links the logical address of data to the physical location of the data in the memory arrays of a solid state drive. The LBA table can be stored in volatile memory in the solid state drive and a copy of the LBA table can also be stored in non-volatile memory in the solid state drive. The LBA table can be used to locate the physical location of data in the solid state drive to read the data when a read request is initiated in the solid state drive. A read request for data at a specific logical address can be initiated by a host. The logical address can be found in the LBA table and a corresponding physical address can then be indicated. The solid state drive can read the data from the indicated physical address to complete the read request for the solid state drive.

[0007] A solid state drive that does not have an LBA table with the current, e.g., most recent, information about relationships between the logical address and the physical address for the data in the solid state drive can make some data in the solid state drive inaccessible. Therefore, an LBA table that is current is desirable for complete access to all of the data in the solid state drive. An LBA table in the solid state drive can be lost or incomplete after a power interrupt due to the LBA table being stored in volatile memory and/or the LBA table being periodically stored in non-volatile memory. Therefore, a power interrupt can cause a solid state drive to have an LBA table that does not have information about data that was written to the solid state drive in a time just prior to the power interrupt.
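To make the checkpoint-plus-log relationship of paragraphs [0006] and [0007] concrete, the following minimal sketch models it; this is an editorial illustration, not the disclosed implementation, and the dictionaries, function names, and in-memory stand-ins for non-volatile storage are all hypothetical.

```python
# Sketch: a volatile LBA table updated on every write, a periodic checkpoint
# of it in non-volatile memory, and a transaction log of writes made since
# the last checkpoint. After power loss, the current table can be rebuilt
# from the checkpoint plus the logged writes.
volatile_lba_table = {}      # logical block address -> physical address
nonvolatile_lba_table = {}   # periodic copy (checkpoint) of the table above
transaction_log = []         # (lba, physical_address) records since checkpoint

def write(lba: int, physical_address: int) -> None:
    volatile_lba_table[lba] = physical_address
    transaction_log.append((lba, physical_address))

def checkpoint() -> None:
    nonvolatile_lba_table.update(volatile_lba_table)
    transaction_log.clear()

def recover_after_power_loss() -> dict:
    table = dict(nonvolatile_lba_table)
    for lba, phys in transaction_log:   # replay writes missing from checkpoint
        table[lba] = phys
    return table

write(7, 1024); checkpoint(); write(7, 2048)   # one write after the checkpoint
assert recover_after_power_loss()[7] == 2048   # the stale copy alone says 1024
```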
Brief Description of the Drawings

[0008] Figure 1 is a functional block diagram of a computing system including at least one memory system, in accordance with one or more embodiments of the present disclosure.

[0009] Figure 2 is a functional block diagram of a memory system in accordance with one or more embodiments of the present disclosure.

[0010] Figure 3 illustrates a block diagram of a transaction log, block table, and logical block address (LBA) table in non-volatile memory in accordance with one or more embodiments of the present disclosure.

[0011] Figure 4 is a functional block diagram of a reclamation unit in accordance with one or more embodiments of the present disclosure.

[0012] Figure 5 is a table that illustrates a transaction log in accordance with one or more embodiments of the present disclosure.

[0013] Figure 6 is a table that illustrates a block table in accordance with one or more embodiments of the present disclosure.

[0014] Figure 7 is a table that illustrates a logical block address (LBA) table in accordance with one or more embodiments of the present disclosure.

Detailed Description

[0015] The present disclosure includes methods and devices for power interrupt management in memory. One method embodiment includes updating transaction log information in a transaction log using write look ahead information; and updating a logical address (LA) table using the transaction log.

[0016] In the following detailed description of the present disclosure, reference is made to the accompanying drawings that form a part hereof, and in which is shown by way of illustration how one or more embodiments of the disclosure may be practiced. These embodiments are described in sufficient detail to enable those of ordinary skill in the art to practice embodiments of this disclosure, and it is to be understood that other embodiments may be utilized and that process, electrical, and/or structural changes may be made without departing from the scope of the present disclosure.
[0017] As used herein, the designators "N", "M", and "R", particularly with respect to reference numerals in the drawings, indicate that a number of the particular feature so designated can be included with one or more embodiments of the present disclosure.

[0018] The figures herein follow a numbering convention in which the first digit or digits correspond to the drawing figure number and the remaining digits identify an element or component in the drawing. Similar elements or components between different figures may be identified by the use of similar digits. For example, 108 may reference element "08" in Fig. 1, and a similar element may be referenced as 208 in Fig. 2. As will be appreciated, elements shown in the various embodiments herein can be added, exchanged, and/or eliminated so as to provide a number of additional embodiments of the present disclosure. In addition, as will be appreciated, the proportion and the relative scale of the elements provided in the figures are intended to illustrate certain embodiments of the present invention, and should not be taken in a limiting sense.

[0019] Figure 1 is a functional block diagram of a computing system 100 including at least one memory system 104, in accordance with one or more embodiments of the present disclosure. In the embodiment illustrated in Figure 1, the memory system 104, e.g., a solid state drive (SSD), can include a physical host interface 106, a controller, e.g., memory system control circuitry 108, and one or more solid state memory devices 110-1, . . ., 110-N. The solid state memory devices 110-1, . . ., 110-N can provide a storage volume for the memory system, e.g., with a file system formatted to the memory devices. In one or more embodiments, the memory system control circuitry 108 can be an application-specific integrated circuit (ASIC) coupled to a printed circuit board including the physical interface 106 and solid state memory devices 110-1, . . ., 110-N.

[0020] As illustrated in Figure 1, the memory system control circuitry 108 can be coupled to the physical host interface 106 and to the solid state memory devices 110-1, . . ., 110-N. The physical host interface 106 can be used to communicate information between the memory system 104 and another device such as a host system 102. Host system 102 can include a memory access device, e.g., a processor. One of ordinary skill in the art will appreciate that "a processor" can intend one or more processors, such as a parallel processing system, a number of coprocessors, etc. Examples of host systems include laptop computers, personal computers, digital cameras, digital recording and playback devices, mobile telephones, PDAs, memory card readers, interface hubs, and the like. For one or more embodiments, the physical host interface 106 can be in the form of a standardized interface.
For example, when the memory system 104 is used for data storage in a computing system 100, the physical host interface 106 can be a serial advanced technology attachment (SATA), peripheral component interconnect express (PCIe), or a universal serial bus (USB), among other connectors and interfaces. In general, however, physical host interface 106 can provide an interface for passing control, address, data, and other signals between the memory system 104 and a host system 102 having compatible receptors for the physical host interface 106. [0021] The memory system control circuitry 108 can communicate with the solid state memory devices 110-1, . . ., 110-N to read, write, and erase data, among other operations. Memory system control circuitry 108 can include circuitry that may be one or more integrated circuits and/or discrete components. For one or more embodiments, the circuitry in memory system control circuitry 108 may include control circuitry for controlling access across the solid state memory devices 110-1, . . ., 110-N and circuitry for providing a translation layer between a host system 102 and the memory system 104. Thus, a memory controller could selectively couple an I/O connection (not shown in Figure 1) of a solid state memory device 110-1, . . ., 110-N to receive the appropriate signal at the appropriate I/O connection at the appropriate time. Similarly, the communication protocol between a host system 102 and the memory system 104 may be different than what is required for access of a solid state memory device 110-1, . . ., 110-N. Memory system control circuitry 108 could then translate the commands received from a host into the appropriate commands to achieve the desired access to a solid state memory device 110-1, . . ., 110-N. [0022] A solid state memory device 110-1, . . ., 110-N can include one or more arrays of memory cells, e.g., non-volatile memory cells. The arrays can be flash arrays with a NAND architecture, for example. In a NAND architecture, the control gates of memory cells of a "row" can be coupled with an access, e.g., word, line, while the memory cells can be coupled in series source to drain in a "string" between a select gate source transistor and a select gate drain transistor. The string can be connected to a data, e.g., bit, line by the select gate drain transistor. The use of the terms "row" and "string" implies neither a linear nor an orthogonal arrangement of memory cells. As will be appreciated by those of ordinary skill in the art, the manner of connection of the memory cells to the bit lines and source lines depends on whether the array is a NAND architecture, a NOR architecture, or some other memory array architecture. [0023] The solid state memory devices 110-1, . . ., 110-N can include a number of memory cells that can be grouped. As used herein, a group can include one or more memory cells, such as a page, block, plane, die, an entire array, or other groups of memory cells. For example, some memory arrays can include a number of pages of memory cells that make up a block of memory cells. A number of blocks can be included in a plane of memory cells. A number of planes of memory cells can be included on a die. As an example, a 128 GB memory device can include 4314 bytes of data per page, 128 pages per block, 2048 blocks per plane, and 16 planes per device.
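A minimal sketch of how a flat physical page index could decompose under the example geometry above; the linear page-block-plane ordering and all names are assumptions for illustration, not a layout taken from this disclosure.

```c
#include <stdint.h>

/* Geometry from the example above; the linear ordering is assumed. */
#define PAGES_PER_BLOCK   128u
#define BLOCKS_PER_PLANE  2048u

typedef struct { uint32_t plane, block, page; } nand_addr_t;

/* Decompose a flat physical page index into plane/block/page. */
static nand_addr_t decode_page_index(uint32_t idx)
{
    nand_addr_t a;
    a.page  = idx % PAGES_PER_BLOCK;
    idx    /= PAGES_PER_BLOCK;
    a.block = idx % BLOCKS_PER_PLANE;
    a.plane = idx / BLOCKS_PER_PLANE;  /* 0..15 for 16 planes per device */
    return a;
}
```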
[0024] In a memory device, a physical page can refer to a unit of writing and/or reading, e.g., a number of cells that are written and/or read together or as a functional group of memory cells. Accordingly, an even page and an odd page can be written and/or read with separate writing and/or reading operations. For embodiments including multilevel cells (MLC), a physical page can be logically divided into an upper page and a lower page of data. For example, one memory cell can contribute one or more bits to an upper page of data and one or more bits to a lower page of data. Accordingly, an upper page and a lower page of data can be written and/or read as part of one writing and/or reading operation, as the logical upper page and logical lower page are both part of the same physical page. [0025] The memory system 104 can implement wear leveling to control the wear rate on the solid state memory devices 110-1, . . ., 110-N. A solid state memory array can experience failure after a number of program and/or erase cycles. Wear leveling can reduce the number of program and/or erase cycles performed on a particular group. Wear leveling can include dynamic wear leveling to minimize the amount of valid blocks moved to reclaim a block. Dynamic wear leveling can include a technique called garbage collection in which blocks with more than a threshold amount of invalid pages are reclaimed by erasing the block. An invalid page, for example, can be a page of data that has been updated and stored in a different page. Static wear leveling can include writing static data to blocks that have high erase counts to prolong the life of the block. [0026] The embodiment of Figure 1 can include additional circuitry that is not illustrated so as not to obscure embodiments of the present disclosure. For example, the memory system 104 can include address circuitry to latch address signals provided over I/O connections through I/O circuitry. Address signals can be received and decoded by a row decoder and a column decoder to access the solid state memory devices 110-1, . . ., 110-N. It will be appreciated by those skilled in the art that the number of address input connections can depend on the density and architecture of the solid state memory devices 110-1, . . ., 110-N. [0027] Figure 2 is a functional block diagram of a memory system 204 in accordance with one or more embodiments of the present disclosure. The memory system 204 can include memory system control circuitry 208. The memory system control circuitry 208 can be coupled to one or more solid state memory devices, e.g., non-volatile memory 210 and/or volatile memory 212. Memory system 204 and memory system control circuitry 208 can be analogous to memory system 104 and memory system control circuitry 108, respectively, illustrated in Figure 1. [0028] The memory system control circuitry 208 can include host interface circuitry 214, host-memory translation circuitry 216, memory management circuitry 218, a switch 220, non-volatile memory control circuitry 222, and/or volatile memory control circuitry 224. As described herein, the memory system control circuitry 208 can be provided in the form of an ASIC, however, embodiments are not so limited. [0029] The host interface circuitry 214 can be coupled to host-memory translation circuitry 216. The host interface circuitry 214 can be coupled to and/or incorporated with a physical interface to a host system, such as physical interface 106 illustrated in Figure 1.
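A minimal sketch of the garbage-collection trigger described in the wear-leveling discussion above, in which a block becomes a reclamation candidate once its invalid pages cross a threshold; the struct and names are illustrative assumptions, not part of this disclosure.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative per-block metadata; field names are assumptions. */
typedef struct {
    uint32_t invalid_pages;  /* pages superseded by newer writes */
    uint32_t erase_count;    /* program/erase cycles completed   */
} block_info_t;

/* Dynamic wear leveling: mark the block for reclamation (garbage
 * collection) when its invalid-page count exceeds the threshold. */
static bool is_reclamation_candidate(const block_info_t *b,
                                     uint32_t invalid_threshold)
{
    return b->invalid_pages > invalid_threshold;
}
```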
[0030] In general, the host interface circuitry 214 is responsible for converting command packets received from the host system, e.g., from a PCIe bus, into command instructions for the host-memory translation circuitry 216 and for converting memory responses into host system commands for transmission to the requesting host. For example, the host interface circuitry 214 can construct SATA command packets from PCIe based transaction layer packets. [0031] The host-memory translation circuitry 216 can be coupled to the host interface circuitry 214, to the memory management circuitry 218, and/or to the switch 220. The host-memory translation circuitry 216 can be configured to translate logical (e.g., host) addresses (e.g., associated with a received command) to physical memory addresses. For example, the host-memory translation circuitry 216 can convert host sector read and write commands to commands directed to specific portions of the non-volatile memory 210. Each host operation can be translated into a single or multi-sector non-volatile memory 210 operation. [0032] The memory management circuitry 218 can be coupled to the host-memory translation circuitry 216 and/or to the switch 220. The memory management circuitry 218 can control a number of processes including but not limited to initialization, wear leveling (e.g., garbage collection and/or block reclamation), and error correction, e.g., via operation of processor 228. Memory management circuitry 218 can access a group table, e.g., block table 236, to determine candidates for wear leveling. The memory management circuitry 218 can update an LBA table, e.g., LBA table 234, with a new physical address corresponding to a logical address when data associated with the logical address is written to the new physical address (e.g., as part of wear leveling or an update to the data). [0033] The memory management circuitry 218 can, e.g., as part of a static wear leveling operation, search for blocks that have a high erase count in block table 236. The memory management circuitry can compare the erase count of a particular block with a threshold count. For example, the erase count of the block with the lowest erase count can be subtracted from the erase count of the particular block. If the difference is greater than the threshold count, then the particular block can be indicated as a candidate for block reclamation. [0034] The memory management circuitry 218 can, e.g., as part of a dynamic wear leveling operation, search for blocks that have a garbage collection threshold amount of invalid, e.g., unused, portions, e.g., pages, therein. The memory management circuitry 218 can include reclamation circuitry 230. Reclamation is a process that can be invoked by memory management circuitry 218 as a result of garbage collection. Reclamation can involve moving all valid data from locations in a block to be erased to locations in another block before the block is erased. [0035] The switch 220 can be coupled to the host-memory translation circuitry 216, the memory management circuitry 218, the non-volatile memory control circuitry 222, and/or the volatile memory control circuitry 224. The switch 220 can be a crossbar switch and can include and/or be coupled to one or more buffers, e.g., static random access memory (SRAM) buffers. The switch 220 can provide an interface between various components of the memory system control circuitry 208.
The switch 220 can account for variations in defined signaling protocols that may be associated with different components of the memory system control circuitry 208 in order to provide consistent access and implementation between components. In one or more embodiments, the switch 220 can be a direct memory access (DMA) module. [0036] The controller, e.g., non-volatile memory control circuitry 222, can be coupled to the switch 220 and to one or more non-volatile memory devices 210. Among other information, the one or more non-volatile memory devices 210 can store a transaction log 238, a copy of a logical address (LA) table, such as logical block address (LBA) table 234-C, and/or a group table, such as block table 236-C, as described herein. In some embodiments, the memory system control circuitry 208 can include one non-volatile memory controller for all memory channels. In other embodiments, each memory channel is coupled to a discrete non-volatile memory controller. [0037] The volatile memory control circuitry 224 can be coupled to the switch 220 and to one or more volatile memory devices 212. Among other information, the one or more volatile memory devices can store an LBA table 234 and/or a block table 236. The LBA table 234 can store the physical address of pages in the one or more non-volatile memory devices 210 and include corresponding logical addresses. The LBA table 234 can be indexed by the LBA that is contained in an associated SATA command. The LBA table 234 can be used by the host-memory translation circuitry 216, for example, to look up physical page addresses that correspond to logical block addresses. The block table 236 can store information for erasable blocks in the one or more non-volatile memory devices 210. Information stored in the block table 236 can include valid page information, erase count, and other status information. Information accessed from the block table 236 can be indexed by physical block address. [0038] Figure 3 illustrates a block diagram of a transaction log 338, block table 334, and logical block address (LBA) table 336 in non-volatile memory 310 in accordance with one or more embodiments of the present disclosure. Among other information, the non-volatile memory can store write operation information in an LBA table 336, a block table 334, and/or a transaction log 338. [0039] A copy of an LBA table stored in volatile memory can be periodically stored as LBA table 336 in the non-volatile memory 310, such as at least every 300 seconds, among other periodic intervals. For example, the LBA table 336 can be stored in the non-volatile memory 310 every 120 seconds. The LBA table in volatile memory can be updated after each write in the solid state drive. The period for updating the LBA table in the non-volatile memory devices can depend on the frequency of the writes that the memory system performs and/or the speed at which data is written, among other factors. [0040] A transaction log 338 can be stored in the non-volatile memory and used to record information about every write that occurs in the memory devices. A memory system having a number of memory devices can include a transaction log that includes information about every write that occurs in the memory devices. The transaction log can be striped across a number of memory devices in a memory system. As one of ordinary skill in the art will appreciate, striping includes splitting data so that it is stored on more than one device.
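A minimal sketch of striping a transaction log buffer across the devices, assuming one contiguous fragment per device; the device_write stub and all names are illustrative assumptions, not an interface from this disclosure.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Stub for illustration; a real controller would queue a program
 * operation to the addressed memory device. */
static void device_write(unsigned device, const uint8_t *frag, size_t len)
{
    printf("device %u: write %zu bytes\n", device, len);
    (void)frag;
}

/* Split one buffer of transaction log data into fragments, storing at
 * least one fragment on each of num_devices (num_devices >= 1); the
 * last device absorbs any remainder. */
static void stripe_log(const uint8_t *data, size_t len, unsigned num_devices)
{
    size_t frag = len / num_devices;
    for (unsigned d = 0; d < num_devices; d++) {
        size_t off = (size_t)d * frag;
        size_t n = (d == num_devices - 1) ? len - off : frag;
        device_write(d, data + off, n);
    }
}
```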
Striping can include dividing write data, such as the transaction log information, into fragments and storing at least one fragment in each of a number of memory devices. In one or more embodiments, the controller can update the transaction log with transaction log information for each write as each write is executed in the memory devices. The transaction log can contain information about all writes that occur in the memory devices during a time period. The transaction log can include information about all writes to the memory devices that occurred since the last time that the LBA table 336 was saved in the non-volatile memory 310. [0041] In one or more embodiments, information from the transaction log 338 can be used to update the copy of the LBA table 336 with information about writes that occurred in the memory device(s) after the LBA table 336 was last saved in the non-volatile memory, e.g., between the last save and a power interrupt. The copy of the LBA table 336 in non-volatile memory 310 may otherwise be missing information because the LBA copy 336 in non-volatile memory 310 only has the information that was in the LBA table in volatile memory at the time it was copied into non-volatile memory. Also, the LBA table in volatile memory is erased during a power interrupt, so the LBA copy in non-volatile memory cannot otherwise be updated with the information that would have been stored in the LBA table in volatile memory between the last time it was copied to non-volatile memory 310 and when it was erased. Therefore, the transaction log 338 in non-volatile memory 310 can be used to update the information in the LBA table in non-volatile memory. The transaction log 338 can contain information about the location of data and the time that the data was written to the memory devices. The information can be confirmed by the memory devices and then input into the LBA table to update the LBA table 336. In one or more embodiments, the last page of the transaction log can become corrupt during the power interrupt, so that the last page of information in the transaction log does not contain information about some of the most recent data that was written to the memory arrays. [0042] The reclamation unit, as shown in Figure 4, can use information from a wear leveling algorithm (e.g., a garbage collection algorithm), for example, to create write look ahead information. The write look ahead information can contain the location of the recently written data and the location of where the next data would have been written. The wear leveling algorithms move data to unused and/or less used portions of memory, thus creating newly freed blocks for writing data. The wear leveling algorithms can record the location of the newly freed blocks and have the controller write to the newly freed blocks next. The information from the wear leveling and/or garbage collection algorithms about the location of the newly freed blocks, where data has recently been written, and where data would have been written next is included in the write look ahead information. The controller can determine if and/or what data, such as a valid write, is at the locations indicated by the write look ahead information by checking for a revision number at those locations. The revision number can be found in the metadata associated with the data at a location and can indicate that a valid write has occurred at that location. The transaction log can be updated with the transaction information for the data found at those locations.
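A minimal sketch of this recovery pass, under the assumption that the locations are probed in write order and that a missing revision number marks where writing stopped; the hooks read_page_metadata, log_append, and lba_set are hypothetical and stubbed, not interfaces from this disclosure.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical firmware hooks, stubbed for illustration. */
static bool read_page_metadata(uint32_t physical, uint32_t *logical)
{
    (void)physical;          /* a real device read goes here */
    *logical = 0;
    return false;            /* stub: report no valid revision number */
}

static void log_append(uint32_t physical, uint32_t logical)
{
    printf("log: physical %u <- logical %u\n", physical, logical);
}

static void lba_set(uint32_t logical, uint32_t physical)
{
    printf("lba: logical %u -> physical %u\n", logical, physical);
}

/* Probe the locations named by the write look ahead information in
 * write order. A valid revision number in the metadata indicates a
 * completed write, so the transaction log and the LBA table are both
 * updated; the first location without one marks where writing stopped. */
static void recover_last_writes(const uint32_t *look_ahead, unsigned count)
{
    for (unsigned i = 0; i < count; i++) {
        uint32_t logical;
        if (!read_page_metadata(look_ahead[i], &logical))
            break;
        log_append(look_ahead[i], logical);
        lba_set(logical, look_ahead[i]);
    }
}
```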
The write look ahead information can be used to recreate the corrupt last page of information in the transaction log. The LBA table in non-volatile memory can then be updated with the now complete transaction log. [0043] In one or more embodiments, a capacitor can be included to give the memory devices enough power to save the last page of the transaction log in the event of a power interrupt. In such embodiments, the power from the capacitor is used to finish saving the updates to the transaction log that occurred just prior to a power interruption; therefore, the transaction log has information about the writes that occurred since the last save of the LBA table in non-volatile memory and can be used to update the LBA table. [0044] Figure 4 is a functional block diagram of a reclamation unit 430 in accordance with one or more embodiments of the present disclosure. In Figure 4, reclamation unit 430 can include a wear leveling unit 444. The reclamation unit 430 can use information from the wear leveling unit 444 to create write look ahead information 446. Write look ahead information 446 can be data that indicates the location in the memory arrays where the last write was performed and/or where the next write would have been performed. The write look ahead information can be used by the reclamation unit 430 to determine the location of the last data write before a power interrupt and update the transaction log with that information. A transaction log that is updated with the location of data before a power interrupt can be used to update the LBA table to include information about writes between the last save of the LBA table in non-volatile memory and the time of a power interrupt. [0045] Figure 5 is a table that illustrates a transaction log 538 in accordance with one or more embodiments of the present disclosure. In Figure 5, the transaction log 538 can include transaction log information that includes the physical address 552 and the logical address 554 for the data that is in the memory devices. The transaction log 538 can record the location of every write that occurs in the memory devices, and the transaction log 538 can be stored in the memory devices. The transaction log can be striped across a number of memory devices in a memory system. In one or more embodiments, a transaction log can log each transaction that occurs in the memory devices and can be a reference for the memory devices and/or controller of the transactions performed on the memory devices. The transaction log can be erased after a copy of the LBA table from volatile memory is made in non-volatile memory. The transaction log can be updated with new entries corresponding to transactions that occur after erasing the transaction log. [0046] In Figure 5, transaction log 538 can include a number of entries 556-1, 556-2, 556-3,..., 556-N that indicate each transaction that has occurred in the memory devices. The entries 556-1, 556-2, 556-3,..., and 556-N in the transaction log 538 can include the command 550 for the transaction, such as a write, a read, or an erase, the physical address 552 of the transaction, and the logical address 554 of the transaction. [0047] Figure 6 is a table that illustrates a block table 634 in accordance with one or more embodiments of the present disclosure. The block table 634 can store information about the blocks in the memory devices. The information stored in block table 634 can include data validity information 660, erase count 662, and status information 664.
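One block table entry might be represented as follows; the field widths and the bitmap encoding of per-page validity are assumptions for illustration.

```c
#include <stdint.h>

/* One block table entry mirroring the fields named above; entries are
 * indexed by physical block address. Widths are assumptions. */
typedef struct {
    uint64_t valid_pages[2]; /* one validity bit per page (128 pages) */
    uint32_t erase_count;    /* number of times the block was erased  */
    uint8_t  status;         /* e.g., erased vs. containing data      */
} block_table_entry_t;
```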
The block table 634 can include a number of entries 666-1, 666-2, 666-3,..., and 666-M. Each entry in the block table 634 can include the physical address 652, data validity information 660, erase count 662, and status information 664 for data, such as a block and/or page of data. The data validity information 660 in block table 634 can include information about the validity of each page in a block, e.g., whether the data is valid or invalid. The erase count 662 in block table 634 can indicate the number of times a block has been erased. The status information 664 in block table 634 can indicate whether a block is erased and/or contains data, among other status indicators for a block. [0048] Figure 7 is a table that illustrates a logical block address (LBA) table 736 in accordance with one or more embodiments of the present disclosure. The LBA table 736 can store the logical address 752 and physical address 750 for each data entry in the memory devices and can provide the translation from the logical address 752 to the physical address 750 for each data entry in the memory devices. The LBA table 736 can be indexed by the LBA for each write to the memory devices and can include a number of entries 770-1, 770-2, 770-3,..., and 770-R that include the logical address 752 and the physical address 750 for each data entry in the LBA table 736. The LBA can be used to look up the corresponding physical address where the data in each entry is stored. The LBA table can be stored in volatile memory of a memory system, and a copy of the LBA table in volatile memory can be made in non-volatile memory on a periodic basis. Once a copy of the LBA table is made in non-volatile memory, the LBA table in volatile memory can be erased, and the LBA table in volatile memory can be updated with new entries corresponding to transactions that occur after erasing the LBA table in volatile memory. Conclusion [0049] The present disclosure includes methods and devices for power interrupt management in memory. One method embodiment includes updating transaction log information in a transaction log using write look ahead information; and updating a logical address (LA) table using the transaction log. [0050] Although specific embodiments have been illustrated and described herein, those of ordinary skill in the art will appreciate that an arrangement calculated to achieve the same results can be substituted for the specific embodiments shown. This disclosure is intended to cover adaptations or variations of one or more embodiments of the present disclosure. It is to be understood that the above description has been made in an illustrative fashion, and not a restrictive one. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the above description. The scope of the one or more embodiments of the present disclosure includes other applications in which the above structures and methods are used. Therefore, the scope of one or more embodiments of the present disclosure should be determined with reference to the appended claims, along with the full range of equivalents to which such claims are entitled. [0051] In the foregoing Detailed Description, some features are grouped together in a single embodiment for the purpose of streamlining the disclosure.
This method of disclosure is not to be interpreted as reflecting an intention that the disclosed embodiments of the present disclosure require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment. |
In described examples, a low dynamic resistance, low capacitance diode (114) of a semiconductor device (100) includes a heavily-doped n-type substrate (102). A lightly-doped n-type layer (104) 1 micron to 5 microns thick is disposed on the n-type substrate (102). A lightly-doped p-type layer (106) 3 microns to 8 microns thick is disposed on the n-type layer (104). The low dynamic resistance, low capacitance diode (114) of the semiconductor device (100) includes a p-type buried layer (120), with a peak dopant density above 1×10^17 cm^-3, extending from the p-type layer (106) through the n-type layer (104) to the n-type substrate (102). The low dynamic resistance, low capacitance diode (114) also includes an n-type region (122) disposed in the p-type layer (106), extending to a top surface (124) of the p-type layer (106). |
1. A semiconductor device comprising:
an n-type substrate having an average dopant density greater than 1×10^18 cm^-3;
an n-type layer 1 micron to 5 microns thick disposed on the n-type substrate, the n-type layer having an average dopant density of less than 1×10^16 cm^-3;
a p-type layer 3 microns to 8 microns thick disposed on the n-type layer, the p-type layer having an average dopant density of less than 1×10^15 cm^-3; and
a low resistance, low capacitance diode, i.e., an LR-LC diode, comprising: a p-type buried layer extending from the p-type layer through the n-type layer to the n-type substrate, the p-type buried layer having a peak dopant density greater than 1×10^17 cm^-3; and an n-type region disposed in the p-type layer and extending to a top surface of the p-type layer.
2. The semiconductor device of claim 1, wherein: the n-type substrate has an average dopant density of 5×10^19 cm^-3 to 7×10^19 cm^-3; the n-type layer has a thickness of 1.5 μm to 2.5 μm; and the p-type buried layer has a peak dopant density of 5×10^18 cm^-3 to 1×10^19 cm^-3.
3. The semiconductor device of claim 2, wherein the LR-LC diode has a breakdown voltage of 6 volts to 8 volts.
4. The semiconductor device of claim 1, wherein: the n-type substrate has an average dopant density of 1×10^19 cm^-3 to 5×10^19 cm^-3; the n-type layer has a thickness of 2.5 μm to 3.0 μm; and the p-type buried layer has a peak dopant density of 5×10^17 cm^-3 to 2×10^18 cm^-3.
5. The semiconductor device of claim 4, wherein the LR-LC diode has a breakdown voltage of 20 volts to 40 volts.
6. The semiconductor device of claim 1, wherein the n-type region comprises a heavier doped inner portion having an average doping density of 1×10^17 cm^-3 to 3×10^19 cm^-3, and a more lightly doped outer portion under and surrounding the heavier doped inner portion, the more lightly doped outer portion having an average doping density of 1×10^16 cm^-3 to 1×10^17 cm^-3.
7. The semiconductor device of claim 1, comprising an isolation structure laterally surrounding the LR-LC diode, the isolation structure extending from the top surface of the p-type layer to the n-type substrate under the p-type buried layer.
8. The semiconductor device of claim 1, comprising:
a parallel diode comprising a p-type region disposed in the p-type layer, extending to the top surface of the p-type layer, and vertically separated from the n-type layer by at least one micron, the p-type region having an average dopant density of at least 1×10^17 cm^-3, the parallel diode not having the p-type buried layer;
at least one isolation structure laterally separating the LR-LC diode from the parallel diode, the isolation structure extending from the top surface of the p-type layer to the n-type substrate under the p-type buried layer;
a first terminal directly electrically coupled to the n-type region and the p-type region; and
a second terminal directly electrically coupled to the n-type substrate.
9. A method of forming a semiconductor device, the method comprising:
providing an n-type substrate having an average dopant density greater than 1×10^18 cm^-3;
forming an n-type layer 1 micron to 5 microns thick on the n-type substrate by an epitaxial process, such that the n-type layer has an average dopant density of less than 1×10^16 cm^-3;
forming a first implantation mask over the n-type layer, the first implantation mask exposing an area for a p-type buried layer in a region for an LR-LC diode;
implanting a p-type dopant into the n-type layer in the area exposed by the first implantation mask at a dose of at least 3×10^13 cm^-2;
removing the first implantation mask after the p-type dopant is implanted;
forming a p-type layer 3 microns to 8 microns thick on the n-type layer by an epitaxial process, such that the p-type layer has an average dopant density of less than 1×10^15 cm^-3;
performing a heat treatment that diffuses the implanted p-type dopant to form the p-type buried layer, the p-type buried layer extending from the p-type layer through the n-type layer to the n-type substrate; and
forming an n-type region in the p-type layer, the n-type region extending to a top surface of the p-type layer.
10. The method of claim 9, comprising including a phosphorus dopant during the epitaxial process to form the n-type layer.
11. The method of claim 9, wherein: the n-type substrate has an average dopant density of 5×10^19 cm^-3 to 7×10^19 cm^-3; the epitaxial process for forming the n-type layer provides a thickness of the n-type layer of 1.5 μm to 2.5 μm; and the p-type dopant is implanted at a dose of 6×10^14 cm^-2 to 2×10^15 cm^-2 to form the p-type buried layer.
12. The method of claim 9, wherein: the n-type substrate has an average dopant density of 1×10^19 cm^-3 to 5×10^19 cm^-3; the epitaxial process for forming the n-type layer provides a thickness of the n-type layer of 2.5 μm to 3.0 μm; and the p-type dopant is implanted at a dose of 6×10^13 cm^-2 to 3×10^14 cm^-2 to form the p-type buried layer.
13. The method of claim 9, wherein forming the n-type region in the p-type layer comprises:
forming a second implantation mask over the top surface of the p-type layer, the second implantation mask exposing an area for the n-type region;
implanting a first set of n-type dopants into the p-type layer in the area exposed by the second implantation mask at a dose of 1×10^15 cm^-2 to 1×10^16 cm^-2 and an energy providing an average depth of 50 nm to 200 nm into the p-type layer;
implanting a second set of n-type dopants comprising phosphorus into the p-type layer in the area exposed by the second implantation mask at a dose of 1×10^13 cm^-2 to 1×10^14 cm^-2 and an energy of 250 keV to 600 keV; and
removing the second implantation mask.
14. The method of claim 9, comprising forming an isolation structure around the LR-LC diode by a process comprising:
forming a deep trench around the LR-LC diode, the deep trench extending into the substrate under the p-type buried layer;
forming a thermal oxide layer on sidewalls and a bottom of the deep trench;
forming a polysilicon layer over the top surface of the p-type layer and extending into the deep trench on the thermal oxide layer; and
removing the polysilicon layer from over the top surface of the p-type layer.
15. The method of claim 9, comprising forming a parallel diode by a process comprising:
forming a third implantation mask over the top surface of the p-type layer, the third implantation mask exposing an area for a p-type region of the parallel diode, the area not having the p-type buried layer;
implanting a p-type dopant into the p-type layer in the area exposed by the third implantation mask;
removing the third implantation mask;
performing a thermal operation to activate the implanted p-type dopant to form the p-type region in the p-type layer in the area for the parallel diode, the p-type region extending to the top surface of the p-type layer
and having an average doping density of at least 1×10^17 cm^-3, such that a vertical spacing of at least one micron is present between the p-type region and the n-type layer; and
forming at least one isolation structure laterally separating the LR-LC diode from the parallel diode, the at least one isolation structure extending from the top surface of the p-type layer to the n-type substrate.
16. A semiconductor device comprising:
an n-type substrate;
an n-type layer 1 micron to 5 microns thick disposed on the n-type substrate, the n-type layer having an average dopant density of less than 1×10^16 cm^-3;
a p-type layer 3 microns to 8 microns thick disposed on the n-type layer, the p-type layer having an average dopant density of less than 1×10^15 cm^-3;
a first bidirectional diode comprising:
a first LR-LC diode comprising: a first p-type buried layer extending from the p-type layer through the n-type layer to the n-type substrate; and a first n-type region disposed in the p-type layer and extending to a top surface of the p-type layer; and
a first parallel diode comprising a first p-type region disposed in the p-type layer and extending to the top surface of the p-type layer;
a second bidirectional diode comprising:
a second LR-LC diode comprising: a second p-type buried layer extending from the p-type layer through the n-type layer to the n-type substrate; and a second n-type region disposed in the p-type layer and extending to the top surface of the p-type layer; and
a second parallel diode comprising a second p-type region disposed in the p-type layer and extending to the top surface of the p-type layer;
at least one isolation structure laterally separating the first LR-LC diode, the first parallel diode, the second LR-LC diode, and the second parallel diode from each other;
a first terminal directly electrically coupled to the first n-type region and the first p-type region; and
a second terminal directly electrically coupled to the second n-type region and the second p-type region;
wherein the first parallel diode and the second parallel diode do not have the first p-type buried layer and the second p-type buried layer.
17. The semiconductor device of claim 16, wherein: the n-type substrate has an average dopant density of more than 1×10^18 cm^-3; the first p-type buried layer has a peak dopant density of more than 1×10^17 cm^-3; a peak dopant density of the second p-type buried layer is equal to the peak dopant density of the first p-type buried layer; an average dopant density of the first p-type region is at least 1×10^17 cm^-3; and an average dopant density of the second p-type region is equal to the average dopant density of the first p-type region.
18. The semiconductor device of claim 16, wherein: the n-type substrate has an average dopant density of 5×10^19 cm^-3 to 7×10^19 cm^-3; the n-type layer has a thickness of 1.5 μm to 2.5 μm; the first p-type buried layer has a peak dopant density of 5×10^18 cm^-3 to 1×10^19 cm^-3; and the peak dopant density of the second p-type buried layer is equal to the peak dopant density of the first p-type buried layer.
19. The semiconductor device of claim 16, wherein: the n-type substrate has an average dopant density of 1×10^19 cm^-3 to 5×10^19 cm^-3; the n-type layer has a thickness of 2.5 μm to 3.0 μm; the first p-type buried layer has a peak dopant density of 5×10^17
cm^-3 to 2×10^18 cm^-3; and the peak dopant density of the second p-type buried layer is equal to the peak dopant density of the first p-type buried layer.
20. The semiconductor device of claim 16, wherein the first n-type region and the second n-type region each comprise: a heavier doped inner portion having an average doping density of 1×10^17 cm^-3 to 3×10^19 cm^-3; and a more lightly doped outer portion, at least 100 nanometers thick, below the heavier doped inner portion and surrounding the heavier doped inner portion, the more lightly doped outer portion having an average doping density of 1×10^16 cm^-3 to 1×10^17 cm^-3. |
Low dynamic resistance, low capacitance diode

Technical Field

The present invention relates generally to semiconductor devices and, more particularly, to diodes in semiconductor devices.

Background

Diodes with low dynamic resistance and low capacitance are useful in electronic circuits such as electrostatic discharge (ESD) protection circuits. The low capacitance is achieved by a lightly doped layer of a forward-biased diode in series with a reverse-biased diode. The reverse-biased diode has a heavily doped buried layer over the substrate, the heavily doped buried layer setting a breakdown voltage. Minimizing the dynamic resistance and capacitance of the diode is desirable while providing the desired breakdown voltage. In some applications, the desired breakdown voltage can be from 6 volts to 8 volts. In other applications, the desired breakdown voltage can be significantly higher, such as in the range of 20 volts to 40 volts. The dynamic resistance is limited by the conductivity of the substrate. Increasing the dopant density in the substrate to improve the dynamic resistance will disadvantageously reduce the breakdown voltage. Simultaneously achieving the desired values of dynamic resistance and breakdown voltage has therefore been problematic.

Summary

In the depicted example, the semiconductor device includes an n-type substrate having a dopant density of 1×10^18 cm^-3 or more. An n-type layer 1 micron to 5 microns thick, having a dopant density of less than 1×10^16 cm^-3, is disposed on the n-type substrate. A p-type layer 3 microns to 8 microns thick, having a dopant density of less than 1×10^15 cm^-3, is disposed on the n-type layer. A low dynamic resistance, low capacitance diode (referred to herein as an LR-LC diode) of the semiconductor device includes a p-type buried layer having a peak dopant density of 1×10^17 cm^-3 or more, the p-type buried layer extending from the p-type layer through the n-type layer to the n-type substrate. The LR-LC diode also includes an n-type region disposed in the p-type layer that extends to the top surface of the p-type layer.

In some examples, the semiconductor device includes a first bidirectional diode and a second bidirectional diode in a back-to-back configuration. Each bidirectional diode contains an LR-LC diode and a parallel diode separated by an isolation structure.

Brief Description of the Drawings

FIG. 1 is a cross section of an example semiconductor device having a bidirectional diode including an LR-LC diode.

FIGS. 2A-2E are cross sections of the semiconductor device of FIG. 1 depicted in successive stages of an exemplary forming method.

FIG. 3 is a cross section of another example semiconductor device having a pair of bidirectional diodes including LR-LC diodes.

Detailed Description

The drawings are not necessarily to scale. Some of the illustrated acts may occur in different orders and/or concurrently with other acts or events. In addition, not all illustrated acts or events are required to implement the method.

The semiconductor device includes an LR-LC diode. The semiconductor device has an n-type substrate. A lightly doped n-type layer 1 micron to 5 microns thick is disposed on the n-type substrate. A lightly doped p-type layer 3 microns to 8 microns thick is disposed over the n-type layer. The LR-LC diode includes a localized p-type buried layer that extends from the p-type layer through the n-type layer to the n-type substrate.
The LR-LC diode further includes an n-type region (possibly an n-type well) disposed in the p-type layer, the n-type region extending to a top surface of the p-type layer; the n-type region is separated from the p-type buried layer by at least 2 microns. The first pn junction at the boundary between the p-type buried layer and the n-type substrate sets the breakdown voltage of the LR-LC diode. A second pn junction at the boundary between the p-type layer and the n-type region sets the capacitance of the LR-LC diode.

The LR-LC diode can be part of a bidirectional diode comprising a parallel diode having a third pn junction between the p-type layer and the n-type layer. The parallel diode does not have a p-type buried layer. The LR-LC diode is laterally isolated from the parallel diode, for example by a deep trench isolation structure; the LR-LC diode and the parallel diode share the n-type substrate. A first terminal of the bidirectional diode is coupled to the n-type region of the LR-LC diode and coupled to the p-type layer of the parallel diode by a p-type region (possibly a p-type well) in the p-type layer over the n-type layer. A second terminal of the bidirectional diode can be coupled to the n-type substrate. A pair of bidirectional diodes sharing an n-type substrate may have a back-to-back configuration, wherein a first external connection is connected to the first terminal of the first bidirectional diode of the pair, and a second external connection is connected to the first terminal of the second bidirectional diode of the pair.

FIG. 1 is a cross section of an example semiconductor device having a bidirectional diode including an LR-LC diode. The semiconductor device 100 includes an n-type substrate 102. The n-type substrate 102 has an average dopant density greater than 1×10^18 cm^-3. For example, substrate 102 can be part of a bulk silicon wafer. Semiconductor device 100 includes a lightly doped n-type layer 104 of a silicon-based semiconductor material (e.g., phosphorus-doped crystalline silicon) disposed on the substrate 102. The n-type layer 104 is 1 micron to 5 microns thick and has an average dopant density of less than 1×10^16 cm^-3. The n-type layer 104 can be an epitaxial layer formed on the substrate 102. Semiconductor device 100 includes a lightly doped p-type layer 106 of a silicon-based semiconductor material (e.g., boron-doped crystalline silicon) disposed on the n-type layer 104. The p-type layer 106 is 3 microns to 8 microns thick and has an average dopant density of less than 1×10^15 cm^-3. The p-type layer 106 may be an epitaxial layer formed on the n-type layer 104.

One or more isolation structures 108 laterally isolate a region for the LR-LC diode 114 and laterally isolate a region for the parallel diode 116. The isolation structure 108 can be a deep trench isolation structure 108 having a dielectric liner 110 and a field plate 112 of polycrystalline silicon, referred to as polysilicon, on the dielectric liner 110, as depicted in FIG. 1. Other physical forms of the isolation structure 108 are within the scope of this example. The LR-LC diode 114 and the parallel diode 116 are components of a bidirectional diode 118.

The p-type buried layer 120 is disposed in the LR-LC diode 114 and extends from the p-type layer 106 through the n-type layer 104 to the substrate 102. The n-type layer 104 is depicted with dashed lines within the p-type buried layer 120 in FIG. 1. The p-type buried layer 120 has a peak dopant density greater than 1×10^17 cm^-3. The p-type buried layer 120 can extend laterally across the LR-LC diode 114, as depicted in FIG. 1.
The n-type region 122 is disposed in the p-type layer 106 in the LR-LC diode 114. The n-type region 122 can be an n-type well 122 that extends to the top surface 124 of the p-type layer 106. The n-type region 122 can include a heavier doped inner portion 128, and a lighter doped outer portion 126, at least 100 nanometers thick, that is below and around the heavier doped inner portion 128 and contacts the p-type layer 106. For example, the lighter doped outer portion 126 can have an average dopant density of 1×10^16 cm^-3 to 1×10^17 cm^-3. Also, for example, the heavier doped inner portion 128 may have an average dopant density of 1×10^17 cm^-3 to 3×10^19 cm^-3.

The first pn junction 130 of the LR-LC diode 114 is at the boundary between the p-type buried layer 120 and the n-type substrate 102. The second pn junction 132 is at the boundary between the p-type layer 106 and the n-type region 122. The first pn junction 130 and the second pn junction 132 are connected in series.

The breakdown voltage of the LR-LC diode 114 is determined by both the dopant density of the p-type buried layer 120 at the first pn junction 130 and the dopant density of the substrate 102 at the first pn junction 130. Desired values of the dopant density of the p-type buried layer 120 at the first pn junction 130 and the dopant density of the substrate 102 at the first pn junction 130 may be obtained by selecting appropriate values for the thickness of the n-type layer 104 and the peak dopant density of the p-type buried layer 120. Increasing the thickness of the n-type layer 104 will cause the peak of the doping profile in the p-type buried layer 120 to move away from the substrate 102, and thus reduce the dopant density of the p-type buried layer 120 at the first pn junction 130 and the dopant density of the substrate 102 at the first pn junction 130, which increases the breakdown voltage. Conversely, increasing the peak dopant density of the p-type buried layer 120 will increase the dopant density of the p-type buried layer 120 at the first pn junction 130 and the dopant density of the substrate 102 at the first pn junction 130, and therefore reduce the breakdown voltage.

If the dopant density of the p-type buried layer 120 at the first pn junction 130 and the dopant density of the substrate 102 at the first pn junction 130 are about 2×10^18 cm^-3 to about 5×10^18 cm^-3, the breakdown voltage can be 6 volts to 8 volts. This range of breakdown voltage may be realized with an n-type substrate 102 having an average dopant density of 5×10^19 cm^-3 to 7×10^19 cm^-3, an n-type layer 104 having a thickness of 1.5 microns to 2.5 microns, and a p-type buried layer 120 having a peak dopant density of 5×10^18 cm^-3 to 1×10^19 cm^-3. The use of an LR-LC diode 114 having a breakdown voltage of 6 volts to 8 volts in an ESD protection circuit can advantageously provide protection for logic input/output terminals having an operating range of 3 volts to 5 volts.

If the dopant density of the p-type buried layer 120 at the first pn junction 130 and the dopant density of the substrate 102 at the first pn junction 130 are about 1×10^17 cm^-3 to about 5×10^17 cm^-3, the breakdown voltage can be 20 volts to 40 volts. This range of breakdown voltage may be realized with an n-type substrate 102 having an average dopant density of 1×10^19 cm^-3 to 5×10^19 cm^-3, an n-type layer 104 having a thickness of 2.5 microns to 3.0 microns, and a p-type buried layer 120 having a peak dopant density of 5×10^17 cm^-3 to 2×10^18 cm^-3.
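These trends follow from the standard abrupt-junction approximations (textbook relations, not given in this disclosure), in which N_A and N_D are the acceptor and donor densities on the two sides of a junction, A is the junction area, and V_bi and V_R are the built-in and applied reverse voltages:

$$E_{max} \approx \sqrt{\frac{2q\,(V_{bi}+V_R)}{\varepsilon_{Si}}\cdot\frac{N_A N_D}{N_A+N_D}}, \qquad \frac{C_j}{A} \approx \sqrt{\frac{q\,\varepsilon_{Si}}{2\,(V_{bi}+V_R)}\cdot\frac{N_A N_D}{N_A+N_D}}$$

Breakdown occurs when E_max reaches the critical avalanche field, so a higher effective doping N_A N_D/(N_A + N_D) at the first pn junction 130 lowers the breakdown voltage; by the same factor, the very light doping of the p-type layer 106 at the second pn junction 132 keeps the junction capacitance per unit area C_j/A small, consistent with the capacitance behavior described next.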
The use of an LR-LC diode 114 having a breakdown voltage of 20 volts to 40 volts in an ESD protection circuit can advantageously provide protection for analog input/output terminals having an operating range of 10 volts to 25 volts.

The capacitance of the LR-LC diode 114 is primarily determined by the capacitance of the second pn junction 132, which is affected by the dopant densities of the p-type layer 106 and the n-type region 122 at the second pn junction 132. The n-type region 122 is spaced apart from the p-type buried layer 120 by at least 2 microns, which provides sufficient distance for the depletion region in the p-type layer 106 when the second pn junction 132 is reverse biased, and provides a resistance of a suitably low ohmic value in the p-type layer 106 when the second pn junction 132 is forward biased.

The dynamic resistance of the LR-LC diode 114 is primarily determined by the dopant density in the substrate 102. The average dopant density in the substrate 102 can exceed 5×10^19 cm^-3 to reduce the dynamic resistance. The desired value of the dopant density in the substrate 102 (e.g., above 5×10^19 cm^-3) can advantageously be achieved without reducing the breakdown voltage, because the breakdown voltage can be set by selecting appropriate values for the thickness of the n-type layer 104 and the peak dopant density of the p-type buried layer 120, regardless of the dopant density in the substrate 102. The lateral dimensions of the LR-LC diode 114 are selected to provide the desired current capacity.

A p-type region 134 (e.g., a p-type well 134) is disposed in the p-type layer 106 in the parallel diode 116, extending to the top surface 124 of the p-type layer 106. There is a vertical spacing of at least one micron between the p-type region 134 and the n-type layer 104. The p-type region 134 may extend across the parallel diode 116, as depicted in FIG. 1, or may be recessed from the isolation structure 108. The p-type region 134 has an average dopant density of at least 1×10^17 cm^-3 and can provide a desired low resistance contact to the parallel diode 116. The third pn junction 136 of the parallel diode 116 is at the boundary between the n-type layer 104 and the p-type layer 106. The p-type buried layer 120 in the LR-LC diode 114 does not extend into the parallel diode 116. The breakdown voltage of the third pn junction 136 is significantly higher than the breakdown voltage of the first pn junction 130 of the LR-LC diode 114, due to the lower dopant densities of the n-type layer 104 and the p-type layer 106 compared to the dopant densities of the p-type buried layer 120 and the substrate 102. The capacitance of the parallel diode 116 is determined by the dopant densities of the n-type layer 104 and the p-type layer 106 at the third pn junction 136 and may be comparable to the capacitance of the LR-LC diode 114. Because the n-type layer 104 and the p-type layer 106 are lightly doped, the capacitances of the LR-LC diode 114 and the parallel diode 116 are advantageously low. The dynamic resistance of the parallel diode 116 is primarily determined by the dopant density in the substrate 102. A dopant density in the substrate 102 above 1×10^18 cm^-3, and especially above 5×10^19 cm^-3, may advantageously provide the desired low dynamic resistance.

The first terminal 138 of the bidirectional diode 118 is electrically coupled to the n-type region 122 of the LR-LC diode 114 and is electrically coupled to the p-type region 134 of the parallel diode 116.
The first terminal 138 may be integrated into the semiconductor device 100 or may include a separate external connection such as a wire bond or bump bond. The second terminal 140 is electrically connected to the substrate 102, possibly by solder or a conductive die attach material. During operation of the bidirectional diode 118, a positive voltage on the first terminal 138 relative to the second terminal 140 forward biases the parallel diode 116, and current is thus shunted through the parallel diode 116. A negative voltage on the first terminal 138 relative to the second terminal 140 forward biases the second pn junction 132 and causes breakdown of the first pn junction 130 in the LR-LC diode 114; current is thus shunted through the LR-LC diode 114.

FIGS. 2A-2E are cross sections of the semiconductor device of FIG. 1 depicted in successive stages of an exemplary forming method. Referring to FIG. 2A, the substrate 102 has an average dopant density greater than 1×10^18 cm^-3. The dopant may comprise phosphorus and arsenic, and possibly germanium. The average dopant density can exceed 5×10^19 cm^-3 to advantageously reduce the dynamic resistance of the bidirectional diode 118 of FIG. 1. The n-type layer 104 is formed on the substrate 102 by an epitaxial process, for example by thermal decomposition of silane at 550°C. An n-type dopant (e.g., phosphorus) from the substrate 102 diffuses into the n-type layer 104 during the epitaxial process. Additional n-type dopants (e.g., phosphorus and/or arsenic in the form of phosphine and/or arsine) may be introduced into the n-type layer 104 during the epitaxial process. The average dopant density from all sources in the n-type layer 104 is less than 1×10^16 cm^-3. The thickness of the n-type layer 104 is selected between 1 micron and 5 microns to provide the desired breakdown voltage of the first pn junction 130 of FIG. 1, as explained above.

A first pad oxide layer 142 may be formed on the n-type layer 104. The first pad oxide layer 142 may be formed by thermal oxidation and may be 5 nm to 50 nm thick. The pad oxide layer 142 protects the surface of the n-type layer 104 during subsequent processing. A first implant mask 144 is formed over the first pad oxide layer 142 that exposes the area for the p-type buried layer 120 of FIG. 1 in the region for the LR-LC diode 114. The first implant mask 144 covers the area for the parallel diode 116. The first implant mask 144 may include a photoresist formed by a photolithography process, and may include a hard mask material such as silicon dioxide or silicon nitride. A first set of p-type dopants 146 (e.g., boron, and possibly gallium or indium) is implanted into the n-type layer 104 in the area exposed by the first implant mask 144 to form a buried layer implant region 148 in the n-type layer 104 directly below the pad oxide layer 142. The pad oxide layer 142 reduces the channeling effect of the implanted p-type dopant 146, limiting the depth of the buried layer implant region 148, which may advantageously provide a more reproducible dopant distribution in the p-type buried layer 120 and thus a more stable breakdown voltage at the first pn junction 130. The p-type dopant 146 can have a dose of at least 3×10^13 cm^-2 to provide a peak dopant density greater than 1×10^17 cm^-3.
To provide a peak dopant density of 5×10^18 cm^-3 to 1×10^19 cm^-3, as in the example of the LR-LC diode 114 having a breakdown voltage of 6 volts to 8 volts described with reference to FIG. 1, the p-type dopant 146 may have a dose of 6×10^14 cm^-2 to 2×10^15 cm^-2. To provide a peak dopant density of 5×10^17 cm^-3 to 2×10^18 cm^-3, as in the example of the LR-LC diode 114 having a breakdown voltage of 20 volts to 40 volts described with reference to FIG. 1, the p-type dopant 146 may have a dose of 6×10^13 cm^-2 to 3×10^14 cm^-2.

After the p-type dopant 146 is implanted, the first implant mask 144 is removed. The photoresist in the first implant mask 144 can be removed by an ashing process, followed by a wet cleaning process using an aqueous mixture of sulfuric acid and hydrogen peroxide, or of ammonium hydroxide and hydrogen peroxide. The silicon nitride in the first implant mask 144 can be removed by a plasma etching process.

An annealing process activates the implanted p-type dopant 146 in the buried layer implant region 148. The annealing process may be, for example, a rapid thermal process that heats the substrate 102 and the n-type layer 104 to a temperature of 1000°C to 1050°C for 20 seconds to 60 seconds, or may be, for example, a furnace anneal that heats the substrate 102 and the n-type layer 104 to a temperature of 850°C to 950°C for 30 minutes to 120 minutes. The first pad oxide layer 142 is then removed, for example by a dilute buffered aqueous solution of hydrofluoric acid.

Referring to FIG. 2B, the p-type layer 106 is formed on the n-type layer 104 by another epitaxial process. A p-type dopant (e.g., boron in the form of borane) is introduced into the p-type layer 106 during the epitaxial process to provide an average dopant density of less than 1×10^15 cm^-3. The thickness of the p-type layer 106 is selected between 3 microns and 8 microns to provide the desired low capacitance of the second pn junction 132 of FIG. 1 while maintaining the desired low dynamic resistance of the LR-LC diode 114. During the epitaxial process that forms the p-type layer 106, the p-type dopant of the buried layer implant region 148 of FIG. 2A diffuses upward into the p-type layer 106 and diffuses downward to contact the substrate 102, forming the p-type buried layer 120. The p-type dopant of the p-type buried layer 120 counter-dopes the n-type layer 104 in the region of the LR-LC diode 114. The n-type layer 104 is depicted by a dashed line in the p-type buried layer 120 of FIGS. 2B through 2E.

Referring to FIG. 2C, a second pad oxide layer 150 can be formed over the top surface 124 of the p-type layer 106 to protect the top surface 124 during subsequent processing. The second pad oxide layer 150 can be formed similarly to the first pad oxide layer 142 described with reference to FIG. 2A. A second implant mask 152 is formed over the second pad oxide layer 150 to expose the area for the n-type region 122 of FIG. 1 in the region for the LR-LC diode 114. The second implant mask 152 covers the area for the parallel diode 116. The second implant mask 152 may include a photoresist formed by a photolithography process. A first set of n-type dopants 154, comprising phosphorus and arsenic and possibly germanium, is implanted into the p-type layer 106 in the area exposed by the second implant mask 152 to form a first n-type implant region 158 in the p-type layer 106 directly under the second pad oxide layer 150.
For example, the first set of n-type dopants 154 may have a total dose of 1 × 10^15 cm^-2 to 1 × 10^16 cm^-2, and an energy selected to provide an average depth of 50 nm to 200 nm, so as to provide the n-type dopants for the heavier doped portion 128 of the n-type region 122 of FIG. 1. A second set of n-type dopants 156 comprising phosphorus is implanted into the p-type layer 106 in the region exposed by the second implant mask 152 to form a second n-type implant region 160 in the p-type layer 106 directly below the first n-type implant region 158. For example, the second set of n-type dopants 156 may have a total dose of 1 × 10^13 cm^-2 to 1 × 10^14 cm^-2 and an energy of 250 keV to 600 keV for phosphorus, so as to provide the n-type dopants for the lighter doped portion 126 of the n-type region 122 of FIG. 1. Subsequently, the second implant mask 152 is removed, for example as described with reference to the first implant mask 144 of FIG. 2A.

Referring to FIG. 2D, a third implant mask 162 is formed over the second pad oxide layer 150 that exposes the area of the parallel diode 116 for the p-type region 134 of FIG. 1. The third implant mask 162 covers the area for the LR-LC diode 114. The third implant mask 162 may be formed similarly to the second implant mask 152 of FIG. 2C. A second set of p-type dopants 164 is implanted into the p-type layer 106 in the region exposed by the third implant mask 162 to form a p-type well implant region 166 in the p-type layer 106 directly below the second pad oxide layer 150. For example, the p-type dopants 164 may have a total dose of 1 × 10^15 cm^-2 to 1 × 10^16 cm^-2. Subsequently, the third implant mask 162 is removed, for example as described with reference to the first implant mask 144 of FIG. 2A.

Referring to FIG. 2E, the isolation structure 108 is formed through the p-type layer 106 and the n-type layer 104, extending into the substrate 102, to laterally surround the area for the LR-LC diode 114 and to laterally surround the area for the parallel diode 116. The isolation structure 108 may be formed by etching isolation trenches through the p-type layer 106 and the n-type layer 104 and into the substrate 102 below the p-type buried layer 120. A thermal oxide layer may be grown on the sidewalls and bottoms of the isolation trenches, possibly followed by a conformal layer of silicon dioxide, formed by a sub-atmospheric chemical vapor deposition (SACVD) process, which extends into the isolation trenches and over the thermal oxide above the top surface 124 of the p-type layer 106. The thermal oxide layer and the SACVD silicon dioxide layer provide the dielectric liner 110 of the isolation structure 108. A polysilicon layer is formed over the SACVD silicon dioxide, extending into the isolation trenches, to form the field plate 112. The polysilicon and the SACVD silicon dioxide are removed from over the top surface 124 of the p-type layer 106 by an etchback process and/or a chemical mechanical polish (CMP) process, leaving the isolation structure 108. The thermal profile during growth of the thermal oxide in the dielectric liner 110 activates and diffuses the implanted n-type dopants in the first n-type implant region 158 and the second n-type implant region 160 of FIG. 2C to correspondingly form the heavier doped portion 128 and the lighter doped portion 126 of the n-type region 122, and activates and diffuses the implanted p-type dopants in the p-type well implant region 166 of FIG. 2D to form the p-type region 134.
An optional thermal drive process, such as a furnace anneal, may be performed before or after the isolation structure 108 is formed to further diffuse the implanted n-type dopants and implanted p-type dopants. Other methods of forming the isolation structure 108, combined with other methods of activating and diffusing the implanted n-type dopants and p-type dopants (e.g., furnace anneals), are within the scope of this example. Formation of the semiconductor device 100 continues with formation of electrical connections to the substrate 102, the n-type region 122, and the p-type region 134, to provide the structure of FIG. 1.

FIG. 3 is a cross section of another example semiconductor device having a pair of bidirectional diodes which include LR-LC diodes. The semiconductor device 300 includes a first bidirectional diode 318 and a second bidirectional diode 368. The first bidirectional diode 318 includes a first LR-LC diode 314 and a first parallel diode 316; the second bidirectional diode 368 includes a second LR-LC diode 370 and a second parallel diode 372. The first LR-LC diode 314, the first parallel diode 316, the second LR-LC diode 370, and the second parallel diode 372 are laterally separated by an isolation structure 308. The isolation structure 308 may be a deep trench structure as described with reference to FIG. 1, or may be another type of isolation structure.

The semiconductor device 300 is formed on an n-type substrate 302 having an average dopant density greater than 1 × 10^18 cm^-3, as described with reference to FIG. 1 and FIG. 2A. The first bidirectional diode 318 and the second bidirectional diode 368 share the n-type substrate 302. An n-type layer 304, 1 micron to 5 microns thick and having an average dopant density less than 1 × 10^16 cm^-3, is formed on the substrate 302. The n-type layer 304 may be formed by an epitaxial process. A p-type layer 306, 3 microns to 8 microns thick and having an average dopant density less than 1 × 10^15 cm^-3, is formed on the n-type layer 304. The p-type layer 306 may also be formed by an epitaxial process. Other methods of forming the n-type layer 304 and the p-type layer 306 are within the scope of this example.

The first LR-LC diode 314 includes a first p-type buried layer 320 which extends from the p-type layer 306 through the n-type layer 304 to the substrate 302. The first p-type buried layer 320 has a peak dopant density greater than 1 × 10^17 cm^-3. The first p-type buried layer 320 is confined to the first LR-LC diode 314. The first LR-LC diode 314 further includes a first n-type region 322, having an average dopant density of, for example, 1 × 10^16 cm^-3 to 3 × 10^19 cm^-3, disposed in the p-type layer 306 and extending to the top surface 324 of the p-type layer 306. The first LR-LC diode 314 operates as described with reference to FIG. 1.

The first parallel diode 316 includes a first p-type region 334 disposed in the p-type layer 306 and extending to the top surface 324 of the p-type layer 306. The first parallel diode 316 does not have a p-type buried layer such as the first p-type buried layer 320. The first parallel diode 316 operates as part of the first bidirectional diode 318 as described with reference to FIG. 1.

The second LR-LC diode 370 is similar to the first LR-LC diode 314. The second LR-LC diode 370 includes a second p-type buried layer 374 which extends from the p-type layer 306 through the n-type layer 304 to the substrate 302.
The second p-type buried layer 374 may be formed concurrently with the first p-type buried layer 320, so that the peak dopant density of the second p-type buried layer 374 is equal to the peak dopant density of the first p-type buried layer 320. The second p-type buried layer 374 is confined to the second LR-LC diode 370. The second LR-LC diode 370 also includes a second n-type region 376 disposed in the p-type layer 306 and extending to the top surface 324 of the p-type layer 306. The second n-type region 376 may be formed concurrently with the first n-type region 322, so that the dopant density distribution of the second n-type region 376 is equal to the dopant density distribution of the first n-type region 322. The second LR-LC diode 370 operates similarly to the first LR-LC diode 314.

The second parallel diode 372 is similar to the first parallel diode 316. The second parallel diode 372 includes a second p-type region 378 disposed in the p-type layer 306 and extending to the top surface 324 of the p-type layer 306. The second p-type region 378 may be formed concurrently with the first p-type region 334, so that the average dopant density of the second p-type region 378 is equal to the average dopant density of the first p-type region 334. The second parallel diode 372 also does not have a p-type buried layer such as the second p-type buried layer 374. The second parallel diode 372 operates as part of the second bidirectional diode 368, similarly to the first parallel diode 316 in the first bidirectional diode 318.

A first terminal 338 of the semiconductor device 300 is directly electrically coupled to the first n-type region 322 and the first p-type region 334. A second terminal 380 of the semiconductor device 300 is directly electrically coupled to the second n-type region 376 and the second p-type region 378. During operation of the semiconductor device 300, a voltage offset between the first terminal 338 and the second terminal 380 produces a current through the first bidirectional diode 318 and the second bidirectional diode 368. The back-to-back configuration of the first bidirectional diode 318 and the second bidirectional diode 368 may advantageously provide a symmetric shunt response to the voltage offset. The back-to-back configuration of the first bidirectional diode 318 and the second bidirectional diode 368 may also advantageously provide a higher effective breakdown voltage than a single bidirectional diode having the same structure.

Modifications to the described embodiments are possible within the scope of the claims, and other embodiments are possible. |
Technologies for enforcing virtual machine network access control include a network computing device that includes a plurality of virtual machines. The network computing device is configured to receive an access request from a virtual function assigned to a requesting virtual machine of the network computing device. The network computing device is additionally configured to determine a first privilege level assigned to the requesting virtual machine and a second privilege level assigned to a destination virtual machine, and determine whether the requesting virtual machine is authorized to access the destination virtual machine based on a comparison of the first and second privilege levels. Upon determining the requesting virtual machine is authorized to access the destination virtual machine, the network computing device is additionally configured to allow the requesting virtual machine access to the destination virtual machine. Other embodiments are described herein.
WHAT IS CLAIMED IS:

1. A network computing device for enforcing virtual machine network access control, the network computing device comprising: one or more processors; and one or more data storage devices having stored therein a plurality of instructions that, when executed by the one or more processors, cause the network computing device to: receive an access request from a virtual function assigned to a requesting virtual machine, wherein the requesting virtual machine is one of a plurality of virtual machines initialized on the network computing device, wherein the access request includes a request to access at least a portion of a destination virtual machine, wherein the destination virtual machine is one of the plurality of virtual machines initialized on the network computing device; determine a first privilege level assigned to the requesting virtual machine and a second privilege level assigned to the destination virtual machine; determine whether the requesting virtual machine is authorized to access the destination virtual machine based on a comparison of the first and second privilege levels; and allow, in response to a determination the requesting virtual machine is authorized to access the destination virtual machine, the requesting virtual machine access to the destination virtual machine.

2. The network computing device of claim 1, wherein the plurality of instructions further cause the network computing device to: initialize each of the plurality of virtual machines; and assign a privilege level to each of the plurality of virtual machines, wherein the privilege level comprises one of a privileged level or a non-privileged level.

3. The network computing device of claim 2, wherein the plurality of instructions further cause the network computing device to: initialize one or more virtual functions for each of the plurality of virtual machines; and assign each of the one or more virtual functions to a corresponding one of the plurality of virtual machines.

4. The network computing device of claim 2, wherein to assign the privilege level to each of the plurality of virtual machines comprises to assign the first privilege level to the requesting virtual machine and the second privilege level to the destination virtual machine.

5. The network computing device of claim 4, wherein to allow the requesting virtual machine access to the destination virtual machine comprises to allow access subsequent to a determination that the first privilege level corresponds to the privileged level and the second privilege level corresponds to the privileged level.

6. The network computing device of claim 4, wherein to allow the requesting virtual machine access to the destination virtual machine comprises to allow access subsequent to a determination that the first privilege level corresponds to the privileged level and the second privilege level corresponds to the non-privileged level.

7. The network computing device of claim 2, wherein the plurality of instructions further cause the network computing device to deny, in response to a determination the requesting virtual machine is not authorized to access the destination virtual machine, the requesting virtual machine access to the destination virtual machine.
8. The network computing device of claim 7, wherein to assign the privilege level to each of the plurality of virtual machines comprises to assign the first privilege level to the requesting virtual machine and the second privilege level to the destination virtual machine, and wherein to deny the requesting virtual machine access to the destination virtual machine comprises to deny access subsequent to a determination that the first privilege level corresponds to the non-privileged level and the second privilege level corresponds to the privileged level.

9. The network computing device of claim 1, wherein to allow the requesting virtual machine access to the destination virtual machine comprises to allow access limited to at least the portion of the destination virtual machine corresponding to the access request.

10. The network computing device of claim 1, wherein the requesting and destination virtual machines are the same virtual machine.

11. The network computing device of claim 1, wherein the requesting and destination virtual machines are different virtual machines.

12. The network computing device of claim 1, wherein the access request comprises one of a VM to VM access request or a VM to network access request.

13. A method for enforcing virtual machine network access control, the method comprising: receiving, by a network computing device, an access request from a virtual function assigned to a requesting virtual machine, wherein the requesting virtual machine is one of a plurality of virtual machines initialized on the network computing device, wherein the access request includes a request to access at least a portion of a destination virtual machine, wherein the destination virtual machine is one of the plurality of virtual machines initialized on the network computing device; determining, by the network computing device, a first privilege level assigned to the requesting virtual machine and a second privilege level assigned to the destination virtual machine; determining, by the network computing device, whether the requesting virtual machine is authorized to access the destination virtual machine based on a comparison of the first and second privilege levels; and allowing, by the network computing device and in response to a determination the requesting virtual machine is authorized to access the destination virtual machine, the requesting virtual machine access to the destination virtual machine.

14. The method of claim 13, further comprising: initializing, by the network computing device, each of the plurality of virtual machines; and assigning, by the network computing device, a privilege level to each of the plurality of virtual machines, wherein the privilege level comprises one of a privileged level or a non-privileged level.

15. The method of claim 14, further comprising: initializing, by the network computing device, one or more virtual functions for each of the plurality of virtual machines; and assigning, by the network computing device, each of the one or more virtual functions to a corresponding one of the plurality of virtual machines.

16. The method of claim 14, wherein assigning the privilege level to each of the plurality of virtual machines comprises assigning the first privilege level to the requesting virtual machine and the second privilege level to the destination virtual machine.
17. The method of claim 16, wherein allowing the requesting virtual machine access to the destination virtual machine comprises allowing access subsequent to a determination that the first privilege level corresponds to the privileged level and the second privilege level corresponds to the privileged level, or a determination that the first privilege level corresponds to the privileged level and the second privilege level corresponds to the non-privileged level.

18. The method of claim 14, further comprising denying, by the network computing device and in response to a determination the requesting virtual machine is not authorized to access the destination virtual machine, the requesting virtual machine access to the destination virtual machine.

19. The method of claim 18, wherein assigning the privilege level to each of the plurality of virtual machines comprises assigning the first privilege level to the requesting virtual machine and the second privilege level to the destination virtual machine, and wherein denying the requesting virtual machine access to the destination virtual machine comprises denying access subsequent to a determination that the first privilege level corresponds to the non-privileged level and the second privilege level corresponds to the privileged level.

20. The method of claim 13, wherein allowing the requesting virtual machine access to the destination virtual machine comprises allowing access limited to at least the portion of the destination virtual machine corresponding to the access request.

21. The method of claim 13, wherein the requesting and destination virtual machines are the same virtual machine.

22. The method of claim 13, wherein the requesting and destination virtual machines are different virtual machines.

23. The method of claim 13, wherein receiving the access request comprises receiving one of a VM to VM access request or a VM to network access request.

24. A network computing device comprising: a processor; and a memory having stored therein a plurality of instructions that when executed by the processor cause the network computing device to perform the method of any of claims 13-23.

25. One or more machine readable storage media comprising a plurality of instructions stored thereon that in response to being executed result in a network computing device performing the method of any of claims 13-23. |
TECHNOLOGIES FOR ENFORCING NETWORK ACCESS CONTROL OF VIRTUAL MACHINES

CROSS-REFERENCE TO RELATED APPLICATION

[0001] The present application claims priority to U.S. Utility Patent Application Serial No. 14/979,134, entitled "TECHNOLOGIES FOR ENFORCING NETWORK ACCESS CONTROL OF VIRTUAL MACHINES," which was filed on December 22, 2015.

BACKGROUND

[0002] Network operators and communication service providers typically rely on complex, large-scale data centers comprised of a multitude of network computing devices (e.g., servers, switches, routers, etc.) to process network traffic through the data center. In order to provide scalability to meet network traffic processing demands and reduce operational costs, certain data center operations are typically run inside containers or virtual machines (VMs) in a virtualized environment of the network computing devices. To coordinate the functionality enabling physical hardware of a network computing device on which a VM is running with the virtual environment of the VM, the VM typically requires exposing a virtualized instance of a virtual function. For example, a virtual function, such as a PCI Express (PCIe) virtual function, can provide a mechanism for the direct transfer of data between the VM and a network interface controller (NIC) of the network computing device. To do so, the network computing device generally relies on a virtual function driver to manage the virtual function (e.g., read/write to the virtual function's configuration space).

BRIEF DESCRIPTION OF THE DRAWINGS

[0003] The concepts described herein are illustrated by way of example and not by way of limitation in the accompanying figures. For simplicity and clarity of illustration, elements illustrated in the figures are not necessarily drawn to scale. Where considered appropriate, reference labels have been repeated among the figures to indicate corresponding or analogous elements.

[0004] FIG. 1 is a simplified block diagram of at least one embodiment of a system for enforcing network access control of virtual machines by a network computing device;

[0005] FIG. 2 is a simplified block diagram of at least one embodiment of the network computing device of the system of FIG. 1;

[0006] FIG. 3 is a simplified block diagram of at least one embodiment of an environment that may be established by the network computing device of FIG. 2;

[0007] FIG. 4 is a simplified block diagram of another embodiment of an environment that may be established by the network computing device of FIG. 2;

[0008] FIG. 5 is a simplified flow diagram of at least one embodiment of a method for assigning a privilege level to an initialized virtual machine that may be executed by the network computing device of FIG. 2; and

[0009] FIG. 6 is a simplified flow diagram of at least one embodiment of a method for enforcing network access control of an initialized virtual machine that may be executed by the network computing device of FIG. 2.

DETAILED DESCRIPTION OF THE DRAWINGS

[0010] While the concepts of the present disclosure are susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will be described herein in detail.
It should be understood, however, that there is no intent to limit the concepts of the present disclosure to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives consistent with the present disclosure and the appended claims.

[0011] References in the specification to "one embodiment," "an embodiment," "an illustrative embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may or may not necessarily include that particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. Additionally, it should be appreciated that items included in a list in the form of "at least one of A, B, and C" can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C). Similarly, items listed in the form of "at least one of A, B, or C" can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C).

[0012] The disclosed embodiments may be implemented, in some cases, in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on one or more transitory or non-transitory machine-readable (e.g., computer-readable) storage media (e.g., memory, data storage, etc.), which may be read and executed by one or more processors. A machine-readable storage medium may be embodied as any storage device, mechanism, or other physical structure for storing or transmitting information in a form readable by a machine (e.g., a volatile or nonvolatile memory, a media disc, or other media device).

[0013] In the drawings, some structural or method features may be shown in specific arrangements and/or orderings. However, it should be appreciated that such specific arrangements and/or orderings may not be required. Rather, in some embodiments, such features may be arranged in a different manner and/or order than shown in the illustrative figures. Additionally, the inclusion of a structural or method feature in a particular figure is not meant to imply that such feature is required in all embodiments and, in some embodiments, may not be included or may be combined with other features.

[0014] Referring now to FIG. 1, in an illustrative embodiment, a system 100 for enforcing network access control of virtual machines includes a source endpoint node 102 communicatively coupled to a destination endpoint node 110 via a network computing device 106 of a network 104. While only a single network computing device 106 is shown in the network 104 of the illustrative system 100, it should be appreciated that the network 104 may include a plurality of network computing devices 106 configured in various architectures.

[0015] In use, the network computing device 106 performs various operations (e.g., services) on network traffic (i.e., network packets, messages, etc.) received at the network computing device 106.
It should be appreciated that the received network traffic may be dropped or forwarded, such as to other network computing devices communicatively coupled to the network computing device 106 or to the destination endpoint node 110. To process the network traffic, the network computing device 106 is configured to spin up multiple virtual machines (VMs) at the network computing device 106. Accordingly, the network computing device 106 is configured to map virtual representations of physical components of the network computing device 106 to virtualized components of the various VMs.

[0016] For example, a virtual network interface controller (NIC) may be initialized by the network computing device 106 to facilitate communications between a physical NIC (see, e.g., the NIC 212 of FIG. 2) and the virtual NIC. In such an embodiment, a virtual machine monitor (VMM) (see, e.g., the VMM 418 of FIG. 4) may be implemented to expose the virtual NICs to each of the instantiated VMs, such that all VM to VM communication passes through a single logical entity (i.e., the VMM). Similarly, the VMM may be configured to create virtual functions and virtual function drivers for assignment to the VMs to manage communications between the physical NIC and the virtual NIC. It should be appreciated that, in some embodiments, one or more of the VMs may be spawned on one or more other network computing devices communicatively coupled to the network computing device 106.

[0017] Flow director capabilities of the NIC 212 are configured to direct network traffic to the proper virtual functions (e.g., using an access control list (ACL) established by the VMM) of the VMs; however, during processing of the network traffic, the virtual function drivers are susceptible to manipulation by disruptive network packets, such as from malformed network packets, invalid memory access requests, restricted memory region access requests, restricted hardware access requests, etc., which typically result in a reset of the virtual device to clear a state of the virtual device upon detection of a disruptive network packet.

[0018] Accordingly, to pre-emptively determine whether the network traffic is allowable (e.g., within another VM of the network computing device 106, through another VM to a host external to the network computing device 106, etc.), the network computing device 106 (i.e., the NIC 212) is configured to implement hardware-based VM privilege levels. To do so, as described in further detail below, upon initialization of the VM, the VMM determines whether the VM is privileged or non-privileged and stores the privilege level (i.e., a privileged level or a non-privileged level) in a secure location, such as within a VM network privilege-level table at a secure memory of the NIC (see, e.g., the secure memory 214 of the NIC 212 of FIG. 2). In other words, the network computing device 106 is configured to control the network privileges rather than the execution privileges of the VM.
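The VM network privilege-level table described in paragraph [0018] amounts to a mapping from a VM identifier to one of two privilege levels. A minimal sketch of such a table follows; the names (PrivilegeLevel, PrivilegeTable, etc.) are illustrative assumptions rather than identifiers from the source, and a real implementation would keep the table in the secure memory of the NIC rather than in host memory.

```python
from enum import Enum


class PrivilegeLevel(Enum):
    """The two network privilege levels described in paragraph [0018]."""
    PRIVILEGED = 1
    NON_PRIVILEGED = 2


class PrivilegeTable:
    """Hypothetical model of the VM network privilege-level table.

    In the described embodiments this table lives in the secure memory of
    the NIC and is written only by the VMM; a dict stands in for it here.
    """

    def __init__(self) -> None:
        self._levels: dict[str, PrivilegeLevel] = {}

    def assign(self, vm_id: str, level: PrivilegeLevel) -> None:
        # Written by the VMM upon initialization of a VM.
        self._levels[vm_id] = level

    def lookup(self, vm_id: str) -> PrivilegeLevel:
        # Read during enforcement to fetch the stored level for a VM.
        return self._levels[vm_id]
```

Keying the table by a per-VM identifier matches the identifier-plus-level entries described for the table later in the text.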
[0019] The source endpoint node 102 and/or the destination endpoint node 110 may be embodied as any type of computation or computer device capable of performing the functions described herein, including, without limitation, a portable computing device (e.g., smartphone, tablet, laptop, notebook, wearable, etc.) that includes mobile hardware (e.g., processor, memory, storage, wireless communication circuitry, etc.) and software (e.g., an operating system) to support a mobile architecture and portability, a computer, a server (e.g., stand-alone, rack-mounted, blade, etc.), a network appliance (e.g., physical or virtual), a web appliance, a distributed computing system, a processor-based system, and/or a multiprocessor system.

[0020] The network 104 may be embodied as any type of wired or wireless communication network, including a wireless local area network (WLAN), a wireless personal area network (WPAN), a cellular network (e.g., Global System for Mobile Communications (GSM), Long-Term Evolution (LTE), etc.), a telephony network, a digital subscriber line (DSL) network, a cable network, a local area network (LAN), a wide area network (WAN), a global network (e.g., the Internet), or any combination thereof. It should be appreciated that, in such embodiments, the network 104 may serve as a centralized network and, in some embodiments, may be communicatively coupled to another network (e.g., the Internet). Accordingly, the network 104 may include a variety of other network computing devices (e.g., virtual and physical routers, switches, network hubs, servers, storage devices, compute devices, etc.), as needed to facilitate communication between the source endpoint node 102 and the destination endpoint node 110, which are not shown to preserve clarity of the description.

[0021] The network computing device 106 may be embodied as any type of network traffic processing device that is capable of performing the functions described herein, such as, without limitation, a server (e.g., stand-alone, rack-mounted, blade, etc.), a network appliance (e.g., physical or virtual), a switch (e.g., rack-mounted, standalone, fully managed, partially managed, full-duplex, and/or half-duplex communication mode enabled, etc.), a router, a web appliance, a distributed computing system, a processor-based system, and/or a multiprocessor system.

[0022] As shown in FIG. 2, the illustrative network computing device 106 includes a processor 202, an input/output (I/O) subsystem 204, a memory 206, a data storage device 208, and communication circuitry 210. Of course, the network computing device 106 may include other or additional components, such as those commonly found in a computing device, in other embodiments. Additionally, in some embodiments, one or more of the illustrative components may be incorporated in, or otherwise form a portion of, another component. For example, the memory 206, or portions thereof, may be incorporated in the processor 202 in some embodiments. Further, in some embodiments, one or more of the illustrative components may be omitted from the network computing device 106.

[0023] The processor 202 may be embodied as any type of processor capable of performing the functions described herein. For example, the processor 202 may be embodied as a single or multi-core processor(s), digital signal processor, microcontroller, or other processor or processing/controlling circuit. Similarly, the memory 206 may be embodied as any type of volatile or non-volatile memory or data storage capable of performing the functions described herein.
In operation, the memory 206 may store various data and software used during operation of the network computing device 106, such as operating systems, applications, programs, libraries, and drivers.

[0024] The memory 206 is communicatively coupled to the processor 202 via the I/O subsystem 204, which may be embodied as circuitry and/or components to facilitate input/output operations with the processor 202, the memory 206, and other components of the network computing device 106. For example, the I/O subsystem 204 may be embodied as, or otherwise include, memory controller hubs, input/output control hubs, firmware devices, communication links (i.e., point-to-point links, bus links, wires, cables, light guides, printed circuit board traces, etc.) and/or other components and subsystems to facilitate the input/output operations. In some embodiments, the I/O subsystem 204 may form a portion of a system-on-a-chip (SoC) and be incorporated, along with the processor 202, the memory 206, and other components of the network computing device 106, on a single integrated circuit chip.

[0025] The data storage device 208 may be embodied as any type of device or devices configured for short-term or long-term storage of data such as, for example, memory devices and circuits, memory cards, hard disk drives, solid-state drives, or other data storage devices. It should be appreciated that the data storage device 208 and/or the memory 206 (e.g., the computer-readable storage media) may store various data as described herein, including operating systems, applications, programs, libraries, drivers, instructions, etc., capable of being executed by a processor (e.g., the processor 202) of the network computing device 106.

[0026] The communication circuitry 210 may be embodied as any communication circuit, device, or collection thereof, capable of enabling communications between the network computing device 106 and other computing devices (e.g., the source endpoint node 102, the destination endpoint node 110, another network computing device, etc.) over a network (e.g., the network 104). The communication circuitry 210 may be configured to use any one or more communication technologies (e.g., wireless or wired communication technologies) and associated protocols (e.g., Ethernet, Bluetooth®, Wi-Fi®, WiMAX, LTE, 5G, etc.) to effect such communication.

[0027] The illustrative communication circuitry 210 includes a NIC 212. The NIC 212 may be embodied as one or more add-in-boards, daughtercards, network interface cards, controller chips, chipsets, or other devices that may be used by the network computing device 106. For example, in some embodiments, the NIC 212 may be integrated with the processor 202, embodied as an expansion card coupled to the I/O subsystem 204 over an expansion bus (e.g., PCI Express), part of an SoC that includes one or more processors, or included on a multichip package that also contains one or more processors. Additionally or alternatively, in some embodiments, functionality of the NIC 212 may be integrated into one or more components of the network computing device 106 at the board level, socket level, chip level, and/or other levels.

[0028] The illustrative NIC 212 includes a secure memory 214. The secure memory 214 of the NIC 212 may be embodied as any type of memory that is configured to securely store data local to the NIC 212. It should be appreciated that, in some embodiments, the NIC 212 may further include a local processor (not shown) local to the NIC 212.
In such embodiments, the local processor of the NIC 212 may be capable of performing functions (e.g., replication, network packet processing, etc.) that may be offloaded to the NIC 212.

[0029] Referring again to FIG. 1, the illustrative network 104 may additionally include a network controller 108 communicatively coupled to the network computing device 106. The network controller 108 may be embodied as any type of device, hardware, software, and/or firmware capable of directing the flow of network packets and managing policies of the network computing device 106 and performing the functions described herein, such as, without limitation, a server (e.g., stand-alone, rack-mounted, blade, etc.), a network appliance (e.g., physical or virtual), a switch (e.g., rack-mounted, standalone, fully managed, partially managed, full-duplex, and/or half-duplex communication mode enabled, etc.), a router, a web appliance, a distributed computing system, a processor-based system, and/or a multiprocessor system.

[0030] The network controller 108 may be configured to provide one or more policies (e.g., network policies) or instructions to the network computing device 106. It should be appreciated that, in some embodiments, the network controller 108 may be configured to operate in a software-defined networking (SDN) environment (i.e., an SDN controller) and/or a network functions virtualization (NFV) environment (i.e., an NFV manager and network orchestrator (MANO)). As such, the network controller 108 may include devices and components commonly found in a network control device or similar computing devices such as processors, memory, communication circuitry, and data storage devices, similar to those described for the network computing device 106 of FIG. 2, which are not shown in FIG. 1 for clarity of the description.

[0031] Referring now to FIG. 3, in an illustrative embodiment, the network computing device 106 establishes an environment 300 during operation. The illustrative environment 300 includes a network communication module 310, a virtual machine management module 320, a data flow management module 330, and a virtual network policy enforcement module 340. Each of the modules, logic, and other components of the environment 300 may be embodied as hardware, software, firmware, or a combination thereof. For example, each of the modules, logic, and other components of the environment 300 may form a portion of, or otherwise be established by, the processor 202, the communication circuitry 210 (e.g., the NIC 212), and/or other hardware components of the network computing device 106. As such, in some embodiments, one or more of the modules of the environment 300 may be embodied as circuitry or a collection of electrical devices (e.g., network communication circuitry 310, virtual machine management circuitry 320, data flow management circuitry 330, virtual network policy enforcement circuitry 340, etc.).

[0032] The illustrative environment 300 of the network computing device 106 additionally includes network policy data 302, access control data 304, and privilege level data 306, each of which may be accessed by the various modules and/or sub-modules of the network computing device 106. It should be appreciated that the network computing device 106 may include other components, sub-components, modules, sub-modules, and/or devices commonly found in a computing device, which are not illustrated in FIG. 3 for clarity of the description.
[0033] The network communication module 310 is configured to facilitate inbound and outbound network communications (e.g., network traffic, network packets, network flows, etc.) to and from the network computing device 106. To do so, the network communication module 310 is configured to receive and process network packets from other computing devices (e.g., the source endpoint node 102, the destination endpoint node 110, another network computing device communicatively coupled to the network computing device 106 via the network 104, etc.). Additionally, the network communication module 310 is configured to prepare and transmit network packets to another computing device (e.g., the source endpoint node 102, the destination endpoint node 110, another network computing device communicatively coupled to the network computing device 106 via the network 104, etc.). Accordingly, in some embodiments, at least a portion of the functionality of the network communication module 310 may be performed by the communication circuitry 210, and more specifically by the NIC 212.

[0034] The virtual machine management module 320 is configured to manage the VMs of the network computing device 106, as well as each of the virtual functions associated therewith (see, e.g., the VMs 402 and virtual functions 410 of FIG. 4). To do so, the virtual machine management module 320 is configured to deploy (i.e., spin up, perform instantiation, etc.) and close (i.e., wind down, remove from the network, etc.) the VMs based on the various service functions (e.g., based on service functions of a service function chain corresponding to the network packet stream) to be performed on the network traffic. Accordingly, the virtual machine management module 320 is configured to manage each of the virtual function drivers associated with the respective VMs.

[0035] The data flow management module 330 is configured to direct the flow of incoming network traffic to the appropriate virtual functions. In other words, the data flow management module 330 is configured to determine an intended destination (e.g., a VM) for which incoming network traffic is to be directed (i.e., based on an access request) and direct the incoming network traffic to an interface of the intended destination (i.e., a virtual function of the VM). However, prior to directing the network traffic to the intended destination, the access request is checked against a virtual network policy, such as may be performed by the virtual network policy enforcement module 340. In some embodiments, the virtual network policy may be stored in the network policy data 302. It should be appreciated that the access request may be a VM to VM access request, a VM to network access request (i.e., external network traffic targeted to go into or out of another VM), etc. It should be further appreciated that at least a portion of the flow director capabilities of the NIC 212, described above, may be performed by the data flow management module 330.

[0036] The virtual network policy enforcement module 340 is configured to enforce the virtual network policies of the network computing device 106 (e.g., VM to VM traffic policies, external traffic policies, etc.). Accordingly, the virtual network policy enforcement module 340 is configured to make packet processing decisions (e.g., whether to allow an access request) based on the policy information (e.g., a privilege level associated with the request originating VM and/or the request destination VM).
To do so, the illustrative virtual network policy enforcement module 340 includes a policy table access module 342, a privilege level determination module 344, and an authorized access determination module 346.

[0037] The policy table access module 342 is configured to access an access control list (ACL) established by the VMM, which controls what network traffic is allowed between VMs. For example, upon initialization of a VM, the VMM determines whether that VM is privileged or non-privileged, and stores such information in the ACL. In some embodiments, such information may be stored in the access control data 304. The virtual network policy information may be based on an identifier of the network packet that may be contained in a header of the network packet, such as, for example, a media access control (MAC) address of the VM from which the network access control request was made, or the MAC address of the destination VM. It should be appreciated that the virtual network policies may be received from a network controller or orchestrator (e.g., the network controller 108).

[0038] The privilege level determination module 344 is configured to determine a privilege level of an access-requesting VM and a privilege level of a destination VM. It should be appreciated that the requesting VM and the destination VM may be the same VM or different VMs, depending on the type of request. To determine the privilege levels, the privilege level determination module 344 is configured to access a VM network privilege level table that includes privilege levels of each of the VMs, as well as a corresponding identifier (e.g., a domain identifier) of each of the VMs. In some embodiments, the VM network privilege level table (i.e., the privilege levels and corresponding identifiers) may be stored in the privilege level data 306. It should be appreciated that, in some embodiments, the privilege level data 306 may be stored in a secure portion (e.g., the secure memory 214) of the NIC 212, which may be secured using a trusted platform module technology, for example.

[0039] The authorized access determination module 346 is configured to determine whether to allow the access request to be transmitted to the destination VM, such as may be performed by the data flow management module 330. To do so, the authorized access determination module 346 is configured to compare the privilege level of the access-requesting VM and the privilege level of the destination VM, such as may be determined by the privilege level determination module 344.

[0040] Referring now to FIG. 4, in another illustrative embodiment, the network computing device 106 establishes an environment 400 during operation. The illustrative environment 400 includes a plurality of VMs 402 executed on the network computing device 106, each of which is communicatively coupled to one of a plurality of virtual functions 410 of the NIC 212. The illustrative VMs 402 include a first VM, which is designated as VM (1) 404, a second VM, which is designated as VM (2) 406, and a third VM, which is designated as VM (N) 408 (i.e., the "Nth" VM of the VMs 402, wherein "N" is a positive integer and designates one or more additional VMs 402).
The illustrative virtual functions 410 include a first virtual function, which is designated as VF (1) 412, a second virtual function, which is designated as VF (2) 414, and a third virtual function, which is designated as VF (N) 416 (i.e., the "Nth" virtual function of the virtual functions 410, wherein "N" is a positive integer and designates one or more additional virtual functions 410). Each of the virtual functions 410 is managed by the NIC 212, and traffic therebetween is managed by the data flow management module 330 of FIG. 3, described in detail above. The data flow management module 330 is further coupled to the virtual network policy enforcement module 340 of FIG. 3, which is also described in detail above. As shown, the NIC 212 of the illustrative environment 400 includes the privilege level data 306 of FIG. 3.

[0041] As also described previously, the contents of the privilege level data 306 (i.e., privilege levels and corresponding VM identifiers) are managed by the VMM 418, which is communicatively coupled to the NIC 212. The VMM 418 is responsible for controlling and handling of privileged instruction execution. Unlike traditional technologies that are configured to prevent applications from running or accessing platform shared resources, the network computing device 106 is configured to, as described previously, block undesirable network traffic prior to the undesirable network traffic being directed toward a particular VM via its corresponding virtual function. Accordingly, the network computing device 106 is configured to control network privileges rather than VM execution privileges. To do so, the network computing device 106 is configured to receive network privilege level information, such as from the network controller 108, during deployment of the VM hosting network-related services. Upon the network controller 108 having selected a suitable node, the network controller 108 instructs the VMM 418 to apply the required privilege level, such as may be stored in the VM network privilege level table described previously.

[0042] Referring now to FIG. 5, in use, the network computing device 106 may execute a method 500 for assigning a privilege level to an initialized VM. It should be appreciated that the method 500 may be executed for initial or unregistered access requests. The method 500 begins with block 502, in which the network computing device 106 determines whether a VM (e.g., one of the VMs 402 of FIG. 4) was requested for initialization (i.e., already instantiated) by the network computing device 106. If so, the method 500 advances to block 504, in which the network computing device 106 determines a privilege level (e.g., a privileged level or a non-privileged level) of the VM to be initialized. As described previously, the privilege level may be determined by the network controller 108 and received with, or subsequent to having received, a request for initialization of the VM.

[0043] In block 506, the network computing device 106 stores the privilege level of the VM to be initialized with an identifier of the VM to be initialized. In some embodiments, in block 508, the network computing device 106 stores the privilege level in an entry of the VM network privilege level table. Additionally or alternatively, in some embodiments, in block 510, the network computing device 106 stores the privilege level and identifier of the VM in a secure memory of the NIC (e.g., the secure memory 214 of the NIC 212 of FIG. 2). In block 512, the network computing device 106 initializes the VM. In block 514, the network computing device 106 initializes the virtual function and virtual function drivers for the VM initialized in block 512. In block 516, the network computing device 106 assigns the initialized virtual function to the VM initialized in block 512.
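The flow of the method 500 can be summarized in a few lines of code. The sketch below is illustrative only: the function name and its parameters are assumptions rather than source identifiers, and it reuses the idea of a privilege table keyed by VM identifier from the earlier sketch, with the level supplied by the network controller as described in paragraph [0042].

```python
from enum import Enum


class PrivilegeLevel(Enum):
    PRIVILEGED = 1
    NON_PRIVILEGED = 2


# Stand-in for the VM network privilege-level table in the NIC's secure memory.
privilege_table: dict[str, PrivilegeLevel] = {}


def initialize_vm(vm_id: str, level: PrivilegeLevel) -> None:
    """Sketch of method 500: store the privilege level with the VM's
    identifier (blocks 504-510), then bring up the VM and its virtual
    function (blocks 512-516)."""
    # Blocks 506-510: record the level, keyed by the VM identifier.
    privilege_table[vm_id] = level
    # Block 512: initialize the VM (elided here).
    # Blocks 514-516: initialize the virtual function and its drivers,
    # and assign the virtual function to the VM (elided here).
```

Recording the level before the VM and its virtual function come up ensures the table entry exists by the time any access request can be made through the assigned virtual function.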
[0044] Referring now to FIG. 6, in use, the network computing device 106 may execute a method 600 for enforcing network access control of an initialized virtual machine. It should be appreciated that the method 600 may be executed subsequent to initial or unregistered access requests having been set up, as described in the method 500 of FIG. 5. The method 600 begins with block 602, in which the network computing device 106 determines whether an access request was received from a VM (e.g., by the data flow management module 330 of FIGS. 3 and 4). As described previously, the access request may be a VM to VM access request, a VM to network access request (i.e., external network traffic targeted to go into or out of another VM), etc. If the network computing device 106 determines an access request was received from the VM, the network computing device 106 determines, in block 604, a privilege level of the requesting VM from which the access request was received. To do so, in some embodiments, in block 606, the network computing device 106 determines the privilege level of the requesting VM based on an entry of the VM network privilege level table that corresponds to the requesting VM.

[0045] In block 608, the network computing device 106 determines a privilege level of the destination VM for which access has been requested. To do so, in some embodiments, in block 610, the network computing device 106 determines the privilege level of the destination VM based on an entry of the VM network privilege level table that corresponds to the destination VM. In block 612, the network computing device 106 determines whether the VM requesting network access (i.e., the requesting VM) is authorized to access the destination VM. To do so, in block 614, the network computing device 106 compares the privilege level of the requesting VM determined in block 604 to the privilege level of the destination VM determined in block 608.

[0046] In block 616, the network computing device 106 determines whether the network access from the requesting VM to the destination VM is authorized based on the network policy. If not, the method 600 branches to block 618, in which the access request is denied; otherwise, if the access requested is authorized, the method 600 instead branches to block 620, in which the access request is allowed. For example, if the network computing device 106 determines the privilege level assigned to the requesting VM to be a privileged level and the privilege level assigned to the destination VM to be a privileged level, the network computing device 106 may allow the access request to be directed to the destination VM via the corresponding virtual function.

[0047] In another example, if the network computing device 106 determines the privilege level assigned to the requesting VM to be a privileged level and the privilege level assigned to the destination VM to be a non-privileged level, the network computing device 106 may allow the access request to be directed to the destination VM via the corresponding virtual function. In still another example, if the network computing device 106 determines the privilege level assigned to the requesting VM to be a non-privileged level and the privilege level assigned to the destination VM to be a privileged level, the network computing device 106 may deny the access request to be directed to the destination VM via the corresponding virtual function.
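Taken together, the examples in paragraphs [0046] and [0047] reduce the block 612 through block 616 decision to a comparison of the two stored levels. A minimal sketch follows; note that the behavior for a non-privileged requester targeting a non-privileged destination is not spelled out in the examples above, so the sketch's allow choice for that case is an assumption, consistent with a "requester must be at least as privileged as the destination" reading of the comparison.

```python
from enum import Enum


class PrivilegeLevel(Enum):
    PRIVILEGED = 1
    NON_PRIVILEGED = 2


def is_access_authorized(requester: PrivilegeLevel,
                         destination: PrivilegeLevel) -> bool:
    """Sketch of blocks 612-616 of method 600.

    Encodes the three cases given in paragraphs [0046] and [0047]:
    privileged -> privileged and privileged -> non-privileged are allowed,
    and non-privileged -> privileged is denied. Allowing the remaining
    non-privileged -> non-privileged case is an assumption (see lead-in).
    """
    if requester is PrivilegeLevel.PRIVILEGED:
        return True  # A privileged requester is allowed in both examples.
    return destination is PrivilegeLevel.NON_PRIVILEGED


def handle_access_request(privilege_table: dict[str, PrivilegeLevel],
                          requesting_vm: str, destination_vm: str) -> bool:
    """Blocks 602-620: look up both levels (blocks 604-610), compare them
    (blocks 612-614), and allow or deny the request (blocks 616-620)."""
    requester_level = privilege_table[requesting_vm]
    destination_level = privilege_table[destination_vm]
    return is_access_authorized(requester_level, destination_level)
```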
[0048] It should be appreciated that at least a portion of one or both of the methods 500 and 600 may be executed by the NIC 212 of the network computing device 106. It should be further appreciated that, in some embodiments, one or both of the methods 500 and 600 may be embodied as various instructions stored on a computer-readable media, which may be executed by the processor 202, the NIC 212, and/or other components of the network computing device 106 to cause the network computing device 106 to perform the methods 500 and 600. The computer-readable media may be embodied as any type of media capable of being read by the network computing device 106 including, but not limited to, the memory 206, the data storage device 208, the secure memory 214 of the NIC 212, other memory or data storage devices of the network computing device 106, portable media readable by a peripheral device of the network computing device 106, and/or other media.

EXAMPLES

[0049] Illustrative examples of the technologies disclosed herein are provided below. An embodiment of the technologies may include any one or more, and any combination of, the examples described below.

[0050] Example 1 includes a network computing device for enforcing virtual machine network access control, the network computing device comprising one or more processors; and one or more data storage devices having stored therein a plurality of instructions that, when executed by the one or more processors, cause the network computing device to receive an access request from a virtual function assigned to a requesting virtual machine, wherein the requesting virtual machine is one of a plurality of virtual machines initialized on the network computing device, wherein the access request includes a request to access at least a portion of a destination virtual machine, wherein the destination virtual machine is one of the plurality of virtual machines initialized on the network computing device; determine a first privilege level assigned to the requesting virtual machine and a second privilege level assigned to the destination virtual machine; determine whether the requesting virtual machine is authorized to access the destination virtual machine based on a comparison of the first and second privilege levels; and allow, in response to a determination the requesting virtual machine is authorized to access the destination virtual machine, the requesting virtual machine access to the destination virtual machine.

[0051] Example 2 includes the subject matter of Example 1, and wherein the plurality of instructions further cause the network computing device to initialize each of the plurality of virtual machines; and assign a privilege level to each of the plurality of virtual machines, wherein the privilege level comprises one of a privileged level or a non-privileged level.

[0052] Example 3 includes the subject matter of any of Examples 1 and 2, and wherein the plurality of instructions further cause the network computing device to initialize one or more virtual functions for each of the plurality of virtual machines; and assign each of the one or more virtual functions to a corresponding one of the plurality of virtual machines.
[0053] Example 4 includes the subject matter of any of Examples 1-3, and wherein to assign the privilege level to each of the plurality of virtual machines comprises to assign the first privilege level to the requesting virtual machine and the second privilege level to the destination virtual machine.

[0054] Example 5 includes the subject matter of any of Examples 1-4, and wherein to allow the requesting virtual machine access to the destination virtual machine comprises to allow access subsequent to a determination that the first privilege level corresponds to the privileged level and the second privilege level corresponds to the privileged level.

[0055] Example 6 includes the subject matter of any of Examples 1-5, and wherein to allow the requesting virtual machine access to the destination virtual machine comprises to allow access subsequent to a determination that the first privilege level corresponds to the privileged level and the second privilege level corresponds to the non-privileged level.

[0056] Example 7 includes the subject matter of any of Examples 1-6, and wherein the plurality of instructions further cause the network computing device to deny, in response to a determination the requesting virtual machine is not authorized to access the destination virtual machine, the requesting virtual machine access to the destination virtual machine.

[0057] Example 8 includes the subject matter of any of Examples 1-7, and wherein to assign the privilege level to each of the plurality of virtual machines comprises to assign the first privilege level to the requesting virtual machine and the second privilege level to the destination virtual machine, and wherein to deny the requesting virtual machine access to the destination virtual machine comprises to deny access subsequent to a determination that the first privilege level corresponds to the non-privileged level and the second privilege level corresponds to the privileged level.

[0058] Example 9 includes the subject matter of any of Examples 1-8, and wherein to allow the requesting virtual machine access to the destination virtual machine comprises to allow access limited to at least the portion of the destination virtual machine corresponding to the access request.

[0059] Example 10 includes the subject matter of any of Examples 1-9, and wherein the requesting and destination virtual machines are the same virtual machine.

[0060] Example 11 includes the subject matter of any of Examples 1-10, and wherein the requesting and destination virtual machines are different virtual machines.
[0061] Example 12 includes the subject matter of any of Examples 1-11, and wherein the access request comprises one of a VM to VM access request or a VM to network access request.
[0062] Example 13 includes a method for enforcing virtual machine network access control, the method comprising receiving, by a network computing device, an access request from a virtual function assigned to a requesting virtual machine, wherein the requesting virtual machine is one of a plurality of virtual machines initialized on the network computing device, wherein the access request includes a request to access at least a portion of a destination virtual machine, wherein the destination virtual machine is one of the plurality of virtual machines initialized on the network computing device; determining, by the network computing device, a first privilege level assigned to the requesting virtual machine and a second privilege level assigned to the destination virtual machine; determining, by the network computing device, whether the requesting virtual machine is authorized to access the destination virtual machine based on a comparison of the first and second privilege levels; and allowing, by the network computing device and in response to a determination the requesting virtual machine is authorized to access the destination virtual machine, the requesting virtual machine access to the destination virtual machine.
[0063] Example 14 includes the subject matter of Example 13, and further including initializing, by the network computing device, each of the plurality of virtual machines; and assigning, by the network computing device, a privilege level to each of the plurality of virtual machines, wherein the privilege level comprises one of a privileged level or a non-privileged level.
[0064] Example 15 includes the subject matter of any of Examples 13 and 14, and further including initializing, by the network computing device, one or more virtual functions for each of the plurality of virtual machines; and assigning, by the network computing device, each of the one or more virtual functions to a corresponding one of the plurality of virtual machines.
[0065] Example 16 includes the subject matter of any of Examples 13-15, and wherein assigning the privilege level to each of the plurality of virtual machines comprises assigning the first privilege level to the requesting virtual machine and the second privilege level to the destination virtual machine.
[0066] Example 17 includes the subject matter of any of Examples 13-16, and wherein allowing the requesting virtual machine access to the destination virtual machine comprises allowing access subsequent to a determination that the first privilege level corresponds to the privileged level and the second privilege level corresponds to the privileged level.
[0067] Example 18 includes the subject matter of any of Examples 13-17, and wherein allowing, by the network computing device, the requesting virtual machine access to the destination virtual machine comprises allowing access subsequent to a determination that the first privilege level corresponds to the privileged level and the second privilege level corresponds to the non-privileged level.
[0068] Example 19 includes the subject matter of any of Examples 13-18, and further including denying, by the network computing device and in response to a determination the requesting virtual machine is not authorized to access the destination virtual machine, the requesting virtual machine access to the destination virtual machine.
[0069] Example 20 includes the subject matter of any of Examples 13-19, and wherein assigning the privilege level to each of the plurality of virtual machines comprises assigning the first privilege level to the requesting virtual machine and the second privilege level to the destination virtual machine, and wherein denying the requesting virtual machine access to the destination virtual machine comprises denying access subsequent to a determination that the first privilege level corresponds to the non-privileged level and the second privilege level corresponds to the privileged level.
[0070] Example 21 includes the subject matter of any of Examples 13-20, and wherein allowing the requesting virtual machine access to the destination virtual machine comprises allowing access limited to at least the portion of the destination virtual machine corresponding to the access request.
[0071] Example 22 includes the subject matter of any of Examples 13-21, and wherein the requesting and destination virtual machines are the same virtual machine.
[0072] Example 23 includes the subject matter of any of Examples 13-22, and wherein the requesting and destination virtual machines are different virtual machines.
[0073] Example 24 includes the subject matter of any of Examples 13-23, and wherein receiving the access request comprises receiving one of a VM to VM access request or a VM to network access request.
[0074] Example 25 includes a network computing device comprising a processor; and a memory having stored therein a plurality of instructions that, when executed by the processor, cause the network computing device to perform the method of any of Examples 13-24.
[0075] Example 26 includes one or more machine readable storage media comprising a plurality of instructions stored thereon that, in response to being executed, result in a network computing device performing the method of any of Examples 13-24.
[0076] Example 27 includes a network computing device for enforcing virtual machine network access control, the network computing device comprising network communication circuitry to receive an access request from a virtual function assigned to a requesting virtual machine, wherein the requesting virtual machine is one of a plurality of virtual machines initialized on the network computing device, wherein the access request includes a request to access at least a portion of a destination virtual machine, wherein the destination virtual machine is one of the plurality of virtual machines initialized on the network computing device; virtual machine network policy enforcement circuitry to (i) determine a first privilege level assigned to the requesting virtual machine and a second privilege level assigned to the destination virtual machine and (ii) determine whether the requesting virtual machine is authorized to access the destination virtual machine based on a comparison of the first and second privilege levels; and data flow management circuitry to allow, in response to a determination the requesting virtual machine is authorized to access the destination virtual machine, the requesting virtual machine access to the destination virtual machine.
[0077] Example 28 includes the subject matter of Example 27, and further including virtual machine management circuitry to initialize each of the plurality of virtual machines, wherein the virtual machine network policy enforcement circuitry is further to assign a privilege level to each of the plurality of virtual machines, wherein the privilege level comprises one of a privileged level or a non-privileged level.
[0078] Example 29 includes the subject matter of any of Examples 27 and 28, and wherein the virtual machine management circuitry is further to (i) initialize one or more virtual functions for each of the plurality of virtual machines and (ii) assign each of the one or more virtual functions to a corresponding one of the plurality of virtual machines.
[0079] Example 30 includes the subject matter of any of Examples 27-29, and wherein to assign the privilege level to each of the plurality of virtual machines comprises to assign the first privilege level to the requesting virtual machine and the second privilege level to the destination virtual machine.
[0080] Example 31 includes the subject matter of any of Examples 27-30, and wherein to allow the requesting virtual machine access to the destination virtual machine comprises to allow access subsequent to a determination that the first privilege level corresponds to the privileged level and the second privilege level corresponds to the privileged level.
[0081] Example 32 includes the subject matter of any of Examples 27-31, and wherein to allow the requesting virtual machine access to the destination virtual machine comprises to allow access subsequent to a determination that the first privilege level corresponds to the privileged level and the second privilege level corresponds to the non-privileged level.
[0082] Example 33 includes the subject matter of any of Examples 27-32, and wherein the data flow management circuitry is further to deny, in response to a determination the requesting virtual machine is not authorized to access the destination virtual machine, the requesting virtual machine access to the destination virtual machine.
[0083] Example 34 includes the subject matter of any of Examples 27-33, and wherein to assign the privilege level to each of the plurality of virtual machines comprises to assign the first privilege level to the requesting virtual machine and the second privilege level to the destination virtual machine, and wherein to deny the requesting virtual machine access to the destination virtual machine comprises to deny access subsequent to a determination that the first privilege level corresponds to the non-privileged level and the second privilege level corresponds to the privileged level.
[0084] Example 35 includes the subject matter of any of Examples 27-34, and wherein to allow the requesting virtual machine access to the destination virtual machine comprises to allow access limited to at least the portion of the destination virtual machine corresponding to the access request.
[0085] Example 36 includes the subject matter of any of Examples 27-35, and wherein the requesting and destination virtual machines are the same virtual machine.
[0086] Example 37 includes the subject matter of any of Examples 27-36, and wherein the requesting and destination virtual machines are different virtual machines.
[0087] Example 38 includes the subject matter of any of Examples 27-37, and wherein the access request comprises one of a VM to VM access request or a VM to network access request.
[0088] Example 39 includes a network computing device for enforcing virtual machine network access control, the network computing device comprising network communication circuitry to receive an access request from a virtual function assigned to a requesting virtual machine, wherein the requesting virtual machine is one of a plurality of virtual machines initialized on the network computing device, wherein the access request includes a request to access at least a portion of a destination virtual machine, wherein the destination virtual machine is one of the plurality of virtual machines initialized on the network computing device; means for determining a first privilege level assigned to the requesting virtual machine and a second privilege level assigned to the destination virtual machine; means for determining whether the requesting virtual machine is authorized to access the destination virtual machine based on a comparison of the first and second privilege levels; and data flow management circuitry to allow, in response to a determination the requesting virtual machine is authorized to access the destination virtual machine, the requesting virtual machine access to the destination virtual machine.
[0089] Example 40 includes the subject matter of Example 39, and further including virtual machine management circuitry to initialize each of the plurality of virtual machines, wherein the virtual machine network policy enforcement circuitry is further to assign a privilege level to each of the plurality of virtual machines, wherein the privilege level comprises one of a privileged level or a non-privileged level.
[0090] Example 41 includes the subject matter of any of Examples 39 and 40, and wherein the virtual machine management circuitry is further to (i) initialize one or more virtual functions for each of the plurality of virtual machines and (ii) assign each of the one or more virtual functions to a corresponding one of the plurality of virtual machines.
[0091] Example 42 includes the subject matter of any of Examples 39-41, and wherein to assign the privilege level to each of the plurality of virtual machines comprises to assign the first privilege level to the requesting virtual machine and the second privilege level to the destination virtual machine.
[0092] Example 43 includes the subject matter of any of Examples 39-42, and wherein to allow the requesting virtual machine access to the destination virtual machine comprises to allow access subsequent to a determination that the first privilege level corresponds to the privileged level and the second privilege level corresponds to the privileged level.
[0093] Example 44 includes the subject matter of any of Examples 39-43, and wherein to allow the requesting virtual machine access to the destination virtual machine comprises to allow access subsequent to a determination that the first privilege level corresponds to the privileged level and the second privilege level corresponds to the non-privileged level.
[0094] Example 45 includes the subject matter of any of Examples 39-44, and wherein the data flow management circuitry is further to deny, in response to a determination the requesting virtual machine is not authorized to access the destination virtual machine, the requesting virtual machine access to the destination virtual machine.
[0095] Example 46 includes the subject matter of any of Examples 39-45, and wherein the means for assigning the privilege level to each of the plurality of virtual machines comprises means for assigning the first privilege level to the requesting virtual machine and the second privilege level to the destination virtual machine, and wherein to deny the requesting virtual machine access to the destination virtual machine comprises to deny access subsequent to a determination that the first privilege level corresponds to the non-privileged level and the second privilege level corresponds to the privileged level.
[0096] Example 47 includes the subject matter of any of Examples 39-46, and wherein to allow the requesting virtual machine access to the destination virtual machine comprises to allow access limited to at least the portion of the destination virtual machine corresponding to the access request.
[0097] Example 48 includes the subject matter of any of Examples 39-47, and wherein the requesting and destination virtual machines are the same virtual machine.
[0098] Example 49 includes the subject matter of any of Examples 39-48, and wherein the requesting and destination virtual machines are different virtual machines.
[0099] Example 50 includes the subject matter of any of Examples 39-49, and wherein the access request comprises one of a VM to VM access request or a VM to network access request.
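The privilege-level comparison referenced above reduces to a small decision function. The following is a minimal sketch, assuming a two-level PrivilegeLevel enum and a per-VM assignment table; the names here (vm_privilege, check_access, and the sample VM identifiers) are illustrative assumptions and are not drawn from the disclosure.

from enum import Enum

class PrivilegeLevel(Enum):
    PRIVILEGED = 1
    NON_PRIVILEGED = 2

# Hypothetical table: privilege levels assigned when each VM is initialized
# (cf. Examples 2, 14, 28).
vm_privilege = {
    "vm_a": PrivilegeLevel.PRIVILEGED,
    "vm_b": PrivilegeLevel.NON_PRIVILEGED,
}

def check_access(requesting_vm: str, destination_vm: str) -> bool:
    """Return True if the requesting VM may access the destination VM.

    Mirrors the comparison in the examples: access is denied only when a
    non-privileged VM targets a privileged VM (Examples 8, 20, 34, 46).
    """
    first = vm_privilege[requesting_vm]    # privilege level of requester
    second = vm_privilege[destination_vm]  # privilege level of destination
    if first is PrivilegeLevel.NON_PRIVILEGED and second is PrivilegeLevel.PRIVILEGED:
        return False  # deny: non-privileged -> privileged
    # Privileged -> privileged and privileged -> non-privileged are allowed
    # (Examples 5, 6). Non-privileged -> non-privileged is not specified by
    # the examples; this sketch allows it by assumption.
    return True

# Usage: a NIC-side policy check before forwarding a VM-to-VM request.
assert check_access("vm_a", "vm_b") is True
assert check_access("vm_b", "vm_a") is False

|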
An apparatus and method are described for a non-uniform rasterizer. For example, one embodiment of an apparatus comprises: a graphics processor to process graphics data and render images using the graphics data; and a non-uniform rasterizer within the graphics processor to determine different resolutions to be used for different regions of an image, the non-uniform rasterizer to receive a plurality of polygons to be rasterized and to responsively rasterize the polygons in accordance with the different resolutions. |
1. A processor, comprising:
a central processing unit (CPU), the CPU including a plurality of cores;
a graphics processor to render images;
a memory controller to couple the graphics processor and the CPU to an external memory device; and
a shared cache to be shared by the plurality of cores and the graphics processor,
wherein the graphics processor includes:
a block memory to store graphics data associated with one or more blocks of an image;
an execution module coupled to the block memory, the execution module to execute shaders to render the image, the shaders including a vertex shader to perform a coordinate space transformation on vertices of a primitive;
a rasterizer to rasterize primitives at different resolutions in different areas of the image in accordance with a rasterization map, the rasterization map comprising values corresponding to the different resolutions in the different areas of the image;
the rasterizer to rasterize the primitives on a block-by-block basis, wherein, for each block, the rasterizer is to rasterize one or more primitives that overlap the block at a resolution indicated by the value of the rasterization map; and
the execution module to execute a plurality of pixel shaders to shade pixels of a block in accordance with the resolution.
2. The processor of claim 1, wherein the rasterizer is to determine overlap between primitives and blocks and, for a given block, to rasterize only those primitives that overlap the block.
3. The processor of claim 1, wherein the rasterization map includes layout bits specifying a resolution for each block.
4. The processor of claim 3, wherein the layout bits specify the placement of pixels in each block.
5. The processor of any one of claims 1-4, further comprising at least one of the following:
a first-level cache integrated with each of the plurality of cores;
a texture sampling module to access texture maps stored in memory, the texture sampling module to perform texture mapping on objects within the image;
a depth buffer and an associated depth test module coupled to the rasterizer;
a video processor coupled to the memory controller, the video processor including a video codec engine to encode and decode video data;
a flash memory subsystem, including a flash memory and a flash memory controller, coupled to the memory controller; and
a display interface and a display to display the image rendered by the execution module and the rasterizer.
6. The processor of any one of claims 1-4, wherein the plurality of cores comprises heterogeneous processor cores, including a set of one or more lower-power processor cores and a set of one or more higher-power processor cores.
7. The processor of any one of claims 1-4, wherein the rasterization map is to be responsively updated based on gaze tracking data indicative of a user gaze directed toward a first region of the image, the rasterizer to dynamically increase resolution in one or more of the blocks in the first region based on the updated rasterization map.
8. The processor of claim 7, wherein the gaze tracking data includes data from an eye tracking device indicating the direction of the user's gaze.
9. A method for graphics processing, comprising:
storing graphics data associated with one or more blocks of an image in a block memory of a graphics processor;
executing a vertex shader to perform a coordinate space transformation on vertices of a primitive;
rasterizing primitives at different resolutions in different areas of the image in accordance with a rasterization map, the rasterization map including values corresponding to the different resolutions in the different areas of the image, wherein the primitives are rasterized on a block-by-block basis, and wherein, for each block, one or more primitives that overlap the block are rasterized at the resolution indicated by the value of the rasterization map; and
executing a plurality of pixel shaders to shade pixels of a block in accordance with the resolution.
10. The method of claim 9, further comprising:
determining overlap between primitives and blocks and, for a given block, rasterizing only those primitives that overlap the block.
11. The method of claim 9, wherein the rasterization map includes layout bits specifying a resolution for each block.
12. The method of claim 11, wherein the layout bits specify the placement of pixels in each block.
13. The method of any one of claims 9-12, further comprising at least one of the following steps:
accessing a texture map stored in memory and performing texture mapping on objects within the image;
encoding and decoding video data utilizing a video processor; and
displaying the image rendered by the execution circuitry and the rasterization circuitry.
14. The method of any one of claims 9-12, further comprising:
responsively updating the rasterization map based on gaze tracking data indicative of a user gaze directed toward a first region of the image; and
dynamically increasing resolution in one or more of the blocks in the first region based on the updated rasterization map.
15. The method of claim 14, wherein the gaze tracking data includes data from an eye tracking device indicating the direction of the user's gaze.
16. A machine-readable medium having program code stored thereon, the program code, when executed by a machine, causing the machine to perform the method of any one of claims 9-15.
17. An apparatus for graphics processing, comprising:
means for storing graphics data associated with one or more blocks of an image in a block memory of a graphics processor;
means for executing a vertex shader to perform a coordinate space transformation on vertices of a primitive;
means for rasterizing primitives at different resolutions in different areas of the image in accordance with a rasterization map, the rasterization map comprising values corresponding to the different resolutions in the different areas of the image, wherein the primitives are rasterized on a block-by-block basis, and wherein, for each block, one or more primitives that overlap the block are rasterized at the resolution indicated by the value of the rasterization map; and
means for executing a plurality of pixel shaders to shade pixels of a block in accordance with the resolution.
18. The apparatus of claim 17, further comprising:
means for determining overlap between primitives and blocks and, for a given block, rasterizing only those primitives that overlap the block.
19. The apparatus of claim 17, wherein the rasterization map includes layout bits specifying a resolution for each block.
20. The apparatus of claim 19, wherein the layout bits specify the placement of pixels in each block.
21. The apparatus of any one of claims 17-20, further comprising at least one of:
means for accessing a texture map stored in memory and performing texture mapping on objects within the image;
means for encoding and decoding video data utilizing a video processor; and
means for displaying the image.
22. The apparatus of any one of claims 17-20, further comprising:
means for responsively updating the rasterization map based on gaze tracking data indicative of a user gaze directed toward a first region of the image; and
means for dynamically increasing resolution in one or more of the blocks in the first region based on the updated rasterization map.
23. The apparatus of claim 22, wherein the gaze tracking data includes data from an eye tracking device indicating the direction of the user's gaze.
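To make the claimed block-by-block behavior concrete, the following is a minimal sketch assuming square blocks, a two-level rasterization map, triangles as primitives, and a conservative bounding-box overlap test; the names (BLOCK, raster_map, overlaps, shade_pixel_block) and the map layout are illustrative assumptions, not the claimed encoding.

# Minimal sketch of block-by-block, non-uniform rasterization driven by a
# rasterization map. Assumptions (not from the disclosure): blocks are
# square, the map stores a scale factor per block (1 = full resolution,
# 2 = half resolution in each dimension), and primitives are triangles
# given by three (x, y) vertices, tested against blocks via bounding box.

BLOCK = 16  # block size in pixels (assumed)

# One map value per block; e.g., full resolution at the image center and
# half resolution elsewhere (cf. the HMD warp discussion in the description).
raster_map = {
    (bx, by): 1 if (1 <= bx <= 2 and 1 <= by <= 2) else 2
    for bx in range(4) for by in range(4)
}

def bbox(tri):
    xs = [v[0] for v in tri]
    ys = [v[1] for v in tri]
    return min(xs), min(ys), max(xs), max(ys)

def overlaps(tri, bx, by):
    """Conservative overlap test between a triangle and a block."""
    x0, y0, x1, y1 = bbox(tri)
    return not (x1 < bx * BLOCK or x0 >= (bx + 1) * BLOCK or
                y1 < by * BLOCK or y0 >= (by + 1) * BLOCK)

def shade_pixel_block(x, y, size, prims):
    pass  # placeholder: coverage test and pixel shading would go here

def rasterize(primitives):
    """Visit each block and rasterize only overlapping primitives,
    sampling at the resolution indicated by the rasterization map."""
    for (bx, by), scale in raster_map.items():
        hits = [t for t in primitives if overlaps(t, bx, by)]
        if not hits:
            continue
        step = scale  # sample every `scale` pixels in x and y
        for y in range(by * BLOCK, (by + 1) * BLOCK, step):
            for x in range(bx * BLOCK, (bx + 1) * BLOCK, step):
                shade_pixel_block(x, y, step, hits)  # pixel-shader dispatch

A gaze-driven update in the spirit of claims 7, 14, and 22 would amount to rewriting the raster_map entries for blocks in the gazed-at region to the full-resolution value before the next frame is rasterized.
|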
Apparatus and method for non-uniform frame buffer rasterization
This application is a divisional application of the invention patent application entitled "Apparatus and method for non-uniform frame buffer rasterization," PCT international application number PCT/US2016/022793, with an international filing date of March 17, 2016, which entered the Chinese national phase as application number 201680017153.4.
Background
Field of the invention
The present invention relates generally to the field of computer processors. More specifically, the present invention relates to an apparatus and method for non-uniform frame buffer rasterization.
Description of related art
Virtual reality (VR) is becoming an increasingly viable option for immersive applications, such as gaming applications and various industry applications, because companies like Oculus, Samsung, and Sony have produced smaller, affordable headsets with high image quality, low latency, and head-tracking capabilities. These headsets have been described as the "ultimate platform," meaning they will eventually provide fully immersive VR experiences that are indistinguishable from reality.
However, one problem is that rendering needs to be done for both the user's left and right eyes, which doubles the load on the graphics processor. In addition, the rectangular image is warped to compensate for the lenses inside the head-mounted display (HMD). This is demonstrated in the example shown in Figure 13.
Each warped image is typically generated from an intermediate image rendered using regular ("unwarped") planar projection. The image in Figure 14 shows how the final warped image maps onto such a flat image. In that illustration, only 10x10 out of every 15x15 pixels are shown, to better visualize the shape of the warp function and to make the intermediate rendered image more visible. The pixels become sparse toward the edges of the image, which indicates that more pixels are rendered in the intermediate image than are used to create the final image; considerable redundant work is therefore performed. The useful pixel density at the upper and lower edges of the intermediate image is 1/18, at the right edge it is 1/20, and in the right corner it is only 1/38 (i.e., only one useful pixel for every 38 rendered pixels).
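The redundancy figures above come from the particular warp in Figure 14, but the underlying measurement is easy to reproduce: the local useful-pixel density is the area scaling of the warp, i.e. the Jacobian determinant of the mapping from final-image to intermediate-image coordinates. The radial warp below is a made-up stand-in (not the HMD's actual lens model), chosen only to show the computation.

# Hedged sketch: estimate useful-pixel density under a warp. `warp` maps
# final-image coordinates to intermediate-image coordinates; the density
# of useful pixels near (x, y) is 1 / |det J|, where J is the Jacobian of
# the warp estimated by finite differences. The warp used here is an
# arbitrary radial stretch, NOT the actual lens-compensation function.

def warp(x, y, k=1.5):
    r2 = x * x + y * y   # squared distance from the image center
    s = 1.0 + k * r2     # radial stretch grows toward the edges
    return x * s, y * s

def useful_pixel_density(x, y, h=1e-4):
    # Finite-difference Jacobian of the warp at (x, y).
    x1, y1 = warp(x + h, y)
    x0, y0 = warp(x - h, y)
    x3, y3 = warp(x, y + h)
    x2, y2 = warp(x, y - h)
    dxu, dyu = (x1 - x0) / (2 * h), (y1 - y0) / (2 * h)
    dxv, dyv = (x3 - x2) / (2 * h), (y3 - y2) / (2 * h)
    det = dxu * dyv - dyu * dxv  # local area magnification
    return 1.0 / abs(det)

# Usage: density is 1.0 at the center and drops toward the corner,
# mirroring the 1/18 ... 1/38 pattern above (exact values depend on k).
print(useful_pixel_density(0.0, 0.0))  # ~1.0 at the center
print(useful_pixel_density(1.0, 1.0))  # ~1/40 in the corner for k=1.5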
Brief description of the drawings
The invention may be better understood from the following detailed description taken in conjunction with the accompanying drawings, in which:
Figure 1 is a block diagram of an embodiment of a computer system having a processor with one or more processor cores and a graphics processor;
Figure 2 is a block diagram of one embodiment of a processor having one or more processor cores, an integrated memory controller, and an integrated graphics processor;
Figure 3 is a block diagram of an embodiment of a graphics processor, which may be a discrete graphics processing unit or a graphics processor integrated with a plurality of processing cores;
Figure 4 is a block diagram of an embodiment of a graphics processing engine for a graphics processor;
Figure 5 is a block diagram of another embodiment of a graphics processor;
Figure 6 is a block diagram of thread execution logic including an array of processing elements;
Figure 7 illustrates a graphics processor execution unit instruction format according to an embodiment;
Figure 8 is a block diagram of another embodiment of a graphics processor, including a graphics pipeline, a media pipeline, a display engine, thread execution logic, and a render output pipeline;
Figure 9A is a block diagram illustrating a graphics processor command format according to an embodiment;
Figure 9B is a block diagram illustrating a graphics processor command sequence according to an embodiment;
Figure 10 illustrates an exemplary graphics software architecture for a data processing system in accordance with an embodiment;
Figure 11 illustrates an exemplary IP core development system that may be used to fabricate integrated circuits to perform operations, in accordance with embodiments;
Figure 12 illustrates an exemplary system-on-chip integrated circuit that may be fabricated using one or more IP cores, in accordance with embodiments;
Figure 13 shows how a rectangular image is warped to compensate for the lenses inside a head-mounted display (HMD);
Figure 14 shows how the final warped image looks when projected onto a flat image;
Figure 15 shows a rendering engine according to an embodiment of the present invention;
Figures 16A-16B illustrate exemplary tiles and tile sets employed in one embodiment of the invention;
Figure 17 shows a tile arrangement in which higher-resolution tiles are positioned toward the center of the image;
Figure 18 illustrates various tile patterns employed in one embodiment of the present invention;
Figures 19A-19C illustrate techniques for storing tiles of different resolutions in memory pages;
Figure 20 shows three exemplary mip-map levels rasterized using non-uniform rasterization and tiles mapped using filtering; and
Figure 21 illustrates a method according to one embodiment of the invention.
Detailed description
In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the invention described below. However, it will be apparent to those skilled in the art that the embodiments of the invention may be practiced without some of these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to avoid obscuring the basic principles of the embodiments of the invention.
Exemplary graphics processor architectures and data types
System overview
Figure 1 is a block diagram of a processing system 100, according to an embodiment.
In various embodiments, system 100 includes one or more processors 102 and one or more graphics processors 108, and may be a single-processor desktop system, a multi-processor workstation system, or a server system having a large number of processors 102 or processor cores 107. In one embodiment, system 100 is a processing platform incorporated within a system-on-chip (SoC) integrated circuit for use in a mobile, handheld, or embedded device.
An embodiment of system 100 may include, or be incorporated within, a server-based gaming platform or a game console, including a game and media console, a mobile gaming console, a handheld game console, or an online game console. In some embodiments, system 100 is a mobile phone, smartphone, tablet computing device, or mobile Internet device. Data processing system 100 may also include, couple with, or be integrated within a wearable device, such as a smart watch wearable device, a smart eyewear device, an augmented reality device, or a virtual reality device. In some embodiments, data processing system 100 is a television or set-top box device having one or more processors 102 and a graphical interface generated by one or more graphics processors 108.
In some embodiments, the one or more processors 102 each include one or more processor cores 107 to process instructions which, when executed, perform operations for system and user software. In some embodiments, each of the one or more processor cores 107 is configured to process a specific instruction set 109. In some embodiments, instruction set 109 may facilitate complex instruction set computing (CISC), reduced instruction set computing (RISC), or computing via a very long instruction word (VLIW). Multiple processor cores 107 may each process a different instruction set 109, which may include instructions to facilitate the emulation of other instruction sets. Processor core 107 may also include other processing devices, such as a digital signal processor (DSP).
In some embodiments, processor 102 includes cache memory 104. Depending on the architecture, processor 102 may have a single internal cache or multiple levels of internal cache. In some embodiments, the cache memory is shared among various components of processor 102. In some embodiments, processor 102 also uses an external cache (e.g., a level 3 (L3) cache or a last-level cache (LLC)) (not shown), which may be shared among the processor cores 107 using known cache coherence techniques. A register file 106 is additionally included in processor 102 and may include different types of registers for storing different types of data (e.g., integer registers, floating-point registers, status registers, and an instruction pointer register). Some registers may be general-purpose registers, while other registers may be specific to the design of processor 102.
In some embodiments, processor 102 is coupled to a processor bus 110 to transmit communication signals, such as address, data, or control signals, between processor 102 and other components in system 100. In one embodiment, system 100 uses an exemplary 'hub' system architecture, including a memory controller hub 116 and an input/output (I/O) controller hub 130. Memory controller hub 116 facilitates communication between memory devices and other components of system 100, while the I/O controller hub (ICH) 130 provides connections to I/O devices via a local I/O bus.
In one embodiment, the logic of memory controller hub 116 is integrated within the processor.
Memory device 120 may be a dynamic random access memory (DRAM) device, a static random access memory (SRAM) device, a flash memory device, a phase-change memory device, or some other memory device having suitable performance to serve as process memory. In one embodiment, memory device 120 can operate as system memory for system 100, storing data 122 and instructions 121 for use when the one or more processors 102 execute an application or process. Memory controller hub 116 also couples with an optional external graphics processor 112, which may communicate with the one or more graphics processors 108 in processors 102 to perform graphics and media operations.
In some embodiments, ICH 130 enables peripherals to connect to memory device 120 and processor 102 via a high-speed I/O bus. The I/O peripherals include, but are not limited to, an audio controller 146, a firmware interface 128, a wireless transceiver 126 (e.g., Wi-Fi, Bluetooth), a data storage device 124 (e.g., hard disk drive, flash memory, etc.), and a legacy I/O controller 140 for coupling legacy (e.g., Personal System 2 (PS/2)) devices to the system. One or more Universal Serial Bus (USB) controllers 142 connect input devices, such as keyboard and mouse 144 combinations. A network controller 134 may also couple to ICH 130. In some embodiments, a high-performance network controller (not shown) couples to processor bus 110. It will be appreciated that the system 100 shown is exemplary and not limiting, as other types of data processing systems that are differently configured may also be used. For example, I/O controller hub 130 may be integrated within the one or more processors 102, or memory controller hub 116 and I/O controller hub 130 may be integrated into a discrete external graphics processor, such as the external graphics processor 112.
Figure 2 is a block diagram of an embodiment of a processor 200 having one or more processor cores 202A-202N, an integrated memory controller 214, and an integrated graphics processor 208. Elements of Figure 2 having the same reference numbers (or names) as elements of any other figure herein may operate or function in any manner similar to that described elsewhere herein, but are not limited to such. Processor 200 may include additional cores up to and including additional core 202N, represented by the dashed boxes. Each of processor cores 202A-202N includes one or more internal cache units 204A-204N. In some embodiments, each processor core also has access to one or more shared cache units 206.
The internal cache units 204A-204N and shared cache units 206 represent a cache memory hierarchy within processor 200. The cache memory hierarchy may include at least one level of instruction and data cache within each processor core and one or more levels of shared mid-level cache, such as a level 2 (L2), level 3 (L3), level 4 (L4), or other level of cache, where the highest level of cache before external memory is classified as the LLC. In some embodiments, cache coherence logic maintains coherence between the various cache units 206 and 204A-204N.
In some embodiments, processor 200 may also include a set of one or more bus controller units 216 and a system agent core 210. The one or more bus controller units 216 manage a set of peripheral buses, such as one or more peripheral component interconnect buses (e.g., PCI, PCI Express).
The system agent core 210 provides management functionality for the various processor components. In some embodiments, system agent core 210 includes one or more integrated memory controllers 214 to manage access to various external memory devices (not shown).
In some embodiments, one or more of processor cores 202A-202N include support for simultaneous multi-threading. In such an embodiment, system agent core 210 includes components for coordinating and operating cores 202A-202N during multi-threaded processing. System agent core 210 may additionally include a power control unit (PCU), which includes logic and components to regulate the power states of processor cores 202A-202N and graphics processor 208.
In some embodiments, processor 200 additionally includes graphics processor 208 to execute graphics processing operations. In some embodiments, graphics processor 208 couples with the set of shared cache units 206 and the system agent core 210, including the one or more integrated memory controllers 214. In some embodiments, a display controller 211 is coupled with graphics processor 208 to drive graphics processor output to one or more coupled displays. In some embodiments, display controller 211 may be a separate module coupled with the graphics processor via at least one interconnect, or may be integrated within graphics processor 208 or system agent core 210.
In some embodiments, a ring-based interconnect unit 212 is used to couple the internal components of processor 200. However, an alternative interconnect unit may be used, such as a point-to-point interconnect, a switched interconnect, or other techniques, including techniques well known in the art. In some embodiments, graphics processor 208 couples with ring interconnect 212 via an I/O link 213.
The exemplary I/O link 213 represents at least one of multiple varieties of I/O interconnects, including an on-package I/O interconnect that facilitates communication between various processor components and a high-performance embedded memory module 218, such as an eDRAM module. In some embodiments, each of processor cores 202A-202N and graphics processor 208 use embedded memory module 218 as a shared last-level cache.
In some embodiments, processor cores 202A-202N are homogeneous cores executing the same instruction set architecture. In another embodiment, processor cores 202A-202N are heterogeneous in terms of instruction set architecture (ISA), where one or more of processor cores 202A-202N execute a first instruction set, while at least one of the other cores executes a subset of the first instruction set or a different instruction set. In one embodiment, processor cores 202A-202N are heterogeneous in terms of microarchitecture, where one or more cores having relatively higher power consumption couple with one or more cores having lower power consumption. Additionally, processor 200 can be implemented on one or more chips or as an SoC integrated circuit having the illustrated components, in addition to other components.
Figure 3 is a block diagram of a graphics processor 300, which may be a discrete graphics processing unit or may be a graphics processor integrated with a plurality of processing cores. In some embodiments, the graphics processor communicates via a memory-mapped I/O interface to registers on the graphics processor and with commands placed into processor memory. In some embodiments, graphics processor 300 includes a memory interface 314 to access memory.
Memory interface 314 can be an interface to local memory, one or more internal caches, one or more shared external caches, and/or to system memory.
In some embodiments, graphics processor 300 also includes a display controller 302 to drive display output data to a display device 320. Display controller 302 includes hardware for one or more overlay planes for the display and the composition of multiple layers of video or user interface elements. In some embodiments, graphics processor 300 includes a video codec engine 306 to encode, decode, or transcode media to, from, or between one or more media encoding formats, including, but not limited to, Moving Picture Experts Group (MPEG) formats (such as MPEG-2), Advanced Video Coding (AVC) formats (such as H.264/MPEG-4 AVC), the Society of Motion Picture & Television Engineers (SMPTE) 421M/VC-1 format, and Joint Photographic Experts Group (JPEG) formats (such as JPEG and Motion JPEG (MJPEG) formats).
In some embodiments, graphics processor 300 includes a block image transfer (BLIT) engine 304 to perform two-dimensional (2D) rasterizer operations including, for example, bit-boundary block transfers. However, in one embodiment, 2D graphics operations are performed using one or more components of graphics processing engine (GPE) 310. In some embodiments, graphics processing engine 310 is a compute engine for performing graphics operations, including three-dimensional (3D) graphics operations and media operations.
In some embodiments, GPE 310 includes a 3D pipeline 312 for performing 3D operations, such as rendering three-dimensional images and scenes using processing functions that act upon 3D primitive shapes (e.g., rectangles, triangles, etc.). The 3D pipeline 312 includes programmable and fixed-function elements that perform various tasks within the elements and/or spawn execution threads to a 3D/media subsystem 315. While 3D pipeline 312 can be used to perform media operations, an embodiment of GPE 310 also includes a media pipeline 316 that is specifically used to perform media operations, such as video post-processing and image enhancement.
In some embodiments, media pipeline 316 includes fixed-function or programmable logic units to perform one or more specialized media operations, such as video decode acceleration, video de-interlacing, and video encode acceleration, in place of, or on behalf of, video codec engine 306. In some embodiments, media pipeline 316 additionally includes a thread spawning unit to spawn threads for execution on 3D/media subsystem 315. The spawned threads perform computations for the media operations on one or more graphics execution units included in 3D/media subsystem 315.
In some embodiments, 3D/media subsystem 315 includes logic for executing threads spawned by 3D pipeline 312 and media pipeline 316. In one embodiment, the pipelines send thread execution requests to 3D/media subsystem 315, which includes thread dispatch logic for arbitrating and dispatching the various requests to available thread execution resources. The execution resources include an array of graphics execution units to process the 3D threads and media threads. In some embodiments, 3D/media subsystem 315 includes one or more internal caches for thread instructions and data.
In some embodiments, the subsystem also includes shared memory, including registers and addressable memory, to share data between threads and to store output data.
3D/media processing
Figure 4 is a block diagram of a graphics processing engine 410 of a graphics processor in accordance with some embodiments. In one embodiment, GPE 410 is a version of the GPE 310 shown in Figure 3. Elements of Figure 4 having the same reference numbers (or names) as elements of any other figure herein may operate or function in any manner similar to that described elsewhere herein, but are not limited to such.
In some embodiments, GPE 410 couples with a command streamer 403, which provides a command stream to the GPE 3D pipeline 412 and media pipeline 416. In some embodiments, command streamer 403 is coupled to memory, which can be system memory, or one or more of internal cache memory and shared cache memory. In some embodiments, command streamer 403 receives commands from the memory and sends the commands to 3D pipeline 412 and/or media pipeline 416. The commands are directives fetched from a ring buffer, which stores commands for the 3D pipeline 412 and the media pipeline 416. In one embodiment, the ring buffer can additionally include batch command buffers storing batches of multiple commands. 3D pipeline 412 and media pipeline 416 process the commands by performing operations via logic within the respective pipelines or by dispatching one or more execution threads to an execution unit array 414. In some embodiments, execution unit array 414 is scalable, such that the array includes a variable number of execution units based on the target power and performance level of GPE 410.
In some embodiments, a sampling engine 430 couples with memory (e.g., cache memory or system memory) and execution unit array 414. In some embodiments, sampling engine 430 provides a memory access mechanism for execution unit array 414 that allows execution unit array 414 to read graphics and media data from memory. In some embodiments, sampling engine 430 includes logic to perform specialized image sampling operations for media.
In some embodiments, the specialized media sampling logic in sampling engine 430 includes a denoise/de-interlace module 432, a motion estimation module 434, and an image scaling and filtering module 436. In some embodiments, denoise/de-interlace module 432 includes logic to perform one or more of a denoise or a de-interlace algorithm on decoded video data. The de-interlace logic combines alternating fields of interlaced video content into a single frame of video. The denoise logic reduces or removes data noise from video and image data. In some embodiments, the denoise and de-interlace logic is motion adaptive and uses spatial or temporal filtering based on the amount of motion detected in the video data. In some embodiments, denoise/de-interlace module 432 includes dedicated motion detection logic (e.g., within motion estimation engine 434).
In some embodiments, motion estimation engine 434 provides hardware acceleration for video operations by performing video acceleration functions, such as motion vector estimation and prediction, on video data. The motion estimation engine determines motion vectors that describe the transformation of image data between successive video frames. In some embodiments, a graphics processor media codec uses video motion estimation engine 434 to perform operations on video at the macroblock level that may otherwise be too computationally intensive to perform with a general-purpose processor. In some embodiments, motion estimation engine 434 is generally available to graphics processor components to assist with video decode and processing functions that are sensitive or adaptive to the direction or magnitude of motion within video data.
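As an illustration of the motion-vector search such an engine accelerates, the following is a minimal full-search block-matching sketch using the sum of absolute differences (SAD); the frame layout, block size, and search range are assumptions for illustration, not the engine's actual algorithm.

# Hedged sketch: full-search block matching with sum of absolute
# differences (SAD). Frames are 2-D lists of luma samples; this is one
# classic formulation of motion estimation, not the hardware's algorithm.
# The caller is assumed to pass a block position (cx, cy) that lies fully
# inside the current frame.

def sad(cur, ref, cx, cy, rx, ry, n):
    """SAD between an n x n block at (cx, cy) in cur and (rx, ry) in ref."""
    return sum(abs(cur[cy + j][cx + i] - ref[ry + j][rx + i])
               for j in range(n) for i in range(n))

def motion_vector(cur, ref, cx, cy, n=8, search=4):
    """Best (dx, dy) for the block at (cx, cy), within +/- search pixels."""
    h, w = len(ref), len(ref[0])
    best = (0, 0, float("inf"))
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            rx, ry = cx + dx, cy + dy
            if 0 <= rx <= w - n and 0 <= ry <= h - n:
                cost = sad(cur, ref, cx, cy, rx, ry, n)
                if cost < best[2]:
                    best = (dx, dy, cost)
    return best[:2]  # motion vector describing the block's displacement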
In some embodiments, image scaling and filtering module 436 performs image processing operations to enhance the visual quality of generated images and video. In some embodiments, scaling and filtering module 436 processes image and video data during the sampling operation before providing the data to execution unit array 414.
In some embodiments, GPE 410 includes a data port 444, which provides an additional mechanism for graphics subsystems to access memory. In some embodiments, data port 444 facilitates memory access for operations including render target writes, constant buffer reads, scratch memory space reads/writes, and media surface accesses. In some embodiments, data port 444 includes cache memory space to cache accesses to memory. The cache memory can be a single data cache or separated into multiple caches for the multiple subsystems that access memory via the data port (e.g., a render buffer cache, a constant buffer cache, etc.). In some embodiments, threads executing on an execution unit in execution unit array 414 communicate with the data port by exchanging messages via a data distribution interconnect that couples each of the subsystems of GPE 410.
Execution units
Figure 5 is a block diagram of another embodiment of a graphics processor 500. Elements of Figure 5 having the same reference numbers (or names) as elements of any other figure herein may operate or function in any manner similar to that described elsewhere herein, but are not limited to such.
In some embodiments, graphics processor 500 includes a ring interconnect 502, a pipeline front end 504, a media engine 537, and graphics cores 580A-580N. In some embodiments, ring interconnect 502 couples the graphics processor to other processing units, including other graphics processors or one or more general-purpose processor cores. In some embodiments, the graphics processor is one of many processors integrated within a multi-core processing system.
In some embodiments, graphics processor 500 receives batches of commands via ring interconnect 502. The incoming commands are interpreted by a command streamer 503 in the pipeline front end 504. In some embodiments, graphics processor 500 includes scalable execution logic to perform 3D geometry processing and media processing via the graphics core(s) 580A-580N. For 3D geometry processing commands, command streamer 503 supplies the commands to geometry pipeline 536. For at least some media processing commands, command streamer 503 supplies the commands to a video front end 534, which couples with media engine 537. In some embodiments, media engine 537 includes a video quality engine (VQE) 530 for video and image post-processing and a multi-format encode/decode (MFX) engine 533 to provide hardware-accelerated encoding and decoding of media data.
In some embodiments, geometry pipeline 536 and media engine 537 each generate execution threads for the thread execution resources provided by at least one graphics core 580A.
In some embodiments, graphics processor 500 includes scalable thread execution resources in the form of modular cores 580A-580N (sometimes referred to as core slices), each having multiple sub-cores 550A-550N, 560A-560N (sometimes referred to as sub-slices). In some embodiments, graphics processor 500 can have any number of graphics cores 580A through 580N. In some embodiments, graphics processor 500 includes a graphics core 580A having at least a first sub-core 550A and a second sub-core 560A. In other embodiments, the graphics processor is a low-power processor with a single sub-core (e.g., 550A). In some embodiments, graphics processor 500 includes multiple graphics cores 580A-580N, each including a set of first sub-cores 550A-550N and a set of second sub-cores 560A-560N. Each sub-core in the set of first sub-cores 550A-550N includes at least a first set of execution units 552A-552N and media/texture samplers 554A-554N. Each sub-core in the set of second sub-cores 560A-560N includes at least a second set of execution units 562A-562N and samplers 564A-564N. In some embodiments, each sub-core 550A-550N, 560A-560N shares a set of shared resources 570A-570N. In some embodiments, the shared resources include shared cache memory and pixel operation logic. Other shared resources may also be included in the various embodiments of the graphics processor.
Figure 6 illustrates thread execution logic 600 including an array of processing elements employed in some embodiments of a GPE. Elements of Figure 6 having the same reference numbers (or names) as elements of any other figure herein may operate or function in any manner similar to that described elsewhere herein, but are not limited to such.
In some embodiments, thread execution logic 600 includes a pixel shader 602, a thread dispatcher 604, an instruction cache 606, a scalable execution unit array including a plurality of execution units 608A-608N, a sampler 610, a data cache 612, and a data port 614. In one embodiment, the included components are interconnected via an interconnect fabric that links to each of the components. In some embodiments, thread execution logic 600 includes one or more connections to memory (e.g., system memory or cache memory) through one or more of instruction cache 606, data port 614, sampler 610, and execution unit array 608A-608N. In some embodiments, each execution unit (e.g., 608A) is an individual vector processor capable of executing multiple simultaneous threads and processing multiple data elements in parallel for each thread. In some embodiments, execution unit array 608A-608N includes any number of individual execution units.
In some embodiments, execution unit array 608A-608N is primarily used to execute "shader" programs. In some embodiments, the execution units in array 608A-608N execute an instruction set that includes native support for many standard 3D graphics shader instructions, such that shader programs from graphics libraries (e.g., Direct 3D and OpenGL) are executed with minimal translation.
The execution units support vertex and geometry processing (e.g., vertex programs, geometry programs, vertex shaders), pixel processing (e.g., pixel shaders, fragment shaders), and general-purpose processing (e.g., compute and media shaders).
Each execution unit in execution unit array 608A-608N operates on an array of data elements. The number of data elements is the "execution size," or the number of channels for the instruction. An execution channel is a logical unit for data element access, masking, and flow control within instructions. The number of channels may be independent of the number of physical arithmetic logic units (ALUs) or floating-point units (FPUs) for a particular graphics processor. In some embodiments, execution units 608A-608N support integer and floating-point data types.
The execution unit instruction set includes a number of single instruction multiple data (SIMD) instructions. The various data elements can be stored as a packed data type in a register, and the execution unit will process the various elements based on the data size of the elements. For example, when operating on a 256-bit wide vector, the 256 bits of the vector are stored in a register, and the execution unit operates on the vector as four separate 64-bit packed data elements (Quad-Word (QW) size data elements), eight separate 32-bit packed data elements (Double Word (DW) size data elements), sixteen separate 16-bit packed data elements (Word (W) size data elements), or thirty-two separate 8-bit data elements (byte (B) size data elements). However, different vector widths and register sizes are possible.
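To make the lane arithmetic concrete: the same 256-bit (32-byte) register contents can be viewed as 4, 8, 16, or 32 lanes depending on the element size. The sketch below uses Python's struct module purely to illustrate that packing; it is not how the hardware represents registers.

import struct

# A 256-bit register is 32 bytes. Fill it with a recognizable pattern.
reg = bytes(range(32))

# The same 32 bytes, reinterpreted at different packed element widths:
qw = struct.unpack("<4Q", reg)   # 4 lanes of 64-bit Quad-Words
dw = struct.unpack("<8I", reg)   # 8 lanes of 32-bit Double Words
w  = struct.unpack("<16H", reg)  # 16 lanes of 16-bit Words
b  = struct.unpack("<32B", reg)  # 32 lanes of 8-bit bytes

for name, lanes in [("QW", qw), ("DW", dw), ("W", w), ("B", b)]:
    print(f"{name}: {len(lanes)} lanes")  # QW: 4 lanes ... B: 32 lanes

# A SIMD instruction then applies its operation once per lane; e.g., a
# DW-sized add over all eight 32-bit lanes (with 32-bit wraparound):
other = struct.unpack("<8I", bytes([1] * 32))
simd_add_dw = tuple((a + c) & 0xFFFFFFFF for a, c in zip(dw, other))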
One or more internal instruction caches (e.g., 606) are included in thread execution logic 600 to cache thread instructions for the execution units. In some embodiments, one or more data caches (e.g., 612) are included to cache thread data during thread execution. In some embodiments, a sampler 610 is included to provide texture sampling for 3D operations and media sampling for media operations. In some embodiments, sampler 610 includes specialized texture or media sampling functionality to process texture or media data during the sampling process before providing the sampled data to an execution unit.
During execution, the graphics and media pipelines send thread initiation requests to thread execution logic 600 via thread spawning and dispatch logic. In some embodiments, thread execution logic 600 includes a local thread dispatcher 604 that arbitrates thread initiation requests from the graphics and media pipelines and instantiates the requested threads on one or more execution units 608A-608N. For example, the geometry pipeline (e.g., 536 of Figure 5) dispatches vertex processing, tessellation, or geometry processing threads to thread execution logic 600 (Figure 6). In some embodiments, thread dispatcher 604 can also process runtime thread spawning requests from the executing shader programs.
Once a group of geometric objects has been processed and rasterized into pixel data, pixel shader 602 is invoked to further compute output information and cause results to be written to output surfaces (e.g., color buffers, depth buffers, stencil buffers, etc.). In some embodiments, pixel shader 602 calculates the values of the various vertex attributes that are to be interpolated across the rasterized object. In some embodiments, pixel shader 602 then executes a pixel shader program provided via an application programming interface (API).
To execute the pixel shader program, pixel shader 602 dispatches threads to an execution unit (e.g., 608A) via thread dispatcher 604. In some embodiments, pixel shader 602 uses texture sampling logic in sampler 610 to access texture data in texture maps stored in memory. Arithmetic operations on the texture data and the input geometry data compute pixel color data for each geometric fragment, or discard one or more pixels from further processing.
In some embodiments, data port 614 provides a memory access mechanism for thread execution logic 600 to output processed data to memory for processing on a graphics processor output pipeline. In some embodiments, data port 614 includes or couples to one or more cache memories (e.g., data cache 612) to cache data for memory access via the data port.
Figure 7 is a block diagram illustrating a graphics processor instruction format 700 in accordance with some embodiments. In one or more embodiments, the graphics processor execution units support an instruction set having instructions in multiple formats. Solid-line boxes illustrate the components that are generally included in an execution unit instruction, while dashed lines include components that are optional or included only in a subset of the instructions. In some embodiments, the instruction format 700 described and illustrated comprises macro-instructions, in that they are instructions supplied to the execution unit, as opposed to micro-operations resulting from instruction decode once the instruction is processed.
In some embodiments, the graphics processor execution units natively support instructions in a 128-bit format 710. A 64-bit compacted instruction format 730 is available for some instructions based on the selected instruction, the instruction options, and the number of operands. The native 128-bit format 710 provides access to all instruction options, while some options and operations are restricted in the 64-bit format 730. The native instructions available in the 64-bit format 730 vary by embodiment. In some embodiments, the instruction is compacted in part using a set of index values in an index field 713. The execution unit hardware references a set of compaction tables based on the index values and uses the compaction table outputs to reconstruct a native instruction in the 128-bit format 710.
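The compaction scheme can be pictured as table lookups: a few index bits in the 64-bit encoding select full-width bit patterns from compaction tables, which are then stitched into the 128-bit native form. The sketch below is a toy model of that idea; the table contents, field names, and field widths are invented for illustration and are not the hardware's actual layout.

# Toy model of instruction decompaction: a compact encoding carries small
# indices; hardware expands each index through a compaction table into a
# full-width field of the native instruction. Tables and widths here are
# made up for illustration only.

CONTROL_TABLE  = [0b1010110011110000, 0b0000111100001111]  # 16-bit patterns
DATATYPE_TABLE = [0b110010101100, 0b101010101010]          # 12-bit patterns

def decompact(opcode: int, ctrl_idx: int, dt_idx: int) -> int:
    """Rebuild a (toy) native instruction word from a compact encoding."""
    control = CONTROL_TABLE[ctrl_idx]    # expanded control bits
    datatype = DATATYPE_TABLE[dt_idx]    # expanded datatype bits
    # Concatenate opcode | control | datatype into one wide word:
    # opcode in bits 28+, control in bits 12-27, datatype in bits 0-11.
    return (opcode << 28) | (control << 12) | datatype

native = decompact(opcode=0x42, ctrl_idx=1, dt_idx=0)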
A data manipulation instruction can have a third source operand (e.g., SRC2 724), where the instruction opcode 712 determines the number of source operands. An instruction's last source operand can be an immediate (e.g., hard-coded) value passed with the instruction.
In some embodiments, the 128-bit instruction format 710 includes access/address mode information 726 specifying, for example, whether direct register addressing mode or indirect register addressing mode is used. When direct register addressing mode is used, the register address of one or more operands is directly provided by bits in the instruction 710.
In some embodiments, the 128-bit instruction format 710 includes an access/address mode field 726, which specifies an address mode and/or an access mode for the instruction. In one embodiment, the access mode is used to define a data access alignment for the instruction. Some embodiments support access modes including a 16-byte aligned access mode and a 1-byte aligned access mode, where the byte alignment of the access mode determines the access alignment of the instruction operands. For example, when in a first mode, the instruction 710 may use byte-aligned addressing for source and destination operands, and when in a second mode, the instruction 710 may use 16-byte-aligned addressing for all source and destination operands.
In one embodiment, the address mode portion of the access/address mode field 726 determines whether the instruction is to use direct or indirect addressing. When direct register addressing mode is used, bits in the instruction 710 directly provide the register address of one or more operands. When indirect register addressing mode is used, the register address of one or more operands may be computed based on an address register value and an address immediate field in the instruction.
In some embodiments, instructions are grouped based on opcode 712 bit-fields to simplify opcode decode 740. For an 8-bit opcode, bits 4, 5, and 6 allow the execution unit to determine the type of opcode. The precise opcode grouping shown is merely an example. In some embodiments, a move and logic opcode group 742 includes data movement and logic instructions (e.g., move (mov), compare (cmp)). In some embodiments, move and logic group 742 shares the five most significant bits (MSB), where move (mov) instructions are in the form of 0000xxxxb and logic instructions are in the form of 0001xxxxb. A flow control instruction group 744 (e.g., call, jump) includes instructions in the form of 0010xxxxb (e.g., 0x20). A miscellaneous instruction group 746 includes a mix of instructions, including synchronization instructions (e.g., wait, send) in the form of 0011xxxxb (e.g., 0x30). A parallel math instruction group 748 includes component-wise arithmetic instructions (e.g., add, multiply (mul)) in the form of 0100xxxxb (e.g., 0x40). The parallel math group 748 performs the arithmetic operations in parallel across data channels. The vector math group 750 includes arithmetic instructions (e.g., dp4) in the form of 0101xxxxb (e.g., 0x50). The vector math group performs arithmetic such as dot product calculations on vector operands.
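This grouping lends itself to a simple decode step. The following C++ sketch classifies an 8-bit opcode by its four most significant bits, matching the bit patterns listed above; the enum names and classify function are illustrative assumptions, not part of any actual instruction set architecture.

    #include <cstdint>

    enum class OpcodeGroup {
        MoveLogic,     // 0000xxxxb / 0001xxxxb (group 742)
        FlowControl,   // 0010xxxxb, e.g., 0x20 (group 744)
        Miscellaneous, // 0011xxxxb, e.g., 0x30 (group 746)
        ParallelMath,  // 0100xxxxb, e.g., 0x40 (group 748)
        VectorMath,    // 0101xxxxb, e.g., 0x50 (group 750)
        Unknown
    };

    OpcodeGroup classify(std::uint8_t opcode) {
        switch (opcode >> 4) {           // inspect the high bits of the 8-bit opcode
            case 0x0:
            case 0x1: return OpcodeGroup::MoveLogic;     // mov/cmp share 5 MSBs
            case 0x2: return OpcodeGroup::FlowControl;   // call, jump
            case 0x3: return OpcodeGroup::Miscellaneous; // wait, send
            case 0x4: return OpcodeGroup::ParallelMath;  // add, mul
            case 0x5: return OpcodeGroup::VectorMath;    // dp4
            default:  return OpcodeGroup::Unknown;
        }
    }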
Graphics pipeline
Figure 8 is a block diagram of another embodiment of a graphics processor 800. Elements of Figure 8 having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such.
In some embodiments, graphics processor 800 includes a graphics pipeline 820, a media pipeline 830, a display engine 840, thread execution logic 850, and a render output pipeline 870. In some embodiments, graphics processor 800 is a graphics processor within a multi-core processing system that includes one or more general-purpose processing cores. The graphics processor is controlled by register writes to one or more control registers (not shown) or via commands issued to graphics processor 800 over a ring interconnect 802. In some embodiments, ring interconnect 802 couples graphics processor 800 to other processing components, such as other graphics processors or general-purpose processors. Commands from ring interconnect 802 are interpreted by a command streamer 803, which supplies instructions to individual components of graphics pipeline 820 or media pipeline 830.
In some embodiments, command streamer 803 directs the operation of a vertex fetcher 805, which reads vertex data from memory and executes vertex-processing commands provided by command streamer 803. In some embodiments, vertex fetcher 805 provides vertex data to a vertex shader 807, which performs coordinate space transformation and lighting operations on each vertex. In some embodiments, vertex fetcher 805 and vertex shader 807 execute vertex-processing instructions by dispatching execution threads to execution units 852A, 852B via a thread dispatcher 831.
In some embodiments, execution units 852A, 852B are an array of vector processors having an instruction set for performing graphics and media operations. In some embodiments, execution units 852A, 852B have an attached L1 cache 851 that is specific to each array or shared between the arrays. The cache can be configured as a data cache, an instruction cache, or a single cache that is partitioned to contain data and instructions in different partitions.
In some embodiments, graphics pipeline 820 includes tessellation components to perform hardware-accelerated tessellation of 3D objects. In some embodiments, a programmable hull shader 811 configures the tessellation operations. A programmable domain shader 817 provides back-end evaluation of the tessellation output. A tessellator 813 operates at the direction of hull shader 811 and contains special-purpose logic to generate a set of detailed geometric objects based on a coarse geometric model that is provided as input to graphics pipeline 820. In some embodiments, if tessellation is not used, tessellation components 811, 813, 817 can be bypassed.
In some embodiments, complete geometric objects can be processed by a geometry shader 819 via one or more threads dispatched to execution units 852A, 852B, or can proceed directly to the clipper 829. In some embodiments, the geometry shader operates on entire geometric objects, rather than vertices or patches of vertices as in previous stages of the graphics pipeline. If the tessellation is disabled, the geometry shader 819 receives input from the vertex shader 807. In some embodiments, geometry shader 819 is programmable by a geometry shader program to perform geometry tessellation if the tessellation units are disabled.
Before rasterization, a clipper 829 processes vertex data. The clipper 829 may be a fixed-function clipper or a programmable clipper having clipping and geometry shader functions. In some embodiments, a rasterizer and depth test component 873 in the render output pipeline 870 dispatches pixel shaders to convert the geometric objects into their per-pixel representations.
In some embodiments, pixel shader logic is included in thread execution logic 850. In some embodiments, an application can bypass rasterizer 873 and access un-rasterized vertex data via a stream out unit 823.
Graphics processor 800 has an interconnect bus, interconnect fabric, or some other interconnect mechanism that allows data and message passing among the major components of the graphics processor. In some embodiments, execution units 852A, 852B and associated cache(s) 851, texture and media sampler 854, and texture/sampler cache 858 interconnect via a data port 856 to perform memory access and communicate with render output pipeline components of the processor. In some embodiments, sampler 854, caches 851, 858, and execution units 852A, 852B each have separate memory access paths.
In some embodiments, render output pipeline 870 contains a rasterizer and depth test component 873 that converts vertex-based objects into an associated pixel-based representation. In some embodiments, the rasterizer logic includes a windower/masker unit to perform fixed-function triangle and line rasterization. An associated render cache 878 and depth cache 879 are also available in some embodiments. A pixel operations component 877 performs pixel-based operations on the data, although in some instances pixel operations associated with 2D operations (e.g., bit-block image transfers with blending) are performed by the 2D engine 841, or substituted at display time by the display controller 843 using overlay display planes. In some embodiments, a shared L3 cache 875 is available to all graphics components, allowing the sharing of data without the use of main system memory.
In some embodiments, graphics processor media pipeline 830 includes a media engine 837 and a video front end 834. In some embodiments, video front end 834 receives pipeline commands from command streamer 803. In some embodiments, media pipeline 830 includes a separate command streamer. In some embodiments, video front end 834 processes media commands before sending the commands to media engine 837. In some embodiments, media engine 837 includes thread spawning functionality to spawn threads for dispatch to thread execution logic 850 via thread dispatcher 831.
In some embodiments, graphics processor 800 includes a display engine 840. In some embodiments, display engine 840 is external to processor 800 and couples with the graphics processor via ring interconnect 802, or some other interconnect bus or fabric. In some embodiments, display engine 840 includes a 2D engine 841 and a display controller 843. In some embodiments, display engine 840 contains special-purpose logic capable of operating independently of the 3D pipeline. In some embodiments, display controller 843 couples with a display device (not shown), which may be a system-integrated display device (as in a laptop computer) or an external display device attached via a display device connector.
In some embodiments, graphics pipeline 820 and media pipeline 830 are configurable to perform operations based on multiple graphics and media programming interfaces and are not specific to any one application programming interface (API). In some embodiments, driver software for the graphics processor translates API calls that are specific to a particular graphics or media library into commands that can be processed by the graphics processor.
In some embodiments, support is provided for the Open Graphics Library (OpenGL) and Open Computing Language (OpenCL) from the Khronos Group, the Direct3D library from the Microsoft Corporation, or support may be provided to both OpenGL and D3D. Support may also be provided for the Open Source Computer Vision Library (OpenCV). A future API with a compatible 3D pipeline would also be supported if a mapping can be made from the pipeline of the future API to the pipeline of the graphics processor.
Graphics pipeline programming
Figure 9A is a block diagram illustrating a graphics processor command format 900 according to some embodiments. Figure 9B is a block diagram illustrating a graphics processor command sequence 910 according to an embodiment. The solid lined boxes in Figure 9A illustrate the components that are generally included in a graphics command, while the dashed lines include components that are optional or that are only included in a subset of the graphics commands. The exemplary graphics processor command format 900 of Figure 9A includes data fields to identify a target client 902 of the command, a command operation code (opcode) 904, and the relevant data 906 for the command. A sub-opcode 905 and a command size 908 are also included in some commands.
In some embodiments, client 902 specifies the client unit of the graphics device that processes the command data. In some embodiments, a graphics processor command parser examines the client field of each command to condition the further processing of the command and route the command data to the appropriate client unit. In some embodiments, the graphics processor client units include a memory interface unit, a render unit, a 2D unit, a 3D unit, and a media unit. Each client unit has a corresponding processing pipeline that processes the commands. Once the command is received by the client unit, the client unit reads the opcode 904 and, if present, the sub-opcode 905 to determine the operation to perform. The client unit performs the command using information in data field 906. For some commands, an explicit command size 908 is expected to specify the size of the command. In some embodiments, the command parser automatically determines the size of at least some of the commands based on the command opcode. In some embodiments, commands are aligned via multiples of a double word.
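A minimal C++ sketch of the command format just described follows. The field widths and encodings are assumptions chosen for illustration; the text specifies only which fields exist (client 902, opcode 904, sub-opcode 905, data 906, command size 908), not their sizes.

    #include <cstdint>
    #include <vector>

    // Hypothetical encoding of graphics processor command format 900.
    struct GfxCommand {
        std::uint8_t  client;      // target client unit (902)
        std::uint8_t  opcode;      // command operation code (904)
        std::uint8_t  subOpcode;   // optional sub-opcode (905)
        std::uint32_t commandSize; // explicit size in double words (908)
        std::vector<std::uint32_t> data; // relevant data for the command (906)
    };

    // Sketch of the parser step: route on the client field, then decode opcodes.
    void parseCommand(const GfxCommand& cmd) {
        switch (cmd.client) {
            case 0: /* memory interface unit */ break;
            case 1: /* render unit            */ break;
            case 2: /* 2D unit                */ break;
            case 3: /* 3D unit                */ break;
            case 4: /* media unit             */ break;
            default: /* unknown client: reject */ return;
        }
        // The selected client unit would next read cmd.opcode and cmd.subOpcode
        // to determine the operation, then execute using cmd.data.
    }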
The flow diagram in Figure 9B shows an exemplary graphics processor command sequence 910. In some embodiments, software or firmware of a data processing system that features an embodiment of a graphics processor uses a version of the command sequence shown to set up, execute, and terminate a set of graphics operations. A sample command sequence is shown and described for purposes of example only, as embodiments are not limited to these specific commands or to this command sequence. Moreover, the commands may be issued as a batch of commands in a command sequence, such that the graphics processor will process the sequence of commands in an at least partially concurrent manner.
In some embodiments, graphics processor command sequence 910 may begin with a pipeline flush command 912 to cause any active graphics pipeline to complete the currently pending commands for the pipeline. In some embodiments, 3D pipeline 922 and media pipeline 924 do not operate concurrently. The pipeline flush is performed to cause the active graphics pipeline to complete any pending commands. In response to a pipeline flush, the command parser for the graphics processor will pause command processing until the active drawing engines complete pending operations and the relevant read caches are invalidated. Optionally, any data in the render cache that is marked 'dirty' can be flushed to memory. In some embodiments, pipeline flush command 912 can be used for pipeline synchronization or before placing the graphics processor into a low power state.
In some embodiments, a pipeline select command 913 is used when a command sequence requires the graphics processor to explicitly switch between pipelines. In some embodiments, a pipeline select command 913 is required only once within an execution context before issuing pipeline commands, unless the context is to issue commands for both pipelines. In some embodiments, a pipeline flush command 912 is required immediately before a pipeline switch via the pipeline select command 913.
In some embodiments, a pipeline control command 914 configures a graphics pipeline for operation and is used to program the 3D pipeline 922 and the media pipeline 924. In some embodiments, pipeline control command 914 configures the pipeline state for the active pipeline. In one embodiment, the pipeline control command 914 is used for pipeline synchronization and to clear data from one or more cache memories within the active pipeline before processing a batch of commands.
In some embodiments, a return buffer state command 916 is used to configure a set of return buffers for the respective pipelines to write data. Some pipeline operations require the allocation, selection, or configuration of one or more return buffers into which the operations write intermediate data during processing. In some embodiments, the graphics processor also uses one or more return buffers to store output data and to perform cross-thread communication. In some embodiments, the return buffer state 916 includes selecting the size and number of return buffers to use for a set of pipeline operations.
The remaining commands in the command sequence differ based on the active pipeline used for the operations. Based on a pipeline determination 920, the command sequence is tailored to the 3D pipeline 922 beginning with the 3D pipeline state 930, or the media pipeline 924 beginning at the media pipeline state 940.
The commands for the 3D pipeline state 930 include 3D state setting commands for vertex buffer state, vertex element state, constant color state, depth buffer state, and other state variables that are to be configured before 3D primitive commands are processed. The values of these commands are determined at least in part based on the particular 3D API in use. In some embodiments, 3D pipeline state 930 commands are also able to selectively disable or bypass certain pipeline elements if those elements will not be used.
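The setup portion of this sequence can be pictured as a short driver routine. The following C++ sketch strings together the commands described above in the order Figure 9B implies; the emit helper and the command identifiers are purely hypothetical, since the text does not define concrete encodings.

    #include <cstdint>
    #include <vector>

    // Hypothetical command identifiers mirroring Figure 9B.
    enum Cmd : std::uint8_t {
        PIPELINE_FLUSH    = 0x01, // pipeline flush command 912
        PIPELINE_SELECT   = 0x02, // pipeline select command 913
        PIPELINE_CONTROL  = 0x03, // pipeline control command 914
        RETURN_BUF_STATE  = 0x04, // return buffer state command 916
        PIPELINE_STATE_3D = 0x05  // 3D pipeline state 930
    };

    void emit(std::vector<std::uint32_t>& ring, Cmd cmd, std::uint32_t arg = 0) {
        ring.push_back((static_cast<std::uint32_t>(cmd) << 24) | (arg & 0xFFFFFF));
    }

    // Assemble the setup sequence before submitting 3D primitives.
    void setup3DPipeline(std::vector<std::uint32_t>& ring) {
        emit(ring, PIPELINE_FLUSH);                // complete pending commands first
        emit(ring, PIPELINE_SELECT, /*3D=*/0);     // switch to the 3D pipeline
        emit(ring, PIPELINE_CONTROL);              // configure/synchronize the pipeline
        emit(ring, RETURN_BUF_STATE, /*count=*/4); // allocate return buffers
        emit(ring, PIPELINE_STATE_3D);             // vertex buffer, depth buffer state, etc.
    }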
In some embodiments, a 3D primitive 932 command is used to submit 3D primitives to be processed by the 3D pipeline. Commands and associated parameters that are passed to the graphics processor via the 3D primitive 932 command are forwarded to the vertex fetch function in the graphics pipeline. The vertex fetch function uses the 3D primitive 932 command data to generate vertex data structures. The vertex data structures are stored in one or more return buffers. In some embodiments, the 3D primitive 932 command is used to perform vertex operations on 3D primitives via vertex shaders. To process the vertex shaders, 3D pipeline 922 dispatches shader execution threads to graphics processor execution units.
In some embodiments, 3D pipeline 922 is triggered via an execute 934 command or event. In some embodiments, a register write triggers command execution. In some embodiments, execution is triggered via a 'go' or 'kick' command in the command sequence. In one embodiment, command execution is triggered using a pipeline synchronization command to flush the command sequence through the graphics pipeline. The 3D pipeline will perform geometry processing for the 3D primitives. Once the operations are complete, the resulting geometric objects are rasterized and the pixel engine colors the resulting pixels. Additional commands to control pixel shading and pixel back-end operations may also be included for those operations.
In some embodiments, graphics processor command sequence 910 follows the media pipeline 924 path when performing media operations. In general, the specific use and manner of programming for media pipeline 924 depends on the media or compute operations to be performed. Specific media decode operations may be offloaded to the media pipeline during media decode. In some embodiments, the media pipeline can also be bypassed, and media decode can be performed in whole or in part using resources provided by one or more general-purpose processing cores. In one embodiment, the media pipeline also includes elements for general-purpose graphics processor unit (GPGPU) operations, where the graphics processor is used to perform SIMD vector operations using computational shader programs that are not explicitly related to the rendering of graphics primitives.
In some embodiments, media pipeline 924 is configured in a similar manner to 3D pipeline 922. A set of media pipeline state commands 940 are dispatched or placed into a command queue before the media object commands 942. In some embodiments, media pipeline state commands 940 include data to configure the media pipeline elements that will be used to process the media objects. This includes data to configure the video decode and video encode logic within the media pipeline, such as the encode or decode format. In some embodiments, media pipeline state commands 940 also support the use of one or more pointers to "indirect" state elements that contain a batch of state settings.
In some embodiments, media object commands 942 supply pointers to media objects for processing by the media pipeline. The media objects include memory buffers containing the video data to be processed. In some embodiments, all media pipeline states must be valid before issuing a media object command 942. Once the pipeline state is configured and the media object commands 942 are queued, the media pipeline 924 is triggered via an execute command 944 or an equivalent execute event (e.g., a register write). Output from media pipeline 924 may then be post-processed by operations provided by the 3D pipeline 922 or the media pipeline 924. In some embodiments, GPGPU operations are configured and executed in a similar manner to media operations.
Graphics software architecture
Figure 10 illustrates an exemplary graphics software architecture for a data processing system 1000 according to some embodiments. In some embodiments, the software architecture includes a 3D graphics application 1010, an operating system 1020, and at least one processor 1030.
In some embodiments, processor 1030 includes a graphics processor 1032 and one or more general-purpose processor core(s) 1034. The graphics application 1010 and operating system 1020 each execute in the system memory 1050 of the data processing system.
In some embodiments, 3D graphics application 1010 contains one or more shader programs including shader instructions 1012. The shader language instructions may be in a high-level shader language, such as the High Level Shader Language (HLSL) or the OpenGL Shader Language (GLSL). The application also includes executable instructions 1014 in a machine language suitable for execution by the general-purpose processor core 1034. The application also includes graphics objects 1016 defined by vertex data.
In some embodiments, operating system 1020 may be an operating system from the Microsoft Corporation, a proprietary UNIX-like operating system, or an open source UNIX-like operating system using a variant of the Linux kernel. When the Direct3D API is in use, the operating system 1020 uses a front-end shader compiler 1024 to compile any shader instructions 1012 in HLSL into a lower-level shader language. The compilation may be a just-in-time (JIT) compilation, or the application can perform shader pre-compilation. In some embodiments, high-level shaders are compiled into low-level shaders during the compilation of the 3D graphics application 1010.
In some embodiments, user mode graphics driver 1026 contains a back-end shader compiler 1027 to convert the shader instructions 1012 into a hardware-specific representation. When the OpenGL API is in use, shader instructions 1012 in the GLSL high-level language are passed to the user mode graphics driver 1026 for compilation. In some embodiments, user mode graphics driver 1026 uses operating system kernel mode functions 1028 to communicate with a kernel mode graphics driver 1029. In some embodiments, kernel mode graphics driver 1029 communicates with graphics processor 1032 to dispatch commands and instructions.
IP core implementation
One or more aspects of at least one embodiment may be implemented by representative code stored on a machine-readable medium which represents and/or defines logic within an integrated circuit such as a processor. For example, the machine-readable medium may include instructions which represent various logic within the processor. When read by a machine, the instructions may cause the machine to fabricate the logic to perform the techniques described herein. Such representations, known as "IP cores," are reusable units of logic for an integrated circuit that may be stored on a tangible, machine-readable medium as a hardware model that describes the structure of the integrated circuit. The hardware model may be supplied to various customers or manufacturing facilities, which load the hardware model on fabrication machines that manufacture the integrated circuit. The integrated circuit may be fabricated such that the circuit performs operations described in association with any of the embodiments described herein.
Figure 11 is a block diagram illustrating an IP core development system 1100 that may be used to manufacture an integrated circuit to perform operations, according to an embodiment. The IP core development system 1100 may be used to generate modular, reusable designs that can be incorporated into a larger design or used to construct an entire integrated circuit (e.g., an SOC integrated circuit). A design facility 1130 can generate a software simulation 1110 of an IP core design in a high-level programming language (e.g., C/C++).
The software simulation 1110 can be used to design, test, and verify the behavior of the IP core. A register transfer level (RTL) design can then be created or synthesized from the simulation model 1110. The RTL design 1115 is an abstraction of the behavior of the integrated circuit that models the flow of digital signals between hardware registers, including the associated logic performed using the modeled digital signals. In addition to an RTL design 1115, lower-level designs at the logic level or transistor level may also be created, designed, or synthesized. Thus, the particular details of the initial design and simulation may vary.
The RTL design 1115 or equivalent may be further synthesized by the design facility into a hardware model 1120, which may be in a hardware description language (HDL) or some other representation of the physical design data. The HDL may be further simulated or tested to verify the IP core design. The IP core design can be stored for delivery to a 3rd party fabrication facility 1165 using non-volatile memory 1140 (e.g., hard disk, flash memory, or any non-volatile storage medium). Alternatively, the IP core design may be transmitted (e.g., via the Internet) over a wired connection 1150 or a wireless connection 1160. The fabrication facility 1165 may then fabricate an integrated circuit that is based at least in part on the IP core design. The fabricated integrated circuit can be configured to perform operations in accordance with at least one embodiment described herein.
Figure 12 is a block diagram illustrating an exemplary system on a chip integrated circuit 1200 that may be fabricated using one or more IP cores, according to an embodiment. The exemplary integrated circuit includes one or more application processors 1205 (e.g., CPUs) and at least one graphics processor 1210, and may additionally include an image processor 1215 and/or a video processor 1220, any of which may be a modular IP core from the same or multiple different design facilities. The integrated circuit includes peripheral or bus logic including a USB controller 1225, a UART controller 1230, an SPI/SDIO controller 1235, and an I2S/I2C controller 1240. Additionally, the integrated circuit can include a display device 1245 coupled to one or more of a high-definition multimedia interface (HDMI) controller 1250 and a mobile industry processor interface (MIPI) display interface 1255. Storage may be provided by a flash memory subsystem 1260, including flash memory and a flash memory controller. A memory interface may be provided via a memory controller 1265 for access to SDRAM or SRAM memory devices. Some integrated circuits additionally include an embedded security engine 1270.
Additionally, other logic and circuits may be included in the processor of integrated circuit 1200, including additional graphics processors/cores, peripheral interface controllers, or general-purpose processor cores.
Apparatus and method for non-uniform frame buffer rasterization
To provide more efficient rasterization, one embodiment of the invention comprises hardware which efficiently reduces the resolution with which an image is rendered within specified regions. In contrast to current graphics processing units (GPUs), where all pixels of the rendered image have the same predefined pixel spacing over the image, the non-uniform frame buffer rasterizer described below allows the pixel spacing to change over the image, in a manner that still makes the rasterization very efficient, with only small modifications required to achieve the desired result.
By changing the rasterized pixel spacing, the number of shader executions and frame buffer accesses (depth and color) is reduced substantially.
The underlying principles of the invention may be implemented to improve rasterization efficiency in a variety of different applications. For example, as mentioned above, in current virtual reality rendering systems the pixels are more sparse towards the edges of the image, which means that more pixels are rendered than will be used to create the final image. Consequently, when used for virtual reality rendering, embodiments of the invention may be used to reduce the rendering resolution towards the edges and corners of the intermediate image. It should be noted, however, that the underlying principles of the invention are not limited to virtual reality or any particular application.
As used herein, an "image pixel" (or simply "pixel") is a pixel of the rendered image. A "scale pixel" (SP) is a rectangle enclosing one or more image pixels. The "scale factor" is the size ratio between a scale pixel and an image pixel. A "tile" is a rectangular region containing a fixed number (W×H) of scale pixels. The number of image pixels that a tile covers depends on the scale factor.
A tile may be a memory page or any other size-dependent buffer region. In current systems, there is a one-to-one correspondence between scale pixels and the pixels in the image. By comparison, in one embodiment of the invention, a tile may correspond to W×H image pixels, 2W×2H image pixels, or 4W×4H image pixels, corresponding, for example, to scale factors of 1×1, 2×2, or 4×4, respectively. As discussed below, the fixed-function rasterizer can be adapted to efficiently handle such non-uniform frame buffers, and the final buffer can be filtered efficiently at high quality. Using these techniques, it is possible to rasterize with an image pixel density of 1/(4×4)=1/16 towards the edges, making the rendering of such tiles approximately 16 times faster than rendering the image pixels directly. In one embodiment, the programmer may determine the scale factors over the image.
In the following discussion, certain assumptions regarding tile sizes, scale factors, and other variables are made for the purpose of explanation. It should be understood, however, that the underlying principles of the invention may be implemented using other sizes, scale factors, and so on. In the discussion below, it is assumed that a tile has 4×4 scale pixels (SPs) and that the possible scale factors are 1×1, 2×2, and 4×4, which means that a tile can correspond to 4×4 image pixels, 8×8 image pixels, or 16×16 image pixels, respectively. In other embodiments, asymmetric scale factors, such as 2×4 and 4×1, may also be used.
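The relationship between tiles, scale pixels, and image pixels can be captured in a few lines of C++. The sketch below assumes the 4×4-scale-pixel tile and the scale factors of the running example; the types are illustrative only.

    // Footprint of one tile in image pixels, for the running example of a
    // tile holding W x H = 4 x 4 scale pixels.
    struct ScaleFactor { int sx, sy; };   // e.g., {1,1}, {2,2}, {4,4}, or {2,4}

    struct TileFootprint { int widthPx, heightPx; };

    TileFootprint imagePixelsCovered(int tileW, int tileH, ScaleFactor sf) {
        // Each scale pixel encloses sf.sx * sf.sy image pixels.
        return { tileW * sf.sx, tileH * sf.sy };
    }

    // imagePixelsCovered(4, 4, {1,1}) -> 4x4 image pixels
    // imagePixelsCovered(4, 4, {2,2}) -> 8x8 image pixels
    // imagePixelsCovered(4, 4, {4,4}) -> 16x16 image pixels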
Figure 15 illustrates a rendering engine 870, in accordance with one embodiment of the invention, which includes a memory 1530 for storing graphics data (e.g., a frame buffer or set of frame buffers) and a rasterizer unit 1500 for performing non-uniform rasterization as described herein. The rendering engine 870 may be implemented within a graphics processor architecture such as those described above with respect to Figures 1-12. However, the underlying principles of the invention are not limited to any particular GPU architecture.
A brief overview of the rendering engine 870 will first be provided, followed by a detailed description of the operation of the rendering engine at the various levels.
In one embodiment, the triangles or other polygons 1510 defining the surfaces to be rendered are generated by the front end of the graphics processor and input to the rasterizer unit 1500. Non-uniform rasterization logic 1512 first tests each tile to determine whether the square, rectangle, or other shape defined by the tile overlaps the triangle being rendered. If so, the non-uniform rasterization logic 1512 proceeds with per-sample testing of these tiles. For example, the non-uniform edge functions discussed below may be used.
In one embodiment, non-uniform rasterizer 1514 rasterizes at different resolutions for different portions of the image (e.g., using different scale factors for different tiles and portions of tiles). The particular manner in which the non-uniform rasterization is performed, including the scale factors used to define the different resolution patterns of the tiles, is specified by layout bits 1513, which may be pre-selected based on the known characteristics of the application for which the images are being generated. For example, as mentioned above, for a virtual reality application, tiles towards the periphery of each image may be rendered at a relatively lower resolution than tiles within the central region. The layout bits 1513 for each image frame may also be generated dynamically based on feedback information provided to the rasterizer unit 1500, such as in response to tracking the user's gaze as discussed below.
In one embodiment, tile storage logic 1516 then stores the results of the non-uniform rasterization within memory 1530 in an efficient manner. As discussed below, for example, tile storage logic 1516 may store the images sparsely in memory using a mip map hierarchy or an "in-place" storage scheme.
To further illustrate the operation of the various embodiments of the invention, certain specific details are provided, such as particular tile sizes and shapes, scale pixel sizes and shapes, and memory storage arrangements. It should be noted, however, that the underlying principles of the invention are not limited to these specific implementation details.
In Figure 16A, an exemplary set of 3×2 tiles 1602 is shown. Each tile, such as tile 1601 (highlighted with a dashed square), includes 4×4 image pixels 1603 and has a tile center 1604 (marked with an X). A typical hierarchical rasterizer may test whether the 4×4 square enclosing the pixels overlaps the triangle being rendered. If it does, the rasterizer proceeds with per-sample testing. To make this process more efficient, the value of the edge function, e(x,y), may be computed at the center 1604 of the 4×4 tile 1601. This value is then used in two different ways. First, when the rasterizer has concluded that the triangle overlaps the tile, the edge function values of the samples are computed by simply offsetting from the edge function value at the center of the 4×4 tile. The edge function can be implemented as:
e(x,y) = a*x + b*y + c,
where a, b, and c are constants computed in the triangle setup from the vertices of the triangle. The edge function for a sample can then be evaluated as:
ec + a*xp + b*yp,
where ec is the edge function evaluated at the center of the tile, and (xp, yp) are the local coordinates of the sample to be evaluated (i.e., "local" with respect to the center of each tile).
For example, in the image shown in Figure 16A, the four samples closest to the center of the tile have the coordinates (±0.5, ±0.5).
In one embodiment, the non-uniform rasterizer 1514 selects a different list of scale pixel coordinates (xp, yp) for each possible scale factor used within a tile. For example, as illustrated in Figure 16B, for a tile with 4×4 image pixels 1612 (i.e., a 1:1 ratio between scale pixels and image pixels), each of the first samples closest to the tile center has an offset of (±0.5, ±0.5) from the center. For a tile 1611 with 8×8 image pixels (i.e., 4×4 scale pixels, with each scale pixel comprising 2×2 image pixels), each of the first samples closest to the tile center has an offset of (±1.0, ±1.0) from the center. Finally, for a tile 1610 with 16×16 image pixels (i.e., 4×4 scale pixels, with each scale pixel comprising 4×4 image pixels), each of the first samples closest to the tile center has an offset of (±2.0, ±2.0) from the center. Thus, to ensure the appropriate offsets between pixels, the non-uniform rasterizer 1514 will select these offsets depending on the type of tile currently being rasterized. As mentioned, in one embodiment this is accomplished by maintaining a different list of coordinates (xp, yp) for each of the different scale factors used to generate tiles.
In addition, in one embodiment, the non-uniform rasterizer 1514 accounts for these scale factors when determining how to traverse from one tile center to the neighboring tile centers using only additions. For example, for a tile 1612 with 4×4 image pixels, traversing to the tile immediately to the right of the current tile can be computed as ec = ec + a*4, while ec = ec + b*4 can be used for vertical traversal. In this example, the factor of 4 comes from the tile width/height being 4 image pixels. For a tile 1611 with 8×8 image pixels, these equations become ec = ec + a*8 and ec = ec + b*8 (since the width/height is 8 image pixels), and for a tile 1610 with 16×16 image pixels, they become ec = ec + a*16 and ec = ec + b*16 (since the width/height is 16 image pixels).
In the example shown in Figure 16B, each tile comprises 4×4 scale pixels, but the size of a tile is dictated by the scale factor being used. For example, tile 1612 covers 4×4 image pixels, tile 1611 covers 8×8 image pixels, and tile 1610 covers 16×16 image pixels. It should be noted that the particular tile sizes and shapes shown in Figure 16B have been selected merely for the purposes of explanation. The underlying principles of the invention are not limited to any particular tile size or configuration. In one embodiment, for example, the tile size may be adjusted to fit within a memory page, which is typically 4096 bytes. If the pixels use 4 bytes each, 1024 pixels can be stored in such a tile, which makes the tile size 32×32 pixels.
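A compact C++ sketch of this evaluation scheme follows, assuming the tile sizes and per-scale-factor sample offsets described above. The edge-function constants a, b, and c are taken as given from triangle setup; everything else mirrors the additions-only traversal in the text.

    // Edge function e(x,y) = a*x + b*y + c, with a, b, c from triangle setup.
    struct Edge { float a, b, c; };

    // Evaluate the edge function at a tile center (cx, cy).
    float edgeAtCenter(const Edge& e, float cx, float cy) {
        return e.a * cx + e.b * cy + e.c;
    }

    // Evaluate a sample from the cached center value ec, using the sample's
    // local offset (xp, yp) relative to the tile center.
    float edgeAtSample(const Edge& e, float ec, float xp, float yp) {
        return ec + e.a * xp + e.b * yp;
    }

    // Traverse to a neighboring tile using additions only; tilePx is the
    // tile's width/height in image pixels (4, 8, or 16 in the running example).
    float stepRight(const Edge& e, float ec, int tilePx) { return ec + e.a * tilePx; }
    float stepDown (const Edge& e, float ec, int tilePx) { return ec + e.b * tilePx; }

    // First-sample offsets per scale factor, as given in the text:
    //   1x1 scale: (+/-0.5), 2x2 scale: (+/-1.0), 4x4 scale: (+/-2.0)
    constexpr float kFirstSampleOffset[3] = { 0.5f, 1.0f, 2.0f };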
Figure 17 illustrates an exemplary arrangement which provides relatively higher resolution in the middle of the image and relatively lower resolution towards the edges of the image. As mentioned, this arrangement may be used in virtual reality implementations, where warping occurs towards the edges of the image. While all of the tiles 1610-1612 comprise 4×4 scale pixels, different numbers of image pixels may be used to generate the different scale pixels. The end result is that 16×16 image pixels are used to generate the lowest resolution tiles 1610 (towards the edges of the image), 8×8 image pixels are used to generate a set of relatively higher resolution tiles 1611, and 4×4 image pixels are used to generate the highest resolution tiles 1612 (i.e., a one-to-one mapping).
One embodiment of the non-uniform rasterizer 1514 performs its operations from the perspective of the largest tile (e.g., tile 1610 generated with 16×16 image pixels in Figure 17), which is referred to herein as a traversal tile ("TT"). In one embodiment, the rasterizer always tracks the edge function value at the center of a TT (e.g., 16×16 pixels), regardless of which scale factors are used. Consequently, ec = ec + a*16 is used to traverse from the current TT to the TT to the right, since a TT 1610 corresponds to 16×16 image pixels.
In one embodiment, the layout used to generate each TT of the image may be selected with the layout bits 1513 based on the particular application for which non-uniform rasterization is performed (e.g., using higher resolution towards the center for certain virtual reality applications). By way of example and not limitation, Figure 18 shows a set of different layouts 1801-1806 for a TT. Layout 1801 uses a single tile formed with 16×16 image pixels (e.g., such as tile 1610 in Figure 16B), layout 1802 uses a set of four tiles formed with 8×8 image pixels each (e.g., such as tile 1611 in Figure 16B), and layout 1806 uses a set of sixteen tiles formed with 4×4 image pixels each (e.g., tile 1612 in Figure 16B). Layout 1803 includes a combination of four 4×4-pixel tiles and three 8×8-pixel tiles; layout 1804 includes a combination of eight 4×4-pixel tiles and two 8×8-pixel tiles; and layout 1805 includes a combination of twelve 4×4-pixel tiles and one 8×8-pixel tile.
The number above/below each region corresponds to the number of permutations that each layout of that type can have. For example, in layout 1803, the region of 8×8-pixel tiles (top left) can be placed in each of the four corners, giving four permutations of that layout. In this example, each TT has 1+1+4+6+4+1=17 different layouts. To reduce the number of bits needed to encode all of the permutations, some permutations can be skipped. For example, in layout 1804, the two permutations in which the two 4×4-scale-pixel tile regions are in opposite corners may be skipped, using only the permutations in which the two regions are in the same horizontal or vertical position (note that each 4×4-scale-pixel tile region has its own 2×2 tiles of 4×4 pixels). This would mean that there are only 15 layout types per TT, allowing a 4-bit encoding to identify the type and permutation of each TT (i.e., 2^4=16). In one embodiment, the 16th combination may be defined as unused. However, for greater flexibility, 5 bits may instead be used to store this encoding.
By way of example and not limitation, for a resolution of 1920×1200 there are 120×75 TTs, which means 120*75*4/8 = 4500 bytes of storage. With 32×32 scale pixels per tile, this becomes 15*10*4/8 = 75 bytes of storage for a 1920×1200 resolution, and 30*17*4/8 = 255 bytes for 3840×2160.
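The storage arithmetic above is easy to verify in code. This C++ sketch computes the number of bytes of layout bits for a given resolution, assuming square TTs and the 4-bit-per-TT encoding discussed in the text.

    // Bytes of layout bits for an image, assuming bitsPerTT bits per traversal
    // tile and square TTs of ttSize x ttSize image pixels (rounding up).
    int layoutBitsBytes(int width, int height, int ttSize, int bitsPerTT = 4) {
        int ttX = (width  + ttSize - 1) / ttSize;  // TTs per row
        int ttY = (height + ttSize - 1) / ttSize;  // TTs per column
        return (ttX * ttY * bitsPerTT + 7) / 8;    // round bits up to bytes
    }

    // layoutBitsBytes(1920, 1200, 16)  -> 120 * 75 * 4 / 8 = 4500 bytes
    // layoutBitsBytes(1920, 1200, 128) -> 15 * 10 * 4 / 8  = 75 bytes
    // layoutBitsBytes(3840, 2160, 128) -> 30 * 17 * 4 / 8  = 255 bytes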
In one embodiment, the non-uniform rasterizer 1514 reads the layout bits 1513 of each TT to determine the TT's layout during rasterization of a triangle. Depending on the layout bits, the rasterizer 1514 then visits the 16×16-pixel tiles, 8×8-pixel tiles, and 4×4-pixel tiles by traversing to the centers of these tiles (e.g., by adding the tabulated values (px, py) for the layout to compute new ec values), and then tests whether the triangle overlaps the corresponding tile; if so, per-sample testing continues.
For the example image shown in Figure 17, approximately 40% fewer fragments were rasterized using this technique without degrading image quality, in tests using the default settings of the Oculus Rift DK1 virtual reality headset. This also removes the corresponding depth testing and shading and their associated bandwidth usage, as well as color buffer physical memory and bandwidth usage, as described below.
Once the scale pixels have been rendered, they need to be stored to memory. A simple solution would be to duplicate the color and depth of a scale pixel out to all image pixels covered by that scale pixel. However, this is highly undesirable, since it would consume more physical memory and memory bandwidth than necessary, and would make depth optimizations difficult due to the non-linear nature of the image pixel depths.
In contrast, one embodiment of the invention includes tile storage logic 1516 for efficiently storing the tiles in memory 1530. Two storage schemes, referred to as "in-place" storage and "mip" storage, are described below. Both storage schemes rely on allocating a large amount of virtual memory but only the necessary amount of physical memory, leaving portions of the virtual memory unused.
On modern computers, memory is allocated in pages, where a page is typically 4096 bytes. An image is stored in a number of pages, with each page storing a rectangular piece of the image. In both of the proposed storage schemes, the tile size is chosen so that one tile occupies one page (e.g., 32×32 scale pixels, where each scale pixel is 4 bytes, i.e., 4*32*32=4096), rather than the 4×4 scale pixels used in the example above. If the color format uses 8 bytes per scale pixel, the tiles can instead be made to occupy two pages each (32×32 scale pixels at 8 bytes each is 8192 bytes) in order to keep the tiles square, but this is only necessary for the mip storage scheme.
In one embodiment, the in-place storage scheme works as follows. The image memory is laid out as dictated by the finest resolution (i.e., a 1×1 scale factor, such as where tile 1612 in Figure 16B has 4×4 image pixels). When a tile of any scale is stored, its upper-left corner is used to determine where the tile is stored. As a consequence, some pages will never be touched, and those pages are not backed by any physical storage. When reading from this layout, the layout bits are used to determine the scale factor with which each tile was stored. If necessary, the layout bits can be inferred by checking which pages are backed by physical memory and which are not.
Figures 19A-19C provide examples of how tiles with different numbers of image pixels (i.e., different ratios of image pixels to scale pixels) may be packed within a set of memory pages 1-8. In Figure 19A, tiles of the highest resolution (i.e., 1 scale pixel = 1 image pixel (e.g., tile 1612 in Figure 16B)) are packed into consecutive memory pages. Since each tile is sized to fit within one memory page (starting at the top left corner of the image), the first tile is packed into page 1, the second tile into page 2, and so on. This is the typical way a texture is stored in memory, with the tiles numbered from left to right and top to bottom, and each tile stored consecutively.
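The in-place addressing rule can be sketched in C++ as follows: the page for a tile of any scale is derived from its upper-left corner expressed in finest-resolution tile coordinates. The row-major page numbering starting at page 1 and the helper names are illustrative assumptions.

    // In-place storage: compute the memory page for a tile from its upper-left
    // corner (x, y) in image pixels. finestTilePx is the tile size in image
    // pixels at scale factor 1x1; pages are numbered row-major from 1.
    int pageForTile(int x, int y, int imageWidthPx, int finestTilePx) {
        int tilesPerRow = imageWidthPx / finestTilePx;
        int tileX = x / finestTilePx;
        int tileY = y / finestTilePx;
        return tileY * tilesPerRow + tileX + 1;
    }

    // Example: with 4 finest tiles per row, a tile stored at twice the scale
    // with its upper-left corner at finest-tile position (2, 0) lands in page 3;
    // pages 4, 7, and 8, which its footprint would otherwise occupy, are never
    // backed by physical memory (cf. Figure 19B, discussed below).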
Figure 19B illustrates how non-uniform tiles may be packed within the memory pages in accordance with one embodiment of the invention. The first two tiles have the highest resolution of 1 image pixel per scale pixel (i.e., the same resolution as all of the tiles in Figure 19A) and are stored within memory pages 1 and 2, as in Figure 19A. However, following the first two tiles, the next tile (B) comprises scale pixels which each include 4 image pixels (e.g., such as tile 1611 in Figure 16B). Tile B is the same size as a high resolution tile (since all tiles contain 4×4 scale pixels), but it contains data for 8×8 image pixels. In one embodiment, tile B is stored in memory at the location where the third high resolution tile is stored in Figure 19A, i.e., memory page 3. For the locations where the other three high resolution tiles would otherwise have been stored (memory pages 4, 7, and 8), no physical memory is allocated to store the contents of those locations.
Figure 19C shows another example with two tiles, A and B, both of which contain 8×8 image pixels (i.e., 2×2 image pixels per scale pixel). In this example, tile A is stored in the same memory page (1) as the first high resolution tile in Figure 19A, and tile B is stored in the same memory page (3) as the third high resolution tile in Figure 19A. For the locations where the other six high resolution tiles would otherwise have been stored (memory pages 2, 4, and 5-8), no physical memory is allocated to store the contents of those locations.
In one embodiment, the mip storage scheme works as follows. A mip map hierarchy is allocated in virtual memory. The finest level of the mip map represents a 1×1 scale factor. When a tile of scale pixels is stored, the scale factor determines to which mip level the tile is written. This directly results in a sparsely composed mip-mapped image, where each region of the image exists in only a single mip level. This representation can readily be accessed using the texture samplers of graphics processors, which already support mip map hierarchies.
To avoid visible abrupt changes in resolution, the mip map hierarchy may have tiles allocated at two or more mip map levels. In this case, non-uniform rasterization is performed as described above and run to completion. A coarser level above each rasterized tile can then be created by sampling the existing tiles with a 2×2 texel box filter (or a more sophisticated low-pass filter). The warp pass is changed so that texture coordinate derivatives are computed, such that the filtered color is computed as a blend between the two filled-in mip levels at the current texture coordinate (e.g., a trilinear lookup). Figure 20 shows an example of three mip levels that have been rasterized to using non-uniform rasterization (such tiles are indicated as rasterized tiles). As can be seen, the tiles above each rasterized tile are generated using filtering (referred to herein as filtered tiles). The gray regions are where trilinear filtering can be used when sampling during warping. Note that this example shows a 1D texture (lines) for simplicity, but it is easily extended to 2D textures.
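The blend between the two filled-in mip levels can be sketched in C++ as follows. The sampling helper is a placeholder for the texture-sampler hardware described above, and the fractional level-of-detail computation from texture coordinate derivatives is assumed rather than specified by the text.

    #include <cmath>

    struct Color { float r, g, b, a; };

    // Placeholder for the hardware bilinear fetch at a given mip level.
    Color sampleBilinear(int mipLevel, float u, float v) {
        // A real implementation would fetch from the sparse mip chain.
        (void)mipLevel; (void)u; (void)v;
        return {0.f, 0.f, 0.f, 1.f};
    }

    Color lerp(const Color& x, const Color& y, float t) {
        return { x.r + (y.r - x.r) * t, x.g + (y.g - x.g) * t,
                 x.b + (y.b - x.b) * t, x.a + (y.a - x.a) * t };
    }

    // Trilinear lookup: blend the two mip levels surrounding the fractional
    // level of detail (lod), which the warp pass derives from texture
    // coordinate derivatives.
    Color sampleTrilinear(float u, float v, float lod) {
        int   lo   = static_cast<int>(std::floor(lod));
        float frac = lod - static_cast<float>(lo);
        Color fine   = sampleBilinear(lo, u, v);
        Color coarse = sampleBilinear(lo + 1, u, v);
        return lerp(fine, coarse, frac);
    }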
For borders between tiles where the rasterization resolution changes (for example, from a 2×2 scale factor to a 4×4 scale factor), sampling can be performed at a mip map level that both tiles have filled in. Alternatively, the mip map level of detail can be precomputed and stored in a separate texture, and simply looked up during warping rather than computing texture coordinate derivatives. Yet another alternative is to skip the step of creating the coarser mip map levels, which would typically be created with a 2×2 box filter. Instead, during warping, only one level exists, and the texels of the coarser level can be computed by averaging the finer-level texels, followed by blending.
In the near future, virtual reality display devices will track where the user is looking. This is sometimes called "gaze tracking," and with this feedback from the VR device, a new distribution of where to focus the rendering work can be computed. For example, using the techniques described above, the direction of the user's retina can be tracked and new layout bits can be computed dynamically for all of the tiles for both the left-eye and right-eye screens. In one embodiment, the tiles may be computed such that the regions of each image at which the user's retina is directed are provided with relatively higher resolution, while regions further away from where the user's retina is directed are provided with relatively lower resolution. The image is then rendered with these new layout bits for both the left eye and the right eye. The layout bits described above occupy only 255 bytes per eye, so there is not a large amount of information to recompute every frame, and it is inexpensive to send to the GPU/graphics processor.
In summary, the embodiments of the invention described herein provide for rasterizing different parts of an image at different resolutions in order to remove unnecessary work and improve efficiency. In one embodiment, the resolution is defined by a layout per tile of each image, potentially using a set of different resolution patterns within the tiles. In addition, techniques are described for storing each image sparsely in memory, including storing the image into a mip map hierarchy. For example, after the non-uniform rasterization is complete, a mip hierarchy can be created above the filled-in tiles in the mip map hierarchy, and trilinear lookups can then be performed. Finally, the direction of the user's retina can be tracked, and new layout bits can be computed dynamically for all of the tiles for both the left-eye and right-eye screens, such that the region of each image at which the user's retina is directed is given relatively higher resolution.
A method in accordance with one embodiment of the invention is illustrated in Figure 21. The method may be implemented within the context of the system architectures described above, but is not limited to any particular system architecture.
At 2101, the triangles or other polygons for the next image to be rendered are received, and at 2102, the layout bits defining the non-uniform layout of each image are read. As discussed above, in one embodiment, different tile patterns may be defined within each image based on the layout bits (see, e.g., Figure 18 and associated text).
At 2103, the overlap between the tiles and the polygons is determined. For example, each tile can be tested to determine whether the square, rectangle, or other shape defined by the tile overlaps the triangle being rendered. The techniques described above may be employed at this stage to evaluate the edge functions, with adjustments to the (xp, yp) coordinates and traversal between tile centers.
At 2104, non-uniform rasterization is performed based on the layout bits.
As mentioned, different layout bits can be defined in order to adjust the resolution within different regions of the image. For example, for a virtual reality implementation, relatively higher resolution may be provided in the middle of the image and relatively lower resolution towards the edges of the image (see, e.g., Figure 17 and associated text). In embodiments that track the user's gaze, the layout bits may be adjusted based on the user's current gaze.
Once the non-uniform rasterization is complete, at 2105, the tiles of the rasterized image are sparsely stored in memory. For example, an in-place storage scheme may be employed, in which the memory is laid out as dictated by the finest resolution. Alternatively, a mip storage scheme may be employed, in which a mip map hierarchy is allocated in virtual memory and different tiles are stored at different levels of the hierarchy.
At 2106, a filtering operation is performed to reduce sharp transitions between regions of the image rendered at different resolutions. A variety of filtering techniques may be employed to reduce these sharp transitions, such as applying a low-pass filter to the higher resolution portions of the image or using trilinear filtering operations.
At 2107, a determination is made as to whether there is a change to the rasterization layout for the next image. For example, as mentioned above, in one embodiment feedback information comprising a new set of layout bits may be provided. By way of example and not limitation, in a gaze-tracking environment the focus of the user's retina may have changed, necessitating a change in the regions of the image to be rendered at high resolution. Regardless of the specific implementation, if there is a change to the layout, the layout bits are updated at 2108. In either case, the process repeats from 2101 for the next image in the sequence.
Rendering different parts of an image at different resolutions can currently be accomplished by rendering the entire scene once for each resolution. For each resolution, only the region to be rendered at that resolution is computed, and a stencil buffer is used to mask out the remaining regions. Finally, these images can be composited together to produce a single image with differing resolutions.
This way of achieving variable resolution has several disadvantages. One disadvantage is that the scene is rendered several times (once per resolution), and geometry processing (such as vertex shading and triangle setup) therefore becomes several times more expensive. Another disadvantage is that the rasterizer will rasterize the masked regions, since the stencil test is performed later in the pipeline. This creates pipeline bubbles, because many consecutive pipeline cycles may be spent rasterizing masked regions while the pixel processing pipeline sits idle waiting for unmasked pixels. Finally, the stencil buffer itself also consumes substantial bandwidth (particularly if it is coupled to the depth buffer). An alternative way to reduce the resolution is to render several images at different angles (like a panorama), thereby completely avoiding the large gaps at the periphery. This is a more involved technique that interferes with many visual aspects and is more difficult to use. None of the embodiments of the invention disclosed herein suffer from these disadvantages.
Embodiments of the invention may include the various steps described above.
These steps may be embodied in machine-executable instructions, which may be used to cause a general-purpose or special-purpose processor to perform the steps. Alternatively, these steps may be performed by specific hardware components that contain hardwired logic for performing the steps, or by any combination of programmed computer components and custom hardware components.
As described herein, instructions may refer to specific configurations of hardware, such as application specific integrated circuits (ASICs) configured to perform certain operations or having a predetermined functionality, or to software instructions stored in memory embodied in a non-transitory computer-readable medium. Thus, the techniques shown in the figures can be implemented using code and data stored and executed on one or more electronic devices (e.g., an end station, a network element, etc.). Such electronic devices store and communicate (internally and/or with other electronic devices over a network) code and data using computer machine-readable media, such as non-transitory computer machine-readable storage media (e.g., magnetic disks; optical disks; random access memory; read-only memory; flash memory devices; phase-change memory) and transitory computer machine-readable communication media (e.g., electrical, optical, acoustical, or other forms of propagated signals, such as carrier waves, infrared signals, digital signals, etc.).
In addition, such electronic devices typically include a set of one or more processors coupled to one or more other components, such as one or more storage devices (non-transitory machine-readable storage media), user input/output devices (e.g., a keyboard, a touchscreen, and/or a display), and network connections. The coupling of the set of processors and other components is typically through one or more buses and bridges (also termed bus controllers). The storage devices and the signals carrying the network traffic respectively represent one or more machine-readable storage media and machine-readable communication media. Thus, the storage device of a given electronic device typically stores code and/or data for execution on the set of one or more processors of that electronic device. Of course, one or more parts of an embodiment of the invention may be implemented using different combinations of software, firmware, and/or hardware.
Throughout this detailed description, for the purposes of explanation, numerous specific details were set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that the invention may be practiced without some of these specific details. In certain instances, well-known structures and functions were not described in elaborate detail in order to avoid obscuring the subject matter of the present invention. Accordingly, the scope and spirit of the invention should be judged in terms of the claims that follow. |
Methods and apparatus for remotely programming a functionality of an integrated circuit (IC) are described herein. In one aspect, an exemplary method includes receiving a command for enabling a functionality of an integrated circuit (IC) from a remote facility over a network, and enabling the functionality of the IC in response to the command. Other methods and apparatuses are also described. |
CLAIMS What is claimed is: 1. A method, comprising: receiving a command for programming a clock speed of an integrated circuit (IC) from a remote facility over a network; and adjusting the clock speed of the IC in response to the command. 2. The method of claim 1, wherein the network is a wide area network (WAN) or a local area network (LAN). 3. The method of claim 1, wherein the command comprises an encrypted programming code to program the clock speed, and wherein the method further comprises decrypting the programming code. 4. The method of claim 3, further comprising attempting to match the decrypted programming code with a speed code within the IC to determine whether the programming code is valid. 5. The method of claim 4, further comprising programming a phase-lock-loop (PLL) to adjust the clock speed of the IC based on the programming code if the programming code is valid. 6. The method of claim 4, further comprising examining the programming code to determine whether the programming code is valid against an identification (ID) of the IC. 7. The method of claim 4, further comprising rejecting the programming code if the programming code does not match the speed code. 8. The method of claim 7, further comprising disabling an ability to program the clock speed of the IC if the attempting is performed more than a predetermined number of times. 9. The method of claim 1, wherein the remote facility is a Web server, and wherein the method further comprises: transmitting a request for adjusting the clock speed of the IC to the remote facility over the network; and tendering a payment associated with the request in exchange for the command. 10. An apparatus, comprising: an interface to receive a command for programming a clock speed of an integrated circuit (IC) from a remote facility over a network; and a programming engine coupled to the interface to program the clock speed of the IC based on the command. 11. The apparatus of claim 10, further comprising a microcode unit to receive the command from the interface. 12. The apparatus of claim 11, wherein the command includes an encrypted programming code, and wherein the apparatus further comprises a decryption engine coupled to the microcode unit to decrypt the programming code. 13. The apparatus of claim 12, further comprising a phase-lock-loop (PLL) circuit to receive a speed code from the decryption engine to program a clock circuit of the IC to achieve a desirable clock speed. 14. The apparatus of claim 12, further comprising one or more speed keys to allow the decryption engine to match the programming code with the one or more speed keys to determine whether the programming code is valid. 15. The apparatus of claim 14, wherein the programming code is matched with the one or more speed keys based on an identification (ID) of the IC. 16. The apparatus of claim 10, wherein the network is a wide area network (WAN) or a local area network (LAN). 17. A machine-readable medium having executable code to cause a machine to perform a method, the method comprising: receiving a command for programming a clock speed of an integrated circuit (IC) from a remote facility over a network; and adjusting the clock speed of the IC in response to the command. 18. The machine-readable medium of claim 17, wherein the network is a wide area network (WAN) or a local area network (LAN). 19.
The machine-readable medium of claim 17, wherein the command comprises an encrypted programming code to program the clock speed, and wherein the method further comprises decrypting the programming code. 20. The machine-readable medium of claim 19, wherein the method further comprises attempting to match the decrypted programming code with a speed code within the IC to determine whether the programming code is valid. 21. The machine-readable medium of claim 20, wherein the method further comprises programming a phase-lock-loop (PLL) to adjust the clock speed of the IC based on the programming code if the programming code is valid. 22. The machine-readable medium of claim 20, wherein the method further comprises examining the programming code to determine whether the programming code is valid against an identification (ID) of the IC. 23. The machine-readable medium of claim 20, wherein the method further comprises rejecting the programming code if the programming code does not match the speed code. 24. The machine-readable medium of claim 23, wherein the method further comprises disabling an ability to program the clock speed of the IC if the attempting is performed more than a predetermined number of times. 25. The machine-readable medium of claim 17, wherein the remote facility is a Web server, and wherein the method further comprises: transmitting a request for adjusting the clock speed of the IC to the remote facility over the network; and tendering a payment associated with the request in exchange for the command. 26. A method, comprising: receiving a command for enabling a functionality of an integrated circuit (IC) from a remote facility over a network; and enabling the functionality of the IC in response to the command. 27. The method of claim 26, wherein the command is received via an eCommerce mechanism over the network. 28. The method of claim 26, wherein the functionality of the IC is a member selected from the group consisting of: a clock speed capability; a cache memory; and at least a portion of a core logic. 29. A data processing system, comprising: a processor; and an interface to receive a command from a remote facility over a network to enable a functionality of the processor. 30. The data processing system of claim 29, further comprising a memory coupled to the processor and to the interface to store instructions that cause the processor to perform operations, the operations including: decrypting the command to retrieve a programming code; matching the programming code with an internal signature to determine whether the programming code is valid; and programming a circuit to enable the functionality of the processor if the programming code is valid. |
METHOD AND APPARATUS FOR PROGRAMMING A FUNCTIONALITY OF AN INTEGRATED CIRCUIT (IC)

FIELD

[0001] Embodiments of the invention relate to the field of integrated circuits (ICs); and more specifically, to remotely programmable ICs.

BACKGROUND

[0002] A microprocessor, also known as a CPU (central processing unit), is a complete computation engine that controls a computer. Currently, internal CPU clock speeds are fixed at the manufacturing facility via an internal fuse setting. These CPU speed settings cannot be changed after initial manufacture, test, and packaging. A large percentage of processors may be able to execute at their maximum clock speed capability. However, higher clock speeds may command a premium price from the respective vendor. To enable lower-cost CPUs in the market without cannibalizing the higher-performance market segments, vendors tend to force slower speeds on devices that are fully capable of running at much higher speeds, for markets that cannot justify a higher performance cost premium.

[0003] Many "new" market segments currently cannot justify the cost of the higher-performance devices. In addition, many of those devices tend to have long life cycles (e.g., 5-6 years or more), and once they are deployed, the devices may not realistically and/or economically be returned for upgrade purposes. The fact that these markets are "new" often equates, initially, to limited sophistication and thus limited performance requirements. An end user may not be willing to pay for sophistication and its associated performance increase, a performance increase that they do not yet know they need. However, it is understood that as the consumer slowly becomes more and more sophisticated, the demand for more services, and thus performance, will increase, along with the end user's willingness to pay for increased performance.

[0004] As a result, customers are caught in a predicament. They are deploying millions of devices with a 5-6 year product life cycle, and those devices cannot realistically be returned for upgrade. Those devices may have minimal performance requirements today and may have dramatically increased performance requirements tomorrow. Evidence of this predicament is manifested by demands for high-performance devices at low-performance pricing.

BRIEF DESCRIPTION OF THE DRAWINGS

[0005] The invention may best be understood by referring to the following description and accompanying drawings that are used to illustrate embodiments of the invention. In the drawings:

[0006] Figure 1 illustrates a block diagram of a network of computer systems which may be used with one embodiment.

[0007] Figure 2 illustrates a block diagram of a computer system which may be used with one embodiment.

[0008] Figure 3 illustrates a block diagram of a processor in accordance with one embodiment.

[0009] Figure 4 illustrates a block diagram of a computer system according to another embodiment.

[0010] Figure 5 illustrates a block diagram of an embodiment of a system architecture.

[0011] Figure 6 is a flow diagram illustrating an embodiment of a process to remotely program a clock speed of an integrated circuit (IC).

[0012] Figure 7 is a flow diagram illustrating another embodiment of a process to remotely program a clock speed of an integrated circuit (IC).

[0013] Figure 8 is a flow diagram illustrating an embodiment of a process to remotely enable a functionality of an integrated circuit (IC).
DETAILED DESCRIPTION

[0014] Methods and apparatuses to remotely program a clock speed of an integrated circuit (IC) are described. According to one embodiment, the exemplary apparatus includes an adjustable phase-lock-loop (PLL) circuit that controls the clock speed of the IC. Adjustments to the PLL may be gated by a decryption engine and an internally supplied set of speed keys. An access counter may be employed to gate all accesses to the speed keys. According to one embodiment, if more than a predetermined number of unsuccessful access attempts are made, the capability to program the clock speed may be disabled.

[0015] In the following description, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In other instances, well-known circuits, structures, and techniques have not been shown in detail in order not to obscure the understanding of this description.

[0016] Some portions of the detailed descriptions which follow are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.

[0017] It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise, as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as "processing" or "computing" or "calculating" or "determining" or "displaying" or the like refer to the actions and processes of a computer system, or similar data processing device, that manipulates and transforms data represented as physical (e.g., electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission, or display devices.

[0018] Embodiments of the present invention also relate to apparatuses for performing the operations described herein. An apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer.
Such a computer program may be stored in a computer-readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs) such as dynamic RAM (DRAM), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, with each of the above storage components coupled to a computer system bus.

[0019] The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the methods. The structure for a variety of these systems will appear from the description below. In addition, embodiments of the present invention are not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the embodiments of the invention as described herein. A machine-readable medium includes any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer). For example, a machine-readable medium includes read-only memory ("ROM"); random access memory ("RAM"); magnetic disk storage media; optical storage media; flash memory devices; electrical, optical, acoustical, or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.); etc.

[0020] Figure 1 is a diagram of a network of computer systems in which the clock speed of a client computer's microprocessor may be remotely programmed, according to one embodiment of the invention. As shown in Figure 1, a network 100 includes a number of client computer systems that are coupled together through an Internet 122. It will be appreciated that the term "Internet" refers to a network of networks. Such networks may use a variety of protocols for the exchange of information, such as TCP/IP, ATM, SNA, SDI, etc. The physical connections of the Internet and the protocols and communication procedures of the Internet are well known to those in the art. It will also be appreciated that such a system may be implemented in an Intranet within an organization. Access to the Internet 122 is typically provided by Internet service providers (ISPs), such as the ISP 124 and the ISP 126. Users on client systems, such as the client computer systems 102, 104, 118, and 120, generally obtain access to the Internet through Internet service providers, such as ISPs 124 and 126. Access to the Internet may facilitate the transfer of information (e.g., email, text files, media files, etc.) between two or more digital processing systems, such as the client computer systems 102, 104, 118, and 120 and/or a Web server system 128. For example, one or more of the client computer systems 102, 104, 118, and 120 and/or the Web server 128 may provide document presentations (e.g., a Web page) to another one or more of the client computer systems 102, 104, 118, and 120 and/or the Web server 128. For example, in one embodiment of the invention, one or more client computer systems 102, 104, 118, and 120 may request to access a document that may be stored at a remote location, such as the Web server 128. In the case of remote storage, the data may be transferred as a file (e.g., a download)
and then displayed (e.g., in a window of a browser) after transferring the file. In another embodiment, the document presentation may be stored locally at the client computer systems 102, 104, 118, and/or 120. In the case of local storage, the client system may retrieve and display the document via an application, such as a word processing application, without requiring a network connection.

The Web server 128 typically includes at least one computer system operating with one or more data communication protocols, such as the protocols of the World Wide Web, and as such, is typically coupled to the Internet 122. Optionally, the Web server 128 may be part of an ISP which may provide access to the Internet and/or other network(s) for client computer systems. The client computer systems 102, 104, 118, and 120 may each, with appropriate Web browsing software, access data, such as HTML documents (e.g., Web pages), which may be provided by the Web server 128. The client computer may incorporate an application in accordance with one embodiment of the invention to allow a user to interact with a vendor facility and download a remote command to program the clock speed of an IC (e.g., a microprocessor) of the client machine.

[0023] The ISP 124 provides Internet connectivity to the client computer system 102 via a modem interface 106, which may be considered part of the client computer system 102. The client computer systems 102, 104, 118, and 120 may be conventional data processing systems, such as a computer having a Pentium microprocessor available from Intel Corporation, a "network" computer, a handheld/portable computer, a cell phone with data processing capabilities, a Web TV system, or another type of digital processing system (e.g., a personal digital assistant (PDA)).

[0024] Similarly, the ISP 126 provides Internet connectivity for the client computer systems 102, 104, 118, and 120. However, as depicted in Figure 1, such connectivity may vary between various client computer systems, such as the client computer systems 102, 104, 118, and 120. For example, as shown in Figure 1, the client computer system 104 is coupled to the ISP 126 through a modem interface 108, while the client computer systems 118 and 120 are part of a local area network (LAN). The interfaces 106 and 108, shown as modems 106 and 108, respectively, may represent an analog modem, an ISDN modem, a DSL modem, a cable modem, a wireless interface, or another interface for coupling a digital processing system, such as a client computer system, to another digital processing system. The client computer systems 118 and 120 are coupled to a LAN bus 112 through network interfaces 114 and 116, respectively. The network interfaces 114 and 116 may be an Ethernet-type, asynchronous transfer mode (ATM), or other type of network interface. The LAN bus is also coupled to a gateway digital processing system 110, which may provide firewall and other Internet-related services for the LAN. The gateway digital processing system 110, in turn, is coupled to the ISP 126 to provide Internet connectivity to the client computer systems 118 and 120. The gateway digital processing system 110 may, for example, include a conventional server computer system. Similarly, the Web server 128 may, for example, include a conventional server computer system.

[0025] Figure 2 is a block diagram of a digital processing system which may be used with one embodiment of the invention.
For example, the system 200 shown in Figure 2 may be used as a client computer system (e.g., the client computer systems 102, 104, 118, and/or 120), a Web server system (e.g., the Web server system 128), or a conventional server system, etc. Furthermore, the digital processing system 200 may be used to perform one or more functions of an Internet service provider, such as the ISP 124 or 126.

[0026] Note that while Figure 2 illustrates various components of a computer system, it is not intended to represent any particular architecture or manner of interconnecting the components, as such details are not germane to the present invention. It will also be appreciated that network computers, handheld computers, cell phones, and other data processing systems which have fewer components or perhaps more components may also be used with the present invention. The computer system of Figure 2 may, for example, be an IBM-compatible computer or an Apple Macintosh computer.

[0027] As shown in Figure 2, the computer system 200, which is a form of a data processing system, includes a bus 202 which is coupled to a microprocessor 203, a ROM 207, a volatile RAM 205, and a non-volatile memory 206. The microprocessor 203, which may be a Pentium microprocessor from Intel Corporation or a PowerPC G3 or PowerPC G4 microprocessor from Motorola, Inc., is coupled to cache memory 204 as shown in the example of Figure 2. The bus 202 interconnects these various components together and also interconnects the components 203, 207, 205, and 206 to a display controller and display device 208, as well as to input/output (I/O) devices 210, which may be mice, keyboards, modems, network interfaces, printers, and other devices which are well known in the art. Typically, the input/output devices 210 are coupled to the system through input/output controllers 209. The volatile RAM 205 is typically implemented as dynamic RAM (DRAM), which requires power continuously in order to refresh or maintain the data in the memory. The non-volatile memory 206 is typically a magnetic hard drive, a magneto-optical drive, an optical drive, a DVD RAM, or another type of memory system which maintains data even after power is removed from the system. Typically, the non-volatile memory will also be a random access memory, although this is not required. While Figure 2 shows that the non-volatile memory is a local device coupled directly to the rest of the components in the data processing system, it will be appreciated that the present invention may utilize a non-volatile memory which is remote from the system, such as a network storage device coupled to the data processing system through a network interface such as a modem or Ethernet interface. The bus 202 may include one or more buses connected to each other through various bridges, controllers, and/or adapters, as is well known in the art. In one embodiment, the I/O controller 209 includes a USB (Universal Serial Bus) adapter for controlling USB peripherals.

[0028] Figure 3 is a block diagram illustrating an embodiment of a microprocessor whose clock speed or functionality may be remotely programmed over a network. In one embodiment, microprocessor 300 includes a processing core 315 that processes data of a computer system, such as computer system 200 of Figure 2.
Core 315 includes a cache 301, prefetch buffers 302, an instruction decoder 303, a microcode unit 304, datapath circuitry 305, an address generator 306, and a floating point unit 307. Cache 301 may store instructions and data for execution by microprocessor 300. Cache 301 may be remotely programmed (e.g., enabled) by a command received from a remote facility over a network, such as the Internet 122 of Figure 1. A portion of microprocessor 300, such as a portion of core logic 315, may also be programmed or enabled remotely by a command received from a remote facility over a network. Prefetch buffers 302 may retrieve data and instructions for execution by microprocessor 300. Buffers 302 may retrieve the data and instructions either from cache 301 or, if a cache miss occurs, from a memory of the computer system via a bus interface unit 308.

[0029] Instruction decoder 303 retrieves and decodes the instructions from the prefetch buffers 302. Microcode unit 304 has a memory that stores microcode instructions for microprocessor 300. Microcode unit 304 interacts with the instruction decoder 303 to execute the instructions. To carry out execution of the instructions, microcode unit 304 provides address generator 306 with address information, which address generator 306 uses to generate the addresses necessary to carry out the execution of the instructions. In a similar manner, address generator 306 generates addresses for datapath circuitry 305 and floating point unit 307.

[0030] Microcode unit 304 is also responsible for instruction boundary processing, such as interrupt/exception arbitration, and the halting of instruction decoder 303 when necessary. Microcode unit 304 also handles cache 301 misses.

[0031] Datapath circuitry 305 provides the main execution data path for microprocessor 300. Datapath circuitry 305 includes an arithmetic logic unit (ALU), control registers, a barrel shifter, read-only memory (ROM), and flags. Datapath circuitry 305 retrieves data from prefetch buffers 302. Datapath circuitry 305 executes microcode provided by instruction decoder 303 using data received from prefetch buffers 302 according to the addresses generated by address generator 306. Floating point unit 307 is used in the execution of floating point instructions.

[0032] Outside of processing core 315, microprocessor 300 has a bus interface unit (BIU) 308, a pad interface 311, and a clock generator 310. Bus interface unit 308 provides an interface between internal buses of microprocessor 300 and external buses that are used to fetch data and instructions from a memory of the computer system. Bus interface 308 has write buffers 309 that are used to store data to be transferred from microprocessor 300 to the rest of the computer system. Pad interface 311 provides a pin interface for control, address, and data signals passed between microprocessor 300 and the rest of the computer system.

[0033] Clock generator 310 receives a system clock signal Clksys, which may be a system clock from a motherboard, and uses the Clksys signal to generate clock signals for microprocessor 300. Clock generator 310 furnishes a clock signal Clk1X to bus interface unit 308 and pad interface 311. When microprocessor 300 is not overheating (as indicated by the deassertion of a signal from thermal sensor 321), the Clk1X signal has the same frequency as the Clksys signal, and the portions of bus interface unit 308 that interact with pad interface 311 use the Clk1X signal.
[0034] Clock generator 310 furnishes another clock signal, ClkInternal, to processing core 315. The ClkInternal signal is synchronized to the Clksys signal and has a frequency that is a multiple (e.g., a multiple of two) of the frequency of the Clksys signal. As a result, when microprocessor 300 is operating under normal conditions, processing core 315 generally operates at a higher frequency than the rest of the computer system.

[0035] Control logic 312 of clock generator 310 receives a thermal trip signal from thermal sensor 321. When the thermal trip signal is asserted, control logic 312, depending on its configuration, may alter the frequency of the ClkInternal signal to slow down processing core 315 and reduce thermal buildup in the substrate of microprocessor 300. In this manner, when the thermal trip signal is asserted, control logic 312 either throttles back the frequency of the ClkInternal signal or temporarily halts the ClkInternal signal.

[0036] In addition, according to one embodiment, microcode unit 304 may include a specific instruction (e.g., an operand) which, when called, may be provided to instruction decoder 303 to carry out the corresponding operations. For example, after a user of a system issues a request to upgrade the clock speed of processor 300 and receives an encrypted command for programming the clock speed, microcode unit 304 is called to provide an instruction to instruction decoder 303. Instruction decoder 303 executes the instruction received from microcode unit 304 to pass the encrypted command to a decryption engine (not shown). The decryption engine then decrypts the command to extract a programming code. The decryption engine may access speed key information which may be embedded within processor 300 (e.g., in ROM) to ensure that the programming code is valid. If the programming code is valid, the decryption engine may communicate with clock control logic 312, which may include a PLL circuit, to control clock generator 310, based on the programming code, to generate the proper clock signals, such as clock signals Clk1X and ClkInternal.
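A minimal behavioral sketch of the thermally gated clock control of paragraphs [0033]-[0035] follows; the multiplier, divisor, and function names are assumptions for illustration, since the embodiments leave the exact configuration of control logic 312 open:

    # Behavioral model of clock generator 310: Clk1X tracks Clksys, while
    # ClkInternal runs at a multiple of Clksys unless the thermal trip
    # signal from thermal sensor 321 forces the core clock to throttle.

    def clock_frequencies(clksys_mhz, multiplier=2, thermal_trip=False,
                          throttle_divisor=2, halt_on_trip=False):
        """Return (clk1x_mhz, clkinternal_mhz) for one evaluation of
        control logic 312."""
        clk1x = clksys_mhz                      # bus/pad interface clock
        core = clksys_mhz * multiplier          # normal core clock
        if thermal_trip:
            # Either throttle back the core frequency or halt it temporarily.
            core = 0 if halt_on_trip else core / throttle_divisor
        return clk1x, core

    print(clock_frequencies(100))                      # (100, 200)
    print(clock_frequencies(100, thermal_trip=True))   # (100, 100.0)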
[0037] Figure 4 is a block diagram illustrating an embodiment of a computer system having a microprocessor whose clock speed or functionality may be remotely programmed. In one embodiment, exemplary system 400 includes an interface to receive a command for programming a clock speed of an integrated circuit (IC) from a remote facility over a network, and a programming engine coupled to the interface to program the clock speed of the IC based on the command.

[0038] Referring to Figure 4, system 400 includes a processor 403 coupled to a system input/output (IO) interface 402 to access a network 401. Processor 403 may be used as processor 203 of system 200 shown in Figure 2. System IO 402 may include one or more IO devices, such as devices 210 of system 200. In particular, system IO 402 may include a network interface card (NIC) or a modem to access network 401. Processor 403 may be a Pentium processor from Intel Corporation. Alternatively, processor 403 may be a PowerPC processor from Motorola, Inc. Network 401 may be a wide area network (WAN), such as the Internet. Alternatively, network 401 may be a local area network (LAN) or an Intranet.

[0039] In one embodiment, processor 403 includes, among other things, CPU microcode unit 404, decryption engine 405, phase-lock-loop (PLL) circuit 406, clock generator 407, access counter 408, speed keys 409, and a processor identification (ID) 410. According to one embodiment, a user who operates system 400 may issue a request to upgrade the clock speed of its processor 403 to a system manufacturer's facility or a third-party support facility over network 401. The facility may be maintained by the respective vendor at a Web site, such as Web server system 128 of Figure 1. Alternatively, the facility may be a corporate headquarters (e.g., an IT department) reached through an Intranet. The request may include processor ID 410 to indicate the type of processor 403. In one embodiment, the user may access the facility using an application, such as, for example, a utility application provided by the manufacturer. Alternatively, the user may simply use an Internet browser to issue the request to the manufacturer's Web site through the HTTP (hypertext transfer protocol) protocol. In addition, the facility may require the user to pay a charge for the upgrade over the network via an e-Commerce mechanism.

[0040] Upon paying for the upgrade, the facility releases a command that system 400 may download from the facility over network 401. In one embodiment, the command includes a programming code encrypted with an encryption algorithm, which may be well known in the art. Once the application (not shown) running at an application level of system 400 receives the command via system IO 402, the application may access microcode unit 404 through an application programming interface (API). According to one embodiment, microcode unit 404 may be previously enabled with an instruction (e.g., an operand) to specifically handle the command. In one embodiment, microcode unit 404 may include one or more memory- or IO-mapped registers to allow an application, such as a browser or another upgrade utility application, to directly access microcode unit 404. In an alternative embodiment, the application may access microcode unit 404 via a device driver specifically developed for microcode unit 404. The device driver may ultimately communicate with the BIOS (basic input/output system) to access microcode unit 404. In a further embodiment, the application may communicate directly with the BIOS to access microcode unit 404.

[0041] According to one embodiment, the downloaded command is encrypted with an encryption algorithm which may be commercially available in the market. Once microcode unit 404 receives the command from the application, microcode unit 404 may issue a specific instruction to an instruction decoder, such as instruction decoder 303 of Figure 3, to pass or insert the command into decryption engine 405. Decryption engine 405 decrypts the command to retrieve the programming code used to program PLL circuit 406. Prior to programming PLL circuit 406, decryption engine 405 may access speed key unit 409 to retrieve an embedded speed key 409, as well as processor ID 410, through access counter 408. Decryption engine 405 then compares the programming code with speed key 409 to determine whether the programming code is valid. The comparison may also examine processor ID 410 to determine whether processor 403 can support the clock speed specified by the programming code. Speed key 409 and processor ID 410 may be stored in a memory, such as a ROM (read-only memory), within processor 403. Alternatively, speed key 409 and processor ID 410 may be hardwired in a logic circuit, such as a fuse circuit, during manufacturing.
If decryption engine 405 decides that the programming code is valid, decryption engine 405 may program PLL 406 to control clock generator 407 to generate the desired clock speed accordingly.

[0042] According to one embodiment, access to speed key unit 409 is gated by access counter 408. If decryption engine 405 fails to access speed key unit 409 within a predetermined number of attempts, access counter 408 may disable the remote clock speed adjustment functionality, either temporarily or permanently.

[0043] According to another embodiment, the command received from a facility over network 401 may be encrypted with a public/private key pair. That is, the command is encrypted with a public key at the facility and is transmitted to system 400 over network 401. According to one embodiment, a private key is embedded in speed key unit 409. When decryption engine 405 accesses speed key unit 409, decryption engine 405 retrieves the private key and uses the private key to decrypt the command which has been encrypted with the public key. As a result, the whole transaction is secured. Other encryption protocols or algorithms may be utilized.
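The gating described in paragraphs [0041]-[0043] can be sketched in software as follows; the XOR cipher is only a placeholder for the unspecified commercial encryption algorithm, and the key material, speed codes, and attempt limit are invented for illustration:

    # Sketch of decryption engine 405 gated by access counter 408.

    MAX_ATTEMPTS = 3

    class SpeedKeyUnit:
        def __init__(self, private_key, speed_codes):
            self._key, self._codes = private_key, speed_codes
            self.attempts = 0
            self.locked = False     # set when access counter 408 trips

        def validate(self, encrypted_command):
            """Decrypt the command and match it against the embedded speed
            codes; disable programming after too many failed attempts."""
            if self.locked:
                raise PermissionError("clock speed programming disabled")
            code = bytes(b ^ k for b, k in zip(encrypted_command, self._key))
            if code in self._codes:
                self.attempts = 0
                return code          # valid: caller may program PLL 406
            self.attempts += 1
            if self.attempts >= MAX_ATTEMPTS:
                self.locked = True
            return None

    unit = SpeedKeyUnit(b"\x5a\x5a", {b"\x01\x02"})
    command = bytes(b ^ k for b, k in zip(b"\x01\x02", b"\x5a\x5a"))  # facility side
    assert unit.validate(command) == b"\x01\x02"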
[0044] Figure 5 is a block diagram illustrating an embodiment of an architecture which may be used to remotely program a clock speed of an IC (e.g., a microprocessor). In one embodiment, exemplary architecture 500 includes, among other things, applications 501, device drivers 502, BIOS 503, and hardware 504. Applications 501 may communicate with one or more device drivers 502 through an application programming interface (API) provided by the operating system (OS) or the device drivers. Device drivers 502 may access hardware (e.g., a processor) via BIOS 503 to program the hardware to perform certain operations. Alternatively, device drivers 502 may communicate directly with hardware 504 via a memory- or IO-mapped interface.

[0045] Applications 501 may reside in a user space of the OS. Device drivers 502 may reside in a kernel space of the OS, while BIOS 503 may reside in firmware (e.g., ROM) next to hardware 504. The OS may be a Windows operating system from Microsoft Corporation. Alternatively, the OS may be a Mac OS from Apple Computer, Inc. Further, the OS may be a Unix or a Linux operating system. Other operating systems, such as a real-time operating system embedded in a set-top-box type computer, may be utilized.

[0046] Applications 501 may include, according to one embodiment, a browser to access a manufacturer's facility, such as Web server system 128, over a network, such as Internet 122. In particular, when a user wishes to access a remote manufacturer's facility, the user uses a browser which resides at application level 501 to communicate with a device driver of a network interface, such as a network driver which resides at device driver level 502. The network driver then communicates with hardware 504, which may include a NIC card, to access the manufacturer's facility over a network. Alternatively, the user may use a special utility application provided by the vendor to access the facility via a dial-up networking mechanism using a modem.

[0047] Upon receiving a command from the facility, the command may be forwarded to a decryption engine, such as decryption engine 405, which may reside in device drivers 502, in BIOS 503, or embedded in hardware 504. The decryption engine may be implemented in software or in hardware. The decryption engine then decrypts the command against one or more speed keys, such as speed keys 409, to ensure that the extracted programming code is valid. If the programming code is valid, the decryption engine may program the respective PLL circuit to control a clock generator to generate the proper clock signals.

[0048] Figure 6 is a flow diagram illustrating an embodiment of a process to remotely program a clock speed of an IC. In one embodiment, exemplary process 600 includes receiving a command for programming a clock speed of an integrated circuit (IC) from a remote facility over a network, and adjusting the clock speed of the IC in response to the command. Referring to Figure 6, after a user issues a request to a remote facility for upgrading the clock speed of an IC (e.g., a microprocessor of a computer system) over a network (e.g., the Internet), at block 601, the user receives a command for programming the clock speed of the IC from the remote facility over the network. In response to the command, at block 602, the system adjusts the clock speed of the IC based on the command.

[0049] Figure 7 is a flow diagram illustrating another embodiment of a process to remotely program a clock speed of an IC. In one embodiment, the exemplary process 700 starts, at block 701, by generating and transmitting a request to a facility, such as a Web server 128 maintained by a manufacturer, for upgrading a clock speed of an IC of a client over a network (e.g., the Internet). In response to the client's request, at block 702, the facility issues an encrypted command to program the requested clock speed and allows the client to download the encrypted command to the client's machine over the network. At block 703, the client decrypts the command to extract a programming code and retrieves a speed key from a memory of the corresponding IC. At block 704, the client examines the programming code against the speed key to determine whether the programming code is valid. If the programming code is valid, at block 705, the client adjusts the clock speed of the IC (e.g., a microprocessor) based on the programming code. In one embodiment, the client programs a PLL circuit to control a clock generator to generate the desired clock signal based on the programming code.

[0050] Although the techniques described above relate to programming a clock speed of an IC, it will be appreciated that the techniques are not limited to programming a clock speed. The techniques may be applied to other remotely programmable features of an IC or a data processing system. For example, according to one embodiment, a cache memory of a microprocessor (e.g., cache 301) or of a data processing system (e.g., cache 204) may be remotely enabled or programmed. Alternatively, a portion of a logic block, such as the core logic of a processor (e.g., core logic 315 of microprocessor 300), may be remotely programmed or enabled. Other functionalities apparent to one with ordinary skill in the art may be remotely programmed via one of the techniques described above.
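For illustration, a facility-side counterpart to block 702, using the public/private key scheme of paragraph [0043], might look as follows; the choice of RSA-OAEP and the programming-code layout are assumptions, since the embodiments leave the algorithm open (this sketch requires the third-party 'cryptography' package):

    # Facility side: encrypt a programming code bound to the requesting
    # processor's ID; the private half of the key pair would be embedded in
    # the client IC (speed key unit 409), so only that IC recovers the code.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()   # held by the facility

    def issue_command(processor_id: int, target_mhz: int) -> bytes:
        """Block 702: bind the programming code to a processor ID so the
        decryption engine can also verify the ID match."""
        code = processor_id.to_bytes(4, "big") + target_mhz.to_bytes(2, "big")
        return public_key.encrypt(code, OAEP)

    command = issue_command(processor_id=0x0F24, target_mhz=466)
    # Client side (decryption engine 405, on-die private key):
    code = private_key.decrypt(command, OAEP)
    assert int.from_bytes(code[4:], "big") == 466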
[0051] Figure 8 is a flow diagram illustrating another embodiment of a process to remotely program a functionality of an IC. In one embodiment, exemplary process 800 includes receiving a command for enabling a functionality of an integrated circuit (IC) from a remote facility over a network, and enabling the functionality of the IC in response to the command. In one embodiment, the command is received via an eCommerce channel, which may be hosted by a manufacturer of the IC or a third-party distributor.

[0052] Referring to Figure 8, the exemplary process 800 starts, at block 801, by generating and transmitting a request to a facility, such as a Web server 128 maintained by a manufacturer, for upgrading a functionality of an IC of a client over a network (e.g., the Internet). In one embodiment, the functionality may include a clock speed capability, a cache memory, or a portion of a core logic, etc. In response to the client's request, at block 802, the facility issues an encrypted command to program the requested functionality and allows the client to download the encrypted command to the client's machine over the network. At block 803, the client decrypts the command to extract a programming code and retrieves an internal code or signature from a memory of the corresponding IC. At block 804, the client examines the programming code against the internal code to determine whether the programming code is valid. If the programming code is valid, at block 805, the client programs the corresponding functionality of the IC (e.g., a clock speed, a cache memory, a portion of a core logic) based on the programming code. In one embodiment, the client programs a PLL circuit to control a clock generator to generate the desired clock signal based on the programming code. Other functionalities apparent to one with ordinary skill in the art may be remotely programmed via one of the techniques described above.
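A sketch of how blocks 803-805 generalize beyond clock speed follows; the code layout, feature identifiers, and handler names are invented for illustration and are not taken from the embodiments:

    # After decryption (block 803) and signature matching (block 804),
    # dispatch on the functionality named in the programming code (block 805).

    def program_pll(arg):        return f"PLL programmed for {arg} MHz"
    def enable_cache(arg):       return f"{arg} KB of cache memory enabled"
    def enable_core_logic(arg):  return f"core logic block {arg} enabled"

    HANDLERS = {0x01: program_pll, 0x02: enable_cache, 0x03: enable_core_logic}

    def enable_functionality(programming_code: bytes, internal_signature: bytes):
        if programming_code[:4] != internal_signature:
            return "rejected"    # signature mismatch: code is not valid
        feature = programming_code[4]
        arg = int.from_bytes(programming_code[5:7], "big")
        return HANDLERS[feature](arg)

    sig = b"\xde\xad\xbe\xef"
    print(enable_functionality(sig + b"\x01" + (466).to_bytes(2, "big"), sig))
    # -> PLL programmed for 466 MHz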
[0053] Thus, the foregoing embodiments enable a vendor to under-clock a high-performance CPU product and sell it at discounted, lower-performance prices. Subsequently, as time passes and end-user demands for performance increase, the vendor can repeatedly offer higher CPU speed increments remotely, for a small premium. How much of a premium depends upon the expected "final" selling cost of the CPU, how many CPU speed increments are incorporated into the CPU (and thus the number of premiums charged), and the time value of money. It will be appreciated that other features of the CPU, such as cache memory or a portion of logic, may be remotely upgraded or enabled. With the technologies described above, adjustments may be made remotely, automatically, securely, and "on the fly."

[0054] For instance, an example of a high-performance processor might be a microprocessor at 866 MHz that may be sold for about $100. This device would be far too pricey for a "new," price-sensitive consumer product. But with the technology described in the embodiments above, a vendor can actually sell this product in a market segment that traditionally has lower prices. The vendor could clock that device down to, for example, 300 MHz and sell it to a customer for, for example, $30. In a year, the vendor could offer a performance increase to 466 MHz for, for example, $30 (this time, the fee might actually be paid by the end user). The offer, as well as the performance increase, would all be handled remotely and automatically via a common e-Commerce mechanism, such that the end user does not need to physically return the device for an upgrade. A year later, the same vendor could offer a performance increase to 633 MHz for, for example, another $30. And finally, still another year later, the vendor could offer a performance increase to 866 MHz for a final fee of, for example, another $30. Therefore, the total selling price for the chip is $120 over a three-year period.

[0055] In the foregoing specification, the invention has been described with reference to specific exemplary embodiments thereof. It will be evident that various modifications may be made thereto without departing from the broader spirit and scope of the invention as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense. |
Systems and methods may provide for receiving web content and determining a trust level associated with the web content. Additionally, the web content may be mapped to an execution environment based at least in part on the trust level. In one example, the web content is stored to a trust-level-specific data container. |
1. A computing system comprising: a first container associated with a first trust level; a second container associated with a second trust level, the second trust level being different from the first trust level; and a processor circuit to: assign different portions of program code to the first container and the second container based on a trust level determined for each portion of the program code; and allocate computing resources to execute each portion of the program code based on which of the first container and the second container is associated with that portion of the program code.

2. The computing system of claim 1, wherein the processor circuit is to assign the program code to at least one of the first container, the second container, or a third container based on the trust level determined for the program code.

3. The computing system of claim 1, wherein the computing system includes a third container, the third container being associated with a third trust level, the third trust level being different from the first and second trust levels.

4. The computing system of any one of claims 1-3, wherein the program code is associated with a web application.

5. The computing system of claim 4, wherein network circuitry is to receive the web application.

6. The computing system of any one of claims 1-3, wherein the processor circuit is to determine the trust level of the program code based on user-provided data.

7. A method for code execution, comprising: assigning, with at least one processor, different portions of program code to a first container and a second container based on a trust level determined for each portion of the program code, the first container being associated with a first trust level and the second container being associated with a second trust level, the second trust level being different from the first trust level; and allocating, with the at least one processor, computing resources to execute each portion of the program code based on which of the first container and the second container is associated with that portion of the program code.

8. The method of claim 7, wherein the assigning comprises assigning the program code to at least one of the first container, the second container, or a third container based on the trust level determined for the program code.

9. The method of claim 8, wherein the third container is associated with a third trust level, the third trust level being different from the first and second trust levels.

10. The method of any one of claims 7-9, wherein the program code is associated with a web application.

11. The method of claim 10, further comprising receiving the web application.

12.
The method of any one of claims 7-9, wherein the trust level of the program code is determined based on user-provided data.

13. At least one computer-readable medium comprising instructions that, when executed, cause at least one processor to perform at least the following operations: assigning different portions of program code to a first container and a second container based on a trust level determined for each portion of the program code, the first container being associated with a first trust level and the second container being associated with a second trust level, the second trust level being different from the first trust level; and allocating computing resources to execute each portion of the program code based on which of the first container and the second container is associated with that portion of the program code.

14. The at least one computer-readable medium of claim 13, wherein the instructions, when executed, cause the at least one processor to assign the program code to at least one of the first container, the second container, or a third container based on the trust level determined for the program code.

15. The at least one computer-readable medium of claim 14, wherein the third container is associated with a third trust level, the third trust level being different from the first and second trust levels.

16. The at least one computer-readable medium of any one of claims 13-15, wherein the program code is associated with a web application.

17. The at least one computer-readable medium of claim 16, wherein the instructions, when executed, further cause the at least one processor to receive the web application.

18. The at least one computer-readable medium of any one of claims 13-15, wherein the instructions, when executed, cause the at least one processor to determine the trust level of the program code based on user-supplied data. |
Differentiated containerization and execution of web content based on trust level and other attributes
This application is a divisional application of the invention patent application with PCT international application number PCT/US2014/021839, international filing date March 7, 2014, and Chinese national phase application number 201480008933.3, entitled "Differentiated Containerization and Execution of Web Content Based on Trust Level and Other Attributes".

BACKGROUND

Embodiments generally relate to access control for web-based applications. More specifically, embodiments relate to differentiated containerization and execution of web content based on trust levels and other attributes.

Emerging markup and runtime or just-in-time (JIT) environment languages such as HTML5 (Hypertext Markup Language 5, e.g., HTML5 Editor's Draft, May 8, 2012, World Wide Web Consortium/W3C, www*w3*org) and LLVM (e.g., LLVM 3.1, May 22, 2012, llvm.org) can support more robust multimedia-related web platform development. However, the use of these languages by web application developers may also expose client device hardware that would not otherwise be accessible to traditional web content. Although recently developed "sandbox" solutions can provide some level of protection by blocking certain functionality when code is sent as part of a web page, there is still considerable room for improvement. For example, conventional sandbox solutions may not adequately distinguish between trusted and untrusted web content sources. As a result, client devices may be vulnerable to malware (malicious software) and other threats from web content sources.

BRIEF DESCRIPTION OF THE DRAWINGS

Various advantages of embodiments of the present invention will become apparent to those skilled in the art from reading the following specification and appended claims, with reference to the following accompanying drawings, in which:

FIG. 1 is a block diagram of an example of a containerization architecture with multiple trust-level-specific data containers, according to an embodiment;

FIG. 2 is a flowchart of an example of a method of differentiating web content using trust levels, according to an embodiment;

FIG. 3 is a block diagram of an example of a containerization architecture with a content offload module, according to an embodiment;

FIG. 4 is a flowchart of an example of a method of differentiating web content using trust levels and an offload container, according to an embodiment;

FIG. 5 is a block diagram of an example of a processor according to an embodiment; and

FIG. 6 is a block diagram of an example of a system according to an embodiment.

DETAILED DESCRIPTION

Turning now to FIG. 1, a containerization architecture 10 is shown in which web content 12, such as web applications, web code, services, "mash-ups", etc., is mapped to an execution environment based on trust level information corresponding to the web content 12. The term "containerization" may refer to the organization of web content information (e.g., objects) into one or more "containers" represented as classes, data structures, abstract data types (ADTs), binaries, other executable code, etc., whose instances can be collections of other objects. Containerization of web content 12 may adhere to certain access rules, wherein the illustrated architecture 10 incorporates trust levels as part of those rules. In the illustrated example, browser interface 16 receives web content 12 and container assignment module 18 determines a trust level associated with the web content 12.
In this regard, web content 12 may incorporate runtime or JIT environment languages such as, for example, HTML5, LLVM, etc., which allow more access to local platform hardware 24 and/or memory 26 than conventional web content. Accordingly, as will be discussed in greater detail, using trust levels to containerize web content 12 and map the web content 12 to an execution environment may provide significantly improved runtime protection.

The container assignment module 18 may access a trust level database 20 and augment the trust level database 20 with data from other devices 22 (e.g., machines and/or users), wherein the information in the trust level database 20 may in turn be used to determine trust levels. For example, the trust level database 20 may include information about "whitelisted" sites, "greylisted" sites, "blacklisted" sites, etc., as well as other raw data such as, for example, provider information, application developer information, mash-up origin and/or behavior information, and so forth. The container assignment module 18 may also use a real-time trust assessment 26 to determine the trust level of the web content 12, wherein the real-time trust assessment 26 may be generated internally by the containerization architecture 10 (e.g., as part of a security tool plug-in) or obtained from another security module 28 (e.g., third-party security software).

The illustrated architecture 10 also includes dedicated data containers 30 (30a-30d) for organizing and/or storing web content 12 according to one or more trust levels corresponding to the web content 12. For example, high trust container 30a may be used to store content associated with whitelisted sites, wherein content in the high (e.g., "native") trust container 30a may be considered highly trustworthy and, from the perspective of the execution environment, may be treated much like native code. On the other hand, medium trust container 30b may be used to store content associated with greylisted sites, wherein content in the medium (e.g., "browser application") trust container 30b may be considered moderately trustworthy and, from an execution environment perspective, may be treated like a browser application. Additionally, low trust container 30c may be used to store content associated with unknown sites, wherein content in the low (e.g., "test") trust container 30c may be considered potentially untrustworthy and may be treated like a new web site. The illustrated architecture 10 also includes a trash container 30d, which may be used to store content associated with blacklisted sites, wherein execution of the content in the trash container 30d may be prevented and/or the content deleted.

Environment module 32 may map web content 12 to execution environments based at least in part on the trust level associated with the web content 12. Thus, the environment module 32 may use a workload scheduler 34 to allocate resources of the platform hardware 24, such as processors (e.g., central processing unit/CPU, graphics processing unit/GPU) and input/output (IO) controllers (e.g., display, audio), for performing one or more workloads associated with the web content 12. Similarly, the environment module 32 may use a memory mapping module 36 (e.g., an input/output memory management unit/IOMMU) to implement one or more memory transactions associated with the web content 12. Of particular note, the scheduling of workloads, the allocation of resources, and the enforcement of memory transactions may all be a function of the trust level associated with the web content 12.
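To make the container assignment concrete, a minimal sketch follows; the score thresholds and weighting are assumptions, since the embodiments only require that the whitelist/greylist/blacklist data and the real-time assessment feed the decision:

    # Sketch of container assignment module 18: combine a trust level
    # database lookup with a real-time assessment to pick a container.

    HIGH, MEDIUM, LOW, TRASH = "high (30a)", "medium (30b)", "low (30c)", "trash (30d)"

    def assign_container(origin, trust_db, realtime_score):
        """trust_db maps origins to 'white'/'grey'/'black'; realtime_score
        in [0.0, 1.0] comes from security module 28 or a built-in tool."""
        listing = trust_db.get(origin, "unknown")
        if listing == "black" or realtime_score < 0.1:
            return TRASH        # execution prevented, content may be deleted
        if listing == "white" and realtime_score > 0.8:
            return HIGH         # treated much like native code
        if listing == "grey" or realtime_score > 0.5:
            return MEDIUM       # treated like a browser application
        return LOW              # unknown origin: probationary handling

    db = {"trusted.example": "white", "ads.example": "black"}
    print(assign_container("trusted.example", db, 0.95))   # high (30a)
    print(assign_container("new-site.example", db, 0.30))  # low (30c)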
Of particular note, the scheduling of workloads, the allocation of resources, and the implementation of memory transactions may all be a function of the trust level associated with the web content 12. More specifically, the type of container 30 that holds the underlying web content 12 may determine how the workload scheduler 34 allocates resources and schedules workloads, and how the memory mapping module 36 implements memory transactions related to the memory 26. For example, all platform resources may be made available to workloads associated with content in the high trust container 30a, while only a subset of platform resources may be made available to workloads associated with content in the medium trust container 30b. On the other hand, workloads associated with content in the low trust container 30c may be given only limited access to platform resources, and workloads associated with content in the trash container 30d may be prevented from any access to the platform hardware whatsoever. Additionally, certain restricted areas of the memory 26 may be protected from access by web content in the trash container 30d, the low trust container 30c, and/or the medium trust container 30b. As will be discussed in greater detail, other contextual attributes, such as, for example, stack area components associated with the web content 12 (e.g., code logic, data presented, data consumed), the latency of one or more web transactions, the content purpose (e.g., the type of web site and the data consumed), the service/site type, etc., may also be used to containerize the web content 12 and select an execution environment for the web content 12.

Turning now to FIG. 2, a method 40 of differentiating web content using trust levels is shown. The method 40 may be implemented as a set of logic instructions and/or firmware stored in a machine-readable medium or computer-readable medium such as random access memory (RAM), read only memory (ROM), programmable ROM (PROM), flash memory, etc., in configurable logic such as, for example, a programmable logic array (PLA), a field programmable gate array (FPGA), or a complex programmable logic device (CPLD), in fixed-functionality logic hardware using circuit technology such as, for example, application specific integrated circuit (ASIC), complementary metal oxide semiconductor (CMOS), or transistor-transistor logic (TTL) technology, or any combination thereof. For example, computer program code to carry out operations shown in the method 40 may be written in any combination of one or more programming languages, including an object-oriented programming language such as C++ and conventional procedural programming languages such as the "C" programming language or similar programming languages. Moreover, the method 40 may be implemented as the containerization architecture 10 (FIG. 1) using any of the aforementioned circuit technologies.

Illustrated processing block 42 provides for receiving web content such as, for example, web applications, web code, services, etc., wherein at least a portion of the web content may include a runtime or JIT environment language such as, for example, HTML5, LLVM, or the like. Block 44 may determine a trust level associated with the web content. As already noted, the determination at block 44 may take into account information in a trust level database, one or more real-time trust level assessments, etc., or any combination thereof.
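By way of illustration only, the following is a minimal sketch of the kind of container assignment logic contemplated by block 44 and the containers 30a-30d of FIG. 1. The helper names, the 0-100 scoring scale, and the 50-point cutoff are assumptions made for the example, not features required by the embodiments.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical encodings; the embodiments do not mandate any particular
 * trust scale or container numbering. */
enum list_status  { WHITELISTED, GREYLISTED, BLACKLISTED, UNKNOWN_SITE };
enum container_id { HIGH_TRUST, MEDIUM_TRUST, LOW_TRUST, TRASH };

/* Stand-in for the trust level database 20. */
static enum list_status trust_db_lookup(const char *origin)
{
    if (strcmp(origin, "trusted.example") == 0) return WHITELISTED;
    if (strcmp(origin, "grey.example") == 0)    return GREYLISTED;
    if (strcmp(origin, "malware.example") == 0) return BLACKLISTED;
    return UNKNOWN_SITE;
}

/* Stand-in for a real-time assessment from a security module 28,
 * returning a score from 0 (untrusted) to 100 (trusted). */
static int realtime_trust_score(const char *origin)
{
    (void)origin;
    return 60;
}

/* Block 44 plus container assignment: combine database information
 * with the real-time assessment to pick a container 30a-30d. */
static enum container_id assign_container(const char *origin)
{
    switch (trust_db_lookup(origin)) {
    case BLACKLISTED: return TRASH;       /* execution blocked        */
    case WHITELISTED: return HIGH_TRUST;  /* treated like native code */
    case GREYLISTED:
        return realtime_trust_score(origin) >= 50 ? MEDIUM_TRUST
                                                  : LOW_TRUST;
    default:          return LOW_TRUST;   /* unknown sites are "test" */
    }
}

int main(void)
{
    printf("%d\n", assign_container("grey.example")); /* 1 (MEDIUM_TRUST) */
    return 0;
}
```

In this sketch the database verdict dominates and the real-time score only refines the treatment of greylisted origins, mirroring the way the container assignment module 18 may combine the trust level database 20 with a real-time trust assessment.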
The illustrated block 46 maps the web content to an execution environment based on the trust level, wherein using trust levels to select execution environments for web content may provide significantly improved runtime protection.

Figure 3 shows a containerization architecture 50 in which the web content 12 (12a, 12b) can be split between a local execution environment and an "offload" execution environment. In the illustrated example, the browser interface 16 receives the web content 12, and a container assignment module 52 determines the trust level associated with the web content 12 based on, for example, information from the trust level database 20 and/or one or more real-time trust assessments 26. As already noted, the real-time trust assessments 26 may be obtained from a security module 28 such as a built-in assessment tool, a stand-alone security tool, an enterprise information technology module, a cloud module, or any combination thereof. The architecture 50 may also include a content offload module 54 that selectively sends a portion 12a of the web content 12 to an offload container 56 in order to map the portion 12a of the web content 12 to another, more risk-tolerant execution environment, wherein the offload container 56 may be associated with a provider of the web content 12, an emulation module of the local computing device/platform, an enterprise data center, a private cloud, a third-party service provider, or the like.

More specifically, the container assignment module 52 may detect situations in which the trust level is relatively low (e.g., the trust level is below a threshold) and execution latency is tolerable (e.g., a latency tolerance condition is satisfied), wherein in such situations the portion 12a of the web content 12 directed to the offload container 56 may represent unauthenticated, latency-insensitive web content. In such a case, results associated with the offload container 56 may be received from an entity and/or processor corresponding to the offload container 56. On the other hand, if the illustrated container assignment module 52 determines that the trust level is relatively high or that the execution latency would be intolerable, the content may be processed locally as the trusted, latency-sensitive portion 12b of the web content 12. As already discussed, the environment module 32 may use the workload scheduler 34 and/or the memory mapping module 36 to map the portion 12b of the web content 12 to an execution environment based on the trust level of the portion 12b.

FIG. 4 illustrates a method 60 of differentiating web content using trust levels and offload containers. The method 60 may be implemented as a set of logic instructions and/or firmware stored in a machine-readable medium or computer-readable medium (such as RAM, ROM, PROM, flash memory, etc.), in configurable logic (such as, for example, PLA, FPGA, CPLD), in fixed-functionality logic hardware using circuit technology (such as, for example, ASIC, CMOS, or TTL technology), or any combination thereof. Illustrated processing block 62 provides for receiving web content, wherein a trust level and a latency of the web content can be determined at block 64. Determining the latency may involve identifying how much execution latency will be incurred by offloading at least a portion of the web content to another execution environment, which may reside on a different platform, system, and/or network (e.g., at a web content provider, a local emulation module, an enterprise data center, a private cloud, a third-party service provider, etc.).
For example, it may be estimated at block 64 that a particular unit of work associated with the web content is most likely to take x milliseconds to process at a third-party service provider.

Block 66 may determine whether the trust level is below a particular threshold. If so, illustrated block 68 determines whether a latency tolerance condition is met. The latency tolerance condition may take into account historical information, quality of service (QoS) information, service level agreement (SLA) information, etc., wherein the determination at block 68 may involve, for example, conducting a comparison between the latency determined at block 64 and a maximum execution latency. If the latency tolerance condition is met (e.g., the latency is below the maximum execution latency), illustrated block 70 maps the corresponding portion of the web content to an offload container. On the other hand, if the trust level is not below the particular threshold or the latency tolerance condition is not met, block 72 may map the corresponding portion of the web content to a local execution environment. Block 72 may take the trust level into consideration when determining which platform resources to expose to the web content.

As has been noted, other contextual attributes such as, for example, stack composition, content purpose, service type, etc., may also be used to determine the trust level of web content. For example, if the code logic reflected in a stack component of the web content indicates that the web content involves one type of activity (e.g., login cookie retrieval), but the data presented to the user involves another type of activity (e.g., social networking, instant messaging/IM), it may be inferred that the trust level of the web content is relatively low. Such trust level inferences can be made even when the origin of the web content is not recorded in the trust level database.
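The decision of blocks 66 through 72 can be summarized as a simple predicate over the trust level and the estimated latency. The following is a minimal sketch under assumed inputs; the threshold values, the millisecond units, and the structure layout are hypothetical and are not part of the illustrated method.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical inputs to blocks 66-72 of FIG. 4. */
struct content_portion {
    int trust_level;     /* e.g., 0 (untrusted) to 100 (trusted) */
    int est_latency_ms;  /* latency estimate from block 64       */
};

#define TRUST_THRESHOLD   50  /* block 66 */
#define MAX_EXEC_LATENCY 200  /* block 68: from QoS/SLA/history  */

/* Returns true when the portion should be mapped to the offload
 * container (block 70); false means local execution (block 72). */
static bool should_offload(const struct content_portion *p)
{
    bool low_trust        = p->trust_level < TRUST_THRESHOLD;     /* block 66 */
    bool latency_tolerant = p->est_latency_ms < MAX_EXEC_LATENCY; /* block 68 */
    return low_trust && latency_tolerant;
}

int main(void)
{
    struct content_portion unauthenticated = { 20, 120 };
    struct content_portion trusted         = { 80, 120 };

    printf("%d %d\n", should_offload(&unauthenticated), /* 1: offload   */
                      should_offload(&trusted));        /* 0: run local */
    return 0;
}
```

Note that both conditions must hold: trusted content stays local regardless of latency, and untrusted but latency-sensitive content also stays local, matching the unauthenticated, latency-insensitive characterization of the offloaded portion 12a.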
FIG. 5 illustrates a processor core 200 according to one embodiment. The processor core 200 may be the core of any type of processor, such as a micro-processor, an embedded processor, a digital signal processor (DSP), a network processor, or other device to execute code. Although only one processor core 200 is illustrated in FIG. 5, a processing element may alternatively include more than one of the processor core 200 illustrated in FIG. 5. The processor core 200 may be a single-threaded core or, for at least one embodiment, the processor core 200 may be multithreaded in that it may include more than one hardware thread context (or "logical processor") per core.

FIG. 5 also illustrates a memory 270 coupled to the processor 200. The memory 270 may be any of a wide variety of memories (including various layers of a memory hierarchy) as are known or otherwise available to those of skill in the art. The memory 270 may include one or more instructions of code 213 to be executed by the processor 200 core, wherein the code 213 may implement the containerization architecture 10 (FIG. 1) and/or the containerization architecture 50 (FIG. 3), already discussed. The processor core 200 follows a program sequence of instructions indicated by the code 213. Each instruction may enter a front end 210 and be processed by one or more decoders 220. The decoder 220 may generate as its output micro-operations, such as fixed-width micro-operations in a predefined format, or may generate other instructions, micro-instructions, or control signals which reflect the original code instructions.

The illustrated front end 210 also includes register renaming logic 225 and scheduling logic 230, which generally allocate resources and queue the operations corresponding to the converted instructions for execution.

The processor 200 is shown including execution logic 250 having a set of execution units 255-1 through 255-N. Some embodiments may include a number of execution units dedicated to specific functions or sets of functions. Other embodiments may include only one execution unit or one execution unit that can perform a particular function. The illustrated execution logic 250 performs the operations specified by the code instructions.

After completion of execution of the operations specified by the code instructions, back end logic 260 retires the instructions of the code 213. In one embodiment, the processor 200 allows out-of-order execution of instructions but requires in-order retirement of instructions. Retirement logic 265 may take a variety of forms known to those of skill in the art (e.g., re-order buffers or the like). In this manner, the processor core 200 is transformed during execution of the code 213, at least in terms of the output generated by the decoder, the hardware registers and tables utilized by the register renaming logic 225, and any registers (not shown) modified by the execution logic 250.

Although not illustrated in FIG. 5, a processing element may include other elements on chip with the processor core 200. For example, a processing element may include memory control logic along with the processor core 200. The processing element may include I/O control logic and/or may include I/O control logic integrated with memory control logic. The processing element may also include one or more caches.

Referring now to FIG. 6, shown is a block diagram of an embodiment of a system 1000 in accordance with an embodiment of the present invention. Shown in FIG. 6 is a multiprocessor system 1000 that includes a first processing element 1070 and a second processing element 1080. While two processing elements 1070 and 1080 are shown, it is to be understood that an embodiment of the system 1000 may also include only one such processing element.

The system 1000 is illustrated as a point-to-point interconnect system, wherein the first processing element 1070 and the second processing element 1080 are coupled via a point-to-point interconnect 1050. It should be understood that any or all of the interconnects illustrated in FIG. 6 may be implemented as multi-drop buses rather than point-to-point interconnects.

As shown in FIG. 6, each of the processing elements 1070 and 1080 may be a multicore processor, including first and second processor cores (i.e., processor cores 1074a and 1074b and processor cores 1084a and 1084b). Such cores 1074a, 1074b, 1084a, 1084b may be configured to execute instruction code in a manner similar to that discussed above in connection with FIG. 5.

Each processing element 1070, 1080 may include at least one shared cache 1896. The shared caches 1896a, 1896b may store data (e.g., instructions) that are utilized by one or more components of the processor, such as the cores 1074a, 1074b and 1084a, 1084b, respectively. For example, the shared cache may locally cache data stored in a memory 1032, 1034 for faster access by components of the processor.
In one or more embodiments, the shared cache may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof.

While shown with only two processing elements 1070, 1080, it is to be understood that the scope of the present invention is not so limited. In other embodiments, one or more additional processing elements may be present in a given processor. Alternatively, one or more of the processing elements 1070, 1080 may be an element other than a processor, such as an accelerator or a field programmable gate array. For example, the additional processing element(s) may include additional processor(s) that are the same as the first processor 1070, additional processor(s) that are heterogeneous or asymmetric to the first processor 1070, accelerators (such as, for example, graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processing element. There can be a variety of differences between the processing elements 1070, 1080 in terms of a spectrum of metrics of merit including architectural, microarchitectural, thermal, power consumption characteristics, and the like. These differences may effectively manifest themselves as asymmetry and heterogeneity amongst the processing elements 1070, 1080. For at least one embodiment, the various processing elements 1070, 1080 may reside in the same die package.

The first processing element 1070 may further include memory controller logic (MC) 1072 and point-to-point (P-P) interfaces 1076 and 1078. Similarly, the second processing element 1080 may include an MC 1082 and P-P interfaces 1086 and 1088. As shown in FIG. 6, the MCs 1072 and 1082 couple the processors to respective memories, namely a memory 1032 and a memory 1034, which may be portions of main memory locally attached to the respective processors. While the MC logic 1072 and 1082 is illustrated as integrated into the processing elements 1070, 1080, for alternative embodiments the MC logic may be discrete logic outside the processing elements 1070, 1080 rather than integrated therein.

The first processing element 1070 and the second processing element 1080 may be coupled to an I/O subsystem 1090 via P-P interconnects 1076, 1086, respectively. As shown in FIG. 6, the I/O subsystem 1090 includes P-P interfaces 1094 and 1098. Furthermore, the I/O subsystem 1090 includes an interface 1092 to couple the I/O subsystem 1090 with a high performance graphics engine 1038. In one embodiment, a bus 1049 may be used to couple the graphics engine 1038 to the I/O subsystem 1090. Alternately, a point-to-point interconnect may couple these components.

In turn, the I/O subsystem 1090 may be coupled to a first bus 1016 via an interface 1096. In one embodiment, the first bus 1016 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the invention is not so limited.

As shown in FIG. 6, various I/O devices 1014 may be coupled to the first bus 1016, along with a bus bridge 1018 which may couple the first bus 1016 to a second bus 1020. In one embodiment, the second bus 1020 may be a low pin count (LPC) bus.
Various devices may be coupled to the second bus 1020 including, for example, a keyboard/mouse 1012, a network controller/communication device 1026 (which may in turn be in communication with a computer network), and a data storage unit 1019 such as a disk drive or other mass storage device, which may include code 1030, in one embodiment. In one example, web content is received via the communication device 1026. The code 1030 may include instructions for performing embodiments of one or more of the methods described above. Thus, the illustrated code 1030 may implement the containerization architecture 10 (FIG. 1) and/or the containerization architecture 50 (FIG. 3), already discussed, and may be similar to the code 213 (FIG. 5). Further, an audio I/O 1024 may be coupled to the second bus 1020.

Note that other embodiments are contemplated. For example, instead of the point-to-point architecture of FIG. 6, a system may implement a multi-drop bus or another such communication topology. Also, the elements of FIG. 6 may alternatively be partitioned using more or fewer integrated chips than shown in FIG. 6.

Additional Notes and Examples:

Example one may therefore include a method of differentiating web content in which web content is received and a trust level associated with the web content is determined. The method may also provide for mapping the web content to an execution environment based at least in part on the trust level.

Additionally, the method of example one may further include storing the web content in a trust level-specific data container.

Additionally, the web content of the example one method may be mapped to the execution environment further based on a contextual attribute, the contextual attribute including one or more of a stack area component associated with the web content, a latency of one or more web transactions associated with the web content, a purpose of the web content, and a service type associated with the web content.

Additionally, mapping the web content to the execution environment in the example one method may include sending at least a portion of the web content to an offload container associated with one or more of a provider of the web content, an emulation module of a local computing device, an enterprise data center, a private cloud, and a third-party service provider, and receiving a result associated with the offload container.

Additionally, at least a portion of the web content in the example one method may be sent to the offload container if the trust level is below a threshold and a latency tolerance condition is satisfied.

Additionally, the method of example one may further include accessing a trust level database, wherein the trust level is determined based at least in part on the trust level database.

Additionally, the method of example one may further include obtaining a real-time trust level assessment, wherein the trust level is determined based at least in part on the real-time trust level assessment.

Additionally, obtaining the real-time trust level assessment in the example one method may include generating the real-time trust level assessment.

Additionally, mapping the web content to the execution environment in the example one method may include scheduling one or more workloads associated with the web content based at least in part on the trust level, allocating one or more resources for the one or more workloads, and implementing one or more memory transactions associated with the web content based at least in part on the trust level.

Example two may include at least one computer-readable storage medium comprising a set of instructions which, if executed by a processor, cause a computing device to perform the method of example one.

Example three may include a web content differentiation apparatus having a browser interface to receive web content and a container assignment module to determine a trust level associated with the web content. The apparatus may also have an environment module to map the web content to an execution environment based at least in part on the trust level.

Additionally, the apparatus of example three may further include a plurality of trust level-specific data containers, wherein the container assignment module is to store the web content in one or more of the plurality of trust level-specific data containers.

Additionally, the web content of the example three apparatus may be mapped to the execution environment further based on a contextual attribute, the contextual attribute including one or more of a stack area component associated with the web content, a latency of one or more web transactions associated with the web content, a purpose of the web content, and a service type associated with the web content.

Additionally, the apparatus of example three may further include a content offload module to map the web content to the execution environment by sending at least a portion of the web content to an offload container associated with one or more of a provider of the web content, an emulation module of a local computing device, an enterprise data center, a private cloud, and a third-party service provider, and receiving a result associated with the offload container.

Additionally, at least a portion of the web content of the example three apparatus may be sent to the offload container if the trust level is below a threshold and a latency tolerance condition is satisfied.

Additionally, the apparatus of example three may further include a trust level database, wherein the trust level is determined based at least in part on the trust level database.

Additionally, the container assignment module of the example three apparatus may obtain a real-time trust level assessment, wherein the trust level is determined based at least in part on the real-time trust level assessment.

Additionally, the apparatus of example three may further include a security module to generate the real-time trust level assessment.

Additionally, the security module of example three may be one or more of a built-in assessment tool, a stand-alone security tool, an enterprise information technology module, and a cloud module.

Additionally, the apparatus of example three may further include a workload scheduler to schedule one or more workloads associated with the web content based at least in part on the trust level and to allocate one or more resources for the one or more workloads, wherein the environment module is to implement one or more memory transactions associated with the web content based at least in part on the trust level.

Accordingly, techniques described herein may allow for differentiated containers in which the client execution environment (e.g., memory, CPU, graphics, network, operating system/OS) provided for web content varies based on the trust level of the web content's origin site.
Moreover, improved runtime protection of client devices against malware (malicious software) and other web content from unknown sources may be achieved. Other contextual attributes such as stack area components (e.g., code logic, data presented, data consumed), the latency of web transactions, content purpose, service type, etc., can also be used to differentiate web content and configure the execution environment. Additionally, web content may be split between client devices and cloud computing resources (e.g., content providers, enterprise data centers, private clouds, third-party service providers) based on trust levels, latency, and the like. A containerization module may be implemented as a stand-alone security application, as a plug-in to a security tool (e.g., a security enclave, DeepSafe), in firmware, etc., or in any combination thereof. The techniques can also be used to correlate real-time assessment data from other security applications and/or resources.

Various embodiments may be implemented using hardware elements, software elements, or a combination of both. Examples of hardware elements may include processors, microprocessors, circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, application specific integrated circuits (ASICs), programmable logic devices (PLDs), digital signal processors (DSPs), field programmable gate arrays (FPGAs), logic gates, registers, semiconductor devices, chips, microchips, chipsets, and so forth. Examples of software may include software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, application programming interfaces (APIs), instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. Determining whether an embodiment is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds, and other design or performance constraints.

One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein. Such representations, known as "IP cores," may be stored on a tangible, machine-readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.

Embodiments of the present invention are applicable for use with all types of semiconductor integrated circuit ("IC") chips. Examples of these IC chips include, but are not limited to, processors, controllers, chipset components, programmable logic arrays (PLAs), memory chips, network chips, and the like. Additionally, in some of the drawings, signal conductor lines are represented with lines. Some may be different, to indicate more constituent signal paths, may have a number label, to indicate a number of constituent signal paths, and/or may have arrows at one or more ends, to indicate the primary direction of information flow. This, however, should not be construed in a limiting manner.
Rather, such added detail may be used in connection with one or more exemplary embodiments to facilitate easier understanding of a circuit. Any represented signal lines, whether or not having additional information, may actually comprise one or more signals that may travel in multiple directions and may be implemented with any suitable type of signal scheme, e.g., digital or analog lines implemented with differential pairs, optical fiber lines, and/or single-ended lines.

Example sizes/models/values/ranges may have been given, although embodiments of the present invention are not limited to the same. As manufacturing techniques (e.g., photolithography) mature over time, it is expected that devices of smaller size could be manufactured. In addition, well-known power/ground connections to IC chips and other components may or may not be shown within the figures, for simplicity of illustration and discussion, and so as not to obscure certain aspects of the embodiments of the invention. Further, arrangements may be shown in block diagram form in order to avoid obscuring embodiments of the invention, and also in view of the fact that specifics with respect to implementation of such block diagram arrangements are highly dependent upon the platform within which the embodiment is to be implemented, i.e., such specifics should be well within the purview of one skilled in the art. Where specific details (e.g., circuits) are set forth in order to describe example embodiments of the invention, it should be apparent to one skilled in the art that embodiments of the invention can be practiced without, or with variation of, these specific details. The description is thus to be regarded as illustrative instead of limiting.

Some embodiments may be implemented, for example, using a machine or tangible computer-readable medium or article which may store an instruction or a set of instructions that, if executed by a machine, may cause the machine to perform a method and/or operations in accordance with the embodiments. Such a machine may include, for example, any suitable processing platform, computing platform, computing device, processing device, computing system, processing system, computer, processor, or the like, and may be implemented using any suitable combination of hardware and/or software. The machine-readable medium or article may include, for example, any suitable type of memory unit, memory device, memory article, memory medium, storage device, storage article, storage medium, and/or storage unit, for example, memory, removable or non-removable media, erasable or non-erasable media, writeable or re-writeable media, digital or analog media, hard disk, floppy disk, compact disk read only memory (CD-ROM), compact disk recordable (CD-R), compact disk rewriteable (CD-RW), optical disk, magnetic media, magneto-optical media, removable memory cards or disks, various types of digital versatile disk (DVD), a tape, a cassette, or the like.
The instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, encrypted code, and the like, implemented using any suitable high-level, low-level, object-oriented, visual, compiled and/or interpreted programming language.

Unless specifically stated otherwise, it may be appreciated that terms such as "processing," "computing," "calculating," "determining," or the like refer to the action and/or processes of a computer or computing system, or similar electronic computing device, that manipulates and/or transforms data represented as physical (e.g., electronic) quantities within the computing system's registers and/or memories into other data similarly represented as physical quantities within the computing system's memories, registers, or other such information storage, transmission, or display devices. The embodiments are not limited in this context.

The term "coupled" may be used herein to refer to any type of relationship, direct or indirect, between the components in question, and may apply to electrical, mechanical, fluid, optical, electromagnetic, electromechanical, or other connections. In addition, the terms "first," "second," etc. may be used herein only to facilitate discussion, and carry no particular temporal or chronological significance unless otherwise indicated.

Those skilled in the art will appreciate from the foregoing description that the broad techniques of the embodiments of the present invention can be implemented in a variety of forms. Therefore, while the embodiments of this invention have been described in connection with particular examples thereof, the true scope of the embodiments of the invention should not be so limited, since other modifications will become apparent to the skilled practitioner upon a study of the drawings, specification, and following claims. |
An apparatus of an aspect includes a plurality of cores. The plurality of cores are logically grouped into a plurality of clusters. A cluster sharing map-based coherence directory is coupled with the plurality of cores and is to track sharing of data among the plurality of cores. The cluster sharing map-based coherence directory includes a tag array to store corresponding pairs of addresses and cluster identifiers. Each of the addresses is to identify data. Each of the cluster identifiers is to identify one of the clusters. The cluster sharing map-based coherence directory also includes a cluster sharing map array to store cluster sharing maps. Each of the cluster sharing maps corresponds to one of the pairs of addresses and cluster identifiers. Each of the cluster sharing maps is to indicate intra-cluster sharing of data identified by the corresponding address within a cluster identified by the corresponding cluster identifier. |
CLAIMS What is claimed is: 1. An apparatus comprising: a plurality of cores, the plurality of cores logically grouped into a plurality of clusters; and a cluster sharing map-based coherence directory coupled with the plurality of cores to track sharing of data among the plurality of cores, the cluster sharing map-based coherence directory including: a tag array to store corresponding pairs of addresses and cluster identifiers, each of the addresses to identify data, each of the cluster identifiers to identify one of the clusters; and a cluster sharing map array to store cluster sharing maps, each of the cluster sharing maps corresponding to one of the pairs of addresses and cluster identifiers, each of the cluster sharing maps to indicate intra-cluster sharing of data identified by the corresponding address within a cluster identified by the corresponding cluster identifier. 2. The apparatus of claim 1 , wherein the clusters logically group non-overlapping sets of cores of equal size. 3. The apparatus of claim 1, wherein a pair of an address and a cluster identifier are to be stored in a given set and a given way of the tag array, and wherein a cluster sharing map corresponding to the pair is to be stored in a corresponding set and a corresponding way of the cluster sharing map array. 4. The apparatus of claim 1 , further comprising logic to indicate inter-cluster sharing of a given data identified by a given address between a first cluster and a second cluster by storing both a first cluster identifier to identify the first cluster and a second cluster identifier to identify the second cluster in different ways of a same set of the tag array, and by storing both a first cluster sharing map to indicate intra-cluster sharing of the given data within the first cluster and a second cluster sharing map to indicate intra- cluster sharing of the given data within the second cluster in different ways of a same set of the cluster sharing map array. 5. The apparatus of claim 1, further comprising logic to generate an all-core sharing map from a plurality of cluster sharing maps each corresponding to a given address and each corresponding to a different cluster identifier, by rearranging the plurality of cluster sharing maps, from positions where they are stored in the cluster sharing map array to positions in the all-core sharing map, based on the different corresponding cluster identifiers. 6. The apparatus of claim 1 , further comprising tag comparison logic coupled with the tag array, the tag comparison logic to compare a plurality of addresses in different ways of a same set of the tag array and to provide a plurality of per-way match signals to indicate whether or not the addresses in the different ways match. 7. The apparatus of claim 1, further comprising a small all-core sharing map-based coherence directory having no more than twenty entries per core of the plurality of cores, the small all-core sharing map-based coherence directory to store corresponding pairs of addresses and all-core sharing maps, each of the all-core sharing maps to indicate sharing of data identified by a corresponding address within the plurality of cores. 8. The apparatus of claim 7, wherein the small all-core sharing map-based coherence directory has no more than fifteen entries per core of the plurality of cores. 9. The apparatus of claim 7, wherein the cluster sharing map-based coherence directory and the small all-core sharing map-based coherence directory are coupled to be accessed in parallel. 10. 
The apparatus of claim 7, wherein the cluster sharing map-based coherence directory and the small all-core sharing map-based coherence directory are coupled to be accessed in series. 11. The apparatus of claim 1 , wherein the plurality of cores comprise at least thirty- two cores. 12. The apparatus of claim 11, wherein the plurality of cores comprise at least one hundred cores. 13. A method comprising: storing corresponding pairs of addresses and cluster identifiers in a tag array of a cluster sharing map-based coherence directory, each of the addresses to identify data, each of the cluster identifiers to identify one of a plurality of clusters, the clusters logically grouping a plurality of cores; storing cluster sharing maps in a cluster sharing map array of the cluster sharing map- based coherence directory, each of the cluster sharing maps corresponding to one of the pairs of addresses and cluster identifiers, each of the cluster sharing maps to indicate intra-cluster sharing of data identified by the corresponding address by cores within a cluster identified by the corresponding cluster identifier; and determining inter-cluster sharing of data corresponding to a given address by accessing from the cluster sharing map-based coherency directory a plurality of cluster sharing maps each corresponding to the given address and each having a different cluster identifier. 14. The method of claim 13, further comprising logically grouping the cores into the clusters, wherein logically grouping comprises logically grouping at least one hundred cores into the plurality of clusters. 15. The method of claim 13, further comprising indicating inter-cluster sharing of a given data identified by a given address between a first cluster and a second cluster by: storing both a first cluster identifier to identify the first cluster and a second cluster identifier to identify the second cluster in different ways of a same set of the tag array; and storing both a first cluster sharing map to indicate intra-cluster sharing of the given data within the first cluster and a second cluster sharing map to indicate intra-cluster sharing of the given data within the second cluster in different corresponding ways of a same set of the cluster sharing map array. 16. The method of claim 13, further comprising generating an all-core sharing map from a plurality of cluster sharing maps each corresponding to a given address and each corresponding to a different cluster identifier. 17. The method of claim 16, wherein generating the all-core sharing map comprises rearranging the plurality of cluster sharing maps, from positions where they are stored in the cluster sharing map array, to positions in the all-core sharing map, based on the different corresponding cluster identifiers. 18. The method of claim 13, further comprising comparing a plurality of addresses in a plurality of different ways of a set of the tag array with a reference address and indicating which of multiple addresses match the reference address. 19. The method of claim 13, further comprising logically grouping the cores into the clusters, wherein logically grouping the cores into the clusters comprises logically grouping the cores into clusters of non-overlapping sets of cores of equal size. 20. 
The method of claim 13, further comprising accessing a small all-core sharing map-based coherence directory having no more than twenty entries per core, the small all-core sharing map-based coherence directory storing corresponding pairs of addresses and all-core sharing maps, each of the all-core sharing maps to indicate sharing of data identified by a corresponding address by any of the plurality of cores. 21. The method of claim 20, further comprising accessing the cluster sharing map- based coherence directory in parallel with accessing the small all-core sharing map-based coherence directory. 22. The method of claim 20, further comprising accessing the cluster sharing map- based coherence directory in series with accessing the small all-core sharing map-based coherence directory. 23. The method of claim 22, wherein the cluster sharing map-based coherence directory accesses the small all-core sharing map-based coherence directory. 24. The method of claim 20, wherein accessing comprises accessing a small all-core sharing map-based coherence directory that has no more than fifteen entries per core. 25. A system comprising: a multi-core apparatus, the multi-core apparatus including: a plurality of cores, the plurality of cores logically grouped into a plurality of clusters; a memory controller coupled with a first core of the plurality; and a cluster sharing map-based coherence directory coupled with the plurality of cores to track sharing of data among the plurality of cores, the cluster sharing map-based coherence directory including: a tag array to store corresponding pairs of addresses and cluster identifiers, each of the addresses to identify data, each of the cluster identifiers to identify one of the clusters; and a cluster sharing map array to store cluster sharing maps, each of the cluster sharing maps corresponding to one of the pairs of addresses and cluster identifiers, each of the cluster sharing maps to indicate intra-cluster sharing of data identified by the corresponding address within a cluster identified by the corresponding cluster identifier; and a memory coupled with the memory controller, wherein the memory comprises a dynamic random access memory (DRAM). 26. The system of claim 25, further comprising logic to indicate inter-cluster sharing of a given data identified by a given address between a first cluster and a second cluster by storing both a first cluster identifier to identify the first cluster and a second cluster identifier to identify the second cluster in different ways of a same set of the tag array, and by storing both a first cluster sharing map to indicate intra-cluster sharing of the given data within the first cluster and a second cluster sharing map to indicate intra- cluster sharing of the given data within the second cluster in different ways of a same set of the cluster sharing map array. 27. The system of claim 25, further comprising logic to generate an all-core sharing map from a plurality of cluster sharing maps each corresponding to a given address and each corresponding to a different cluster identifier, by rearranging the plurality of cluster sharing maps, from positions where they are stored in the cluster sharing map array to positions in the all-core sharing map, based on the different corresponding cluster identifiers. |
SCALABLE COHERENCE FOR MULTI-CORE PROCESSORS

BACKGROUND

Field

Embodiments relate to multi-core processors. In particular, embodiments relate to maintaining data coherence in multi-core processors.

Background Information

Chip multi-processors (CMPs), multi-core devices, and other multi-processor apparatus have a number of cores or processors on a single integrated circuit die or chip. Each core generally has associated therewith one or more corresponding local caches which are operable to cache copies of data from one or more shared memories. The cores are generally coupled together and are operable to share the data stored in their local caches with one another. It is generally important to maintain coherence, or a consistent view of the data, across all of the cores.

All-core sharing map-based hardware coherence directories are one of the commonly used hardware-based coherence mechanisms in present day general-purpose processors to help maintain coherence of data across all of the cores. These directories represent hardware structures that are operable to track data cached in the local cache(s) of all of the cores, as well as which of the cores are sharing the data. All-core hardware coherence tags are typically stored in the entries of the directories and indicate the sharing of the data.

Figure 1 is a block diagram of a known all-core hardware coherence tag 100. As the name implies, the all-core hardware coherence tag has a scope of all of the cores and is operable to indicate sharing of data among any or all of the cores. The all-core hardware coherence tag includes an address field 102, a state field 104, and an all-core sharing map field 106. The address field may indicate an address (e.g., of a cache line caching a copy of data from memory and/or the memory address of the data). By way of example, the address field may have a length of 33-bits. The state field may indicate a state of the corresponding data or entry in the directory (e.g., whether the data or entry is modified, exclusive, shared, or invalid). For example, the state field may have a length of 2-bits. The 2-bits may indicate any one of four different states.

The all-core sharing map field 106 may indicate which of the cores of a device are caching a copy of the data corresponding to the address in the address field. The all-core sharing map field generally includes 1-bit for each of the cores. As shown in the illustration, the all-core sharing map field has a length of 32-bits, or 1-bit for each of 32-cores. The 1-bit corresponding to a given core is operable to indicate whether or not the given core is caching a copy of the data. According to one possible convention, a binary value of 1 (i.e., the bit being set) may be used to indicate that the given core is caching a copy of the data, whereas a binary value of 0 (i.e., the bit being cleared) may be used to indicate that the given core is not caching a copy of the data. For example, in the illustrated embodiment, bits [0:5] having the respective values 0 1 1 0 0 1 may indicate that, for the given address, core 0 is not caching, cores 1 and 2 are caching, cores 3 and 4 are not caching, and core 5 is caching.

Figure 2 is a block diagram of a known all-core sharing map-based hardware coherence directory 210. The directory is set associative and includes a 4-way set associative tag array 212 and a 4-way set associative all-core sharing map array 214. There is a one-to-one correspondence between ways in the tag and all-core sharing map arrays.
The tag array 212 is arranged as (k+1) sets, labeled set[0] through set[k], and four ways, labeled way[0] through way[3]. The address and state fields are typically included in the tag array. As shown, set[1] includes address 102 and state 104 fields in each of way[1] and way[2]. The all-core sharing map array 214 is also arranged as (k+1) sets, labeled set[0] through set[k], and four ways, labeled way[0] through way[3]. The all-core sharing map fields are typically included in the all-core sharing map array. As shown, set[1] includes all-core sharing map fields 106 in each of way[1] and way[2]. Typically, the number of tags in the directory equals the total number of tags in the local/private caches of all of the cores to enable tracking distinct cache lines.

During operation, when it is desired to know which cores are caching data for a given address, the all-core sharing map-based hardware coherence directory may be consulted. The directory includes tag comparison logic 216. The tag comparison logic may compare four addresses, each stored within a different one of the four ways of a set, with a given address. The four addresses may be read out on tag array readout lines 218. Either none of the four addresses may match the given address, or at most a single address in a single way may match the given address. Assuming a single address in a single way matches the given address, a way select signal 220, for example a 2-bit way select signal for a 4-way set associative array, may be output from the tag comparison logic to way selection logic 222. The way select signal may indicate the single way having the matching address. Four all-core sharing map fields, each in one of the four different ways of the corresponding set, may be read out on all-core sharing map array readout lines 224 and provided to the way selection logic. The way selection logic may select the single all-core sharing map field in the single way indicated by the way select signal. For example, if the way select signal indicates way[2] (e.g., has a value of binary 10), then the all-core sharing map field in way[2] may be selected and output as a selected all-core sharing map 206. The output all-core sharing map field indicates which of the cores are sharing the data.

BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS

The invention may best be understood by referring to the following description and accompanying drawings that are used to illustrate embodiments of the invention. In the drawings:

Figure 1 is a block diagram of a known all-core hardware coherence tag.

Figure 2 is a block diagram of a known all-core sharing map-based hardware coherence directory.

Figure 3 is a block diagram of an embodiment of a multi-processor apparatus.

Figure 4 shows an embodiment of suitable internal components of a representative tile.

Figure 5 is a block diagram of an embodiment of a multi-processor apparatus having multiple processors or cores in which the processors or cores are logically grouped into at least two clusters, with each of the clusters including at least two processors or cores.

Figure 6 is a block diagram of a particular example embodiment of a thirty-two core apparatus having thirty-two cores logically grouped into four clusters that each includes a different set of eight of the cores.

Figure 7 is a block diagram of an embodiment of a cluster hardware coherence tag.

Figure 8 is a block diagram of an embodiment of a cluster sharing map-based hardware coherence directory.
Figure 9 is a graph plotting directory storage as a percentage of cache storage as a function of the number of cores for a conventional all-core shared map-based hardware coherence directory and a cluster shared map-based hardware coherence directory.

Figure 10 is a block diagram of a first embodiment of hardware coherence logic that includes a cluster sharing map-based hardware coherence directory and an optional small all-core sharing map-based hardware coherence directory that are accessed sequentially.

Figure 11 is a block diagram of a second embodiment of hardware coherence logic that includes a cluster sharing map-based hardware coherence directory and an optional small all-core sharing map-based hardware coherence directory that are accessed concurrently.

Figure 12A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the invention.

Figure 12B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the invention.

Figures 13A-B illustrate a block diagram of a more specific exemplary in-order core architecture, which core would be one of several logic blocks (including other cores of the same type and/or different types) in a chip.

Figure 14 is a block diagram of a processor that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to embodiments of the invention.

Figure 15 is a block diagram of a system in accordance with one embodiment of the present invention.

Figure 16 is a block diagram of a first more specific exemplary system 1600 in accordance with an embodiment of the present invention.

Figure 17 is a block diagram of a second more specific exemplary system 1700 in accordance with an embodiment of the present invention.

Figure 18 is a block diagram of a SoC in accordance with an embodiment of the present invention.

Figure 19 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the invention.

DETAILED DESCRIPTION

In the following description, numerous specific details, such as specific multi-core processors, specific directory configurations, specific array configurations, specific core cluster arrangements, specific logic implementation choices, specific logic partitioning/integration details, and the like, are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In other instances, well-known circuits, structures and techniques have not been shown in detail in order not to obscure the understanding of this description.

One limitation of all-core hardware coherence tags, all-core sharing maps, and/or all-core sharing map-based hardware coherence directories is that the sizes of the tags, the maps, and/or the directories tend to increase significantly with increasing numbers of cores. As discussed above, the all-core sharing maps include 1-bit for each of the cores, so that as the number of cores increases, the number of bits within each of the maps also increases. For example, in the case of 64-cores each of the maps may be 64-bits wide, in the case of 256-cores each of the maps may be 256-bits wide, in the case of 1024-cores each of the maps may be 1024-bits wide, and so on. Moreover, these maps are typically stored in many sets and ways. Accordingly, as the number of cores increases, the amount of storage space needed to store all of the all-core sharing maps and/or the size of the all-core sharing map-based hardware coherence directories may tend to increase significantly (in fact the rate of increase may tend to accelerate). At some number of cores (e.g., somewhere around 512), the amount of storage space needed to store all of the all-core sharing maps and/or the size of the all-core sharing map-based directory may even surpass the actual cache storage space used to store the data being tracked. Such increased storage space tends to increase the size, power consumption, and manufacturing cost of the device. As a result, such all-core hardware coherence tags, all-core sharing maps, and/or all-core sharing map-based hardware coherence directories do not provide a solution that efficiently scales with increasing numbers of cores. Other more scalable hardware coherence approaches would be useful and would offer certain advantages (e.g., in terms of reduced storage, reduced manufacturing cost, reduced area, reduced power, etc.).
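To make the scaling concrete, consider a rough estimate. The 64-byte line size, 33-bit address tag, and 2-bit state field below are assumptions carried over from the example of Figure 1 and are not limits of any embodiment; the sketch simply computes directory storage as a fraction of the tracked cache storage, assuming one directory entry per cached line.

```c
#include <stdio.h>

/* Rough per-line estimate: directory entry bits (tag + state + one
 * sharing-map bit per core) divided by cache line bits. */
static double directory_overhead(int num_cores)
{
    const double tag_bits   = 33.0;
    const double state_bits = 2.0;
    const double line_bits  = 64.0 * 8.0;  /* 64-byte cache line */
    return (tag_bits + state_bits + (double)num_cores) / line_bits;
}

int main(void)
{
    const int cores[] = { 32, 64, 256, 512, 1024 };
    for (int i = 0; i < 5; i++)
        printf("%4d cores: directory ~%5.1f%% of cache storage\n",
               cores[i], 100.0 * directory_overhead(cores[i]));
    /* At 512 cores the estimate is (33 + 2 + 512) / 512, about 107%,
     * i.e., the directory surpasses the cache storage being tracked. */
    return 0;
}
```

Under these assumptions the overhead is roughly 13% at 32 cores but crosses 100% near 512 cores, which is consistent with the observation above that the directory may surpass the actual cache storage at somewhere around 512 cores.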
As previously mentioned, in the all-core sharing map-based hardware coherence directory, the number of tags stored in the directory generally equals the total number of tags in the local/private caches of all of the cores, in order to enable tracking distinct cache lines. When cache lines are shared by two or more cores, fewer distinct addresses will generally be tracked by the directory, such that the storage capacity of the directory is not fully utilized in the presence of such sharing. For example, assume each cache line is pair-wise shared by two cores. In this case, only approximately half of the available directory storage capacity is being used. That is, as the amount of sharing increases, the amount of available storage space in the directory may tend to increase. In some embodiments, the increased amount of available storage space in the directory, as a result of sharing of data among cores, may instead be utilized to make the hardware coherence directory more scalable with increasing numbers of cores.

Figure 3 is a block diagram of an embodiment of a multi-processor apparatus 330. In some embodiments, the multi-processor apparatus may represent a multi-core apparatus, such as, for example, a chip multi-processor (CMP). The illustrated multi-processor apparatus includes multiple tiles 332. In the illustrated embodiment, for purposes of illustration, nine tiles are shown. In other embodiments there may be either fewer tiles (e.g., 4, 6, 8, or some other number of tiles) or more tiles (e.g., 16, 32, 64, 80, 100, 128, 256, 512, 1024, more than 1024, or some other number of tiles). There is no requirement for the number of tiles to be an even number or a power of two, although this may often be the case. In the case of a CMP, the tiles are generally all disposed on the same semiconductor substrate (e.g., an integrated circuit die or chip). An interconnect 334 (e.g., an on-die or on-substrate interconnect) couples the tiles together. In various embodiments the interconnect may be configured as a mesh, a torus, a ring, or another known interconnect configuration.
The tiles and/or cores are logically grouped into a plurality of clusters 348-1, 348-N, although such grouping may or may not be visible to software applications or operating systems. In various embodiments, hardware, firmware, software, or some combination may logically group the tiles and/or cores. Generally, some aspects of the grouping (e.g., the number of cores per cluster) may be fixed by the hardware. Figure 4 shows an embodiment of suitable internal components of a representative tile 432. The tile includes one or more cores 436. In one embodiment, the tile may include a single core. Alternatively, the tile may include two or more cores. The tile includes one or more local or private caches 437. In one embodiment, the tile may include a single cache. Alternatively, the tile may include two or more levels of local or private caches representing a local or private cache hierarchy. The tile also includes a switch or router 438 to couple the tile with the interconnect 334. In general, various different types of cores, caches, and switches or routers known in the art may be utilized. The other tiles may have either the same, similar, or entirely different internal components. Generally, each of the tiles includes one or more cores and one or more local caches, although this is not required. Referring again to Figure 3, as shown, in some embodiments, some but not all of the tiles may have a corresponding directly coupled memory controller 340. In the illustration, two memory controllers are shown, namely a first memory controller 340-1 and a second memory controller 340-2. The memory controllers could alternatively be off-chip. Alternate embodiments may include either fewer or more memory controllers. Moreover, in alternate embodiments the memory controllers may be coupled with the tiles in a different arrangement or configuration (e.g., coupled to different tiles, etc.). Each of the memory controllers is operable to couple with, and provide access to, a corresponding memory 342. In particular, the first memory controller is operable to couple with, and provide access to, a first memory 342-1. The second memory controller is operable to couple with, and provide access to, a second memory 342-2. Each of the first and second memories may be shared by some or all of the tiles and/or cores. In the illustration, the memories are shown in dashed lines to indicate that they are not necessarily part of the multi-processor apparatus, but rather may be system-level components included in a system in which the multi-processor apparatus is deployed. The memories and memory controllers need not be dedicated to any particular one of the clusters. In some embodiments, each of the cores may be operable to process or run one or more threads. Software is commonly executed as multiple threads on multiple processors (e.g., cores) in order to provide concurrent processing, increase processing throughput, reduce processing time, etc. Each thread may represent a portion of software (e.g., a group of instructions) that can be processed separately from (e.g., independently from and/or concurrently with) other portions (e.g., threads). The threads may process data accessed in the local or private caches within the tile of the core they are running on, accessed in the local or private caches of other cores, and/or accessed in the first and second memories. The multi-processor apparatus includes an embodiment of a cluster sharing map-based hardware coherence directory 344.
The cluster sharing map-based hardware coherence directory is operable to provide hardware-based data coherence for the data shared by the cores and memories. The cluster sharing map-based hardware coherence directory is operable to store cluster hardware coherence tags 346. Further details of the cluster sharing map-based hardware coherence directory and the cluster hardware coherence tags will be provided further below. Figure 5 is a block diagram of an embodiment of a multi-processor apparatus 530 having multiple processors or cores 536 in which the processors or cores are logically grouped into at least two clusters 548, with each of the clusters including at least two processors or cores. In some embodiments, the multi-processor apparatus may be a chip multi-processor (CMP). In the illustration, a first cluster 548-1 and an Nth cluster 548-N are shown, although there may optionally be more than two clusters. The first cluster includes a plurality of cores 536. The Nth cluster also includes a plurality of cores 536. In some embodiments, all of the cores of the multi-processor apparatus may be logically grouped into the clusters. Alternatively, one or more of the cores may optionally be omitted from the clusters. In some embodiments, the clusters may all have the same number of cores. Alternatively, the clusters may optionally have different numbers of cores. In some embodiments, each core may be included in one and only one of the clusters. In some embodiments, the clusters may include different non-overlapping sets of cores of equal size. In one aspect, the cores within each cluster may potentially be physically contiguous, adjacent, or neighboring cores (e.g., on a die or substrate). Alternatively, in another aspect, the cores within each cluster may be physically interleaved (e.g., every fourth core in the physical layout may be in a given cluster), or may be any random combination chosen at boot time or during some other form of initialization. Figure 6 is a block diagram of a particular example embodiment of a thirty-two core apparatus 630 having thirty-two cores 636 logically grouped into four clusters 648 that each include a different set of eight of the cores. In particular, the apparatus includes a first cluster 648-1 having eight cores, a second cluster 648-2 having eight cores, a third cluster 648-3 having eight cores, and a fourth cluster 648-4 having eight cores. In this embodiment, all of the clusters have the same number of cores, each core is included in one and only one of the clusters, and each cluster includes a set of physically contiguous, adjacent, or neighboring cores. It is to be appreciated that this is just one example. In other embodiments, the cores may be grouped into either fewer or more clusters, with the clusters including either fewer or more cores, with the cores distributed between the clusters in different ways, with the clusters including different numbers of cores, etc.
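As a minimal illustration, the contiguous and interleaved groupings described above for Figures 5 and 6 might be expressed as follows; the helper names are hypothetical and are not part of any figure.

/* Illustrative sketch: two of the core-to-cluster groupings described
 * above, for thirty-two cores grouped into four clusters of eight. */
enum { NUM_CLUSTERS = 4, CORES_PER_CLUSTER = 8 };

/* Physically contiguous grouping: cores 0-7 form cluster 0, cores 8-15
 * form cluster 1, and so on. */
static int cluster_of_contiguous(int core) {
    return core / CORES_PER_CLUSTER;
}

/* Physically interleaved grouping: every fourth core in the physical
 * layout belongs to the same cluster. */
static int cluster_of_interleaved(int core) {
    return core % NUM_CLUSTERS;
}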
Figure 7 is a block diagram of an embodiment of a cluster hardware coherence tag 750. The cluster hardware coherence tag has a scope of a cluster of cores (i.e., a subset of the cores), not all of the cores as in the case of the all-core hardware coherence tag 100 shown in Figure 1, and the cluster hardware coherence tag is operable to indicate sharing of data among any or all of the cores within a single cluster. The cluster hardware coherence tag includes an address field 702 and a state field 704. The address field and the state field may be similar to, or the same as, conventional address fields and state fields known in the arts. The address field may indicate an address. In some embodiments, the address field may have a length of 33 bits. The state field may indicate a state of the corresponding data or entry in the directory (e.g., whether the data or entry is valid or invalid). In some embodiments, the state field may have a length of two bits. The two bits may indicate any of four different states. For example, in one aspect they may be MESI states or other similar states known in the art. Some directories do not distinguish between the modified (M) and exclusive (E) states but instead always assume that a cached copy could be modified. The cluster hardware coherence tag also includes a cluster identifier (ID) field 752. The cluster ID field is operable to uniquely identify the particular cluster to which the cluster hardware coherence tag corresponds. By way of example, if there are four clusters, the cluster ID field may have a length of two bits, and binary 00 in the cluster ID field may indicate a first cluster, binary 01 in the cluster ID field may indicate a second cluster, binary 10 in the cluster ID field may indicate a third cluster, and binary 11 in the cluster ID field may indicate a fourth cluster. Alternatively, if there are more or fewer clusters the cluster ID field may have a longer or shorter length in bits. The cluster hardware coherence tag also includes a cluster sharing map field 754. The cluster sharing map field has a scope of a cluster of cores (i.e., a subset of the cores), not all of the cores as in the case of the all-core sharing map field 106 shown in Figure 1. The cluster sharing map field is operable to indicate intra-cluster sharing of data identified by and/or corresponding to the address field among any or all of the cores within a single cluster identified by the corresponding cluster identifier field. In some embodiments, the cluster sharing map field may include 1-bit for each of the cores within the cluster. In the illustrated embodiment, the cluster sharing map field has a length of 8-bits. Each of the 8-bits corresponds to a different one of eight cores within a single cluster. By way of example, as shown in Figure 6, the cores of a thirty-two core apparatus may be logically grouped into four clusters each having eight cores. Alternatively, if there are more cores in the cluster the field may have more bits. The 1-bit corresponding to a given core is operable to indicate whether or not the given core is caching a copy of the data. According to one possible convention, a binary value of 1 (i.e., the bit being set) may be used to indicate that the given core is caching a copy of the data, whereas a binary value of 0 (i.e., the bit being cleared) may be used to indicate that the given core is not caching a copy of the data. Alternatively, the opposite convention may be used. Advantageously, the length in bits of the cluster sharing map field is less than the length in bits of the all-core sharing map field. The all-core sharing map field has a scope of all cores of the apparatus, whereas the cluster sharing map field has a scope of cores only within a single cluster, and all of the cores of the apparatus are divided or partitioned among at least two clusters. As a result, the amount of storage space needed to store all of the cluster sharing map fields is less than that needed to store all of the all-core sharing map fields.
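For illustration, the tag fields just described (33-bit address, 2-bit state, 2-bit cluster ID, and 8-bit cluster sharing map) might be modeled in software as follows; the bit-field layout and helper name are hypothetical, and the hardware is not required to use this encoding.

#include <stdint.h>

/* Illustrative sketch of the cluster hardware coherence tag fields. */
typedef struct {
    uint64_t address     : 33;  /* identifies the tracked data             */
    uint64_t state       : 2;   /* e.g., a MESI-like state/validity field  */
    uint64_t cluster_id  : 2;   /* which of the four clusters              */
    uint64_t sharing_map : 8;   /* 1 bit per core within that cluster      */
} cluster_tag_t;

/* Is the core at intra-cluster position 'core_in_cluster' (0-7) caching a
 * copy, under the set-bit-means-sharing convention described above? */
static int is_sharing(const cluster_tag_t *t, int core_in_cluster) {
    return (int)((t->sharing_map >> core_in_cluster) & 1u);
}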
The reduced storage space of the cluster sharing map approach may offer advantages such as reduced size, reduced power consumption, reduced manufacturing cost, etc. This is especially true when the number of cores becomes greater than about thirty-two. This may also help to provide continued support for the widely used shared memory programming model, which is prevalent on many Intel Architecture based processors, as the number of cores or processors increases, which in turn may allow existing applications to be run without change. The illustrated cluster hardware coherence tag is just one illustrative example. In other embodiments, the fields of the tag may have different sizes, the fields of the tag may be arranged differently (e.g., the order of the fields may be shuffled around), additional fields may be included in the tags, etc. Moreover, it is not required that the bits of the fields be contiguous. Rather, the bits of a field may be interleaved or dispersed with bits of other fields if desired. Figure 8 is a block diagram of an embodiment of a cluster sharing map-based hardware coherence directory 844. In various embodiments, the cluster sharing map-based hardware coherence directory 844 may be used in the multi-processor apparatus 330 of Figure 3, the multi-processor apparatus 530 of Figure 5, the thirty-two core apparatus 630 of Figure 6, or an entirely different multi-core or multi-processor apparatus. For example, specific or optional details described for the directory 844 may also optionally be used for the directory 344. In some embodiments, the directory may be visible to and/or used by all of the cores of a multi-core apparatus. The directory is set associative and includes a tag array 856 and a cluster sharing map array 858. In the illustrated embodiment, the tag array is 4-way set associative and the cluster sharing map array is 4-way set associative. Alternatively, 8-way set associative, or other desired arrangements may optionally be used. There is a one-to-one correspondence between ways in the tag and cluster sharing map arrays. The tag array is arranged as (k+1)-sets, labeled set[0] through set[k], and four ways, labeled way[0] through way[3]. The number of sets may be any desired integer number (e.g., a number conventionally used in tag arrays), but is typically a power of 2. The cluster sharing map array is also arranged as (k+1)-sets, labeled set[0] through set[k], and four ways, labeled way[0] through way[3]. Alternatively, fewer or more ways (e.g., eight ways) may optionally be used. In other embodiments, the tag and cluster sharing map arrays may be merged together into a single array. In some embodiments, address fields, state fields, and cluster ID fields may be included in the tag array 856. For example, as shown, set[k] includes corresponding address fields 702, state fields 704, and cluster ID fields 752 in each of way[1] and way[2]. In some embodiments, the cluster sharing map fields may be included in the cluster sharing map array 858. For example, as shown, set[k] includes cluster sharing map fields 754 in each of way[1] and way[2]. The address, state, and cluster ID fields in way[1] and way[2] of set[k] of the tag array respectively correspond to the cluster sharing map fields in way[1] and way[2] of the cluster sharing map array within a corresponding way.
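The set-associative organization just described might be modeled as follows; this is a hypothetical sketch in which the number of sets is arbitrary, and, as noted above, the tag and map arrays could equally be merged into a single array.

#include <stdint.h>

/* Illustrative layout: a 4-way set-associative tag array and the parallel
 * cluster sharing map array, with a one-to-one correspondence between
 * ways. */
enum { NUM_SETS = 1024, NUM_WAYS = 4 };

typedef struct {
    uint64_t address    : 33;
    uint64_t state      : 2;
    uint64_t cluster_id : 2;
} dir_tag_t;

typedef struct {
    dir_tag_t tags[NUM_SETS][NUM_WAYS]; /* models the tag array             */
    uint8_t   maps[NUM_SETS][NUM_WAYS]; /* models the cluster sharing maps  */
} cluster_directory_t;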
During operation, it may be desired to know which among all of the cores of the apparatus are sharing data corresponding to a given address. Without limitation to the invention, there are various possible reasons to want to know this. Examples of possible reasons include, but are not limited to, maintaining coherence (e.g., changing the state of the data), sharing the data between cores, etc. By way of example, consider the case of sharing data between cores. When a given core is seeking the data corresponding to the given address, it may use the given address to check one or more of its corresponding local caches. If the sought data is not found in the local cache(s), then the cluster sharing map-based hardware coherence directory 844 may be consulted to determine which, if any, of the other cores have the sought data. This may be performed prior to accessing system memory, which generally tends to take more time (e.g., higher access latency). If the cluster sharing map-based hardware coherence directory indicates that the sought data is present in the local cache(s) of one or more of the other cores of the apparatus, then the sought data may be provided from these cache(s) to the core seeking the data. Alternatively, if the cluster sharing map-based hardware coherence directory indicates that the sought data is not present in the local cache(s) of any of the other cores of the apparatus, then a copy of the sought data may be obtained from the system memory, and the cluster sharing map-based hardware coherence directory may be updated to indicate that the requesting core now has a copy of the data. For example, a cluster hardware coherence tag may be stored in the cluster sharing map-based hardware coherence directory with a bit corresponding to the requesting core set to binary 1 to indicate that it has a copy of the data. In some embodiments, in order to determine which among all of the cores of the apparatus are sharing data corresponding to a given address, the cluster sharing map-based hardware coherence directory may include logic to generate and output an all-core sharing map 806. In some embodiments, the all-core sharing map 806 may be similar to, or the same as, the all-core sharing map stored in the all-core sharing map field 106 shown in Figure 1. In some embodiments, the all-core sharing map may be generated from one or a plurality of cluster sharing maps, each corresponding to a given address and each corresponding to a different cluster identifier. The logic may rearrange the cluster sharing maps, from the positions where they are stored in the cluster sharing map array to positions in the all-core sharing map, based on the different corresponding cluster identifiers. Using the same all-core sharing map format may offer certain advantages. For one thing, this may help to make the all-core sharing map (i.e., the output of the directory) compatible with conventional coherence logic and/or coherence protocols. This may help to reduce the amount of changes and validation needed. The coherence logic and/or coherence protocols may not even need to be aware of the changes to how the all-core sharing map is generated. Alternatively, other embodiments are not limited to generating an all-core sharing map that is the same as those shown for Figure 1.
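The directory update on a miss serviced from system memory, as described above, might look as follows in a simplified software model; the names are hypothetical, and a contiguous core-to-cluster grouping and a trivial set-index hash are assumed.

#include <stdint.h>

/* Illustrative model: set the requesting core's bit in the cluster sharing
 * map of the tag matching (address, requester's cluster). */
enum { SETS = 256, WAYS = 4, CORES_PER_CLUSTER = 8 };

typedef struct {
    uint64_t address;
    uint8_t  valid;        /* stands in for the state field  */
    uint8_t  cluster_id;
    uint8_t  sharing_map;  /* 1 bit per core in the cluster  */
} dir_entry_t;

static void record_sharer(dir_entry_t dir[SETS][WAYS], uint64_t addr, int core) {
    unsigned set = (unsigned)(addr % SETS);              /* assumed hash    */
    uint8_t  cid = (uint8_t)(core / CORES_PER_CLUSTER);  /* requester's cluster */
    uint8_t  bit = (uint8_t)(1u << (core % CORES_PER_CLUSTER));
    for (int w = 0; w < WAYS; w++) {
        dir_entry_t *e = &dir[set][w];
        if (e->valid && e->address == addr && e->cluster_id == cid) {
            e->sharing_map |= bit;  /* tag already exists: just set the bit */
            return;
        }
    }
    for (int w = 0; w < WAYS; w++) { /* otherwise allocate an unused tag    */
        dir_entry_t *e = &dir[set][w];
        if (!e->valid) {
            e->valid = 1;
            e->address = addr;
            e->cluster_id = cid;
            e->sharing_map = bit;
            return;
        }
    }
    /* No unused tag in the set: a victim (e.g., LRU) would be selected here. */
}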
The cluster sharing map-based hardware coherence directory includes the tag array 856. The tag array is operable to store corresponding pairs of addresses 702 and cluster identifiers 752. Each of the addresses is operable to identify data. Each of the cluster identifiers is operable to identify one of the clusters. The cluster sharing map-based hardware coherence directory also includes the cluster sharing map array 858, which is operable to store cluster sharing maps 754. Each of the cluster sharing maps corresponds to one of the pairs of addresses 702 and cluster identifiers 752. Each of the cluster sharing maps is operable to indicate intra-cluster sharing of data identified by the corresponding address within a cluster identified by the corresponding cluster identifier. Logic associated with the hardware coherence directory (e.g., a coherence directory controller) may indicate inter-cluster sharing of a given data identified by a given address between clusters (e.g., between a first cluster and a second cluster) by storing different cluster identifiers (e.g., both a first cluster identifier to identify the first cluster and a second cluster identifier to identify the second cluster) in different ways of a same set of the tag array. The logic may also store different corresponding cluster sharing maps (e.g., both a first cluster sharing map to indicate intra-cluster sharing of the given data within the first cluster and a second cluster sharing map to indicate intra-cluster sharing of the given data within the second cluster) in different ways of a same set of the cluster sharing map array. In some embodiments, lookup of the tag and cluster sharing map arrays may be performed sequentially (e.g., with the lookup in the tag array first), whereas in other embodiments the lookup of the tag and cluster sharing map arrays may be performed at least partly or substantially concurrently. The directory includes tag comparison logic 860. The tag comparison logic may compare four addresses, each stored within a different one of the four ways of a set of the tag array, with a given address. The four addresses may be read out on tag array readout lines 864. Either none of the four addresses may match the given address, or, for the four-way array, one, two, three, or all four of the addresses in the four ways may match the given address. Recall that in the conventional all-core sharing map-based hardware coherence directory 210 of Figure 2, at most a single address in a single way may match the given address. When there is inter-cluster sharing (i.e., different clusters share data corresponding to a given address), multiple cluster ID fields 752 and address fields 702, each corresponding to a cluster sharing map field 754, may be stored in the same set of the directory. If no unused tags are available in the set, a victim (e.g., a least recently used victim) may be selected to make room for the new cluster sharing map field. The tag comparison logic includes per-way match signal generation logic 862 that is operable to generate and output four per-way match signals. Each of the four per-way match signals indicates whether or not the comparison by the tag comparison logic indicated an address match for the corresponding way. For example, a first of the four per-way match signals may indicate whether or not an address in way[0] matched, a second of the four per-way match signals may indicate whether or not an address in way[1] matched, a third of the four per-way match signals may indicate whether or not an address in way[2] matched, and a fourth of the four per-way match signals may indicate whether or not an address in way[3] matched. In some embodiments, each of the four per-way match signals may include a single bit.
The single bit may have a first binary value (e.g., 1) to indicate that there was a match and a second, different binary value (e.g., 0) to indicate that there was not a match. Each of the four per-way match signals may be provided to a different corresponding one of four per-way selection logic 866 in the same way. A first selection logic 866-0 corresponds to way[0], a second selection logic 866-1 corresponds to way[1], a third selection logic 866-2 corresponds to way[2], and a fourth selection logic 866-3 corresponds to way[3]. Four cluster ID fields 752 (only two of which are shown), each in a different one of the four ways, may be read out of the tag array along cluster ID readout lines 868. Each cluster ID field is operable to uniquely identify the particular cluster to which the cluster hardware coherence tag and/or the address corresponds. By way of example, if there are four clusters, each cluster ID field may have a length of two bits, and binary 00 in the cluster ID field may indicate a first cluster, binary 01 in the cluster ID field may indicate a second cluster, binary 10 in the cluster ID field may indicate a third cluster, and binary 11 in the cluster ID field may indicate a fourth cluster. Alternatively, if there are more or fewer clusters the cluster ID field may have a longer or shorter length in bits. The cluster ID readout lines may be operable to read out the number of bits for each of the four ways. Each of the four cluster ID fields may be provided to a different corresponding one of the four per-way selection logic 866 in the same way. For example, the cluster ID field in way[3] may be provided to the selection logic corresponding to way[3], etc. Recall that the cluster ID fields indicate which cluster the bits of the cluster sharing map fields correspond to. Four cluster sharing map fields 754 (only two of which are shown), each in a different way of the cluster sharing map array 858, may be read out of the cluster sharing map array on cluster sharing map readout lines 870. Each of the four cluster sharing map fields may be provided to a different corresponding one of the four per-way selection logic 866 in the same way. For example, the cluster sharing map field in way[3] may be provided to the selection logic 866-3 corresponding to way[3], etc. Intra-cluster sharing (i.e., sharing among the cores within a given cluster) may be indicated within a given single cluster sharing map 754, whereas inter-cluster sharing (i.e., sharing among cores in multiple different clusters) may be indicated through multiple different cluster sharing maps, each corresponding to a different cluster, each having the same address, and, in the illustrated embodiment, each included in the same set. Accordingly, each of the four per-way selection logic receives three inputs. Namely, each of the four per-way selection logic receives a corresponding way match signal corresponding to the same way from the tag comparison logic, a corresponding cluster ID field corresponding to the same way from the tag array, and a corresponding cluster sharing map corresponding to the same way from the cluster sharing map array. Each of the four per-way selection logic is operable to select either the received/input cluster sharing map or a predetermined value that is operable to indicate that no cores within the cluster are sharing the data, based on the received/input way match signal.
For example, in a convention where a binary value of 1 in the cluster sharing map indicates the corresponding core is sharing data, the predetermined value may have all 8 bits cleared to binary 0 when there are eight cores in the cluster. In some embodiments, when the received/input way match signal indicates there is a match, the received/input cluster sharing map is selected, whereas when the way match signal indicates there is no match, the predetermined value indicating no cores are sharing the data is selected. Each of the four per-way selection logic has four outputs 872. Each of the four outputs from a given one of the four per-way selection logic is coupled with an input of a different corresponding one of four OR gates 874. The four OR gates represent an embodiment of cluster sharing map alignment and/or repositioning logic. The four outputs of the selection logic corresponding to way[0] are each coupled with a different one of the four OR gates, the four outputs of the selection logic corresponding to way[1] are each coupled with a different one of the four OR gates, and so on. Each of the selection logics is operable to de-multiplex, route, or otherwise provide the selected value (e.g., either the received/input cluster sharing map or the predetermined value) to one of the four OR gates based on the corresponding received/input cluster ID field. In some embodiments, the cluster sharing maps for the clusters are not placed in the tag array 856 in an order required by their cluster IDs. For example, a cluster sharing map for a first cluster and/or cluster ID may be placed in any of the ways in the tag array. Such routing or moving of the selected values may be used to rearrange the selected values (e.g., the cluster sharing maps) to appropriate positions within the all-core sharing map 806. In some embodiments, each of the clusters corresponds to a different fixed or predetermined position within the all-core sharing map. For example, in the illustrated embodiment, there are four clusters, there are four positions within the all-core sharing map each operable to contain a different cluster sharing map, and each of the clusters corresponds to a different fixed or predetermined one of the four positions within the all-core sharing map. For example, a first cluster (e.g., identified by cluster ID 00) corresponds to the way[0] position in the all-core sharing map, a second cluster (e.g., identified by cluster ID 01) corresponds to the way[1] position in the all-core sharing map, a third cluster (e.g., identified by cluster ID 10) corresponds to the way[2] position in the all-core sharing map, and a fourth cluster (e.g., identified by cluster ID 11) corresponds to the way[3] position in the all-core sharing map. This is just one possible example. Each of the OR gates may output or provide the input they receive from any of the four selection logic to a corresponding position in the all-core sharing map 806. Accordingly, in some embodiments, the cluster sharing map-based hardware coherence directory includes cluster sharing map rearrangement or routing logic to rearrange or route cluster sharing maps corresponding to different clusters into an arrangement suitable for the all-core sharing map 806. Where there is inter-cluster sharing, and multiple cluster sharing maps each corresponding to the same address are stored within the same set, these cluster sharing maps are routed or rearranged into the positions of the all-core sharing map appropriate for their corresponding clusters.
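In software form, the readout path just described (per-way match signals, per-way selection of either the cluster sharing map or an all-zero "no sharers" value, and OR-based routing into each cluster's fixed position) might be sketched as follows; the names are hypothetical, and this models, rather than implements, the hardware.

#include <stdint.h>

/* Illustrative software model of the all-core sharing map generation. */
enum { WAYS = 4, CORES_PER_CLUSTER = 8 };

typedef struct {
    uint64_t address;
    uint8_t  valid;        /* stands in for the state field  */
    uint8_t  cluster_id;   /* 0..3                           */
    uint8_t  sharing_map;  /* 1 bit per core in the cluster  */
} way_entry_t;

static uint32_t build_all_core_map(const way_entry_t set[WAYS], uint64_t addr) {
    uint32_t all_core_map = 0;   /* 32 cores = 4 clusters x 8 bits */
    for (int w = 0; w < WAYS; w++) {
        /* Per-way match signal: valid tag whose address matches.         */
        int match = set[w].valid && set[w].address == addr;
        /* Per-way selection: the way's map on a match, else all zeros.   */
        uint8_t selected = match ? set[w].sharing_map : 0;
        /* OR-based routing: shift into the cluster's fixed position.     */
        all_core_map |= (uint32_t)selected
                        << (set[w].cluster_id * CORES_PER_CLUSTER);
    }
    return all_core_map;
}

Note that when two or more ways of the set hold tags for the same address but different cluster IDs, their maps land in different positions of the result, which is how inter-cluster sharing is reassembled into a single all-core sharing map.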
It is not required to use the particular selection logic and OR gates shown. Other embodiments may use other configurations of selection and Boolean logic to perform the rearrangement. Still other embodiments may include shifting and merging logic to perform the rearrangement. In the above description a sequential lookup into the tag and cluster sharing map arrays has been described, although other embodiments may perform an at least partially concurrent tag and cluster sharing map array lookup. Accordingly, as described above in conjunction with Figure 8, a comparatively small amount of static storage for tracking data sharing may be allocated. When the amount of sharing increases beyond the static amount, the information to track the additional sharing/sharers may be opportunistically spilled into available or unused space in the directory that results from the sharing. Advantageously, this may help to avoid needing to statically allocate an amount of storage space for the maximum possible amount of sharing, which is generally not the common case for most applications. Reducing the total amount of storage needed for the directory may help to reduce the size, manufacturing cost, and/or power consumption of the directory. Moreover, this may also help to provide continued support for the widely used shared memory programming model, which is prevalent on many Intel Architecture based processors, as the number of cores or processors increases, which in turn may allow existing applications to be run without change. In some embodiments, rather than a single physical directory, a distributed directory may be utilized. For example, in some embodiments, each core may have a corresponding distributed "slice" or other portion of the directory. For example, if there are thirty-two cores, there may be thirty-two per-core slices of the directory, each located proximate a corresponding one of the cores (e.g., within a tile having the core). In some embodiments, each slice and/or each core may have a unique predefined address range. For example, if there are thirty-two cores and/or slices, any given address may uniquely map to one of the thirty-two cores and/or slices, referred to as the home slice for that given address. By way of example, one way to implement this is to have each possible value of the first five bits of the address uniquely correspond to a different one of the thirty-two slices. For example, all addresses with the first five bits 11111 may correspond to the same slice. All other slices may correspond to different values of these first five bits of the address. Alternatively, some embodiments may choose to hash the address differently to derive a home slice.
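A software sketch of the home-slice mapping just described might look as follows; which five address bits are used (and whether a different hash is applied instead) is a design choice, so the shift amount below is purely hypothetical.

#include <stdint.h>

/* Illustrative sketch: thirty-two home slices selected by five address bits. */
enum { NUM_SLICES = 32, SLICE_BIT_SHIFT = 6 }; /* e.g., bits above a 64-byte line offset */

static unsigned home_slice(uint64_t addr) {
    return (unsigned)((addr >> SLICE_BIT_SHIFT) & (NUM_SLICES - 1));
}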
Figure 9 is a graph plotting directory size as a percentage of cache size as a function of the number of cores for a conventional all-core shared map-based hardware coherence directory (e.g., per the approach shown in Figure 2) and a cluster shared map-based hardware coherence directory (e.g., per the approach shown in Figure 8). Directory size as a percentage of cache size is plotted on the vertical axis. The number of cores is plotted on the horizontal axis. As can be readily seen, especially when the number of cores is approximately thirty-two or more, the conventional all-core shared map-based hardware coherence directory tends to have a much higher ratio of directory size to cache size than the cluster shared map-based hardware coherence directory. When there are about 512 or more cores, the conventional all-core shared map-based hardware coherence directory may consume as much storage space as is used for the actual cache. By contrast, the cluster shared map-based hardware coherence directory has a relatively flat dependency on increasing numbers of cores beyond about thirty-two cores. This graph clearly shows that the cluster shared map-based hardware coherence directory is much more scalable than the conventional all-core shared map-based hardware coherence directory. It is to be appreciated that embodiments are applicable to even small numbers of cores, although, as explained elsewhere herein, the advantages of reduced directory storage space are especially realized for large core counts of at least sixteen or more. Figure 10 is a block diagram of a first embodiment of hardware coherence logic 1078 that includes a cluster sharing map-based hardware coherence directory 1044 and an optional small all-core sharing map-based hardware coherence directory 1076 that are accessed in series. The cluster sharing map-based directory may be similar to, or the same as, those described elsewhere herein. In some embodiments, the small all-core sharing map-based directory may be similar to the known all-core sharing map-based directory 210 of Figure 2 except that it is relatively smaller (e.g., has fewer entries). The features described above for the directory 210 are also relevant to the directory 1076. The small all-core sharing map-based coherence directory may store corresponding pairs of addresses and all-core sharing maps. In various embodiments, the small all-core sharing map-based directory may have no more than about 20, 15, or 10 entries per core and/or slice. For example, in various embodiments, the small all-core sharing map-based directory may have about 1 to 20, about 2 to 20, about 4 to 15, or about 6 to 15 entries per core and/or slice. By contrast, the known all-core sharing map-based directory 210 commonly includes many more entries. For example, the number of entries may be equal to the total number of possible cache lines. For instance, if each private cache is 256KB with a 64-byte line size, each directory slice may have around 4096 entries (i.e., 256*1024/64). In some embodiments, the all-core sharing map-based and the cluster sharing map-based directories may track non-overlapping or mutually exclusive sets of addresses. For example, the small all-core sharing map-based directory may store all-core sharing maps when the number of clusters sharing data exceeds a threshold and/or exceeds the associativity of the cluster sharing map-based directory. If a tag is to be added to the cluster sharing map-based directory, but it would result in the number of tags exceeding the threshold and/or the associativity, then all tags for the corresponding address may be marked as invalid in the cluster sharing map-based directory and an all-core sharing map indicating equivalent sharing may be created and stored in the small all-core sharing map-based directory.
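The spill policy just described might be modeled as follows; this is an illustrative sketch with hypothetical names, taking the threshold to be the 4-way associativity and a contiguous core-to-cluster grouping.

#include <stdint.h>

/* Illustrative model: when a new sharer's cluster has no tag for the
 * address and the set's ways are already full of tags for that address,
 * all cluster tags are invalidated and an equivalent all-core sharing map
 * is stored in the small all-core sharing map-based directory instead. */
enum { WAYS = 4, CORES_PER_CLUSTER = 8 };

typedef struct { uint64_t address; uint8_t valid, cluster_id, map; } cl_tag_t;
typedef struct { uint64_t address; uint8_t valid; uint32_t all_core_map; } ac_entry_t;

static void add_sharer_or_spill(cl_tag_t set[WAYS], uint64_t addr,
                                int new_core, ac_entry_t *small_entry) {
    uint8_t  new_cid = (uint8_t)(new_core / CORES_PER_CLUSTER);
    int      tags_for_addr = 0, have_new_cluster_tag = 0;
    uint32_t merged = (uint32_t)1u << new_core;   /* the new sharer's bit */
    for (int w = 0; w < WAYS; w++) {
        if (set[w].valid && set[w].address == addr) {
            tags_for_addr++;
            if (set[w].cluster_id == new_cid)
                have_new_cluster_tag = 1;
            merged |= (uint32_t)set[w].map
                      << (set[w].cluster_id * CORES_PER_CLUSTER);
        }
    }
    if (!have_new_cluster_tag && tags_for_addr == WAYS) {
        for (int w = 0; w < WAYS; w++)        /* another tag would not fit */
            if (set[w].valid && set[w].address == addr)
                set[w].valid = 0;             /* invalidate all cluster tags */
        small_entry->address = addr;          /* equivalent all-core map     */
        small_entry->all_core_map = merged;
        small_entry->valid = 1;
        return;
    }
    /* Otherwise, a cluster tag is added or updated as usual (omitted). */
}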
The relatively small size of the small all-core sharing map-based directory is appropriate for various different types of data sharing patterns. One common data sharing pattern is a relatively large degree of sharing of a relatively small number of addresses (e.g., for semaphores). Active semaphores are generally relatively few in number for a given application. Consequently, a small number of entries in the small all-core sharing map-based directory is generally sufficient for semaphores. Another common data sharing pattern is that of widely shared read-only data for a large number of different addresses. The wide sharing of the data generally significantly reduces the number of distinct addresses. For example, consider a 256-kilobyte private L2 cache/tile with 4096 entries (64-byte cache line size) for a 1024-core design. If all lines are shared, then there are only 4096 distinct addresses. Since there are 1024 cores and/or slices, there are only about 4 (i.e., 4096/1024) entries per core and/or slice. For other numbers of cores, the number would be the number of distinct addresses divided by the number of cores or slices. If desired, more entries (e.g., between about 4 to 10) may optionally be included (e.g., to help account for a non-uniform address distribution and/or one slice having a disproportionate amount of addresses). Yet another common data sharing pattern is that of random sharing among a few addresses. For random sharing across a few cores, the cluster sharing map-based hardware coherence directories described elsewhere herein should suffice for inter-cluster sharing. Since these types of accesses tend to be relatively few in number, their side-effects can generally be ignored without significant performance impact. Alternatively, a slightly larger cache may be included to help mitigate the side-effects of the few randomly shared lines. Accordingly, the number of entries generally tends to be relatively small, such as no more than about twenty entries per core or slice. There is no precise number that is required, but rather there is flexibility in the actual number, although with some performance versus area/power tradeoff. Generally, relatively more entries per core or slice (although often no more than about twenty) tends to provide relatively better performance, but relatively larger area and larger power consumption. Conversely, relatively fewer entries per core or slice (although often at least 1-2) tends to provide smaller area and smaller power consumption, but relatively worse performance. Those skilled in the art will appreciate that the actual number may be selected for the particular implementation depending on factors such as the number of cores, the sizes of the caches, the types of data sharing patterns expected, the performance, area, and power objectives, etc. When selecting victims in the cluster sharing map-based directory and/or the small all-core sharing map-based directory, conventional victim selection approaches known in the arts may optionally be used. For example, in some embodiments, a least recently used (LRU) approach may be used. If desired, more sophisticated approaches may optionally be used. For example, in addition to considering recent use (e.g., as in the case of LRU approaches), other factors such as the number of sharers may optionally be considered. In some embodiments, if there are multiple tags for an address of a selected victim in the cluster sharing map-based directory, all of these tags may optionally be invalidated and/or removed from the cluster sharing map-based directory, and an all-core sharing map may optionally be added to the small all-core sharing map-based directory.
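The sizing arithmetic in the widely-shared-data example above can be checked with a few lines, using the parameters from that example.

#include <stdio.h>

/* Quick check: a 256 KB private cache with 64-byte lines holds 4096 lines,
 * so with 1024 cores/slices and all lines widely shared, only about four
 * distinct addresses fall to each per-core directory slice. */
int main(void) {
    long cache_bytes = 256 * 1024, line_bytes = 64, slices = 1024;
    long distinct = cache_bytes / line_bytes;        /* 4096 */
    printf("distinct addresses: %ld, entries per slice: %ld\n",
           distinct, distinct / slices);             /* 4096, 4 */
    return 0;
}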
Referring again to Figure 10, in the illustrated first embodiment, the cluster sharing map-based and small all-core sharing map-based directories are shown to be accessed in series. In particular, in the illustrated embodiment, the cluster sharing map-based directory is shown to be accessed prior to the small all-core sharing map-based directory. Commonly, the cluster sharing map-based directory has a higher hit rate than the small all-core sharing map-based directory, so it is more efficient to access the cluster sharing map-based directory first. During use, an address (e.g., from a directory controller) may be used to perform a lookup in the cluster sharing map-based directory. If the address is found to have one or more matching tags in the cluster sharing map-based directory, then an all-core sharing map 1006 may be regenerated from one or more matching cluster sharing maps as described elsewhere herein and output (e.g., written to an all-core sharing map register accessible to the directory controller). Alternatively, if there is a miss in the cluster sharing map-based directory, the small all-core sharing map-based directory may be accessed. As shown in the illustration, in some cases the cluster sharing map-based directory may access the small all-core sharing map-based directory. In other cases, the directory controller may access the small all-core sharing map-based directory. If the address is found to have a matching tag in the small all-core sharing map-based directory, then an intact all-core sharing map 1006 stored in the small all-core sharing map-based directory may be selected and output (e.g., written to an all-core sharing map register). If desired, the directory controller may optionally be informed of whether there is a hit in the cluster sharing map-based directory and/or the small all-core sharing map-based directory. Such a serial lookup generally tends to be more energy efficient, as compared to a parallel lookup, and may be used to help reduce power consumption. Figure 11 is a block diagram of a second embodiment of hardware coherence logic 1178 that includes a cluster sharing map-based hardware coherence directory 1144 and an optional small all-core sharing map-based hardware coherence directory 1176 that are accessed in parallel. During use, an address (e.g., from the directory controller) may be used to concurrently perform a lookup in both the cluster sharing map-based directory and the small all-core sharing map-based directory. If the address is found to have one or more matching tags in the cluster sharing map-based directory, then an all-core sharing map 1106 may be regenerated from one or more matching cluster sharing maps as described elsewhere herein and output (e.g., written to an all-core sharing map register). If the address is found to have a matching tag in the small all-core sharing map-based directory, then an intact all-core sharing map 1106 stored in the small all-core sharing map-based directory may be selected and output (e.g., written to an all-core sharing map register). In some cases, an optional selection logic 1177 may be included to select between the outputs. If desired, the directory controller may optionally be informed of whether there is a hit in the cluster sharing map-based directory and/or the small all-core sharing map-based directory. Such a parallel lookup generally tends to detect matches faster, as compared to a serial lookup, and may be used to help increase performance. Exemplary Core Architectures, Processors, and Computer Architectures Processor cores may be implemented in different ways, for different purposes, and in different processors.
For instance, implementations of such cores may include: 1) a general purpose in-order core intended for general-purpose computing; 2) a high performance general purpose out-of-order core intended for general-purpose computing; 3) a special purpose core intended primarily for graphics and/or scientific (throughput) computing. Implementations of different processors may include: 1) a CPU including one or more general purpose in-order cores intended for general-purpose computing and/or one or more general purpose out-of-order cores intended for general-purpose computing; and 2) a coprocessor including one or more special purpose cores intended primarily for graphics and/or scientific (throughput) computing. Such different processors lead to different computer system architectures, which may include: 1) the coprocessor on a separate chip from the CPU; 2) the coprocessor on a separate die in the same package as a CPU; 3) the coprocessor on the same die as a CPU (in which case, such a coprocessor is sometimes referred to as special purpose logic, such as integrated graphics and/or scientific (throughput) logic, or as special purpose cores); and 4) a system on a chip that may include on the same die the described CPU (sometimes referred to as the application core(s) or application processor(s)), the above described coprocessor, and additional functionality. Exemplary core architectures are described next, followed by descriptions of exemplary processors and computer architectures. Exemplary Core Architectures In-order and out-of-order core block diagram Figure 12A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the invention. Figure 12B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the invention. The solid lined boxes in Figures 12A-B illustrate the in-order pipeline and in-order core, while the optional addition of the dashed lined boxes illustrates the register renaming, out-of-order issue/execution pipeline and core. Given that the in-order aspect is a subset of the out-of-order aspect, the out-of-order aspect will be described. In Figure 12A, a processor pipeline 1200 includes a fetch stage 1202, a length decode stage 1204, a decode stage 1206, an allocation stage 1208, a renaming stage 1210, a scheduling (also known as a dispatch or issue) stage 1212, a register read/memory read stage 1214, an execute stage 1216, a write back/memory write stage 1218, an exception handling stage 1222, and a commit stage 1224. Figure 12B shows processor core 1290 including a front end unit 1230 coupled to an execution engine unit 1250, and both are coupled to a memory unit 1270. The core 1290 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As yet another option, the core 1290 may be a special-purpose core, such as, for example, a network or communication core, compression engine, coprocessor core, general purpose computing graphics processing unit (GPGPU) core, graphics core, or the like.
The front end unit 1230 includes a branch prediction unit 1232 coupled to an instruction cache unit 1234, which is coupled to an instruction translation lookaside buffer (TLB) 1236, which is coupled to an instruction fetch unit 1238, which is coupled to a decode unit 1240. The decode unit 1240 (or decoder) may decode instructions, and generate as an output one or more micro-operations, micro-code entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions. The decode unit 1240 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), etc. In one embodiment, the core 1290 includes a microcode ROM or other medium that stores microcode for certain macroinstructions (e.g., in decode unit 1240 or otherwise within the front end unit 1230). The decode unit 1240 is coupled to a rename/allocator unit 1252 in the execution engine unit 1250. The execution engine unit 1250 includes the rename/allocator unit 1252 coupled to a retirement unit 1254 and a set of one or more scheduler unit(s) 1256. The scheduler unit(s) 1256 represents any number of different schedulers, including reservations stations, central instruction window, etc. The scheduler unit(s) 1256 is coupled to the physical register file(s) unit(s) 1258. Each of the physical register file(s) units 1258 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, status (e.g., an instruction pointer that is the address of the next instruction to be executed), etc. In one embodiment, the physical register file(s) unit 1258 comprises a vector registers unit, a write mask registers unit, and a scalar registers unit. These register units may provide architectural vector registers, vector mask registers, and general purpose registers. The physical register file(s) unit(s) 1258 is overlapped by the retirement unit 1254 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using a reorder buffer(s) and a retirement register file(s); using a future file(s), a history buffer(s), and a retirement register file(s); using register maps and a pool of registers; etc.). The retirement unit 1254 and the physical register file(s) unit(s) 1258 are coupled to the execution cluster(s) 1260. The execution cluster(s) 1260 includes a set of one or more execution units 1262 and a set of one or more memory access units 1264. The execution units 1262 may perform various operations (e.g., shifts, addition, subtraction, multiplication) on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point). While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions.
The scheduler unit(s) 1256, physical register file(s) unit(s) 1258, and execution cluster(s) 1260 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline that each have their own scheduler unit, physical register file(s) unit, and/or execution cluster; and in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s) 1264). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order. The set of memory access units 1264 is coupled to the memory unit 1270, which includes a data TLB unit 1272 coupled to a data cache unit 1274 coupled to a level 2 (L2) cache unit 1276. In one exemplary embodiment, the memory access units 1264 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 1272 in the memory unit 1270. The instruction cache unit 1234 is further coupled to the level 2 (L2) cache unit 1276 in the memory unit 1270. The L2 cache unit 1276 is coupled to one or more other levels of cache and eventually to a main memory. By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement the pipeline 1200 as follows: 1) the instruction fetch unit 1238 performs the fetch and length decoding stages 1202 and 1204; 2) the decode unit 1240 performs the decode stage 1206; 3) the rename/allocator unit 1252 performs the allocation stage 1208 and renaming stage 1210; 4) the scheduler unit(s) 1256 performs the schedule stage 1212; 5) the physical register file(s) unit(s) 1258 and the memory unit 1270 perform the register read/memory read stage 1214, and the execution cluster 1260 performs the execute stage 1216; 6) the memory unit 1270 and the physical register file(s) unit(s) 1258 perform the write back/memory write stage 1218; 7) various units may be involved in the exception handling stage 1222; and 8) the retirement unit 1254 and the physical register file(s) unit(s) 1258 perform the commit stage 1224. The core 1290 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, CA; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, CA), including the instruction(s) described herein. In one embodiment, the core 1290 includes logic to support a packed data instruction set extension (e.g., AVX1, AVX2), thereby allowing the operations used by many multimedia applications to be performed using packed data. It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways including time sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that physical core is simultaneously multithreading), or a combination thereof (e.g., time sliced fetching and decoding and simultaneous multithreading thereafter such as in the Intel® Hyperthreading technology).
While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor also includes separate instruction and data cache units 1234/1274 and a shared L2 cache unit 1276, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a Level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor. Specific Exemplary In-Order Core Architecture Figures 13A-B illustrate a block diagram of a more specific exemplary in-order core architecture, which core would be one of several logic blocks (including other cores of the same type and/or different types) in a chip. The logic blocks communicate through a high-bandwidth interconnect network (e.g., a ring network) with some fixed function logic, memory I/O interfaces, and other necessary I/O logic, depending on the application. Figure 13A is a block diagram of a single processor core, along with its connection to the on-die interconnect network 1302 and with its local subset of the Level 2 (L2) cache 1304, according to embodiments of the invention. In one embodiment, an instruction decoder 1300 supports the x86 instruction set with a packed data instruction set extension. An L1 cache 1306 allows low-latency accesses to cache memory by the scalar and vector units. While in one embodiment (to simplify the design) a scalar unit 1308 and a vector unit 1310 use separate register sets (respectively, scalar registers 1312 and vector registers 1314) and data transferred between them is written to memory and then read back in from a level 1 (L1) cache 1306, alternative embodiments of the invention may use a different approach (e.g., use a single register set or include a communication path that allows data to be transferred between the two register files without being written and read back). The local subset of the L2 cache 1304 is part of a global L2 cache that is divided into separate local subsets, one per processor core. Each processor core has a direct access path to its own local subset of the L2 cache 1304. Data read by a processor core is stored in its L2 cache subset 1304 and can be accessed quickly, in parallel with other processor cores accessing their own local L2 cache subsets. Data written by a processor core is stored in its own L2 cache subset 1304 and is flushed from other subsets, if necessary. The ring network ensures coherency for shared data. The ring network is bi-directional to allow agents such as processor cores, L2 caches and other logic blocks to communicate with each other within the chip. Each ring data-path is 1012-bits wide per direction. Figure 13B is an expanded view of part of the processor core in Figure 13A according to embodiments of the invention. Figure 13B includes an L1 data cache 1306A, part of the L1 cache 1306, as well as more detail regarding the vector unit 1310 and the vector registers 1314. Specifically, the vector unit 1310 is a 16-wide vector processing unit (VPU) (see the 16-wide ALU 1328), which executes one or more of integer, single-precision float, and double-precision float instructions.
The VPU supports swizzling the register inputs with swizzle unit 1320, numeric conversion with numeric convert units 1322A-B, and replication with replication unit 1324 on the memory input. Write mask registers 1326 allow predicating resulting vector writes. Processor with integrated memory controller and graphics Figure 14 is a block diagram of a processor 1400 that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to embodiments of the invention. The solid lined boxes in Figure 14 illustrate a processor 1400 with a single core 1402A, a system agent 1410, and a set of one or more bus controller units 1416, while the optional addition of the dashed lined boxes illustrates an alternative processor 1400 with multiple cores 1402A-N, a set of one or more integrated memory controller unit(s) 1414 in the system agent unit 1410, and special purpose logic 1408. Thus, different implementations of the processor 1400 may include: 1) a CPU with the special purpose logic 1408 being integrated graphics and/or scientific (throughput) logic (which may include one or more cores), and the cores 1402A-N being one or more general purpose cores (e.g., general purpose in-order cores, general purpose out-of-order cores, a combination of the two); 2) a coprocessor with the cores 1402A-N being a large number of special purpose cores intended primarily for graphics and/or scientific (throughput); and 3) a coprocessor with the cores 1402A-N being a large number of general purpose in-order cores. Thus, the processor 1400 may be a general-purpose processor, coprocessor or special-purpose processor, such as, for example, a network or communication processor, compression engine, graphics processor, GPGPU (general purpose graphics processing unit), a high-throughput many integrated core (MIC) coprocessor (including 30 or more cores), embedded processor, or the like. The processor may be implemented on one or more chips. The processor 1400 may be a part of and/or may be implemented on one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS. The memory hierarchy includes one or more levels of cache within the cores, a set of one or more shared cache units 1406, and external memory (not shown) coupled to the set of integrated memory controller units 1414. The set of shared cache units 1406 may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof. While in one embodiment a ring based interconnect unit 1412 interconnects the integrated graphics logic 1408, the set of shared cache units 1406, and the system agent unit 1410/integrated memory controller unit(s) 1414, alternative embodiments may use any number of well-known techniques for interconnecting such units. In one embodiment, coherency is maintained between one or more cache units 1406 and cores 1402A-N. In some embodiments, one or more of the cores 1402A-N are capable of multithreading. The system agent 1410 includes those components coordinating and operating cores 1402A-N. The system agent unit 1410 may include, for example, a power control unit (PCU) and a display unit. The PCU may be or include logic and components needed for regulating the power state of the cores 1402A-N and the integrated graphics logic 1408. The display unit is for driving one or more externally connected displays.
The cores 1402A-N may be homogenous or heterogeneous in terms of architecture instruction set; that is, two or more of the cores 1402A-N may be capable of executing the same instruction set, while others may be capable of executing only a subset of that instruction set or a different instruction set. Exemplary Computer Architectures Figures 15-18 are block diagrams of exemplary computer architectures. Other system designs and configurations known in the arts for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, micro controllers, cell phones, portable media players, hand held devices, and various other electronic devices are also suitable. In general, a huge variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are generally suitable. Referring now to Figure 15, shown is a block diagram of a system 1500 in accordance with one embodiment of the present invention. The system 1500 may include one or more processors 1510, 1515, which are coupled to a controller hub 1520. In one embodiment the controller hub 1520 includes a graphics memory controller hub (GMCH) 1590 and an Input/Output Hub (IOH) 1550 (which may be on separate chips); the GMCH 1590 includes memory and graphics controllers to which are coupled memory 1540 and a coprocessor 1545; the IOH 1550 couples input/output (I/O) devices 1560 to the GMCH 1590. Alternatively, one or both of the memory and graphics controllers are integrated within the processor (as described herein), the memory 1540 and the coprocessor 1545 are coupled directly to the processor 1510, and the controller hub 1520 is in a single chip with the IOH 1550. The optional nature of additional processors 1515 is denoted in Figure 15 with broken lines. Each processor 1510, 1515 may include one or more of the processing cores described herein and may be some version of the processor 1400. The memory 1540 may be, for example, dynamic random access memory (DRAM), phase change memory (PCM), or a combination of the two. For at least one embodiment, the controller hub 1520 communicates with the processor(s) 1510, 1515 via a multi-drop bus, such as a frontside bus (FSB), a point-to-point interface such as QuickPath Interconnect (QPI), or a similar connection 1595. In one embodiment, the coprocessor 1545 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like. In one embodiment, controller hub 1520 may include an integrated graphics accelerator. There can be a variety of differences between the physical resources 1510, 1515 in terms of a spectrum of metrics of merit including architectural, microarchitectural, thermal, power consumption characteristics, and the like. In one embodiment, the processor 1510 executes instructions that control data processing operations of a general type. Embedded within the instructions may be coprocessor instructions. The processor 1510 recognizes these coprocessor instructions as being of a type that should be executed by the attached coprocessor 1545.
Accordingly, the processor 1510 issues these coprocessor instructions (or control signals representing coprocessor instructions) on a coprocessor bus or other interconnect, to coprocessor 1545. Coprocessor(s) 1545 accept and execute the received coprocessor instructions. Referring now to Figure 16, shown is a block diagram of a first more specific exemplary system 1600 in accordance with an embodiment of the present invention. As shown in Figure 16, multiprocessor system 1600 is a point-to-point interconnect system, and includes a first processor 1670 and a second processor 1680 coupled via a point-to-point interconnect 1650. Each of processors 1670 and 1680 may be some version of the processor 1400. In one embodiment of the invention, processors 1670 and 1680 are respectively processors 1510 and 1515, while coprocessor 1638 is coprocessor 1545. In another embodiment, processors 1670 and 1680 are respectively processor 1510 and coprocessor 1545. Processors 1670 and 1680 are shown including integrated memory controller (IMC) units 1672 and 1682, respectively. Processor 1670 also includes as part of its bus controller units point-to-point (P-P) interfaces 1676 and 1678; similarly, second processor 1680 includes P-P interfaces 1686 and 1688. Processors 1670, 1680 may exchange information via a point-to-point (P-P) interface 1650 using P-P interface circuits 1678, 1688. As shown in Figure 16, IMCs 1672 and 1682 couple the processors to respective memories, namely a memory 1632 and a memory 1634, which may be portions of main memory locally attached to the respective processors. Processors 1670, 1680 may each exchange information with a chipset 1690 via individual P-P interfaces 1652, 1654 using point to point interface circuits 1676, 1694, 1686, 1698. Chipset 1690 may optionally exchange information with the coprocessor 1638 via a high-performance interface 1639. In one embodiment, the coprocessor 1638 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like. A shared cache (not shown) may be included in either processor or outside of both processors, yet connected with the processors via a P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode. Chipset 1690 may be coupled to a first bus 1616 via an interface 1696. In one embodiment, first bus 1616 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the present invention is not so limited. As shown in Figure 16, various I/O devices 1614 may be coupled to first bus 1616, along with a bus bridge 1618 which couples first bus 1616 to a second bus 1620. In one embodiment, one or more additional processor(s) 1615, such as coprocessors, high-throughput MIC processors, GPGPUs, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processor, are coupled to first bus 1616. In one embodiment, second bus 1620 may be a low pin count (LPC) bus. Various devices may be coupled to a second bus 1620 including, for example, a keyboard and/or mouse 1622, communication devices 1627, and a storage unit 1628 such as a disk drive or other mass storage device which may include instructions/code and data 1630, in one embodiment.
Further, an audio I/O 1624 may be coupled to the second bus 1620. Note that other architectures are possible. For example, instead of the point-to-point architecture of Figure 16, a system may implement a multidrop bus or other such architecture. Referring now to Figure 17, shown is a block diagram of a second more specific exemplary system 1700 in accordance with an embodiment of the present invention. Like elements in Figures 16 and 17 bear like reference numerals, and certain aspects of Figure 16 have been omitted from Figure 17 in order to avoid obscuring other aspects of Figure 17. Figure 17 illustrates that the processors 1670, 1680 may include integrated memory and I/O control logic ("CL") 1672 and 1682, respectively. Thus, the CL 1672, 1682 include integrated memory controller units and include I/O control logic. Figure 17 illustrates that not only are the memories 1632, 1634 coupled to the CL 1672, 1682, but also that I/O devices 1714 are also coupled to the control logic 1672, 1682. Legacy I/O devices 1715 are coupled to the chipset 1690. Referring now to Figure 18, shown is a block diagram of a SoC 1800 in accordance with an embodiment of the present invention. Similar elements in Figure 14 bear like reference numerals. Also, dashed lined boxes are optional features on more advanced SoCs. In Figure 18, an interconnect unit(s) 1802 is coupled to: an application processor 1810 which includes a set of one or more cores 202A-N and shared cache unit(s) 1406; a system agent unit 1410; a bus controller unit(s) 1416; an integrated memory controller unit(s) 1414; a set of one or more coprocessors 1820 which may include integrated graphics logic, an image processor, an audio processor, and a video processor; a static random access memory (SRAM) unit 1830; a direct memory access (DMA) unit 1832; and a display unit 1840 for coupling to one or more external displays. In one embodiment, the coprocessor(s) 1820 include a special-purpose processor, such as, for example, a network or communication processor, compression engine, GPGPU, a high-throughput MIC processor, embedded processor, or the like. Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches. Embodiments of the invention may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. Program code, such as code 1630 illustrated in Figure 16, may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices, in known fashion. For purposes of this application, a processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor. The program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired. In fact, the mechanisms described herein are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language.
One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein. Such representations, known as "IP cores," may be stored on a tangible, machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor. Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks, any other type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs), static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), phase change memory (PCM), magnetic or optical cards, or any other type of media suitable for storing electronic instructions. Accordingly, embodiments of the invention also include non-transitory, tangible machine-readable media containing instructions or containing design data, such as Hardware Description Language (HDL), which defines structures, circuits, apparatuses, processors and/or system features described herein. Such embodiments may also be referred to as program products. Emulation (including binary translation, code morphing, etc.) In some cases, an instruction converter may be used to convert an instruction from a source instruction set to a target instruction set. For example, the instruction converter may translate (e.g., using static binary translation, or dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction to one or more other instructions to be processed by the core. The instruction converter may be implemented in software, hardware, firmware, or a combination thereof. The instruction converter may be on processor, off processor, or part on and part off processor. Figure 19 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the invention. In the illustrated embodiment, the instruction converter is a software instruction converter, although alternatively the instruction converter may be implemented in software, firmware, hardware, or various combinations thereof. Figure 19 shows a program in a high level language 1902 may be compiled using an x86 compiler 1904 to generate x86 binary code 1906 that may be natively executed by a processor with at least one x86 instruction set core 1916.
The processor with at least one x86 instruction set core 1916 represents any processor that can perform substantially the same functions as an Intel processor with at least one x86 instruction set core by compatibly executing or otherwise processing (1) a substantial portion of the instruction set of the Intel x86 instruction set core or (2) object code versions of applications or other software targeted to run on an Intel processor with at least one x86 instruction set core, in order to achieve substantially the same result as an Intel processor with at least one x86 instruction set core. The x86 compiler 1904 represents a compiler that is operable to generate x86 binary code 1906 (e.g., object code) that can, with or without additional linkage processing, be executed on the processor with at least one x86 instruction set core 1916. Similarly, Figure 19 shows the program in the high level language 1902 may be compiled using an alternative instruction set compiler 1908 to generate alternative instruction set binary code 1910 that may be natively executed by a processor without at least one x86 instruction set core 1914 (e.g., a processor with cores that execute the MIPS instruction set of MIPS Technologies of Sunnyvale, CA and/or that execute the ARM instruction set of ARM Holdings of Sunnyvale, CA). The instruction converter 1912 is used to convert the x86 binary code 1906 into code that may be natively executed by the processor without an x86 instruction set core 1914. This converted code is not likely to be the same as the alternative instruction set binary code 1910 because an instruction converter capable of this is difficult to make; however, the converted code will accomplish the general operation and be made up of instructions from the alternative instruction set. Thus, the instruction converter 1912 represents software, firmware, hardware, or a combination thereof that, through emulation, simulation or any other process, allows a processor or other electronic device that does not have an x86 instruction set processor or core to execute the x86 binary code 1906. In the description and claims, the terms "coupled" and "connected," along with their derivatives, may be used. It should be understood that these terms are not intended as synonyms for each other. Rather, in particular embodiments, "connected" may be used to indicate that two or more elements are in direct physical or electrical contact with each other. "Coupled" may mean that two or more elements are in direct physical or electrical contact. However, "coupled" may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other. In the description above, for the purposes of explanation, numerous specific details have been set forth in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art, that one or more other embodiments may be practiced without some of these specific details. The particular embodiments described are not provided to limit the invention but to illustrate it. The scope of the invention is not to be determined by the specific examples provided above but only by the claims below. In other instances, well-known circuits, structures, devices, and operations have been shown in block diagram form or without detail in order to avoid obscuring the understanding of the description.
It will also be appreciated, by one skilled in the art, that modifications may be made to the embodiments disclosed herein, such as, for example, to the sizes, configurations, functions, manner of operation, and use of the components of the embodiments. All equivalent relationships to those illustrated in the drawings and described in the specification are encompassed within embodiments of the invention. Where considered appropriate, reference numerals or terminal portions of reference numerals have been repeated among the figures to indicate corresponding or analogous elements, which may optionally have similar characteristics. Various operations and methods have been described. Some of the methods have been described in a basic form in the flow diagrams, but operations may optionally be added to and/or removed from the methods. In addition, while the flow diagrams show a particular order of the operations according to example embodiments, it is to be understood that that particular order is exemplary. Alternate embodiments may optionally perform the operations in a different order, combine certain operations, overlap certain operations, etc. Many modifications and adaptations may be made to the methods and are contemplated. It should also be appreciated that reference throughout this specification to "one embodiment", "an embodiment", or "one or more embodiments", for example, means that a particular feature may be included in the practice of the invention. Similarly, it should be appreciated that in the description various features are sometimes grouped together in a single embodiment, Figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects may lie in less than all features of a single disclosed embodiment. Thus, the claims following the Detailed Description are hereby expressly incorporated into this Detailed Description, with each claim standing on its own as a separate embodiment of the invention. The following clauses and/or examples pertain to further embodiments. Specifics in the clauses and/or examples may be used anywhere in one or more embodiments. In one embodiment, a first apparatus includes a plurality of cores. The plurality of cores are logically grouped into a plurality of clusters. A cluster sharing map-based coherence directory is coupled with the plurality of cores and is to track sharing of data among the plurality of cores. The cluster sharing map-based coherence directory includes a tag array to store corresponding pairs of addresses and cluster identifiers. Each of the addresses is to identify data. Each of the cluster identifiers is to identify one of the clusters. The cluster sharing map-based coherence directory also includes a cluster sharing map array to store cluster sharing maps. Each of the cluster sharing maps corresponds to one of the pairs of addresses and cluster identifiers. Each of the cluster sharing maps is to indicate intra-cluster sharing of data identified by the corresponding address within a cluster identified by the corresponding cluster identifier. Embodiments include the first apparatus in which the clusters logically group non-overlapping sets of cores of equal size.
Embodiments include any of the above first apparatus in which a pair of an address and a cluster identifier are to be stored in a given set and a given way of the tag array, and in which a cluster sharing map corresponding to the pair is to be stored in a corresponding set and a corresponding way of the cluster sharing map array. Embodiments include any of the above first apparatus further including logic to indicate inter-cluster sharing of a given data identified by a given address between a first cluster and a second cluster. The logic does this by storing both a first cluster identifier to identify the first cluster, and a second cluster identifier to identify the second cluster, in different ways of a same set of the tag array. The logic does this also by storing both a first cluster sharing map to indicate intra-cluster sharing of the given data within the first cluster, and a second cluster sharing map to indicate intra-cluster sharing of the given data within the second cluster, in different ways of a same set of the cluster sharing map array. Embodiments include any of the above first apparatus further including logic to generate an all-core sharing map from a plurality of cluster sharing maps, each corresponding to a given address, and each corresponding to a different cluster identifier. The logic does this by rearranging the plurality of cluster sharing maps, from positions where they are stored in the cluster sharing map array, to positions in the all-core sharing map, based on the different corresponding cluster identifiers. Embodiments include any of the above first apparatus further including tag comparison logic coupled with the tag array. The tag comparison logic is to compare a plurality of addresses in different ways of a same set of the tag array, and to provide a plurality of per-way match signals to indicate whether or not the addresses in the different ways match. Embodiments include any of the above first apparatus further including a small all-core sharing map-based coherence directory having no more than twenty entries per core of the plurality of cores. The small all-core sharing map-based coherence directory is to store corresponding pairs of addresses and all-core sharing maps. Each of the all-core sharing maps is to indicate sharing of data identified by a corresponding address within the plurality of cores. Embodiments include any of the above first apparatus in which the small all-core sharing map-based coherence directory has no more than fifteen entries per core of the plurality of cores. Embodiments include any of the above first apparatus in which the cluster sharing map-based coherence directory, and the small all-core sharing map-based coherence directory, are coupled to be accessed in parallel. Embodiments include any of the above first apparatus in which the cluster sharing map-based coherence directory, and the small all-core sharing map-based coherence directory, are coupled to be accessed in series. Embodiments include any of the above first apparatus in which the cores comprise at least thirty-two cores. Embodiments include any of the above first apparatus in which the cores comprise at least one hundred cores. In one embodiment, a first method includes storing corresponding pairs of addresses and cluster identifiers in a tag array of a cluster sharing map-based coherence directory. Each of the addresses is to identify data. Each of the cluster identifiers is to identify one of the clusters. The clusters logically group a plurality of cores.
The first method also includes storing cluster sharing maps in a cluster sharing map array of the cluster sharing map-based coherence directory. Each of the cluster sharing maps corresponds to one of the pairs of addresses and cluster identifiers. Each of the cluster sharing maps is to indicate intra-cluster sharing of data identified by the corresponding address within a cluster identified by the corresponding cluster identifier. The first method also includes determining inter-cluster sharing of data corresponding to a given address by accessing from the cluster sharing map-based coherency directory a plurality of cluster sharing maps each corresponding to the given address and each having a different cluster identifier. Embodiments include the above first method further comprising logically grouping the cores into the clusters, in which logically grouping comprises logically grouping at least one hundred cores into the plurality of clusters. Embodiments include any of the above first methods further including indicating inter-cluster sharing of a given data identified by a given address between a first cluster and a second cluster by storing both a first cluster identifier to identify the first cluster, and a second cluster identifier to identify the second cluster, in different ways of a same set of the tag array, and by storing both a first cluster sharing map to indicate intra-cluster sharing of the given data within the first cluster, and a second cluster sharing map to indicate intra-cluster sharing of the given data within the second cluster, in different corresponding ways of a same set of the cluster sharing map array. Embodiments include any of the above first methods further including generating an all-core sharing map from a plurality of cluster sharing maps, each corresponding to a given address, and each corresponding to a different cluster identifier. Embodiments include the above first method in which generating the all-core sharing map comprises rearranging the plurality of cluster sharing maps, from positions where they are stored in the cluster sharing map array, to positions in the all-core sharing map, based on the different corresponding cluster identifiers. Embodiments include any of the above first methods further including comparing a plurality of addresses in a plurality of different ways of a set of the tag array with a reference address and indicating which of multiple addresses match the reference address. Embodiments include any of the above first methods further comprising logically grouping the cores into the clusters, in which logically grouping the cores into the clusters comprises logically grouping the cores into clusters of non-overlapping sets of cores of equal size. Embodiments include any of the above first methods further including accessing a small all-core sharing map-based coherence directory having no more than twenty entries per core. The small all-core sharing map-based coherence directory stores corresponding pairs of addresses and all-core sharing maps. Each of the all-core sharing maps is to indicate sharing of data identified by a corresponding address by any of the plurality of cores. Embodiments include the above first method further including accessing the cluster sharing map-based coherence directory in parallel with accessing the small all-core sharing map-based coherence directory.
Embodiments include either of the two above first methods further including accessing the cluster sharing map-based coherence directory in series with accessing the small all-core sharing map-based coherence directory. Embodiments include either of the three above first methods in which accessing comprises accessing a small all-core sharing map-based coherence directory that has no more than fifteen entries per core. In one embodiment, a first system includes a multi-core apparatus. The multi-core apparatus includes a plurality of cores. The plurality of cores are logically grouped into a plurality of clusters. The multi-core apparatus also includes a memory controller coupled with a first core of the plurality. The multi-core apparatus also includes a cluster sharing map-based coherence directory coupled with the plurality of cores to track sharing of data among the plurality of cores. The cluster sharing map-based coherence directory includes a tag array to store corresponding pairs of addresses and cluster identifiers, each of the addresses to identify data, each of the cluster identifiers to identify one of the clusters. The cluster sharing map-based coherence directory also includes a cluster sharing map array to store cluster sharing maps, each of the cluster sharing maps corresponding to one of the pairs of addresses and cluster identifiers, each of the cluster sharing maps to indicate intra-cluster sharing of data identified by the corresponding address within a cluster identified by the corresponding cluster identifier. The first system also includes a memory coupled with the memory controller. The memory comprises a dynamic random access memory (DRAM). Embodiments include the above first system further including logic to indicate inter-cluster sharing of a given data identified by a given address between a first cluster and a second cluster. The logic does this by storing both a first cluster identifier to identify the first cluster, and a second cluster identifier to identify the second cluster, in different ways of a same set of the tag array. The logic is also to store both a first cluster sharing map to indicate intra-cluster sharing of the given data within the first cluster, and a second cluster sharing map to indicate intra-cluster sharing of the given data within the second cluster, in different ways of a same set of the cluster sharing map array. Embodiments include any of the above first systems further including logic to generate an all-core sharing map from a plurality of cluster sharing maps, each corresponding to a given address, and each corresponding to a different cluster identifier. The logic is to rearrange the plurality of cluster sharing maps, from positions where they are stored in the cluster sharing map array, to positions in the all-core sharing map, based on the different corresponding cluster identifiers. In one embodiment, a second apparatus includes a plurality of cores. The plurality of cores are logically grouped into a plurality of clusters. A first means is coupled with the plurality of cores and is for tracking sharing of data among the plurality of cores. The first means includes a second means for storing corresponding pairs of addresses and cluster identifiers. Each of the addresses is to identify data. Each of the cluster identifiers is to identify one of the clusters. The first means also includes a third means for storing cluster sharing maps.
Each of the cluster sharing maps corresponds to one of the pairs of addresses and cluster identifiers. Each of the cluster sharing maps is to indicate intra-cluster sharing of data identified by the corresponding address within a cluster identified by the corresponding cluster identifier. In one embodiment, an apparatus is configured and/or operable to perform any of the methods disclosed herein. |
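The cluster sharing map-based directory described in the preceding embodiments can be summarized with a brief behavioral sketch. The C model below is not the claimed hardware: a linearly searched entry array stands in for the set/way tag array, and the geometry (64 cores in 8-core clusters, 256 entries) is an assumption. It shows only the pairing of (address, cluster identifier) tags with per-cluster bitmaps, and the reconstruction of an all-core sharing map by shifting each cluster map to the bit position given by its cluster identifier.

/* Minimal sketch of a cluster sharing map-based coherence directory. */
#include <stdint.h>
#include <stdio.h>

#define NUM_CORES    64
#define CLUSTER_SIZE 8
#define DIR_ENTRIES  256

typedef struct {
    uint64_t addr;      /* identifies the data (cache line address) */
    uint8_t  cluster;   /* identifies one cluster                   */
    uint8_t  valid;
    uint8_t  map;       /* intra-cluster sharing bitmap, 1 bit/core */
} dir_entry_t;

static dir_entry_t dir[DIR_ENTRIES];

/* Record that `core` shares the line at `addr`. */
static void dir_add_sharer(uint64_t addr, int core)
{
    int cluster = core / CLUSTER_SIZE;
    for (int i = 0; i < DIR_ENTRIES; i++) {
        dir_entry_t *e = &dir[i];
        if (e->valid && e->addr == addr && e->cluster == cluster) {
            e->map |= 1u << (core % CLUSTER_SIZE);
            return;
        }
    }
    for (int i = 0; i < DIR_ENTRIES; i++) {       /* allocate a new pair */
        if (!dir[i].valid) {
            dir[i] = (dir_entry_t){ addr, (uint8_t)cluster, 1,
                                    (uint8_t)(1u << (core % CLUSTER_SIZE)) };
            return;
        }
    }
}

/* Rebuild the all-core sharing map by placing each cluster map at the
 * position given by its cluster identifier. */
static uint64_t all_core_map(uint64_t addr)
{
    uint64_t map = 0;
    for (int i = 0; i < DIR_ENTRIES; i++)
        if (dir[i].valid && dir[i].addr == addr)
            map |= (uint64_t)dir[i].map << (dir[i].cluster * CLUSTER_SIZE);
    return map;
}

int main(void)
{
    dir_add_sharer(0x40, 3);    /* cluster 0                          */
    dir_add_sharer(0x40, 13);   /* cluster 1 -> a second (addr, id) pair */
    printf("all-core map: %#llx\n", (unsigned long long)all_core_map(0x40));
    return 0;                   /* prints 0x2008: bits 3 and 13 set   */
}

Inter-cluster sharing falls out naturally: two entries carry the same address with different cluster identifiers, exactly as the tag-array embodiments above describe.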
Techniques for dynamically configuring a texture cache are disclosed. During a texture mapping process of a three-dimensional (3D) graphics pipeline, if the batch is for single texture mapping, the texture cache is configured as an n-way set-associative texture cache. However, if the batch is for multi-texture mapping, the n-way set-associative texture cache is divided into M n/M-way set-associative sub-caches, where n and M are integers greater than 1 and n is divisible by M. |
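As a quick illustration of the configuration rule stated in this abstract, the tiny C fragment below (all names illustrative, not part of the disclosure) prints the two configurations for the exemplary n = 4, M = 2 case.

/* Configuration rule: one n-way cache, or M sub-caches of n/M ways each. */
#include <stdio.h>

static void configure(int n, int m, int multi_texture)
{
    if (!multi_texture)
        printf("one %d-way set-associative texture cache\n", n);
    else if (n % m == 0)  /* n must be divisible by M */
        printf("%d sub-caches, each %d-way set-associative\n", m, n / m);
}

int main(void)
{
    configure(4, 2, 0);   /* single texture batch                      */
    configure(4, 2, 1);   /* multi-texture batch: two 2-way sub-caches */
    return 0;
}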
CLAIMS 1. A graphics processing unit comprising: a driver operable to determine whether single texture mapping is enabled or multi-texture mapping is enabled for a selected application; and a dynamically configurable cache having a first configuration corresponding to an n-way set-associative texture cache, when the single texture mapping is enabled, and a second configuration corresponding to a set of n/M-way set-associative sub-caches, when the multi-texture mapping is enabled, where n and M are integers greater than 1 and n is divisible by M. 2. The graphics processing unit of claim 1, wherein n is 4 and M is 2. 3. The graphics processing unit of claim 1, wherein a respective one sub-cache of the set of n/M-way set-associative sub-caches is dedicated to a respective one texture map during the multi-texture mapping. 4. The graphics processing unit of claim 3, wherein the n-way set-associative texture cache includes: n-cache blocks operative to store texture data; an operand for generating a fetch command when all n-tag outputs of the n-cache blocks represent a miss for requested texture data; and a multiplexer operative to output the requested texture data from the n-cache blocks. 5. The graphics processing unit of claim 3, wherein each sub-cache of the set of n/M-way set-associative sub-caches includes: a plurality of sub-cache blocks operative to store texture data for a corresponding one texture map, and an operand for generating a fetch command output when all outputs of the plurality of sub-cache blocks represent a miss for requested texture data; and the set of n/M-way set-associative sub-caches includes a multiplexer to multiplex the fetch command output of said each sub-cache. 6. The graphics processing unit of claim 5, wherein each sub-cache of the set of n/M-way set-associative sub-caches further includes: a multiplexer operative to output the requested data from the plurality of sub-cache blocks. 7. An integrated circuit comprising: a driver operable to determine whether single texture mapping is enabled or multi-texture mapping is enabled for a selected application; and a dynamically configurable cache having a first configuration corresponding to an n-way set-associative texture cache, when the single texture mapping is enabled, and a second configuration corresponding to a set of n/M-way set-associative sub-caches, when the multi-texture mapping is enabled, where n and M are integers greater than 1 and n is divisible by M. 8. The integrated circuit of claim 7, wherein n is 4 and M is 2. 9. The integrated circuit of claim 7, wherein a respective one sub-cache of the set of n/M-way set-associative sub-caches is dedicated to a respective one texture map during the multi-texture mapping. 10. The integrated circuit of claim 9, wherein the n-way set-associative texture cache includes: n-cache blocks operative to store texture data; an operand for generating a fetch command when all n-tag outputs of the n-cache blocks represent a miss for requested texture data; and a multiplexer operative to output the requested texture data from the n-cache blocks. 11.
The integrated circuit of claim 9, wherein each sub-cache of the set of n/M-way set-associative sub-caches includes: a plurality of sub-cache blocks operative to store texture data for a corresponding one texture map, and an operand for generating a fetch command output when all outputs of the plurality of sub-cache blocks represent a miss for requested texture data; and the set of n/M-way set-associative sub-caches includes a multiplexer to multiplex the fetch command output of said each sub-cache. 12. The integrated circuit of claim 11, wherein each sub-cache of the set of n/M-way set-associative sub-caches further includes: a multiplexer operative to output the requested data from the plurality of sub-cache blocks. 13. A processor comprising: a graphics processing unit having a dynamically configurable cache which has a first configuration corresponding to an n-way set-associative texture cache, when a single texture mapping mode is enabled, and a second configuration corresponding to a set of n/M-way set-associative sub-caches, when a multi-texture mapping mode is enabled, where n and M are integers greater than 1 and n is divisible by M; and a memory coupled to the graphics processing unit. 14. The processor of claim 13, wherein n is 4 and M is 2. 15. The processor of claim 13, wherein a respective one sub-cache of the set of n/M-way set-associative sub-caches is dedicated to a respective one texture map during the multi-texture mapping. 16. The processor of claim 15, wherein the n-way set-associative texture cache includes: n-cache blocks operative to store texture data; an operand for generating a fetch command when all n-tag outputs of the n-cache blocks represent a miss for requested texture data; and a multiplexer operative to output the requested texture data from the n-cache blocks. 17. The processor of claim 15, wherein each sub-cache of the set of n/M-way set-associative sub-caches includes: a plurality of sub-cache blocks operative to store texture data for a corresponding one texture map, and an operand for generating a fetch command output when all outputs of the plurality of sub-cache blocks represent a miss for requested texture data; and the set of n/M-way set-associative sub-caches includes a multiplexer to multiplex the fetch command output of said each sub-cache. 18. The processor of claim 17, wherein each sub-cache of the set of n/M-way set-associative sub-caches further includes: a multiplexer operative to output the requested data from the plurality of sub-cache blocks. 19. A wireless device comprising: a graphics processing unit having a dynamically configurable cache which has a first configuration corresponding to an n-way set-associative texture cache, when a single texture mapping mode is enabled, and a second configuration corresponding to a set of n/M-way set-associative sub-caches, when a multi-texture mapping mode is enabled, where n and M are integers greater than 1 and n is divisible by M; and a memory coupled to the graphics processing unit. 20. The device of claim 19, wherein n is 4 and M is 2. 21. The device of claim 19, wherein a respective one sub-cache of the set of n/M-way set-associative sub-caches is dedicated to a respective one texture map during the multi-texture mapping. 22.
The device of claim 21, wherein the n-way set-associative texture cache includes: n-cache blocks operative to store texture data; an operand for generating a fetch command when all n-tag outputs of the n-cache blocks represent a miss for requested texture data; and a multiplexer operative to output the requested texture data from the n-cache blocks. 23. The device of claim 21, wherein each sub-cache of the set of n/M-way set-associative sub-caches includes: a plurality of sub-cache blocks operative to store texture data for a corresponding one texture map, and an operand for generating a fetch command output when all outputs of the plurality of sub-cache blocks represent a miss for requested texture data; and the set of n/M-way set-associative sub-caches includes a multiplexer to multiplex the fetch command output of said each sub-cache. 24. The device of claim 23, wherein said each sub-cache of the set of n/M-way set-associative sub-caches further includes: a multiplexer operative to output the requested data from the plurality of sub-cache blocks. 25. A computer program product including a computer readable medium having instructions for causing a computer to: determine whether a selected application has single texture mapping enabled or multi-texture mapping enabled; configure an n-way set-associative texture cache when the single texture mapping is enabled; and divide the n-way set-associative texture cache into a set of M n/M-way set-associative sub-caches when the multi-texture mapping is enabled, where n and M are integers greater than 1, n is divisible by M, and M corresponds to a number of texture maps. 26. A method comprising: determining whether a selected application has single texture mapping enabled or multi-texture mapping enabled; configuring an n-way set-associative texture cache when the single texture mapping is enabled; and dividing the n-way set-associative texture cache into a set of M n/M-way set-associative sub-caches when the multi-texture mapping is enabled, where n and M are integers greater than 1, n is divisible by M, and M corresponds to a number of texture maps. |
DYNAMIC CONFIGURABLE TEXTURE CACHE FOR MULTI-TEXTURING BACKGROUND Field The present disclosure relates generally to graphics, and more specifically to techniques for dynamically configuring a texture cache. Background Texture mapping is one of the most successful and popular techniques in a 3D graphics pipeline for adding realism to a computer-generated scene. A typical texture mapping (TM) process is highly memory access intensive because the TM process characteristically involves multiple texture lookups. The frequent texture lookups cause a bottleneck on the memory bus. To alleviate this problem, a texture cache is often used. The texture cache serves to eliminate redundancy of fetching texels from an external memory source (e.g., off-chip memory) and utilizes the natural spatial locality of a triangle's rasterization. Graphics applications typically send drawing commands in a batch mode. In the batch mode all the pixels share the same context state registers in a batch. In a single texture batch, all pixels fetch texels from one single texture map. However, in a multi-texture batch mode, if the different textures are stored inside one cache, conflict misses are very likely to occur. When two texture maps are assigned or allocated to the same cache line, the texture maps will thrash each other and generate redundant memory traffic. In view of the foregoing, using one cache for different texture maps reduces power efficiency and pixel performance. There is therefore a need in the art for techniques to dynamically configure a texture cache. SUMMARY Techniques to dynamically configure a texture cache are described herein. In an embodiment, a wireless device comprising a graphics processing unit having a dynamically configurable cache is provided. The dynamically configurable cache has a first configuration corresponding to an n-way set-associative texture cache, when a single texture mapping mode is enabled, and a second configuration corresponding to a set of n/M-way set-associative sub-caches, when a multi-texture mapping mode is enabled, where n and M are integers greater than 1 and n is divisible by M. The device also includes a memory coupled to the graphics processing unit. [0007] In another aspect, a graphics processing unit includes a driver operable to determine whether single texture mapping is enabled or multi-texture mapping is enabled for a selected application. The unit also includes a dynamically configurable cache having a first configuration corresponding to an n-way set-associative texture cache, when the single texture mapping is enabled, and a second configuration corresponding to a set of n/M-way set-associative sub-caches, when the multi-texture mapping is enabled, where n and M are integers greater than 1 and n is divisible by M. [0008] In yet another aspect, a computer program product including a machine-readable medium has instructions for causing a machine to determine whether a selected application has single texture mapping enabled or multi-texture mapping enabled. The instructions cause the machine to configure an n-way set-associative texture cache when the single texture mapping is enabled. The instructions also cause the machine to divide the n-way set-associative texture cache into a set of M n/M-way set-associative sub-caches when the multi-texture mapping is enabled, where n and M are integers greater than 1, n is divisible by M, and M corresponds to a number of texture maps.
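The conflict misses described in the background above can be made concrete with a small behavioral model. The C sketch below assumes a direct-mapped organization and a 16-set geometry purely for brevity (an assumption; the cache described later is set-associative): when one line from each of two texture maps aliases to the same set, a batch alternating between the maps misses on every access.

/* Sketch of two texture maps thrashing a shared cache via conflict misses. */
#include <stdint.h>
#include <stdio.h>

#define SETS 16

static uint64_t tagram[SETS];
static int      valid[SETS];
static int      misses;

static void access_line(uint64_t line)
{
    unsigned set = (unsigned)(line % SETS);
    uint64_t tag = line / SETS;
    if (!valid[set] || tagram[set] != tag) {
        misses++;                 /* conflict miss: refill this set */
        tagram[set] = tag;
        valid[set]  = 1;
    }
}

int main(void)
{
    /* Texture 0 and texture 1 each reuse one line, but both lines map to
     * set 0, so a multi-texture batch alternating between them thrashes. */
    for (int pixel = 0; pixel < 8; pixel++) {
        access_line(0 * SETS);    /* texel from texture map 0           */
        access_line(5 * SETS);    /* texel from texture map 1, same set */
    }
    printf("misses: %d of 16 accesses\n", misses);  /* all 16 miss */
    return 0;
}

Dedicating a sub-cache to each texture map, as the disclosure proposes, removes exactly this aliasing between the two maps.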
[0009] Various aspects and embodiments of the disclosure are described in further detail below. BRIEF DESCRIPTION OF THE DRAWINGS Aspects and embodiments of the disclosure will become more apparent from the detailed description set forth below when taken in conjunction with the drawings, in which like reference characters identify corresponding elements throughout. FIG. 1 shows a block diagram of a wireless device. FIG. 2 shows a general block diagram of a graphics processing unit. FIG. 3A shows a conventional three-dimensional (3D) pipeline. FIG. 3B shows a conventional pixel rendering stage. FIG. 4 shows a general block diagram of a dynamic configurable texture cache. [0016] FIG. 5A shows a pixel batch in a single-texture mode. FIG. 5B shows a pixel batch in a multi-texture mode. FIGS. 6A-6B show a schematic diagram of a dynamic configurable texture cache in a single-texture mode. FIGS. 7A-7B show a schematic diagram of a dynamic configurable texture cache in a multi-texture mode. FIG. 8 shows a general block diagram of stored applications in the main memory. DETAILED DESCRIPTION The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment or design described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Many game applications require three-dimensional (3D) graphics applications which display 3D objects in a two-dimensional (2D) space (e.g., a display screen). The pixels in a 2D graphic have the properties of position, color, and brightness, while a 3D pixel adds a depth property that indicates where the point lies on an imaginary Z-axis. Texture is created as 3D pixels are combined, each with its own depth value. The techniques described herein may be used for wireless communication, computing, networking, personal electronics, etc. An exemplary use of the techniques for wireless communication is described below. FIG. 1 shows a block diagram of an embodiment of a wireless device 10 in a wireless communication system. The wireless device 10 may be a cellular or camera phone, a terminal, a handset, a personal digital assistant (PDA), or some other device. The wireless communication system may be a Code Division Multiple Access (CDMA) system, a Global System for Mobile Communications (GSM) system, or some other system. The wireless device 10 is capable of providing bi-directional communications via a receive path and a transmit path. On the receive path, signals transmitted by base stations are received by an antenna 12 and provided to a receiver (RCVR) 14. The receiver 14 conditions and digitizes the received signal and provides samples to a digital section 20 for further processing. On the transmit path, a transmitter (TMTR) 16 receives data to be transmitted from the digital section 20, processes and conditions the data, and generates a modulated signal, which is transmitted via the antenna 12 to the base stations. The digital section 20 includes various processing, interface and memory units such as, for example, a modem processor 22, a video processor 24, a controller/processor 26, a display processor 28, an ARM/DSP 32, a graphics processing unit (GPU) 34, an internal memory 36, and an external bus interface (EBI) 38. The modem processor 22 performs processing for data transmission and reception (e.g., encoding, modulation, demodulation, and decoding).
The video processor 24 performs processing on video content (e.g., still images, moving videos, and moving texts) for video applications such as camcorder, video playback, and video conferencing. The controller/processor 26 may direct the operation of various processing and interface units within the digital section 20. The display processor 28 performs processing to facilitate the display of videos, graphics, and texts on a display unit 30. The ARM/DSP 32 may perform various types of processing for the wireless device 10. The graphics processing unit 34 performs graphics processing. The techniques described herein may be used for any of the processors in the digital section 20, e.g., the graphics processing unit 34. The internal memory 36 stores data and/or instructions for various units within the digital section 20. The EBI 38 facilitates the transfer of data between the digital section 20 (e.g., internal memory 36) and a main memory 40 along a bus or data line DL. The digital section 20 may be implemented with one or more DSPs, micro-processors, RISCs, etc. The digital section 20 may also be fabricated on one or more application specific integrated circuits (ASICs) or some other type of integrated circuits (ICs). The techniques described herein may be implemented in various hardware units. For example, the techniques may be implemented in ASICs, DSPs, RISCs, ARMs, digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, and other electronic units. FIG. 2 shows a general block diagram of a GPU, generally designated at 34. The GPU 34 includes a three-dimensional (3D) switch driver 42 and a dynamic configurable texture cache 44. The 3D switch driver 42 provides a switching control signal SW1 for the cache 44 to reconfigure. The GPU 34 also includes a 3D graphics pipeline 60, which will be described in detail later. Additionally, the GPU 34 includes a processor 46 having a fetch controller 48. The fetch controller 48 serves to generate commands to fetch requested texture data from one or more of the texture maps TM. [0031] FIG. 8 shows stored applications A1...AZ in the main memory 40. The stored applications A1...AZ may include game applications or other graphics applications. Each application has associated therewith a texture type TT and one or more texture maps TM. In operation, depending on the selected application, the 3D switch driver 42 parses the selected application and determines which texture (single or multiple texture) type TT is enabled for the selected application. Thereafter, the 3D switch driver 42 generates the switching control signal SW1 to cause the cache 44 to reconfigure for a single texture mode or a multi-texture mode. Texture mapping is a shading technique that maps, via at least one texture map TM, a 2D texture image onto the surface of a 3D object. The 2D texture image is stored in the main (external) memory 40. The individual elements of a texture image are called texels. [0032] Referring also to FIGS. 3A and 3B, there is shown an embodiment of a conventional 3D graphics pipeline and pixel rendering stage, generally designated at 60 and 64, respectively. The 3D graphics pipeline 60 divides the entire task of 3D representation on the display unit 30 into at least two (2) pipeline stages: a vertex processing stage 62 and a pixel rendering stage 64.
In operation, the vertex processing stage 62 may include all the functions or a subset of the functions currently implemented in OpenGL(R) or OpenGL(R) ES. The pixel rendering stage 64 includes rasterization, blending, and texture application operations 66 and hidden surface removal operations 68. Nevertheless, the pixel rendering stage 64 may include other operations defined by OpenGL(R) or OpenGL(R) ES. The pixel rendering stage 64 converts the information about 3D objects from the vertex processing stage 62 into a bit map that can be displayed on the display unit 30. The pixel rendering stage 64 processes input triangle sets to produce a pixel representation of a 3D graphic image. During the rasterization, blending, and texture application operations 66, the texture mapping engine 66A performs texturing operations. [0034] With reference to FIG. 4, a general block diagram of the dynamic configurable texture cache 44 is shown. The dynamic configurable texture cache 44 of the GPU 34 is dynamically configurable/reconfigurable to operate in one of a single-texture mode 100, when single texture mapping is enabled in the selected application, and a multi-texture mode 200, when multi-texture mapping is enabled in the selected application. The schematic diagram of the dynamic configurable texture cache 44 in the single-texture mode 100 is best seen in FIGS. 6A-6B. The dynamic configurable texture cache 44 in the single-texture mode 100 is an n-way set-associative cache. In the exemplary embodiment, n is an even number. In the illustrated embodiment, n is equal to 4. In the exemplary embodiment, the 4-way set-associative texture cache has a size of approximately 4 KB to handle texture lookups, and each cache line is 128 bits wide. The dynamic configurable texture cache 44 in the single-texture mode 100 is designed to support up to two textures per pass. For illustrative purposes, two textures per pass means, for each pixel, that texels are simultaneously mapped from two different texture maps on top of it, without going through multiple passes. For example, if a cache only supports single texture mapping, then to achieve a multi-texture mapping effect, single texture mapping must be performed multiple times on each pixel. Accordingly, multiple textures per pass means, for each pixel, texels are simultaneously mapped from multiple (different) texture maps without going through multiple passes. Referring now to FIG. 5A, a block diagram of a batch, generally denoted B, for a single-texture mode is shown. A graphics application typically sends drawing commands in a batch mode wherein all of the pixels, denoted as PIXELB1, PIXELB2, ..., PIXELBX, in the batch B share the same context state registers 50 (where X is equal to the number of pixels in the batch). In a single-texture mode, as determined by the texture type TT, all pixels PIXELB1, PIXELB2, ..., PIXELBX fetch texels from a single texture map TM. The texture map TM is one single texture map. In a two-texture (multi-texture mode) batch, each of the pixels PIXELB1, PIXELB2, ..., PIXELBX fetches texels from 2 different texture maps (where X is equal to the number of pixels in the batch). [0038] In the exemplary embodiment, every pixel PIXELB1, PIXELB2, ..., PIXELBX generates a texture address and other information for the pixel. The texture address of the pixel has a corresponding tag and index, denoted as TAG IN and INDEX[3:0], respectively. The component [3:0] corresponds to the addressing format.
Here, "3:0" is the nomenclature representative of a four (0, 1, 2, 3) digit binary address. Thus, the index (of the exemplary embodiment) has 2<4> distinct addresses. The index is used to access a tagram 1020, 102i, 1022, 1023 (FIG. 6A). The subscript of the tagram1020, 102i, 1022, 1023 also corresponds to the way. Thus, a subscript of 0 corresponds to wayO, subscript 1 corresponds to wayl, subscript 2 corresponds to way2 and subscript 3 or (n-1) corresponds to way3 or way(n-l).In FIGS. 6A-6B, a schematic diagram of the dynamic configurable texture cache 44 in the single-texture mode 100 is shown. The cache's set associative is a 4-way (n=4). Thus, there are four (4) entries or cache lines to be selected by one index INDEX[3:0]. As best seen in FIG. 6A, the cache 44 in the single-texture mode 100 includes n-cache blocks where each block includes a way tagram lO2o, 102i, 1022, or 1023 and a way valid bit indicator 1040, 104i, 1042, or 1043 . As best seen in FIG. 6B, each block further includes a way dataram 12Oo, 12O1, 12O2, or 12O3. The each block also represents a complete "cache line."The dynamic configurable texture cache 44 is composed by a n "cache lines." Each cache line is selected by the index. The cache 44 is a level of memory hierarchy between the 3D hardwired pipeline and the main (external) memory 40. When 3D graphics pipeline 60 sends an address to the main (external) memory 40 to read back texels, the 3D graphics pipeline 60 first checks if the data (texel) is inside the dynamic configurable texture cache 44. The address is divided into: the index denoted as INDEX[3:0], which is used to select the cache line; and a tag field, denoted as TAG IN, which is used to compare with the value of the tag field of the cache. If there is a match, it means the content is inside the cache 44 and specifically, the cache line having the match.In a typical cache, each cache line has a valid bit indicator. In the exemplary embodiment, the values of the valid bit include 1) Valid bit = "1" means there is valid content stored in this cache line; and 2) Valid bit = "0" means the cache line is empty. The valid bits are implemented by registers, and are initialize to "0" by a reset signal. [0042] A valid bit indicator is associated with each respective way tagramsl020,1021, 1022, and 1023. Thus, the wayO tagramslO2o has associated therewith wayO valid bit indicator 104Q. The wayl tagramslO2i has associated therewith wayl valid bit indicator 104i. The way2 tagramslO22 has associated therewith way2 valid_bit indicator 1042. The way3 tagramslO23 has associated therewith way3 valid bit indicator 1043. The valid bit indicators indicate that a given entry into the cache 44 contains valid data. The way valid bit indicatosrl04o, 104i, 1042, and 1043 produce outputs on lines L30, L31, L32 and L33, respectively.Each way tagram 1020, 102b 1022, and 1023 receives three (3) inputs.The first input, on line L2, is the TAG IN, shown in bold, for a respective pixel in the batch B. The second input, on line L4, is the index, denoted as INDEX[3:0], shown as a line having a dash followed by two dots. The index is used to access a way tagram 1020, 102i, 1022, and 1023. The third input of each way tagram 1020, 102i, 1022, and 1023 is from the way update decoder 112 shown in dotted lines. The way update decoder 112 receives an input on line LlO from a way selector 106. 
[0044] As will be seen from the description below, the index INDEX[3:0] on line L4 selects one of the way tagrams 102₀, 102₁, 102₂, and 102₃ of the cache lines, which then outputs a stored tag value on the corresponding output line L20, L21, L22, or L23.

[0045] The way selector 106 includes way select bits 108. The output of the way select bits 108 is fed to line L10 for processing by the way update decoder 112. The output of the way select bits 108 is also fed to an accumulator 110, which adds one (1) to the output of the way select bits 108. The number 2 (in the box labeled 106) represents a two-bit signal. The output on line L8 is looped back to the way select bits 108. The way update decoder 112 outputs control bits on lines L12, L14, L16 and L18, shown as dotted lines, to select one of the n-way associative sets. The way update decoder 112 receives the two-bit signal on line L10 and generates a one-bit signal to select any of the n blocks, that is, the way tagrams 102₀, 102₁, 102₂, and 102₃ and/or the way datarams 120₀, 120₁, 120₂, and 120₃ of the cache lines shown in FIG. 6B.

[0046] When a miss occurs in the cache 44, the requested data should go into one cache line, and the data occupying that cache line must be replaced. In an n-way associative cache, a choice of (n) datarams 120₀, 120₁, 120₂, and 120₃ is available for placing the requested data. The way selector 106 picks which cache line out of the n ways is to be replaced.

The way valid_bit indicators 104₀, 104₁, 104₂, and 104₃ produce outputs on lines L30, L31, L32 and L33, respectively, which are sent to comparators 114₀, 114₁, 114₂, and 114₃, respectively. Additionally, the outputs on lines L20, L21, L22 and L23 from the way tagrams 102₀, 102₁, 102₂, and 102₃ are sent to comparators 114₀, 114₁, 114₂, and 114₃, respectively. The comparators 114₀, 114₁, 114₂, and 114₃ also receive the TAG_IN as input from line L2.

The comparisons by the comparators 114₀, 114₁, 114₂, and 114₃ are performed between the 4 (n=4) possible tag contents on lines L20, L21, L22 and L23, respectively, out of the tagrams 102₀, 102₁, 102₂, and 102₃ and the incoming pixel's tag TAG_IN. If one of the four comparisons from the comparators 114₀, 114₁, 114₂, and 114₃ results in a match, such a match implies a cache hit. Thus, the output on line L50 from an Operand gate 116 represents a cache hit. By way of example, the Operand gate 116 is represented as an AND gate. Otherwise, if there are no matches, the output on line L50 from the Operand gate 116 represents a cache miss. The comparators 114₀, 114₁, 114₂, and 114₃ output a result of their respective comparison on lines L40, L41, L42 and L43, which are fed to inputs of the Operand gate 116. The Operand gate 116 also receives an input on line L6 representative of an active bit. If the output on line L50 from the Operand gate 116 is representative of a miss, the output is a fetch request sent to the fetch controller 48. The fetch controller 48 then communicates via the bus or data line DL to retrieve the necessary texture map data from the main (external) memory 40.

However, if the Valid bit on any one of the lines L30, L31, L32, L33 is "0," the comparison associated with that Valid bit is not used.

With specific reference to FIG. 6B, when there is a cache hit by any of the cache lines or blocks, the requested texture data is read out of the corresponding way dataram 120₀, 120₁, 120₂, or 120₃ on one of lines L70, L71, L72 or L73, respectively. The output texture data on lines L70, L71, L72 or L73 is sent to a multiplexer 122.
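The hit/miss check and the round-robin victim choice made by the way selector 106 can be modeled in a few lines. This is a behavioral sketch under stated assumptions, not the hardware itself; the array names and the set count of 16 are illustrative.

```python
# Behavioral sketch of the 4-way lookup of FIG. 6A: TAG_IN is compared with
# each way's stored tag, gated by the valid bit, and on a miss the victim way
# is the current value of a wrapping two-bit counter (way select bits 108
# plus the accumulator 110 that adds one and loops back).
N_WAYS, N_SETS = 4, 16

tagram = [[0] * N_SETS for _ in range(N_WAYS)]     # one tagram per way
valid = [[False] * N_SETS for _ in range(N_WAYS)]  # one valid_bit per line
way_select = 0                                      # two-bit way select register

def lookup(tag_in: int, index: int) -> int | None:
    """Return the hitting way, or None on a cache miss."""
    for way in range(N_WAYS):
        if valid[way][index] and tagram[way][index] == tag_in:
            return way  # comparator match with valid bit set: cache hit
    return None  # no match: a fetch request goes to the fetch controller

def allocate(tag_in: int, index: int) -> int:
    """On a miss, place the new tag in the way picked by the way selector."""
    global way_select
    victim = way_select
    way_select = (way_select + 1) % N_WAYS  # accumulator adds 1, loops back
    tagram[victim][index] = tag_in
    valid[victim][index] = True
    return victim
```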
The output from the multiplexer 122 is sent on line L80 to the texture mapping engine 66A.

[0051] The output on line L10 from the way selector 106 is used to control the multiplexer 122. Each way dataram 120₀, 120₁, 120₂, or 120₃ is populated with corresponding texture map data from the main (external) memory 40 on line L1.

[0052] The Active bit on line L6 is specifically used in the 3D graphics pipeline 60. Sometimes, a pixel does not require a texture lookup. In this specific case the active bit on line L6 is set to "0." Therefore, the cache 44 does not operate on this pixel.

[0053] When the texture mapping engine 66A is in a multi-texture mode, the pixels, denoted as PIXELB1, PIXELB2, ..., PIXELBX, in the batch B fetch texels from multiple different texture maps TM.

[0054] Referring now to FIG. 5B, a block diagram of the batch in the multi-texture mode is shown. In the exemplary embodiment, the multi-texture mode relates to two texture maps. Nevertheless, two or more texture maps may be used. For a two-texture batch, each of the pixels PIXELB1, PIXELB2, ..., PIXELBX fetches texels from two (2) different texture maps (where X is equal to the number of pixels in the batch). Every pixel PIXELB1, PIXELB2, ..., PIXELBX generates a first texture address field 0, a second texture address field 1 and a field for other pixel information. The first texture address field 0 has a tag and index, denoted as Tex0 TAG_IN and Tex0 INDEX[3:0], for sub-cache C0. The second texture address field 1 has a tag and index, denoted as Tex1 TAG_IN and Tex1 INDEX[3:0], for sub-cache C1. The index Tex0 INDEX[3:0] is used to access the Tex0 way tagrams 202₀₀, 202₀₁ (FIG. 7A) of sub-cache C0. The index Tex1 INDEX[3:0] is used to access the Tex1 way tagrams 202₁₀, 202₁₁ (FIG. 7A) of sub-cache C1.

In FIGS. 7A-7B, a schematic diagram of the dynamic configurable texture cache 44 in the multi-texture mode 200 is shown. The 4-way (n=4) set-associative cache of FIGS. 6A-6B has been split or divided to create two 2-way set-associative sub-caches C0 and C1. Thus, there are two entries to be selected by one index Tex0 INDEX[3:0] in sub-cache C0. Likewise, there are two entries to be selected by one index Tex1 INDEX[3:0] in sub-cache C1. The sub-cache C0 includes two ways, "way0" and "way1". The sub-cache C0 has at least two cache blocks, way0 and way1. The way0 block includes a Tex0 way0 tagram 202₀₀ and the way1 block includes a Tex0 way1 tagram 202₀₁. The way0 and way1 blocks further include a way0 valid_bit indicator 204₀₀ and a way1 valid_bit indicator 204₀₁, respectively (where the first digit of the subscript represents the texture map and the second digit represents the way). The sub-cache C1 includes two ways ("way0" and "way1"). The sub-cache C1 has two blocks, a way0 block and a way1 block. The way0 block includes a Tex1 way0 tagram 202₁₀ and the way1 block has a Tex1 way1 tagram 202₁₁. The way0 block of sub-cache C1 further includes a way0 valid_bit indicator 204₁₀ and the way1 block has a way1 valid_bit indicator 204₁₁.

The valid_bit indicators indicate that a given entry in the sub-cache C0 or C1 contains valid data. The sub-cache C0 valid_bit indicators 204₀₀, 204₀₁ produce outputs on lines L30₀, L31₀, respectively. The sub-cache C1 valid_bit indicators 204₁₀, 204₁₁ produce outputs on lines L30₁, L31₁, respectively.

[0057] Each of the tagrams 202₀₀, 202₀₁ of sub-cache C0 receives three (3) inputs. The first input is the TEX0 TAG_IN on line L2₀, shown in bold, for a respective pixel in the batch B.
The second input, on line L4₀, is the index TEX0 INDEX[3:0], shown as a line having a dash followed by two dots. The index is used to access a tagram 202₀₀, 202₀₁. The third input to each way tagram 202₀₀, 202₀₁ is from the way selector 206₀ on line L10₀.

The outputs of the valid_bit indicators 204₀₀, 204₀₁ produce outputs on lines L30₀, L31₀, respectively, which are sent to comparators 214₀₀, 214₀₁, respectively. Additionally, the outputs on lines L20₀, L21₀ from the tagrams 202₀₀, 202₀₁ of sub-cache C0 are sent to comparators 214₀₀, 214₀₁, respectively. The comparators 214₀₀, 214₀₁ also receive the TEX0 TAG_IN.

However, if the Valid bit on any one of the lines L30₀, L31₀, L30₁, or L31₁ is "0," the comparison associated with that Valid bit is not used. Furthermore, the Active bit on line L6 is specifically used in the 3D graphics pipeline 60 and functions in a similar manner as described above.

The comparisons by the comparators 214₀₀, 214₀₁ are performed between the two possible tag contents on lines L20₀, L21₀, respectively, out of the 2-way tagrams 202₀₀, 202₀₁ of sub-cache C0 and the incoming pixel's tag TEX0 TAG_IN. If one of the two comparisons from the comparators 214₀₀, 214₀₁ results in a match, such a match implies a sub-cache hit. Thus, the output on line L50₀ from an Operand gate 216₀ represents a sub-cache hit. By way of example, the Operand gate 216₀ is represented as an AND gate. Otherwise, the output on line L50₀ from the Operand gate 216₀ represents a sub-cache miss. The comparators 214₀₀, 214₀₁ output a result of their respective comparison on lines L40₀, L41₀, which are fed to inputs of the Operand gate 216₀. The Operand gate 216₀ also receives an input on line L6 representative of an active bit.

Each tagram 202₁₀, 202₁₁ of sub-cache C1 receives three (3) inputs. The first input is the TEX1 TAG_IN on line L2₁, shown in bold, for a respective pixel in the batch B. The second input, on line L4₁, is the index TEX1 INDEX[3:0], shown as a line having a dash followed by two dots. The index is used to access a tagram 202₁₀, 202₁₁. The third input of each tagram 202₁₀, 202₁₁ is from the way selector 206₁ on line L10₁.

[0062] The outputs of the way valid_bit indicators 204₁₀, 204₁₁ produce outputs on lines L30₁, L31₁, respectively, which are sent to comparators 214₁₀, 214₁₁, respectively. Additionally, the outputs on lines L20₁, L21₁ from the tagrams 202₁₀, 202₁₁ of sub-cache C1 are sent to comparators 214₁₀, 214₁₁, respectively. The comparators 214₁₀, 214₁₁ also receive the TEX1 TAG_IN.

The comparisons by the comparators 214₁₀, 214₁₁ are performed between the two possible tag contents on lines L20₁, L21₁, respectively, out of the 2-way tagrams 202₁₀, 202₁₁ of sub-cache C1 and the incoming pixel's tag TEX1 TAG_IN. If one of the two comparisons from the comparators 214₁₀, 214₁₁ results in a match, such a match implies a sub-cache hit. Thus, the output on line L50₁ from an AND gate 216₁ represents a sub-cache hit. Otherwise, the output on line L50₁ from the Operand gate 216₁ represents a sub-cache miss. The comparators 214₁₀, 214₁₁ output a result of their respective comparison on lines L40₁, L41₁, which are fed to inputs of the Operand gate 216₁. The Operand gate 216₁ also receives an input on line L6 representative of an active bit.
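Before the final multiplexing stage is described, the effect of the split may be summarized in code. The sketch below models only the partitioning of the ways between the two sub-caches; the assignment of way0/way1 to C0 and way2/way3 to C1 is an assumption for illustration, as are all names.

```python
# Sketch of multi-texture mode: the n-way cache is repartitioned into
# M sub-caches of n // M ways each, one per texture map, so texels of one
# texture can never evict texels of the other.
N_WAYS, M_TEXTURES, N_SETS = 4, 2, 16
WAYS_PER_SUBCACHE = N_WAYS // M_TEXTURES  # two 2-way sub-caches, C0 and C1

tagram = [[0] * N_SETS for _ in range(N_WAYS)]
valid = [[False] * N_SETS for _ in range(N_WAYS)]

def subcache_ways(texture: int) -> range:
    """Ways owned by sub-cache C<texture> (assumed: C0 -> ways 0-1, C1 -> ways 2-3)."""
    base = texture * WAYS_PER_SUBCACHE
    return range(base, base + WAYS_PER_SUBCACHE)

def lookup_multi(texture: int, tag_in: int, index: int) -> int | None:
    """Hit/miss check restricted to the ways of this texture's sub-cache."""
    for way in subcache_ways(texture):
        if valid[way][index] and tagram[way][index] == tag_in:
            return way  # sub-cache hit
    return None  # sub-cache miss: raises a fetch request for this texture
```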
[0064] In a final stage, the outputs of the sub-cache C0 and the sub-cache C1 are inputs to a multiplexer 218. The multiplexer 218 multiplexes the outputs on lines L50₀ and L50₁ to form a new multiplexed output on line L60. The output on line L60 is representative of a fetch request sent to the fetch controller 48. The multiplexed output sends one fetch request at a time.

With specific reference to FIG. 7B, when there is a cache hit by any of the cache lines or blocks of the sub-cache C0 or C1, the requested texture data is read out of the corresponding way dataram 220₀₀, 220₀₁ or 220₁₀, 220₁₁ on one of lines L70₀, L71₀ or L70₁, L71₁, respectively. The output texture data on lines L70₀, L71₀ in sub-cache C0 is sent to a multiplexer 222₀. The output texture data on lines L70₁, L71₁ in sub-cache C1 is sent to a multiplexer 222₁. The outputs from the multiplexers 222₀ and 222₁, on lines L80₀ and L80₁, respectively, are sent as the multiple texture maps to the texture mapping engine 66A.

The output on line L10₀ from the way selector 206₀ is used to control the multiplexer 222₀. Likewise, the output on line L10₁ from the way selector 206₁ is used to control the multiplexer 222₁. Each way dataram 220₀₀, 220₀₁, 220₁₀, and 220₁₁ is populated with corresponding texture map data from the main (external) memory 40 on line L1. The inverters denoted as 210₀, 210₁ invert the bit sent on the feedback lines L8₀ and L8₁. The feedback lines L8₀ and L8₁ are coupled to loop the inverted bit back to the Tex0 way select bit 208₀ and the Tex1 way select bit 208₁, respectively, of the way selectors 206₀, 206₁.

[0067] In the above embodiments, the dynamic configurable texture cache 44 is easily configurable to optimize the texture mapping engine 66A in one of a single-texture mode 100 and a multi-texture mode 200 using one cache. Furthermore, conflict misses generally do not occur. Moreover, the two (M=2) or more texture maps TM will not thrash each other and/or generate redundant memory traffic.

[0068] It is prohibitive to describe each and every possible configuration of the dynamic configurable texture cache 44 (e.g., a reconfigurable n-way set-associative texture cache). However, the cache 44, when in the multi-texture mode 200, should be configured to have an n/M-way set-associative texture sub-cache dedicated to each texture map. The n and M are integers greater than one (1), and n is divisible by M. The value of M may be the number of texture maps. In the example, M is two (2), providing two (2) sub-caches, each sub-cache being dedicated to a respective one of the two texture maps.

[0069] In one or more exemplary embodiments, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media include both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium.
For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.

[0070] The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

WHAT IS CLAIMED IS:
An operating system of a computational device manages access of a plurality of applications to a solid state drive. Separate bands are maintained in the solid state drive for storing writes of at least two different applications of the plurality of applications. Additionally, in other embodiments, a virtual machine manager of a computational device manages access of a plurality of virtual machines to a solid state drive. Separate bands are maintained in the solid state drive for storing writes of at least two different virtual machines of the plurality of virtual machines. |
1. A method comprising: managing, by an operating system of a computing device, access of a plurality of applications to a solid state drive; and maintaining separate bands in a non-volatile memory of the solid state drive for storing data written to the solid state drive by at least two different applications of the plurality of applications.

2. The method of claim 1, the method further comprising: sending, from the operating system to the solid state drive, a logical block address range for each of the plurality of applications to be mapped into a band in the solid state drive.

3. The method of claim 2, the method further comprising: sending, from the operating system to the solid state drive, a priority of each of the plurality of applications.

4. The method of claim 3, wherein a write comprises data to be written to the solid state drive, the method further comprising: in response to determining that there is not a sufficient number of bands in the solid state drive to provide a separate band for storing the writes of each of the plurality of applications, storing the writes of higher priority applications in different bands and mixing the writes of lower priority applications into a same band.

5. The method of claim 3, wherein the solid state drive stores, in a non-volatile manner across multiple starts, data related to the plurality of applications, the logical block address ranges, and the priority of each of the plurality of applications.

6. A method comprising: managing, by a virtual machine manager of a computing device, access of a plurality of virtual machines to a solid state drive; and maintaining separate bands in a non-volatile memory of the solid state drive for storing data written to the solid state drive by at least two different virtual machines of the plurality of virtual machines.

7. The method of claim 6, the method further comprising: sending, from the virtual machine manager to the solid state drive, a logical block address range for each of the plurality of virtual machines to be mapped into a band in the solid state drive.

8. The method of claim 7, the method further comprising: sending, from the virtual machine manager to the solid state drive, a priority of each of the plurality of virtual machines.

9. The method of claim 8, wherein a write comprises data to be written to the solid state drive, the method further comprising: in response to determining that there is not a sufficient number of bands in the solid state drive to provide a separate band for storing the writes of each of the plurality of virtual machines, storing the writes of higher priority virtual machines in different bands and mixing the writes of lower priority virtual machines into a same band.

10. The method of claim 8, wherein the solid state drive stores, in a non-volatile manner across multiple starts, data related to the plurality of virtual machines, the logical block address ranges, and the priority of each of the plurality of virtual machines.

11. A system comprising: a memory; and a processor, wherein the processor is configurable to perform operations comprising: managing, by an operating system, access of a plurality of applications to a solid state drive; and maintaining separate bands in a non-volatile memory of the solid state drive for storing data written to the solid state drive by at least two different applications of the plurality of applications.

12. The system of claim 11, the operations further comprising: sending, from the operating system to the solid state drive, a logical block address range for each of the plurality of applications to be mapped into a band in the solid state drive.

13. The system of claim 12, the operations further comprising: sending, from the operating system to the solid state drive, a priority of each of the plurality of applications.

14. The system of claim 13, wherein a write comprises data to be written to the solid state drive, the operations further comprising: in response to determining that there is not a sufficient number of bands in the solid state drive to provide a separate band for storing the writes of each of the plurality of applications, storing the writes of higher priority applications in different bands and mixing the writes of lower priority applications into a same band.

15. The system of claim 13, wherein the solid state drive stores, in a non-volatile manner across multiple starts, data related to the plurality of applications, the logical block address ranges, and the priority of each of the plurality of applications.

16. A system comprising: a memory; and a processor, wherein the processor is configurable to perform operations comprising: managing, by a virtual machine manager, access of a plurality of virtual machines to a solid state drive; and maintaining separate bands in a non-volatile memory of the solid state drive for storing data written to the solid state drive by at least two different virtual machines of the plurality of virtual machines.

17. The system of claim 16, the operations further comprising: sending, from the virtual machine manager to the solid state drive, a logical block address range for each of the plurality of virtual machines to be mapped into a band in the solid state drive.

18. The system of claim 17, the operations further comprising: sending, from the virtual machine manager to the solid state drive, a priority of each of the plurality of virtual machines.

19. The system of claim 18, wherein a write comprises data to be written to the solid state drive, the operations further comprising: in response to determining that there is not a sufficient number of bands in the solid state drive to provide a separate band for storing the writes of each of the plurality of virtual machines, storing the writes of higher priority virtual machines in different bands and mixing the writes of lower priority virtual machines into a same band.

20. The system of claim 18, wherein the solid state drive stores, in a non-volatile manner across multiple starts, data related to the plurality of virtual machines, the logical block address ranges, and the priority of each of the plurality of virtual machines.

21. A solid state drive, wherein the solid state drive is configurable to perform operations comprising: receiving, from a virtual machine manager of a computing device, input and output operations of a plurality of virtual machines; and maintaining separate bands in a non-volatile memory of the solid state drive for storing data written to the solid state drive by at least two different virtual machines of the plurality of virtual machines.

22. The solid state drive of claim 21, the operations further comprising: receiving, from the virtual machine manager, a logical block address range for each of the plurality of virtual machines to be mapped into a band in the solid state drive.

23. The solid state drive of claim 22, the operations further comprising: receiving, from the virtual machine manager, a priority of each of the plurality of virtual machines.

24. The solid state drive of claim 23, wherein a write comprises data to be written to the solid state drive, the operations further comprising: in response to determining that there is not a sufficient number of bands to provide a separate band for storing the writes of each of the plurality of virtual machines, storing the writes of higher priority virtual machines in different bands and mixing the writes of lower priority virtual machines into a same band.

25. The solid state drive of claim 21, wherein the solid state drive stores, in a non-volatile manner across multiple starts, data related to the plurality of virtual machines, the logical block address ranges, and the priority of each of the plurality of virtual machines.
Reducing the mixing of input and output operations in solid state drives

BACKGROUND

In computing, a virtual machine (VM) is an emulation of a particular computer system. Virtual machines operate based on real or hypothetical computer architectures and functions, and mechanisms for implementing virtual machines may include specialized hardware, software, or firmware.

A hypervisor or virtual machine manager (VMM) is a piece of computer software, firmware, or hardware that creates and runs multiple virtual machines. The computer on which the hypervisor runs one or more virtual machines may be referred to as a host. Each virtual machine may be referred to as a guest. The hypervisor presents the guest operating systems with a virtual operating platform and manages the execution of the guest operating systems. Multiple instances of various operating systems may share virtualized hardware resources.

The virtualized hardware resources shared by multiple instances of various operating systems may include one or more solid state drives. Solid state drives (SSDs) are data storage devices that use integrated circuits as storage for persistent storage of data. SSDs have no moving mechanical components, and this distinguishes SSDs from traditional mechanical disks that contain rotating platters and movable read/write heads, such as hard disk drives (HDDs) or floppy disks. Compared with mechanical disks, SSDs typically are more resistant to physical shock, run silently, and have lower access time and latency. Many types of SSDs use NAND-based flash memory, which retains data without power and is a non-volatile storage technology.

The SSD operates on entire blocks of memory. Before writing to a memory cell, the flash memory must be erased, which requires a large voltage to be applied to the memory cells and can only occur on an entire block of memory cells at once. For example, if 1 KB of data is written to an SSD with an erase block size of 128 KB, the SSD needs to read 127 KB from the target block, erase the block, and write the old data back together with the new data.
SSD firmware can pre-erase blocks and attempt to write new data into these pre-erased blocks.

BRIEF DESCRIPTION OF THE DRAWINGS

Referring now to the drawings, in which like reference numerals represent corresponding parts throughout:

FIG. 1 shows a block diagram of a computing environment, in accordance with some embodiments, in which a virtual machine manager controls access of multiple virtual machines that write data to an SSD;

FIG. 2 shows a block diagram of a computing environment, in accordance with certain embodiments, in which an operating system controls the writing of data to an SSD by multiple applications;

FIG. 3 shows a flowchart illustrating operations for writing data to an SSD in a computing environment in which the virtual machine manager controls access of the plurality of virtual machines that write data to the SSD, according to some embodiments;

FIG. 4 shows a flowchart illustrating operations for writing data to an SSD in a computing environment in which the operating system controls access of the plurality of applications that write data to the SSD, in accordance with some embodiments;

FIG. 5 shows a flowchart illustrating further operations for writing data to an SSD in a computing environment in which the virtual machine manager controls access of the plurality of virtual machines that write data to the SSD, according to some embodiments;

FIG. 6 shows a flowchart illustrating further operations for writing data to an SSD in a computing environment in which the operating system controls access of the plurality of applications that write data to the SSD, in accordance with some embodiments;

FIG. 7 shows a flowchart illustrating operations for writing data to an SSD in a computing environment in which the virtual machine manager controls access of the plurality of virtual machines that write data to the SSD, according to some embodiments, where the SSD may or may not support the priorities and logical block address ranges used to generate the bands; and

FIG. 8 shows a block diagram of a computing device, which may include an SSD or may be coupled to an SSD, according to some embodiments.

DETAILED DESCRIPTION

In the following detailed description, reference is made to the accompanying drawings that form a part hereof and illustrate several embodiments. It is to be understood that other embodiments may be utilized and that structural and operational changes may be made.

The performance of solid state drives for sequential input/output (I/O) can be superior to that for non-sequential (e.g., random or mixed) I/O. In non-sequential I/O, portions of a file may be accessed in any order. In sequential I/O, the first part of a file is accessed before the second part, the second part is accessed before the third part, and so on. For writes, sequential I/O is superior in performance to non-sequential I/O due to lower internal write amplification, and for reads, sequential I/O is superior in performance to non-sequential I/O due to the ability to prefetch data. Write amplification is an undesirable phenomenon associated with SSDs, where the amount of physical information actually written is a multiple of the logical amount intended to be written. Write amplification can be measured by the ratio of data physically written to the flash memory to data written by the host system.
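As a worked example of this ratio, consider the 1 KB write into a 128 KB erase block described earlier; the arithmetic below is only an illustration of the metric.

```python
# Worked write-amplification example: a 1 KB host write that forces a
# read-modify-write of a full 128 KB erase block costs 128 KB of physical
# flash writes, so the write amplification for this operation is 128.
host_write_kb = 1
erase_block_kb = 128

physical_write_kb = erase_block_kb  # the whole block is rewritten
write_amplification = physical_write_kb / host_write_kb
print(write_amplification)  # 128.0
```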
Overwriting data in the SSD requires reading, updating, and writing the data to a new location in a used portion of the SSD and, if the new location was previously used at some point in time, first erasing the new location; due to the way SSDs work, portions of the SSD that are much larger than what is actually needed for the new data may be erased and rewritten. This multiplying effect increases the amount of writing required and results in write amplification. In sequential I/O, write amplification is much lower than in non-sequential I/O. As a result, superior performance is achieved when writing to the SSD through sequential I/O.

However, in a virtualized system that performs I/O on SSDs, sequential I/O requests from various virtual machines (VMs) or applications may become intermingled at the hypervisor level before they are issued to the underlying SSDs, which leads to lower storage and system performance. Intermingling at the hypervisor level causes the data to be stored in the SSDs in a mixed, non-sequential fashion, and as a result the sequential I/O performance of SSDs may not be attainable in virtualized systems.

Certain embodiments provide a mechanism to reduce or eliminate the performance degradation in SSDs caused by the intermingling of I/O from various virtual machines. In particular, the performance of processing write requests is greatly enhanced by reducing the mixing of the writes of different virtual machines.

In some embodiments, write requests of a particular virtual machine are directed to a selected band of a solid state drive, wherein the selected band is independent of other bands in the solid state drive, and wherein each band in the SSD is a series of contiguous, pre-erased erase blocks (i.e., a band may be a range of sequential physical storage locations in non-volatile memory; for example, a band may comprise sequential blocks of NAND memory). This is accomplished by providing the solid state drive with a range of logical block addresses (LBAs) for each virtual machine, which allows the solid state drive to determine which band to use for each incoming write request. This reduces the intermingling of I/O from different virtual machines within a given band, resulting in lower write amplification (especially when several virtual machines issue sequential I/O) and improved write performance.

FIG. 1 illustrates a block diagram of a computing environment 100, in accordance with some embodiments, in which a virtual machine manager 102 controls access of a plurality of virtual machines 104a, 104b, 104c, ..., 104n that write data to the solid state drive 106. In the example of FIG. 1, the virtual machines 104a, 104b, 104c, ..., 104n and the virtual machine manager 102 are implemented in the computing device 108 in hardware, software, or firmware, or any combination thereof. The solid state drive 106 may reside within the computing device 108 (as shown in FIG. 1) or may be external to the computing device 108 and coupled to the computing device 108.

The solid state drive 106 may be comprised of a non-volatile memory 120, such as a NAND memory, a NOR memory, or some other suitable non-volatile memory. In some embodiments, the solid state drive 106 may be capable of storing several terabytes of data or more.
Certain embodiments may be applied to other types of non-volatile memory, such as phase change memory (PCM), three-dimensional cross-point memory, resistive memory, nanowire memory, ferroelectric transistor random access memory (FeTRAM), magnetoresistive random access memory (MRAM) memory technology, spin-transfer torque (STT)-MRAM, byte-addressable random access non-volatile memory, and the like.

In some embodiments, the computing device 108 may be comprised of any suitable computing device, such as a personal computer, mainframe, telephony device, smart phone, memory controller, blade computer, processor with memory, and the like. In some alternative embodiments, the computing device 108 may communicate with the solid state drive 106 over a bus (e.g., Peripheral Component Interconnect Express (PCIe), Serial Advanced Technology Attachment (SATA), Serial Attached Small Computer System Interface (SAS)) or a network, such as the Internet, a storage area network (SAN), a local area network (LAN), or the like. Further details of the SATA specification may be found in the Serial ATA Specification, Revision 3.2, released by SATA-IO, Oregon, in August 2013. In another example, the interface and/or interconnect protocol may comply with and/or be compatible with NVMe (Non-Volatile Memory Express). Further details of NVMe may be found in the publication entitled "NVM Express(TM), Revision 1.2," released by NVM Express(TM) on November 3, 2014, and/or in prior and/or later versions of this specification (NVM Express is a trademark of NVM Express, Inc.).

Each of the virtual machines 104a ... 104n may have a priority associated therewith. For example, as shown in FIG. 1, virtual machines 104a, 104b may have a higher priority and virtual machines 104c, 104n may have a lower priority. In some exemplary embodiments, the priority may be indicated qualitatively as high or low, or quantitatively as a value corresponding to a priority. A higher priority of a virtual machine means that I/O from that virtual machine should be given higher priority than I/O from a virtual machine with a lower priority.

The virtual machine manager 102 (also referred to as a hypervisor) may include a virtual machine priority and logical block address (LBA) range indication module 110 implemented in hardware, software, firmware, or any combination thereof. The virtual machine priority and LBA range indication module 110 may send the priorities and LBA ranges of the virtual machines to a virtual machine priority and LBA range receiving module 112 that executes in the solid state drive 106 and is implemented in hardware, software, firmware, or any combination thereof. An application programming interface (API) may be defined for communication between the virtual machine priority and LBA range indication module 110 and the virtual machine priority and LBA range receiving module 112. An LBA is a value mapped to a specific address on the SSD.

In a virtualized environment, multiple virtual machines 104a ... 104n share access to a single solid state drive 106. Each virtual machine is allocated a range of the LBAs supported by the solid state drive 106, and the virtual machine has access to its allocated LBAs in the solid state drive 106.
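The specification does not define a wire format for this API; the record below is a purely hypothetical sketch of the per-virtual-machine information (LBA range plus optional priority) that the indication module 110 might pass to the receiving module 112. All field names are invented for illustration.

```python
# Hypothetical per-VM record for the priority/LBA-range API; the dataclass,
# its field names, and "higher value = higher priority" are all assumptions.
from dataclasses import dataclass

@dataclass
class VmLbaRange:
    vm_id: int
    first_lba: int   # start of the LBA range allocated to this virtual machine
    last_lba: int    # end of the LBA range (inclusive, assumed)
    priority: int    # optional priority; higher value = higher priority (assumed)

ranges = [
    VmLbaRange(vm_id=1, first_lba=0x000000, last_lba=0x0FFFFF, priority=2),
    VmLbaRange(vm_id=2, first_lba=0x100000, last_lba=0x1FFFFF, priority=1),
]
```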
Access to the SSD 106 may be supervised by the virtual machine manager 102 and the SSD 106 to ensure that the virtual machines only access their assigned LBA ranges.

The virtual machine priority and LBA range indication module 110 interfaces with the virtual machine priority and LBA range receiving module 112, between the virtual machine manager 102 and the solid state drive 106, to notify the solid state drive 106 of the LBA ranges of the different virtual machines and, optionally, of the priorities of the virtual machines. This information is sent to the solid state drive 106 each time the computing device is started or a virtual machine is destroyed, created, or modified.

The solid state drive 106 writes data into internal pages within contiguous, pre-erased erase blocks (EBs) of the non-volatile memory, referred to as bands. Based on the LBA range information for the virtual machines, the solid state drive 106 may use a dedicated band for each virtual machine to ensure that the writes of one virtual machine are not mixed with the writes of other virtual machines.

However, if the system supports more virtual machines than the number of available bands, the solid state drive 106 may mix the writes of some of the virtual machines into each open band, or may use a mechanism in which higher-priority virtual machines get their own open bands while the writes of lower-priority virtual machines are mixed. For example, in FIG. 1, band 114 stores the data (referred to as "writes") received in write requests from virtual machine #1 104a, band 116 stores the writes from virtual machine #2 104b, and band 118 stores the writes from virtual machine #3 104c and virtual machine #4 104n. Thus, the writes of the low priority virtual machines 104c, 104n are mixed in band 118, while the writes of the higher priority virtual machines 104a, 104b are stored in separate bands. The performance of the virtual machines 104a, 104b will be better than the performance of the virtual machines 104c, 104n, at least with respect to write operations.

Thus, FIG. 1 illustrates certain embodiments in which each virtual machine is assigned an LBA range and, optionally, a priority, which are passed by the virtual machine manager 102 to the solid state drive 106. If a sufficient number of bands is available in the solid state drive 106, the writes of different virtual machines are stored in different bands. However, if a sufficient number of bands cannot be obtained, the writes of lower priority virtual machines are mixed into the same band.

In some embodiments, the solid state drive 106 stores, in a non-volatile manner across multiple starts, data associated with the plurality of virtual machines 104a ... 104n, the ranges of logical block addresses, and the priority of each of the plurality of virtual machines.
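The band-assignment policy just described (a dedicated band per virtual machine while bands last, in priority order, with the remaining lower-priority virtual machines sharing a band) can be sketched as follows. This is one plausible reading of the policy, not the drive's firmware; the function name and the tie-breaking order are assumptions.

```python
# Sketch of the band-assignment policy of FIG. 1: dedicated bands are handed
# out in descending priority order; once only one band remains and virtual
# machines still outnumber bands, the rest share that last band.
def assign_bands(vm_priorities: dict[int, int], n_bands: int) -> dict[int, int]:
    """Map vm_id -> band index; lower-priority VMs may share the last band."""
    by_priority = sorted(vm_priorities, key=vm_priorities.get, reverse=True)
    bands: dict[int, int] = {}
    for i, vm_id in enumerate(by_priority):
        if i < n_bands - 1 or len(by_priority) <= n_bands:
            bands[vm_id] = i            # dedicated band
        else:
            bands[vm_id] = n_bands - 1  # shared band for the overflow
    return bands

# Four VMs and three bands: VM 1 and VM 2 (high priority) get bands 0 and 1,
# while VM 3 and VM 4 share band 2, matching bands 114, 116, and 118 of FIG. 1.
print(assign_bands({1: 2, 2: 2, 3: 1, 4: 1}, n_bands=3))
# -> {1: 0, 2: 1, 3: 2, 4: 2}
```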
FIG. 2 illustrates a block diagram of a computing environment 200, in accordance with certain embodiments, in which an operating system 202 controls the writing of data to a solid state drive 206 by a plurality of applications 204a, 204b, 204c, ..., 204n. The applications 204a ... 204n and the operating system 202 are implemented in the computing device 208 in hardware, software, or firmware, or any combination thereof. The solid state drive 206 may reside within the computing device 208 (as shown in FIG. 2) or may be external to the computing device 208.

The solid state drive 206 may be comprised of non-volatile memory, such as NAND memory, NOR memory, or some other suitable non-volatile memory. In some embodiments, the solid state drive 206 may be capable of storing several terabytes of data or more. Certain embodiments may be applied to other types of non-volatile memory, such as phase change memory (PCM), three-dimensional cross-point memory, resistive memory, nanowire memory, ferroelectric transistor random access memory (FeTRAM), magnetoresistive random access memory (MRAM) memory technology, spin-transfer torque (STT)-MRAM, and the like.

Each of the applications 204a ... 204n may have a priority associated therewith. For example, as shown in FIG. 2, applications 204a, 204b may have a higher priority and applications 204c, 204n may have a lower priority. In some exemplary embodiments, the priority may be indicated qualitatively as high or low, or quantitatively as a value corresponding to a priority. A higher priority of an application means that it should be given prioritized execution over applications with a lower priority.

The operating system 202 may include an application priority and logical block address (LBA) range indication module 210 implemented in any combination of hardware, software, and firmware. The application priority and LBA range indication module 210 may send the priorities and LBA ranges of the applications to an application priority and LBA range receiving module 212 executing in the solid state drive 206. An application programming interface may be defined for communication between the application priority and LBA range indication module 210 and the application priority and LBA range receiving module 212.

In a virtualized environment, multiple applications 204a ... 204n share access to a single solid state drive 206. Each application is allocated a range of the LBAs supported by the solid state drive 206 and can access its allocated LBAs in the solid state drive 206. Access to the solid state drive 206 may be overseen by the operating system 202 to ensure that each application accesses only its assigned LBA range.

The application priority and LBA range indication module 210 and the application priority and LBA range receiving module 212 interface between the operating system 202 and the solid state drive 206 to inform the solid state drive 206 of the LBA ranges of the different applications and, optionally, of the application priorities. This information is sent to the solid state drive 206 each time the computing device is started or an application is destroyed, created, or modified.

The solid state drive 206 writes data in bands to internal pages. With the information on the LBA ranges of the applications, the solid state drive 206 may use a dedicated band for each application to ensure that the writes of one application are not mixed with the writes of other applications.

However, if the system supports more applications than the number of available bands, the solid state drive 206 may mix the writes of some applications into each open band, or may use a mechanism in which higher-priority applications get their own open bands while the writes of lower-priority applications are mixed. For example, in FIG. 2, band 214 stores the writes from application #1 204a, band 216 stores the writes from application #2 204b, and band 218 stores the writes from application #3 204c and application #4 204n. Thus, the writes of the lower priority applications 204c, 204n are mixed in band 218, while the writes of the higher priority applications 204a, 204b are stored in separate bands.

Thus, FIG. 2 illustrates certain embodiments in which each application is assigned an LBA range and, optionally, a priority, which are passed by the operating system 202 to the solid state drive 206.
If a sufficient number of bands is available in the solid state drive 206, the writes of different applications are stored in different bands. However, if a sufficient number of bands cannot be obtained, the writes of lower priority applications are mixed into the same band 218. Applications whose writes are not mixed with the writes of other applications have superior write performance to applications whose writes are mixed.

In some embodiments, the solid state drive 206 stores, in a non-volatile manner across multiple starts, data related to the multiple applications, the ranges of logical block addresses, and the priority of each of the multiple applications.

FIG. 3 illustrates operations for writing data to the SSD 106 in the computing environment 100, in accordance with certain embodiments, in which the virtual machine manager 102 controls access of the plurality of virtual machines 104a ... 104n that write data to the SSD 106.

Control begins at block 302, where the virtual machine manager 102 controls access of the multiple virtual machines 104a ... 104n to one or more solid state drives 106. The virtual machine priority and LBA range indication module 110 of the virtual machine manager sends (at block 304) an indication of the LBA range of each virtual machine and, optionally, the priority of each virtual machine to the solid state drive 106.

Control continues to block 306, where the virtual machine priority and LBA range receiving module 112 of the solid state drive 106 receives the indication of the LBA range of each virtual machine and, optionally, the priority of each virtual machine, and determines (at block 308) whether the number of bands in the solid state drive 106 is sufficient to assign a different band to each virtual machine.

If the number of bands in the solid state drive 106 is sufficient to allocate a different band to each virtual machine, a separate band is assigned to the writes of each virtual machine (at block 310). If the number of bands in the solid state drive 106 is not sufficient to assign a different band to each virtual machine, separate bands are assigned to the writes of the higher-priority virtual machines and the writes of lower-priority virtual machines are mixed into the same band (at block 312). If all virtual machines have a high priority, the writes of all virtual machines may be mixed into the bands.

Thus, FIG. 3 illustrates certain embodiments in which an attempt is made to write different virtual machines into different bands. If this is not possible, the writes of higher priority virtual machines are placed into their own bands, while the writes of lower priority virtual machines are mixed into the same band. Thus, sequential I/O is preserved to the extent possible, even in a virtualized environment.

FIG. 4 illustrates operations for writing data to the SSD 206 in the computing environment 200, in accordance with certain embodiments, in which the operating system 202 controls access of the plurality of applications 204a ... 204n that write data to the SSD 206.

Control begins at block 402, where the operating system 202 controls access of one or more of the multiple applications 204a ... 204n to one or more solid state drives 206.
The application priority and LBA range indication module 210 of the operating system sends (at block 404) an indication of the LBA range of each application and, optionally, the priority of each application to the solid state drive 206.

Control continues to block 406, where the application priority and LBA range receiving module 212 of the solid state drive 206 receives the indication of the LBA range of each application and, optionally, the priority of each application, and determines (at block 408) whether the number of bands in the solid state drive 206 is sufficient to assign a different band to each application.

If the number of bands in the solid state drive 206 is sufficient to allocate a different band to each application, a separate band is assigned to the writes of each application (at block 410). If the number of bands in the solid state drive 206 is not sufficient to assign a different band to each application, separate bands are assigned to the writes of higher priority applications and the writes of lower priority applications are mixed into the same band (at block 412).

Thus, FIG. 4 illustrates certain embodiments in which an attempt is made to write different applications into different bands. If this is not possible, the writes of higher priority applications are placed into their own bands, while the writes of lower priority applications are mixed into the same band.

FIG. 5 illustrates further operations for writing data to the SSD 106 in the computing environment 100, in accordance with certain embodiments, in which the virtual machine manager 102 controls access of the plurality of virtual machines 104a ... 104n that write data to the SSD 106.

Control begins at block 502, where the virtual machine manager 102 of the computing device 108 manages access of the plurality of virtual machines 104a ... 104n to the solid state drive 106. The virtual machine manager 102 sends (at block 504) a logical block address range for each of the plurality of virtual machines 104a ... 104n to the solid state drive 106, for mapping into bands in the solid state drive 106. The virtual machine manager 102 then optionally sends (at block 506) the priority of each of the plurality of virtual machines to the solid state drive 106.

From block 506, control may continue to block 508 or block 510. At block 508, separate bands are maintained in the solid state drive to store the writes of at least two different ones of the plurality of virtual machines. At block 510, in response to determining that there is not a sufficient number of bands in the solid state drive 106 to provide a separate band for storing the writes of each of the plurality of virtual machines 104a ... 104n, the writes of higher priority virtual machines are stored in different bands and the writes of lower priority virtual machines are mixed into the same band.

FIG. 6 illustrates further operations for writing data to the SSD 206 in the computing environment 200, in accordance with certain embodiments, in which the operating system 202 controls access of the plurality of applications 204a ... 204n that write data to the SSD 206.

Control begins at block 602, where the operating system 202 of the computing device 208 manages access of the plurality of applications 204a ... 204n to the solid state drive 206. The operating system 202 sends (at block 604) a logical block address range for each of the plurality of applications 204a ... 204n to the solid state drive 206, to be mapped into bands in the solid state drive 206.
The operating system 202 then optionally sends (at block 606) the priority of each of the plurality of applications to the solid state drive 206.

From block 606, control may continue to block 608 or block 610. At block 608, separate bands are maintained in the solid state drive 206 to store the writes of at least two different ones of the plurality of applications 204a ... 204n. At block 610, in response to determining that there is not a sufficient number of bands in the solid state drive 206 to provide a separate band for storing the writes of each of the plurality of applications 204a ... 204n, the writes of higher priority applications are stored in different bands and the writes of lower priority applications are mixed into the same band.

FIG. 7 illustrates further operations for writing data to the SSD 106 in the computing environment 100, in accordance with certain embodiments, in which the virtual machine manager 102 controls access of the multiple virtual machines 104a ... 104n that write data to the SSD 106, where the SSD 106 may or may not support the priorities and logical block address ranges used to generate the bands.

Control begins at block 702, where the virtual machine manager 102 queries the SSD 106 as to whether the SSD 106 supports using virtual machine priorities and LBA ranges. If the SSD 106 supports (branch 704) the priorities and LBA ranges for the generation of bands, the writes of different virtual machines are stored in different bands of the SSD to the extent possible. If the SSD 106 does not support (branch 708) the priorities and LBA ranges for the generation of bands, the writes of the virtual machines are stored intermixed in the solid state drive 106. In an alternative embodiment, an operating system and applications may be used instead of the virtual machine manager and virtual machines, as shown in FIG. 7.

Thus, FIGS. 1-7 illustrate certain embodiments in which, by directing the writes of different virtual machines to different bands to the extent possible, the performance advantage of sequential I/O is maintained even in a virtualized environment. In addition, when an operating system controls multiple applications, the performance advantage of sequential I/O is retained by directing the writes of different applications to different bands to the extent possible. In the solid state drive, the mixing of data from different virtual machines or applications is minimized to improve performance.

The described operations may be implemented as a method, apparatus, or computer program product using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof. The described operations may be implemented as code maintained in a "computer-readable storage medium," where a processor may read and execute the code from the computer-readable storage medium. Computer-readable storage media include electronic circuitry, storage materials, inorganic materials, organic materials, biological materials, packaging, shells, coatings, and hardware. Computer-readable storage media may include, but are not limited to, magnetic storage media (e.g., hard disk drives, floppy disks, magnetic tape, etc.), optical storage (CD-ROMs, DVDs, optical disks, etc.), volatile and non-volatile memory devices (e.g., EEPROM, ROM, PROM, RAM, DRAM, SRAM, flash memory, firmware, programmable logic, etc.), solid state devices (SSDs), and the like.
The code implementing the described operations may further be implemented in hardware logic implemented in a hardware device (e.g., an integrated circuit chip, a programmable gate array (PGA), an application specific integrated circuit (ASIC), etc.). Further, the code implementing the described operations may be implemented as a "transmission signal," where a transmission signal may propagate through space or through a transmission medium such as an optical fiber, copper wire, or the like. The transmission signal in which the code or logic is encoded may further comprise a wireless signal, satellite transmission, radio waves, infrared signals, Bluetooth, or the like. Program code embedded on a computer-readable storage medium may be transmitted as a transmission signal from a transmitting station or computer to a receiving station or computer. A computer-readable storage medium does not consist solely of transmission signals. Those skilled in the art will recognize that many modifications may be made to this configuration, and that the article of manufacture may include suitable information bearing media known in the art.

Computer program code for carrying out operations for aspects of certain embodiments may be written in any combination of one or more programming languages. The flowchart blocks and block diagrams may be implemented by computer program instructions.

FIG. 8 shows a block diagram of a system 800 that includes one or more computing devices 108, 208 (the computing devices 108, 208 including at least one processor) and one or more solid state drives 106, 206, according to some embodiments. For example, in some embodiments, the system 800 may have a computing device 108 and a solid state drive 106 contained within the system 800, and in some embodiments the system 800 may have a computing device 208 and a solid state drive 206 contained within the system 800. In some embodiments, the system 800 may be a laptop computer that includes the solid state drives 106, 206.

The system 800 may include circuitry 802, which in some embodiments includes at least one processor 804. The system 800 may also include a memory 806 (e.g., a volatile memory device) and storage 808. The storage 808 may include the solid state drives 106, 206 or other drives or devices including a non-volatile memory device (e.g., EEPROM, ROM, PROM, flash memory, firmware, programmable logic, etc.). The storage 808 may also include a disk drive, an optical disk drive, a tape drive, and the like. The storage 808 may include an internal storage device, an attached storage device, and/or a network accessible storage device. The system 800 may include program logic 810 including code 812 that may be loaded into the memory 806 and executed by the processor 804 or circuitry 802. In some embodiments, the program logic 810 including the code 812 may be stored in the storage 808. In some other embodiments, the program logic 810 may be implemented in the circuitry 802. Thus, while FIG. 8 shows the program logic 810 separately from the other elements, the program logic 810 may be implemented in the memory 806 and/or the circuitry 802. The system 800 may also include a display 814 (e.g., a liquid crystal display (LCD), a light emitting diode (LED) display, a cathode ray tube (CRT) display, a touchscreen display, or any other suitable display). The system 800 may also include one or more input devices 816, such as a keyboard, a mouse, a joystick, a touchpad, or any other suitable input device. Other components or devices than those shown in FIG.
8 may also be included in the system 800.

Certain embodiments may be directed to a method for deploying computing instructions by a person, or by automated processing that integrates computer-readable code into a computing system, wherein the code, in combination with the computing system, is enabled to perform the operations of the described embodiments.

Unless specified otherwise, the terms "an embodiment," "embodiment," "embodiments," "the embodiment," "the embodiments," "one or more embodiments," "some embodiments," and "one embodiment" mean "one or more (but not all) embodiments."

Unless expressly specified otherwise, the terms "including," "comprising," "having," and variations thereof mean "including but not limited to."

Unless expressly specified otherwise, the terms "a," "an," and "the" mean "one or more."

Unless expressly specified otherwise, devices that are in communication with each other need not be in continuous communication with each other. In addition, devices that are in communication with each other may communicate directly or indirectly through one or more intermediaries.

A description of an embodiment with several components in communication with each other does not imply that all such components are required. On the contrary, a variety of optional components is described to illustrate the wide variety of possible embodiments.

Further, although process steps, method steps, algorithms, or the like may be described in a sequential order, such processes, methods, and algorithms may be configured to work in alternate orders. In other words, any sequence or order of steps that may be described does not necessarily indicate a requirement that the steps be performed in that order. The process steps described herein may be performed in any order practical. Further, some steps may be performed simultaneously.

When a single device or article is described herein, it will be readily apparent that more than one device/article (whether or not they cooperate) may be used in place of a single device/article. Similarly, where more than one device or article is described herein (whether or not they cooperate), it will be readily apparent that a single device/article may be used in place of the more than one device or article, or that a different number of devices/articles may be used. The functionality and/or features of a device may be alternatively embodied by one or more other devices which are not explicitly described as having such functionality/features. Thus, other embodiments need not include the device itself.

At least certain operations that may have been illustrated in the figures show certain events occurring in a certain order. In alternative embodiments, certain operations may be performed in a different order, modified, or removed. Moreover, steps may be added to the above described logic and still conform to the described embodiments. Further, the operations described herein may occur sequentially, or certain operations may be processed in parallel. Yet further, operations may be performed by a single processing unit or by distributed processing units.

The foregoing description of various embodiments has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the embodiments to the precise forms disclosed.
Based on the teachings above, many modifications and variations are possible.

Examples

The following examples pertain to further embodiments.

Example 1 is a method in which an operating system of a computing device manages access of a plurality of applications to a solid-state drive, and a separate band in the solid-state drive's non-volatile memory is maintained for storing data written to the solid-state drive by each of at least two different applications of the plurality of applications.

In Example 2, the subject matter of Example 1 can include sending a range of logical block addresses for each of the plurality of applications from the operating system to the solid-state drive, for mapping into a band in the solid-state drive.

In Example 3, the subject matter of Example 2 can include sending the priority of each of the plurality of applications from the operating system to the solid-state drive.

In Example 4, the subject matter of Example 3 can include, in response to determining that there are not enough bands in the solid-state drive to provide a separate band for storing the writes of each of the plurality of applications, storing the writes of higher priority applications in different bands and mixing the writes of lower priority applications into the same band, where a write comprises data to be written to the solid-state drive.

In Example 5, the subject matter of Example 3 can include the solid-state drive storing, in a non-volatile manner, the data associated with the plurality of applications, the ranges of logical block addresses, and the priority of each of the plurality of applications.

Example 6 is a method in which a virtual machine manager of a computing device manages access of a plurality of virtual machines to a solid-state drive, and a separate band in the solid-state drive's non-volatile memory is maintained for storing data written to the solid-state drive by each of at least two different virtual machines of the plurality of virtual machines.

In Example 7, the subject matter of Example 6 can include sending a range of logical block addresses for each of the plurality of virtual machines from the virtual machine manager to the solid-state drive, for mapping into a band in the solid-state drive.

In Example 8, the subject matter of Example 7 can include sending the priority of each of the plurality of virtual machines from the virtual machine manager to the solid-state drive.

In Example 9, the subject matter of Example 8 can include, in response to determining that there are not enough bands in the solid-state drive to provide a separate band for storing the writes of each of the plurality of virtual machines, storing the writes of higher priority virtual machines in different bands and mixing the writes of lower priority virtual machines into the same band, where a write comprises data to be written to the solid-state drive.

In Example 10, the subject matter of Example 8 can include the solid-state drive storing, in a non-volatile manner, the data associated with the plurality of virtual machines, the ranges of logical block addresses, and the priority of each of the plurality of virtual machines.

Example 11 is a system comprising a memory and a processor, wherein the processor is configurable to perform operations comprising: managing, by an operating system, access of a plurality of applications to a solid-state drive; and maintaining a separate band in the solid-state drive's non-volatile memory for storing data written to the solid-state drive by each of at least two different applications of the plurality of applications.

In Example 12, the subject matter of Example 11 can include sending a range of logical block addresses for each of the plurality of applications from the operating system to the solid-state drive, for mapping into a band in the solid-state drive.

In Example 13, the subject matter of Example 12 can include sending the priority of each of the plurality of applications from the operating system to the solid-state drive.

In Example 14, the subject matter of Example 13 can include, in response to determining that there are not enough bands in the solid-state drive to provide a separate band for storing the writes of each of the plurality of applications, storing the writes of higher priority applications in different bands and mixing the writes of lower priority applications into the same band, where a write comprises data to be written to the solid-state drive.

In Example 15, the subject matter of Example 13 can include the solid-state drive storing, in a non-volatile manner, the data associated with the plurality of applications, the ranges of logical block addresses, and the priority of each of the plurality of applications.

Example 16 is a system comprising a memory and a processor, wherein the processor is configurable to perform operations in which a virtual machine manager of the computing device manages access of a plurality of virtual machines to a solid-state drive, and a separate band in the solid-state drive's non-volatile memory is maintained for storing data written to the solid-state drive by each of at least two different virtual machines of the plurality of virtual machines.

In Example 17, the subject matter of Example 16 can include sending a range of logical block addresses for each of the plurality of virtual machines from the virtual machine manager to the solid-state drive, for mapping into a band in the solid-state drive.

In Example 18, the subject matter of Example 17 can include sending the priority of each of the plurality of virtual machines from the virtual machine manager to the solid-state drive.

In Example 19, the subject matter of Example 18 can include, in response to determining that there are not enough bands in the solid-state drive to provide a separate band for storing the writes of each of the plurality of virtual machines, storing the writes of higher priority virtual machines in different bands and mixing the writes of lower priority virtual machines into the same band, where a write comprises data to be written to the solid-state drive.

In Example 20, the subject matter of Example 18 can include the solid-state drive storing, in a non-volatile manner, the data associated with the plurality of virtual machines, the ranges of logical block addresses, and the priority of each of the plurality of virtual machines.

Example 21 is a solid-state drive configurable to perform operations comprising: receiving input and output operations of a plurality of virtual machines from a virtual machine manager of a computing device; and maintaining, in a non-volatile memory, a separate band for storing data written to the solid-state drive by each of at least two different virtual machines of the plurality of virtual machines.

In Example 22, the subject matter of Example 21 further includes receiving, from the virtual machine manager, a range of logical block addresses for each of the plurality of virtual machines, for mapping into a band in the solid-state drive.

In Example 23, the subject matter of Example 22 further includes receiving, from the virtual machine manager, the priority of each of the plurality of virtual machines.

In Example 24, the subject matter of Example 23 further includes, in response to determining that there are not enough bands in the solid-state drive to provide a separate band for storing the writes of each of the plurality of virtual machines, storing the writes of higher priority virtual machines in different bands and mixing the writes of lower priority virtual machines into the same band, where a write comprises data to be written to the solid-state drive.

In Example 25, the subject matter of Example 23 further includes the solid-state drive storing, in a non-volatile manner, the data pertaining to the plurality of virtual machines, the ranges of logical block addresses, and the priority of each of the plurality of virtual machines.

Example 26 is a system comprising a memory and a processor, wherein the processor is configurable to perform operations in which a virtual machine manager of the computing device manages access of a plurality of virtual machines to a solid-state drive, and a separate band in the solid-state drive's non-volatile memory is maintained for storing data written to the solid-state drive by each of at least two different virtual machines of the plurality of virtual machines.

In Example 27, the subject matter of Example 26 can include sending a range of logical block addresses for each of the plurality of virtual machines from the virtual machine manager to the solid-state drive, for mapping into a band in the solid-state drive.

In Example 28, the subject matter of Example 27 can include sending the priority of each of the plurality of virtual machines from the virtual machine manager to the solid-state drive.

In Example 29, the subject matter of Example 28 can include, in response to determining that there are not enough bands in the solid-state drive to provide a separate band for storing the writes of each of the plurality of virtual machines, storing the writes of higher priority virtual machines in different bands and mixing the writes of lower priority virtual machines into the same band, where a write comprises data to be written to the solid-state drive.

In Example 30, the subject matter of Example 28 can include the solid-state drive storing, in a non-volatile manner, the data pertaining to the plurality of virtual machines, the ranges of logical block addresses, and the priority of each of the plurality of virtual machines.

In Example 31, the subject matter of Example 26 can include a solid-state drive configurable to perform operations comprising: receiving input and output operations of the plurality of virtual machines from the virtual machine manager of the computing device; and maintaining, in the non-volatile memory, a separate band for storing data written to the solid-state drive by each of at least two different virtual machines of the plurality of virtual machines.

In Example 32, the subject matter of Example 31 further includes receiving, from the virtual machine manager, a range of logical block addresses for each of the plurality of virtual machines, for mapping into a band in the solid-state drive.

In Example 33, the subject matter of Example 32 further includes receiving, from the virtual machine manager, the priority of each of the plurality of virtual machines.

In Example 34, the subject matter of Example 33 further includes, in response to determining that there are not enough bands in the solid-state drive to provide a separate band for storing the writes of each of the plurality of virtual machines, storing the writes of higher priority virtual machines in different bands and mixing the writes of lower priority virtual machines into the same band, where a write comprises data to be written to the solid-state drive.

In Example 35, the subject matter of Example 33 further includes the solid-state drive storing, in a non-volatile manner, the data pertaining to the plurality of virtual machines, the ranges of logical block addresses, and the priority of each of the plurality of virtual machines. |
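The band-allocation rule that recurs in Examples 4, 9, 14, 19, 24, 29, and 34 (dedicated bands for higher-priority writers, a shared band once bands run out) lends itself to a short illustration. The following is a minimal sketch, not the patented implementation; the `Client` and `SolidStateDrive` classes, the fixed band count, and the allocation order are hypothetical choices made for this example.

```python
from dataclasses import dataclass, field

@dataclass
class Client:
    """An application or virtual machine issuing writes to the drive."""
    name: str
    priority: int                # higher value means higher priority
    lba_range: tuple[int, int]   # logical block address range sent by the OS/VMM

@dataclass
class SolidStateDrive:
    num_bands: int
    band_map: dict[str, int] = field(default_factory=dict)

    def allocate_bands(self, clients: list[Client]) -> None:
        """Give each client its own band while bands remain; once bands run
        out, mix the remaining (lower-priority) clients into the last band,
        mirroring the priority rule stated in the examples."""
        ranked = sorted(clients, key=lambda c: c.priority, reverse=True)
        for i, client in enumerate(ranked):
            self.band_map[client.name] = min(i, self.num_bands - 1)

ssd = SolidStateDrive(num_bands=3)
ssd.allocate_bands([
    Client("database", priority=9, lba_range=(0, 4095)),
    Client("browser",  priority=5, lba_range=(4096, 8191)),
    Client("logger",   priority=2, lba_range=(8192, 12287)),
    Client("backup",   priority=1, lba_range=(12288, 16383)),
])
print(ssd.band_map)  # {'database': 0, 'browser': 1, 'logger': 2, 'backup': 2}
```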
Embodiments herein describe techniques for a semiconductor device including a three-dimensional capacitor. The three-dimensional capacitor includes a pillar, and one or more capacitor units stacked around the pillar. A capacitor unit of the one or more capacitor units includes a first electrode surrounding and coupled to the pillar, a dielectric layer surrounding the first electrode, and a second electrode surrounding the dielectric layer. Other embodiments may be described and/or claimed. |
1. A semiconductor device comprising: a three-dimensional capacitor, wherein the three-dimensional capacitor includes: a pillar; and one or more capacitor units stacked around the pillar, wherein a capacitor unit of the one or more capacitor units includes: a first electrode surrounding and coupled to the pillar, a dielectric layer surrounding the first electrode, and a second electrode surrounding the dielectric layer.

2. The semiconductor device of claim 1, further comprising: a transistor, wherein the transistor includes a channel along a first direction, and wherein the pillar is placed along a second direction orthogonal to the first direction.

3. The semiconductor device of claim 1 or 2, wherein the capacitor unit is a first capacitor unit and the capacitor further includes a second capacitor unit, wherein the dielectric layer of the first capacitor unit and the dielectric layer of the second capacitor unit form a continuous dielectric layer that conformally surrounds the pillar and the first electrodes of the first capacitor unit and the second capacitor unit, and wherein the second electrode of the first capacitor unit and the second electrode of the second capacitor unit form a continuous electrode.

4. The semiconductor device of claim 1 or 2, wherein the capacitor unit is a first capacitor unit and the capacitor further includes a second capacitor unit, and wherein the first electrode of the first capacitor unit includes a material different from that of the first electrode of the second capacitor unit, or the dielectric layer of the first capacitor unit includes a material different from that of the dielectric layer of the second capacitor unit.

5. The semiconductor device of claim 1 or 2, wherein the capacitor includes a first capacitor unit and a second capacitor unit, the first capacitor unit has a first electrode having a first perimeter and a first area in plan view, the second capacitor unit has a first electrode having a second perimeter and a second area in plan view, and wherein the first perimeter is different from the second perimeter, or the first area is different from the second area.

6. The semiconductor device of claim 1 or 2, wherein the first electrode, the dielectric layer, or the second electrode encloses an area of a square shape, a rectangular shape, a circle, an oval shape, or a polygon having three or more sides.

7. The semiconductor device of claim 1 or 2, wherein the first electrode includes a first metal material having a first work function, and the second electrode includes a second metal material having a second work function different from the first work function.

8. The semiconductor device of claim 1 or 2, wherein the first electrode or the second electrode includes W, Mo, Ti, Ta, Al, TaN, TiN, TiC, WN, MoN, MoC, Co, Ni, Cu, Ru, Pd, Pt, Ir, IrOx, graphene, MnO2, Li, RuOx, ITO, SrRuOx, a metal oxide, graphitic carbon, an alkali metal, a low work function metal, a transition metal oxide, a Co oxide, LiCoO2, NaCoO2, a transition metal disulfide, a spinel oxide, LiMn2O4, LiNiMnO4, a conductive polymer, or a conductive metal.

9. The semiconductor device of claim 1 or 2, wherein the dielectric layer includes Al2O3, HfO2, ZrO2, TiO2, Nb2O5, Ta2O5, SrTiOx, BaTiOx, Ga2O3, Y2O3, a rare earth oxide, a solid electrolyte, a glass electrolyte, a ceramic electrolyte, an ion-conducting anti-perovskite, Li3ClO, doped Li(3-2x)DxClO in which D is a divalent cation dopant, hafnium silicate, zirconium silicate, hafnium dioxide, hafnium zirconate, zirconium dioxide, aluminum oxide, titanium oxide, silicon nitride, carbon-doped silicon nitride, silicon carbide, hafnium nitride silicate, a high-k dielectric material, or alloys thereof.

10. The semiconductor device of claim 9, wherein the solid electrolyte includes an oxide-based or chalcogenide-based layer.

11. The semiconductor device of claim 1 or 2, wherein the capacitor unit further comprises an interface layer between the first electrode and the dielectric layer, or between the dielectric layer and the second electrode.

12. The semiconductor device of claim 11, wherein the interface layer includes a pseudocapacitance layer, and wherein the pseudocapacitance layer includes RuOx, MnOx, VOx, an active redox center material, or a catalytic relay material.

13. The semiconductor device of claim 1 or 2, wherein the first electrode or the second electrode is coupled to a power rail.

14. The semiconductor device of claim 1 or 2, wherein the three-dimensional capacitor is a supercapacitor, an electrostatic double layer capacitor (EDLC), an electrochemical capacitor, a pseudocapacitor, a pseudocapacitor based on a redox Faraday reaction, a lithium-ion capacitor, an electrochemical energy storage device, or a hybrid battery-supercapacitor device.

15. The semiconductor device of claim 1 or 2, wherein the capacitor is located in an interposer coupled to a processor, or at a back side of the processor.

16. The semiconductor device of claim 1 or 2, wherein the three-dimensional capacitor has a dielectric breakdown voltage greater than about 1 V and less than about 5 V.

17. A method for forming a semiconductor device, the method comprising: forming a transistor, wherein the transistor includes a channel along a first direction; forming a pillar placed along a second direction orthogonal to the first direction; forming a first electrode surrounding and coupled to the pillar; forming a dielectric layer surrounding the first electrode; and forming a second electrode surrounding the dielectric layer, wherein the first electrode, the dielectric layer, and the second electrode form a capacitor unit around the pillar.

18. The method of claim 17, wherein the capacitor unit is a first capacitor unit, and the method further comprises: forming a second capacitor unit above the first capacitor unit, wherein forming the second capacitor unit includes: forming a first electrode of the second capacitor unit surrounding and coupled to the pillar and above the first capacitor unit; forming a dielectric layer of the second capacitor unit surrounding the first electrode of the second capacitor unit; and forming a second electrode of the second capacitor unit surrounding the dielectric layer of the second capacitor unit.

19. The method of claim 18, wherein the dielectric layer of the first capacitor unit and the dielectric layer of the second capacitor unit are formed as a continuous dielectric layer that conformally surrounds the pillar and the first electrodes of the first capacitor unit and the second capacitor unit, and the second electrode of the first capacitor unit and the second electrode of the second capacitor unit form a continuous electrode.

20. The method of claim 17, 18, or 19, further comprising: forming an interface layer between the first electrode and the dielectric layer, or between the dielectric layer and the second electrode.

21. A computing device comprising: a transistor including a channel along a first direction in a semiconductor device; and a three-dimensional capacitor coupled to the transistor, wherein the three-dimensional capacitor includes a pillar, and one or more capacitor units stacked around the pillar, the pillar being placed along a second direction orthogonal to the first direction, and wherein a capacitor unit of the one or more capacitor units includes: a first electrode surrounding and coupled to the pillar, a dielectric layer surrounding the first electrode, and a second electrode surrounding the dielectric layer.

22. The computing device of claim 21, wherein the transistor is part of a processor.

23. The computing device of claim 21 or 22, wherein the three-dimensional capacitor is coupled to the transistor through a power rail located at a back end of line (BEOL) of the semiconductor device.

24. The computing device of claim 21 or 22, wherein the three-dimensional capacitor is located in an interposer.

25. The computing device of claim 21 or 22, wherein the computing device is a wearable device or a mobile computing device, and the wearable device or the mobile computing device includes one or more of an antenna coupled with the memory device, a touch screen controller, a display, a battery, a processor, an audio codec, a video codec, a power amplifier, a global positioning system (GPS) device, a compass, a Geiger counter, an accelerometer, a gyroscope, a speaker, or a camera. |
Capacitor architecture in semiconductor devices

Technical field

The embodiments of the present disclosure generally relate to the field of semiconductor devices, and more particularly to capacitors in semiconductor devices.

Background

Capacitors are an important part of integrated circuits (ICs) and semiconductor devices. For example, a capacitor can be used as an information storage unit in a memory device. A memory device (e.g., a dynamic random access memory (DRAM) array) may include a plurality of memory cells, where a memory cell may include a selector (e.g., a transistor) to control access to a storage element such as a capacitor. In addition to memory devices, capacitors can also be used in many other applications (e.g., energy storage devices). In particular, supercapacitors (SCs) have gained traction as energy storage devices due to their high power density, good performance, and long maintenance-free life. Currently, capacitors and supercapacitors are typically manufactured in back-end interconnect structures with limited layout space. Therefore, current capacitors may have limited capacity, resulting in insufficient power density or information capacity.

Brief description of the drawings

The embodiments will be readily understood from the following detailed description in conjunction with the accompanying drawings. To facilitate this description, like reference numerals designate like structural elements. Embodiments are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings. Figures 1(a)-1(e) schematically illustrate a semiconductor device including a three-dimensional capacitor having one or more capacitor units stacked around a pillar, according to some embodiments. Figure 2 illustrates a process for forming a semiconductor device including a three-dimensional capacitor having one or more capacitor units stacked around a pillar, according to some embodiments. Figures 3(a)-3(b) schematically illustrate semiconductor devices including a three-dimensional capacitor having one or more capacitor units stacked around a pillar, according to some embodiments. Figure 4 schematically illustrates an interposer implementing one or more embodiments of the present disclosure, according to some embodiments. Figure 5 schematically illustrates a computing device built in accordance with an embodiment of the present disclosure, according to some embodiments.

Detailed description

Front end of line (FEOL) semiconductor processing and structures may refer to the first portion of IC fabrication, where individual devices (e.g., transistors, capacitors, resistors, etc.) are patterned in a semiconductor substrate or layer. FEOL generally covers everything up to (but not including) the deposition of metal interconnect layers. Transistors formed in the FEOL may also be referred to as front-end transistors. After the final FEOL operation, the result is typically a wafer with isolated transistors (e.g., without any wires). Back end of line (BEOL) semiconductor processing and structures may refer to the second portion of IC fabrication, where the individual devices (e.g., transistors, capacitors, resistors, etc.) are interconnected with wiring (e.g., one or more metallization layers) on the wafer. BEOL includes metal contacts, dielectric layers, metal levels, and bonding sites for chip-to-package connections. In the BEOL portion of fabrication, metal contacts, pads, interconnect wires, vias, and dielectric structures may be formed.
For modern IC processes, more than 10 metal layers may be added in the BEOL. Capacitors are an important part of integrated circuits (ICs) and semiconductor devices, for example as information storage units in memory devices or as energy storage devices. Capacitors can have different architectures. For example, a metal-insulator-metal (MIM) capacitor includes two metal plates with an insulator between the plates. Currently, MIM capacitors are usually manufactured in the interconnect structure at the BEOL, for example above metal layer 5 or 7, where space is limited. Since the capacitance of a capacitor is linearly proportional to the area of the capacitor, the lack of layout space at the BEOL limits the number of conventional MIM capacitors that can be placed there, resulting in insufficient power density when the capacitors are used as energy storage devices. In addition, at the BEOL the manufacturing process is limited by the thermal budget that the device can tolerate.
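For reference, the linear area dependence invoked above is the familiar parallel-plate relation (a textbook formula, not one stated in this disclosure):

$$ C = \varepsilon_0 \varepsilon_r \frac{A}{d} $$

where C is the capacitance, ε0 the vacuum permittivity, εr the relative permittivity of the dielectric, A the plate area, and d the dielectric thickness. Increasing the effective area A without growing the footprint is precisely what the corrugated, vertically stacked structure described below is meant to achieve.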
On the other hand, in pursuit of Moore's law, modern processors keep getting faster and demand substantial power. For example, with the emergence of 5G technology and of three-dimensional integrated stacks for artificial intelligence (AI) and machine learning (ML) processors, power density becomes a major challenge when capacitors are used as energy storage devices; a similar capacity challenge exists when capacitors are used as information storage devices. The embodiments herein propose capacitors that can provide improved power density or information storage capacity for modern processors. The capacitor is formed with a corrugated structure to increase its surface area. In addition, in some embodiments, the capacitor may integrate a high-efficiency solid-state electrolyte (SSE) instead of a high-k dielectric to further increase the energy capacity, making it a supercapacitor. As a result, the embodiments herein may include an electric double layer capacitor (EDLC)-based supercapacitor array or a pseudocapacitor array based on the redox Faraday reaction. The SSE enables an electric double layer to form across the electrode-SSE interface for the EDLC; in the case of a pseudocapacitor, the SSE enables redox reactions across the same interface. In addition, the capacitor array can be vertically integrated into a three-dimensional interposer or onto the back side of a processor, and the capacitors can be connected to the processor directly or indirectly through a power rail. As a result, the embodiments herein can enable a processor to operate at an improved frequency while running on a battery that includes the capacitors proposed herein; users can draw on the full processing power of modern processors remotely, without connecting to a wired power source. The embodiments herein propose a semiconductor device including a three-dimensional capacitor. The three-dimensional capacitor includes a pillar, and one or more capacitor units stacked around the pillar. A capacitor unit of the one or more capacitor units includes a first electrode surrounding and coupled to the pillar, a dielectric layer surrounding the first electrode, and a second electrode surrounding the dielectric layer. The embodiments herein further propose methods for forming a semiconductor device. The method includes forming a transistor, wherein the transistor includes a channel along a first direction. The method further includes forming a pillar placed along a second direction orthogonal to the first direction, forming a first electrode surrounding and coupled to the pillar, forming a dielectric layer surrounding the first electrode, and forming a second electrode surrounding the dielectric layer. The first electrode, the dielectric layer, and the second electrode form a capacitor unit around the pillar. The embodiments herein also propose a computing device that includes a transistor including a channel along a first direction in a semiconductor device, and a three-dimensional capacitor coupled to the transistor. The three-dimensional capacitor includes a pillar placed along a second direction orthogonal to the first direction, and one or more capacitor units stacked around the pillar. A capacitor unit of the one or more capacitor units includes a first electrode surrounding and coupled to the pillar, a dielectric layer surrounding the first electrode, and a second electrode surrounding the dielectric layer. In the following description, terms commonly employed by those skilled in the art are used to describe various aspects of the illustrative embodiments, so as to convey the substance of the work to others skilled in the art. However, it will be apparent to those skilled in the art that only some of the described aspects may be used to practice the present disclosure. For purposes of explanation, specific numbers, materials, and configurations are set forth in order to provide a thorough understanding of the illustrative embodiments. However, it will be apparent to those skilled in the art that the present disclosure may be practiced without these specific details. In other instances, well-known features are omitted or simplified in order not to obscure the illustrative embodiments. Various operations will be described as multiple discrete operations, in turn, in a manner that is most helpful in understanding the present disclosure; however, the order of description should not be construed as to imply that these operations are necessarily order dependent. In particular, these operations need not be performed in the order of presentation. For the purposes of the present disclosure, the phrase "A and/or B" means (A), (B), or (A and B). For the purposes of the present disclosure, the phrase "A, B, and/or C" means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B, and C). As used herein, the terms "over," "under," "between," "on," and "above" may refer to the relative position of one layer or portion of material with respect to other layers or portions. For example, a layer disposed over or under another layer may be in direct contact with the other layer or may have one or more intervening layers. Moreover, a layer disposed between two layers may be in direct contact with the two layers or may have one or more intervening layers. In contrast, a first layer "on" a second layer is in direct contact with that second layer. Similarly, unless explicitly stated otherwise, a feature disposed between two features may be in direct contact with the adjacent features or may have one or more intervening features. This specification may use the phrase "in an embodiment," which may refer to one or more of the same or different embodiments. Furthermore, the terms "comprising," "including," "having," and the like, as used with respect to embodiments of the present disclosure, are synonymous. This document may use the term "coupled with," along with its derivatives. "Coupled" may mean one or more of the following. "Coupled" may mean that two or more elements are in direct physical or electrical contact. However, "coupled" may also mean that two or more elements indirectly contact each other but still cooperate or interact with each other, and may mean that one or more other elements are coupled or connected between the elements that are said to be coupled with each other. The term "directly coupled" may mean that two or more elements are in direct contact. In various embodiments, the phrase "a first feature formed, deposited, or otherwise disposed on a second feature" may mean that the first feature is formed, deposited, or disposed over the second feature, and at least a part of the first feature may be in direct contact (e.g., direct physical and/or electrical contact) or indirect contact (e.g., having one or more other features between the first feature and the second feature) with at least a part of the second feature. Where the disclosure recites "a" or "a first" element or the equivalent thereof, such disclosure includes one or more such elements, neither requiring nor excluding two or more such elements. Further, ordinal indicators (e.g., first, second, or third) for identified elements are used to distinguish between the elements, and do not indicate or imply a required or limited number of such elements, nor do they indicate a particular position or order of such elements, unless otherwise specifically stated. As used herein, the term "circuitry" may refer to, be part of, or include an application specific integrated circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and/or memory (shared, dedicated, or group) that executes one or more software or firmware programs, a combinational logic circuit, and/or other suitable hardware components that provide the described functionality. As used herein, a "computer-implemented method" may refer to any method executed by a computer system having one or more processors, a mobile device such as a smartphone (which may include one or more processors), a tablet, a laptop, a set-top box, a gaming console, and so forth. Embodiments of the present disclosure may be formed or carried out on a substrate, such as a semiconductor substrate. In one embodiment, the semiconductor substrate may be a crystalline substrate formed using a bulk silicon or a silicon-on-insulator substructure. In other embodiments, the semiconductor substrate may be formed using alternative materials, which may or may not be combined with silicon, including but not limited to germanium, indium antimonide, lead telluride, indium arsenide, indium phosphide, gallium arsenide, indium gallium arsenide, gallium antimonide, or other combinations of group III-V or group IV materials. Although a few examples of materials from which the substrate may be formed are described here, any material that may serve as a foundation upon which a semiconductor device may be built falls within the spirit and scope of the present disclosure. A plurality of transistors, such as metal-oxide-semiconductor field-effect transistors (MOSFETs, or simply MOS transistors), may be fabricated on the substrate. In various embodiments of the present disclosure, the MOS transistors may be planar transistors, nonplanar transistors, or a combination of both.
Nonplanar transistors include FinFET transistors (e.g., double-gate transistors or tri-gate transistors), and wrap-around or all-around gate transistors (e.g., nanoribbon and nanowire transistors). Although the embodiments described herein may illustrate only planar transistors, it should be noted that the disclosure may also be practiced with nonplanar transistors. Each MOS transistor includes a gate stack formed of at least two layers, a gate dielectric layer and a gate electrode layer. The gate dielectric layer may include one layer or a stack of layers. The one or more layers may include silicon oxide, silicon dioxide (SiO2), and/or a high-k dielectric material. The high-k dielectric material may include elements such as hafnium, silicon, oxygen, titanium, tantalum, lanthanum, aluminum, zirconium, barium, strontium, yttrium, lead, scandium, niobium, and zinc. Examples of high-k materials that may be used in the gate dielectric layer include, but are not limited to, hafnium oxide, hafnium silicon oxide, lanthanum oxide, lanthanum aluminum oxide, zirconium oxide, zirconium silicon oxide, tantalum oxide, titanium oxide, barium strontium titanium oxide, barium titanium oxide, strontium titanium oxide, yttrium oxide, aluminum oxide, lead scandium tantalum oxide, and lead zinc niobate. In some embodiments, when a high-k material is used, an annealing process may be carried out on the gate dielectric layer to improve its quality. The gate electrode layer is formed on the gate dielectric layer and may consist of at least one P-type work function metal or N-type work function metal, depending on whether the transistor is to be a PMOS or an NMOS transistor. In some embodiments, the gate electrode layer may consist of a stack of two or more metal layers, where one or more metal layers are work function metal layers and at least one metal layer is a fill metal layer. Further metal layers, such as barrier layers, may be included for other purposes. For PMOS transistors, metals that may be used for the gate electrode include, but are not limited to, ruthenium, palladium, platinum, cobalt, nickel, and conductive metal oxides such as ruthenium oxide. A P-type metal layer will enable the formation of a PMOS gate electrode with a work function between about 4.9 eV and about 5.2 eV. For NMOS transistors, metals that may be used for the gate electrode include, but are not limited to, hafnium, zirconium, titanium, tantalum, aluminum, alloys of these metals, and carbides of these metals such as hafnium carbide, zirconium carbide, titanium carbide, tantalum carbide, and aluminum carbide. An N-type metal layer will enable the formation of an NMOS gate electrode with a work function between about 3.9 eV and about 4.2 eV. In some embodiments, when viewed as a cross-section of the transistor along the source-channel-drain direction, the gate electrode may consist of a "U"-shaped structure that includes a bottom portion substantially parallel to the surface of the substrate and two sidewall portions that are substantially perpendicular to the top surface of the substrate. In another embodiment, at least one of the metal layers that form the gate electrode may simply be a planar layer that is substantially parallel to the top surface of the substrate and does not include sidewall portions substantially perpendicular to the top surface of the substrate. In further embodiments of the present disclosure, the gate electrode may consist of a combination of U-shaped structures and planar, non-U-shaped structures.
For example, the gate electrode may consist of one or more U-shaped metal layers formed atop one or more planar, non-U-shaped layers. In some embodiments of the present disclosure, a pair of sidewall spacers may be formed on opposing sides of the gate stack to bracket the gate stack. The sidewall spacers may be formed from a material such as silicon nitride, silicon oxide, silicon carbide, silicon nitride doped with carbon, or silicon oxynitride. Processes for forming sidewall spacers are well known in the art and generally include deposition and etching process operations. In alternative embodiments, a plurality of spacer pairs may be used; for example, two, three, or four pairs of sidewall spacers may be formed on opposing sides of the gate stack. As is well known in the art, source and drain regions are formed within the substrate adjacent to the gate stack of each MOS transistor. The source and drain regions are generally formed using either an implantation/diffusion process or an etching/deposition process. In the former process, dopants such as boron, aluminum, antimony, phosphorus, or arsenic may be ion-implanted into the substrate to form the source and drain regions. An annealing process that activates the dopants and causes them to diffuse further into the substrate typically follows the ion implantation process. In the latter process, the substrate may first be etched to form recesses at the locations of the source and drain regions. An epitaxial deposition process may then be carried out to fill the recesses with material used to fabricate the source and drain regions. In some embodiments, the source and drain regions may be fabricated using a silicon alloy (e.g., silicon germanium or silicon carbide). In some embodiments, the epitaxially deposited silicon alloy may be doped in situ with dopants such as boron, arsenic, or phosphorus. In other embodiments, the source and drain regions may be formed using one or more alternative semiconductor materials (e.g., germanium or a group III-V material or alloy). In still other embodiments, one or more layers of metal and/or metal alloys may be used to form the source/drain regions. One or more interlayer dielectrics (ILDs) are deposited over the MOS transistors. The ILD layers may be formed using dielectric materials known for their applicability in integrated circuit structures (e.g., low-k dielectric materials). Examples of dielectric materials that may be used include, but are not limited to, silicon dioxide (SiO2), carbon-doped oxide (CDO), silicon nitride, organic polymers (e.g., perfluorocyclobutane or polytetrafluoroethylene), fluorosilicate glass (FSG), and organosilicates (e.g., silsesquioxane, siloxane, or organosilicate glass). The ILD layers may include pores or air gaps to further reduce their dielectric constant. Figures 1(a)-1(e) schematically illustrate a semiconductor device 100 including a three-dimensional capacitor having one or more capacitor units stacked around a pillar, according to some embodiments. Figures 1(a)-1(b) show the device 100 and the capacitor 110 in cross-sectional and top views. Figure 1(c) shows the device 100, including an array of three-dimensional capacitors, in cross-sectional view. Figures 1(d) and 1(e) show the device 100 and arrays of three-dimensional capacitors in plan view. A three-dimensional capacitor may be referred to simply as a capacitor. In an embodiment, as shown in FIG.
1(a), the semiconductor device 100 includes a substrate 131 and a transistor 130 formed in or above the substrate 131. The capacitor 110 may be formed above the transistor 130. In some embodiments, the transistor 130 may be a FEOL transistor, for example a transistor of a processor formed in the substrate 131. In some other embodiments, the transistor 130 may be a BEOL transistor, such as a thin-film transistor (TFT), which may be part of a memory device. The transistor 130 includes a channel 134, a source electrode 133, and a drain electrode 135. Carriers may pass through the channel 134 along the horizontal direction 132 from the source electrode 133 to the drain electrode 135. The capacitor 110 may be an information storage unit controlled by the transistor 130, or an energy storage device that supplies power to the transistor 130. In an embodiment, the capacitor 110 includes a pillar 112 and one or more capacitor units stacked around the pillar 112, for example the capacitor unit 111, the capacitor unit 113, and so on, wherein the pillar 112 is placed along a second direction 142 perpendicular to the first direction 132. The pillar 112 may include various materials, for example a conductive material, a dielectric material, an insulator, or other materials. As shown in FIG. 1(b), the capacitor unit 111 and the capacitor unit 113 may form a pair, and the capacitor 110 may include 128 pairs of such capacitor units. Figure 1(b) is shown as an example only; other numbers of capacitor units may be stacked around the pillar. A capacitor unit (e.g., the capacitor unit 111) may include a first electrode 103 surrounding and coupled to the pillar 112, a dielectric layer 105 surrounding the first electrode 103, and a second electrode 101 surrounding the dielectric layer 105. In some embodiments, the capacitor unit 111 may further include an interface layer 107 between the first electrode 103 and the dielectric layer 105, or an interface layer 109 between the dielectric layer 105 and the second electrode 101. The interface layer 107 or the interface layer 109 is optional and may not be included in all capacitor units. The capacitor unit 113 similarly includes a first electrode 123 surrounding and coupled to the pillar 112, a dielectric layer 125 surrounding the first electrode 123, and a second electrode 121 surrounding the dielectric layer 125. The capacitor unit 113 may further include an interface layer 127 between the first electrode 123 and the dielectric layer 125, or an interface layer 129 between the dielectric layer 125 and the second electrode 121. In some embodiments, as shown in FIG. 1(b), the height of the capacitor unit 111 (e.g., the height of the first electrode 103) may be about 0.05 μm, and the height of the first electrode 123 may also be about 0.05 μm. In some embodiments, a capacitor unit (e.g., the capacitor unit 111) may use the pillar 112 itself as the first electrode 103. In some other embodiments, the first electrode may be an additional component surrounding the pillar. For example, for the capacitor unit 113, in order to have a larger area, the first electrode 123 is a component that is coupled to the pillar 112 and extends in the direction 132 orthogonal to the pillar 112. In some embodiments, the capacitor unit 111 and the capacitor unit 113 may share a dielectric layer, an interface layer, or an electrode. As shown in FIG.
1(a), the dielectric layer 105 of the capacitor unit 111 and the dielectric layer 125 of the capacitor unit 113 form a continuous dielectric layer that conformally surrounds the pillar 112, the first electrode 103 of the capacitor unit 111, and the first electrode 123 of the capacitor unit 113. Similarly, the interface layer 107 and the interface layer 109 of the capacitor unit 111 and the interface layer 127 and the interface layer 129 of the capacitor unit 113 form continuous interface layers that conformally surround the pillar 112. The second electrode 101 of the capacitor unit 111 and the second electrode 121 of the capacitor unit 113 form a continuous electrode that conformally surrounds the pillar 112 and the continuous dielectric layer. For example, the second electrode 101 and the second electrode 121 may be one piece of continuous conductive metal formed at the same time. In some other embodiments, the first electrode 103 of the capacitor unit 111 includes a material different from that of the first electrode 123 of the capacitor unit 113, the dielectric layer 105 of the capacitor unit 111 includes a material different from that of the dielectric layer 125 of the capacitor unit 113, or the second electrode 101 of the capacitor unit 111 includes a material different from that of the second electrode 121 of the capacitor unit 113. Similarly, the interface layer 107 or the interface layer 109 of the capacitor unit 111 may include a material different from that of the interface layer 127 or the interface layer 129 of the capacitor unit 113. In an embodiment, the first electrode 103 of the capacitor unit 111 or the first electrode 123 of the capacitor unit 113, the dielectric layer 105 of the capacitor unit 111 or the dielectric layer 125 of the capacitor unit 113, or the second electrode 101 of the capacitor unit 111 or the second electrode 121 of the capacitor unit 113 may enclose an area of a square shape, a rectangular shape, a circle, an oval shape, a polygon having three or more sides, or any other irregular shape. Figure 1(a) shows the first electrode 103, the first electrode 123, the dielectric layer 105, and the dielectric layer 125 enclosing a circle, while the second electrode 101 and the second electrode 121 enclose a square; other shapes are possible in other embodiments. In some embodiments, the first electrode, the second electrode, the dielectric layer, and the interface layers may enclose different shapes. In an embodiment, the first electrode 103 of the capacitor unit 111 has a first perimeter and a first area in plan view, and the first electrode 123 of the capacitor unit 113 has a second perimeter and a second area in plan view. The first perimeter may be different from the second perimeter, or the first area may be different from the second area. Similarly, the second electrodes, dielectric layers, or interface layers of different capacitor units may have different perimeters or areas. In an embodiment, the first electrode (e.g., the first electrode 103 of the capacitor unit 111 or the first electrode 123 of the capacitor unit 113) may include a first metal material having a first work function, and the second electrode (e.g., the second electrode 101 of the capacitor unit 111 or the second electrode 121 of the capacitor unit 113) may include a second metal material having a second work function different from the first work function.
The first electrode 103, the first electrode 123, the second electrode 101, or the second electrode 121 may include W, Mo, Ti, Ta, Al, TaN, TiN, TiC, WN, MoN, MoC, Co, Ni, Cu, Ru, Pd, Pt, Ir, IrOx, graphene, MnO2, Li, RuOx, ITO, SrRuOx, a metal oxide, graphitic carbon, an alkali metal, a low work function metal, a transition metal oxide, a Co oxide, LiCoO2, NaCoO2, a transition metal disulfide, a spinel oxide, LiMn2O4, LiNiMnO4, a conductive polymer, or a conductive metal. In an embodiment, the dielectric layer 105 or the dielectric layer 125 may include Al2O3, HfO2, ZrO2, TiO2, Nb2O5, Ta2O5, SrTiOx, BaTiOx, Ga2O3, Y2O3, a rare earth oxide, a solid electrolyte, a glass electrolyte, a ceramic electrolyte, an ion-conducting anti-perovskite, Li3ClO, doped Li(3-2x)DxClO in which D is a divalent cation dopant, hafnium silicate, zirconium silicate, hafnium dioxide, hafnium zirconate, zirconium dioxide, aluminum oxide, titanium oxide, silicon nitride, carbon-doped silicon nitride, silicon carbide, hafnium nitride silicate, a high-k dielectric material, or alloys thereof. When the dielectric layer 105 or the dielectric layer 125 includes a solid electrolyte layer, the solid electrolyte layer may include an oxide-based or chalcogenide-based layer. In an embodiment, the interface layer 107, the interface layer 109, the interface layer 127, and the interface layer 129 may include a pseudocapacitance layer, where the pseudocapacitance layer includes RuOx, MnOx, VOx, an active redox center material, or a catalytic relay material. For example, the dielectric layer 105 may be a solid electrolyte, and the interface layer 107 or the interface layer 109 may be a pseudocapacitance layer in contact with both the dielectric layer and the electrode. The pseudocapacitance layer includes a material with an active redox pair at which the Faraday reaction occurs. The pseudocapacitance layer contacts the electrodes and the electrolyte so that the pseudocapacitor functions as an electrochemical energy storage device. In effect, a pseudocapacitor stores energy both electrostatically in an electric double layer and electrochemically through the Faraday reaction. In an embodiment, the capacitor 110 may be a normal capacitor for storing information, a supercapacitor, an electrostatic double layer capacitor (EDLC), an electrochemical capacitor, a pseudocapacitor, a pseudocapacitor based on a redox Faraday reaction, a lithium-ion capacitor, an electrochemical energy storage device, or a hybrid battery-supercapacitor device. The capacitor 110 may have a dielectric breakdown voltage greater than about 1 V and less than about 5 V. In an embodiment, when the capacitor 110 is a normal capacitor, the dielectric layer 105 or the dielectric layer 125 may not be ionically conductive; that is, it is not a solid electrolyte but a high-k dielectric material. When the capacitor 110 is a supercapacitor, the dielectric layer 105 or the dielectric layer 125 may be a solid electrolyte, with no pseudocapacitance layer as an interface layer. In such a case, the capacitor 110 stores energy mainly electrostatically in the electric double layer at each electrode-electrolyte interface. When the capacitor 110 is used mainly as a pseudocapacitor, the dielectric layer 105 or the dielectric layer 125 is a solid electrolyte, and the pseudocapacitance layer is present and contacts both the electrode and the solid electrolyte. In this case, the capacitor 110 stores energy electrostatically in the electric double layer, and electrochemically through a Faraday reaction at each electrode-pseudocapacitance layer-electrolyte interface. In some embodiments, the supercapacitor may also include a thin separation layer that acts as a separator at the center of the electrolyte layer.
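One way to see why the breakdown voltage range matters for energy capacity: the energy stored in a capacitor grows with the square of the operating voltage, per the standard relation (again a textbook formula, not one given in this disclosure):

$$ E = \tfrac{1}{2} C V^{2} $$

so, for the same capacitance C, a cell that tolerates about 5 V can store roughly 25 times the energy of one limited to about 1 V.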
Figure 1(c) shows, in cross-section, the device 100 including an array of three-dimensional capacitors: capacitor 110, capacitor 120, capacitor 160, and more. As described above for FIG. 1(a), the capacitor 110 includes the pillar 112, and one or more capacitor units stacked around the pillar 112, for example the capacitor unit 111, the capacitor unit 113, and more. In an embodiment, the pillar 112 includes a conductive material and serves as the first electrode of the capacitor 110, while the capacitor units of the capacitor 110 share the same second electrode 101. Similarly, the capacitor 120 includes a pillar 122 and one or more capacitor units stacked around the pillar 122. In an embodiment, the pillar 122 includes a conductive material and serves as the first electrode of the capacitor 120, while the capacitor units of the capacitor 120 share the same second electrode 101 that is shared with the capacitor 110. Likewise, the capacitor 160 includes a pillar 162 and one or more capacitor units stacked around the pillar 162. In an embodiment, the pillar 162 includes a conductive material and serves as the first electrode of the capacitor 160, while the capacitor units of the capacitor 160 share the same second electrode 101 as the capacitor 110 and the capacitor 120. The pillar 112, the pillar 122, and the pillar 162 are coupled to a shared electrode 152 at a location 154, and the shared second electrode 101 is coupled to a shared electrode 151 at a location 153. The shared electrode 151 or the shared electrode 152 may be coupled to a bus or a power rail. The capacitor 110, the capacitor 120, the capacitor 160, and more form a capacitor array embedded in the dielectric layer 161 and the dielectric layer 163. Figure 1(d) shows a top view of the device 100 having the capacitor 110, the capacitor 120, and the capacitor 160; the device 100 is the same as the device 100 shown in Figures 1(a)-1(c). The capacitor unit (e.g., the capacitor unit 111) of the capacitor 110 includes the first electrode 103 surrounding and coupled to the pillar 112, the dielectric layer 105 surrounding the first electrode 103, the second electrode 101 surrounding the dielectric layer 105, the interface layer 107 between the first electrode 103 and the dielectric layer 105, and the interface layer 109 between the dielectric layer 105 and the second electrode 101. The first electrode 103, the dielectric layer 105, the interface layer 107, and the interface layer 109 enclose a circular area, and the second electrode 101 encloses a square-shaped area. Similarly, the capacitor unit of the capacitor 120 includes a first electrode coupled to the pillar 122, a dielectric layer 106 surrounding the first electrode, the second electrode 101 surrounding the dielectric layer 106, an interface layer 104 between the first electrode and the dielectric layer 106, and an interface layer 108 between the dielectric layer 106 and the second electrode 101.
The first electrode, the dielectric layer 106, the interface layer 104, and the interface layer 108 enclose a circular area. In addition, the capacitor unit of the capacitor 160 includes a first electrode coupled to the pillar 162, a dielectric layer 164 surrounding the first electrode, and the second electrode 101 surrounding the dielectric layer 164. The first electrode or the dielectric layer 164 encloses a square-shaped area. Figure 1(e) shows, as an example, the device 100 including an array of capacitors of various sizes. Although a capacitor may have multiple layers (for example, 3 or 5 layers) of the first electrode, the dielectric layer, and the interface layer, only two such layers (layer 171 and layer 172) are shown as an example for calculation. The layer 171 may be a first electrode, and the layer 172 may be a dielectric layer. The capacitor may have a plurality of stacked double layers, denoted #nl. For each double layer in the stack, the surface area is SALH = 2πR·hbt + 2π(R+d)·h + 2π((R+d)² − R²), where hbt and h are the heights shown in Figure 1(b), R is the inner radius, and d is the layer thickness. The total surface area of each hole is SALH × #nl. The bit-cell top-view area is BCA = PV × PH, so the number of bit cells per unit area is 1/BCA, and the total "hole" surface area per unit of top-view area is (1/BCA) × SALH × #nl.
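The surface-area bookkeeping above can be checked numerically. The following sketch implements the SALH formula as written; the dimensions R, d, h, hbt, PV, and PH are hypothetical placeholders (the source gives only the roughly 0.05 μm layer heights and the 128-pair stack), so the printed numbers are illustrative only.

```python
import math

def salh(R, d, h, hbt):
    """Surface area of one electrode/dielectric double layer per the SALH
    formula above: inner sidewall + outer sidewall + annular ring area."""
    return (2 * math.pi * R * hbt
            + 2 * math.pi * (R + d) * h
            + 2 * math.pi * ((R + d) ** 2 - R ** 2))

# Illustrative (hypothetical) dimensions, in micrometers.
R, d = 0.05, 0.02      # inner radius and layer thickness (assumed)
h, hbt = 0.05, 0.05    # heights as labeled in Figure 1(b) (~0.05 um per the text)
n_layers = 128         # stacked double layers (#nl), per the 128-pair example
PV, PH = 0.25, 0.25    # bit-cell pitches (assumed)

hole_area = salh(R, d, h, hbt) * n_layers  # total surface area per hole
bca = PV * PH                              # bit-cell top-view area (BCA)
area_gain = hole_area / bca                # hole area per unit top-view area

print(f"SALH per double layer: {salh(R, d, h, hbt):.4f} um^2")
print(f"Total hole area:       {hole_area:.2f} um^2")
print(f"Area multiplication:   {area_gain:.1f}x")
```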
FIG. 2 illustrates a process 200 for forming a semiconductor device including a three-dimensional capacitor having one or more capacitor units stacked around a pillar, according to some embodiments. In an embodiment, the process 200 may be applied to form the semiconductor device 100, which includes the capacitor 110 having the capacitor unit 113 stacked around a pillar, as shown in FIG. 1(a).
At block 201, the process 200 includes forming a transistor, where the transistor includes a channel along a first direction. For example, the process 200 may include forming a transistor 130, where the transistor 130 includes a channel 134 along a horizontal direction, as shown in FIG. 1(a).
At block 203, the process 200 includes forming a pillar placed in a second direction orthogonal to the first direction. For example, the process 200 includes forming a pillar 112 placed in a vertical direction orthogonal to the horizontal direction, as shown in FIG. 1(a).
At block 205, the process 200 includes forming a first electrode surrounding and coupled to the pillar. For example, the process 200 includes forming a first electrode 123 surrounding and coupled to the pillar 112, as shown in FIG. 1(a).
At block 207, the process 200 includes forming a dielectric layer surrounding the first electrode. For example, the process 200 includes forming a dielectric layer 125 surrounding the first electrode 123, as shown in FIG. 1(a).
At block 209, the process 200 includes forming a second electrode surrounding the dielectric layer, where the first electrode, the dielectric layer, and the second electrode form a capacitor unit around the pillar. For example, the process 200 includes forming the second electrode 121 surrounding the dielectric layer 125, where the first electrode 123, the dielectric layer 125, and the second electrode 121 form the capacitor unit 113 around the pillar 112.
In addition, the process 200 may include additional operations. For example, the process 200 may include forming another capacitor unit over the capacitor unit formed in blocks 201-209. In detail, forming the second capacitor unit includes: forming a first electrode of the second capacitor unit surrounding and coupled to the pillar and above the first capacitor unit; forming a dielectric layer of the second capacitor unit surrounding the first electrode of the second capacitor unit; and forming a second electrode of the second capacitor unit surrounding the dielectric layer of the second capacitor unit. In some embodiments, the dielectric layer of the first capacitor unit and the dielectric layer of the second capacitor unit form a continuous dielectric layer that conformally surrounds the pillar and the first electrodes of the first capacitor unit and the second capacitor unit, and the second electrode of the first capacitor unit and the second electrode of the second capacitor unit form a continuous electrode. In addition, the process 200 may further include forming an interface layer between the first electrode and the dielectric layer, or between the dielectric layer and the second electrode.
FIGS. 3(a)-3(b) schematically show diagrams of a semiconductor device including a three-dimensional capacitor having one or more capacitor units stacked around a pillar, according to some embodiments. FIG. 3(a) shows a semiconductor device 300 including a capacitor 321 having one or more capacitor units stacked around a pillar. In an embodiment, the capacitor 321 may be part of the capacitor array 320. FIG. 3(b) shows a semiconductor device 350 including a capacitor 361 having one or more capacitor units stacked around a pillar. In an embodiment, the capacitor 361 may be part of the capacitor array 360. The capacitor 321 and the capacitor 361 may be similar to the capacitor 110 shown in FIG. 1(a).
In an embodiment, as shown in FIG. 3(a), the device 300 includes a substrate 301 and a transistor 310 formed at the FEOL 302. In some embodiments, the transistor 310 may be a transistor of a processor. The transistor 310 includes a channel 311, a source electrode 312, and a drain electrode 313. Current may pass through the channel 311 along the horizontal direction 315 from the source electrode 312 to the drain electrode 313. The interconnect structure 303 is formed in the BEOL 304 of the device 300. The power rail 305 is coupled to the transistor 310, where the power rail 305 is located at the BEOL 304.
In an embodiment, the capacitor array 320 is formed over or within the substrate 322, where the substrate 322 is different from the substrate 301. Instead, the substrate 322 is coupled to the device 300 at the back side through one or more connectors 307. The capacitor array 320 includes a capacitor 321, wherein the first electrode or the second electrode of the capacitor 321 is coupled to the power rail 309. The power rail 309 is coupled to the power rail 305 through one or more connectors 307. In some embodiments, the connector 307 may be a solder ball, a microsphere, or any other connector. The capacitor 321 is coupled to the transistor 310 through the power rail 305, the connector 307, and the power rail 309 located at the BEOL 304 of the device 300.
In an embodiment, as shown in FIG. 3(b), the device 350 includes a substrate 341 and a transistor 330 formed at the FEOL 342. In some embodiments, the transistor 330 may be a transistor of a processor. The transistor 330 includes a channel 331, a source electrode 332, and a drain electrode 333. Current may pass through the channel 331 along the horizontal direction 335 from the source electrode 332 to the drain electrode 333.
The interconnect structure 343 is formed in the BEOL 344 of the device 350. The power rail 345 is coupled to the transistor 330, where the power rail 345 is located at the BEOL 344.
In an embodiment, the capacitor array 360 is formed over or within the substrate 362, where the substrate 362 is different from the substrate 341. In contrast, the substrate 362 is directly coupled to the device 350 at the back surface, such as by direct bonding. The capacitor array 360 includes a capacitor 361, wherein the first electrode or the second electrode of the capacitor 361 is coupled to the power rail 349. The power rail 349 is coupled to the power rail 345 without passing through a connector. The capacitor 361 is coupled to the transistor 330 through the power rail 345 and the power rail 349 located at the BEOL 344 of the device 350.
FIG. 4 shows an interposer 400 that includes one or more embodiments of the present disclosure. The interposer 400 is an intermediate substrate for bridging the first substrate 402 to the second substrate 404. The first substrate 402 may be, for example, a substrate support portion for a device, such as a processor 421 including a transistor 422. The transistor 422 may have a channel along the horizontal direction. The second substrate 404 may be, for example, a computer motherboard, a circuit board, or another integrated circuit die. Generally, the purpose of the interposer 400 is to extend the connections to a wider pitch or to reroute the connections to different connections. For example, the interposer 400 can couple the integrated circuit die to a ball grid array (BGA) 406, which can then be coupled to the second substrate 404. In some embodiments, the first and second substrates 402/404 are attached to opposite sides of the interposer 400. In other embodiments, the first substrate 402 and the second substrate 404 are attached to the same side of the interposer 400. In another embodiment, three or more substrates are interconnected through the interposer 400.
In an embodiment, the interposer 400 includes a capacitor 420, wherein the capacitor 420 is coupled to the processor 421 through a metal interconnect 408, a via 410, or a through-silicon via (TSV) 412. In an embodiment, the capacitor 420 includes a pillar 424 and one or more capacitor units stacked around the pillar 424, for example, the capacitor unit 423, the capacitor unit 425, and more, wherein the pillar 424 is placed in a vertical direction.
The interposer 400 may be formed of epoxy resin, glass fiber reinforced epoxy resin, ceramic material, or a polymer material such as polyimide. In other embodiments, the interposer 400 may be formed of alternative rigid or flexible materials, which may include the same materials as those used in a semiconductor substrate, such as silicon, germanium, and other Group III-V and Group IV materials.
The interposer 400 may include metal interconnects 408 and vias 410, including but not limited to through-silicon vias (TSVs) 412. The interposer 400 may also include embedded devices 414, including both passive and active devices. Such devices include, but are not limited to, capacitors, decoupling capacitors, resistors, inductors, fuses, diodes, transformers, sensors, and electrostatic discharge (ESD) devices.
It is also possible to form more complex devices on the interposer 400, such as radio frequency (RF) devices, power amplifiers, power management devices, antennas, arrays, sensors, and MEMS devices.
According to embodiments of the present disclosure, the devices or processes disclosed herein may be used in the manufacture of the interposer 400.
FIG. 5 shows a computing device 500 according to an embodiment of the present disclosure. The computing device 500 may include multiple components. In one embodiment, these components are attached to one or more motherboards. In alternative embodiments, some or all of these components are fabricated onto a single system-on-chip (SoC) die (e.g., an SoC for mobile devices). The components in the computing device 500 include, but are not limited to, an integrated circuit die 502 and at least one communication logic unit 508. In some embodiments, the communication logic unit 508 is fabricated within the integrated circuit die 502, while in other embodiments the communication logic unit 508 is fabricated on a separate integrated circuit chip that is bonded to a substrate or motherboard that is shared with, or electrically coupled to, the integrated circuit die 502. The integrated circuit die 502 may include a processor 504 and on-die memory 506, which is often used as cache memory and may be provided by technologies such as embedded DRAM (eDRAM) or SRAM. For example, the on-die memory 506 may include a plurality of memory cells, where the memory cells may include capacitors similar to the capacitors 110, 120, 160, or 420 shown in FIGS. 1-4. The computing device 500 may also include a capacitor or capacitor array 550 coupled to the processor integrated circuit die 502, where the capacitor or capacitor array 550 may be similar to the capacitors 110, 120, 160, or 420, or the capacitor arrays 320, 360, shown in FIGS. 1-4.
In an embodiment, the computing device 500 may include a display or touch screen display 524, and a touch screen display controller 526. The display or touch screen display 524 may include a flat panel display (FPD), an AMOLED display, a TFT LCD, a micro light-emitting diode (μLED) display, and the like.
The computing device 500 may include other components that may or may not be physically and electrically coupled to the motherboard or manufactured within the SoC die. These other components include, but are not limited to, volatile memory 510 (e.g., dynamic random access memory (DRAM)), non-volatile memory 512 (e.g., ROM or flash memory), a graphics processing unit (GPU) 514, a digital signal processor (DSP) 516, an encryption processor 542 (for example, a dedicated processor that executes encryption algorithms in hardware), a chipset 520, at least one antenna 522 (in some embodiments, two or more antennas may be used), a battery 530 or other power source, a power amplifier (not shown), a voltage regulator (not shown), a global positioning system (GPS) device 528, a compass, a motion coprocessor or sensor 532 (which may include an accelerometer, a gyroscope, and a compass), a microphone (not shown), a speaker 534, a camera 536, user input devices 538 (e.g., a keyboard, mouse, stylus, and touchpad), and a mass storage device 540 (e.g., a hard drive, compact disc (CD), digital versatile disc (DVD), and so forth). The computing device 500 may incorporate other transmission, telecommunications, or radio functions not described herein. In some embodiments, the computing device 500 includes a radio for communicating over a certain distance by modulating and radiating electromagnetic waves in air or space.
In another embodiment, the computing device 500 includes a transmitter and a receiver (or a transceiver) for communicating over a certain distance by modulating and radiating electromagnetic waves in air or space.
The communication logic unit 508 can implement wireless communication for transferring data to and from the computing device 500. The term "wireless" and its derivatives can be used to describe circuits, devices, systems, methods, techniques, communication channels, etc. that can transmit data via non-solid media by using modulated electromagnetic radiation. The term does not imply that the associated devices do not contain any wires, although in some embodiments they may not. The communication logic unit 508 can implement any of a variety of wireless standards or protocols, including but not limited to Wi-Fi (the IEEE 802.11 family), WiMAX (the IEEE 802.16 family), IEEE 802.20, Long Term Evolution (LTE), Ev-DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, infrared (IR), near field communication (NFC), Bluetooth, derivatives thereof, and any other wireless protocols designated as 3G, 4G, 5G, and beyond. The computing device 500 may include a plurality of communication logic units 508. For example, a first communication logic unit 508 may be dedicated to shorter-range wireless communication such as Wi-Fi, NFC, and Bluetooth, and a second communication logic unit 508 may be dedicated to longer-range wireless communication such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, and Ev-DO.
The processor 504 of the computing device 500 includes one or more devices, such as transistors. The term "processor" may refer to any device or part of a device that processes electronic data from registers and/or memory to convert that electronic data into other electronic data that can be stored in the registers and/or memory. The communication logic unit 508 may also include one or more devices, such as transistors.
In another embodiment, another component housed in the computing device 500 may include one or more devices formed according to embodiments of the present disclosure, such as a DRAM.
In various embodiments, the computing device 500 may be a laptop computer, a netbook computer, a notebook computer, an ultrabook computer, a smartphone, a non-smartphone, a tablet computer, a tablet/laptop hybrid computer, a personal digital assistant (PDA), an ultra mobile PC, a mobile phone, a desktop computer, a server, a printer, a scanner, a monitor, a set-top box, an entertainment control unit, a digital camera, a portable music player, or a digital video recorder.
In other embodiments, the computing device 500 may be any other electronic device that processes data.
Some non-limiting examples are provided below.
Example 1 may include a semiconductor device including a three-dimensional capacitor, the three-dimensional capacitor including: a pillar; and one or more capacitor units stacked around the pillar, wherein a capacitor unit of the one or more capacitor units includes: a first electrode surrounding and coupled to the pillar, a dielectric layer surrounding the first electrode, and a second electrode surrounding the dielectric layer.
Example 2 may include the semiconductor device according to Example 1 and/or some other examples herein, further including: a transistor, wherein the transistor includes a channel along a first direction, and wherein the pillar is placed in a second direction orthogonal to the first direction.
Example 3 may include the semiconductor device according to Example 1 and/or some other examples herein, wherein the capacitor unit is a first capacitor unit, and the capacitor further includes a second capacitor unit, and wherein the dielectric layer of the first capacitor unit and the dielectric layer of the second capacitor unit form a continuous dielectric layer that conformally surrounds the pillar and the first electrodes of the first and second capacitor units, and the second electrode of the first capacitor unit and the second electrode of the second capacitor unit form a continuous electrode.
Example 4 may include the semiconductor device according to Example 1 and/or some other examples herein, wherein the capacitor unit is a first capacitor unit, and the capacitor further includes a second capacitor unit, and wherein the first electrode of the first capacitor unit includes a different material from the first electrode of the second capacitor unit, or the dielectric layer of the first capacitor unit includes a different material from the dielectric layer of the second capacitor unit.
Example 5 may include the semiconductor device according to Example 1 and/or some other examples herein, wherein the capacitor includes a first capacitor unit and a second capacitor unit, the first capacitor unit has a first electrode having a first perimeter and a first area in a top view, the second capacitor unit has a first electrode having a second perimeter and a second area in a top view, and wherein the first perimeter is different from the second perimeter, or the first area is different from the second area.
Example 6 may include the semiconductor device according to Example 1 and/or some other examples herein, wherein the first electrode, the dielectric layer, or the second electrode surrounds an area having a square shape, a rectangular shape, a circular shape, an oval shape, or a polygonal shape with three or more sides.
Example 7 may include the semiconductor device according to Example 1 and/or some other examples herein, wherein the first electrode includes a first metal material having a first work function, and the second electrode includes a second metal material having a second work function different from the first work function.
Example 8 may include the semiconductor device according to Example 1 and/or some other examples herein, wherein the first electrode or the second electrode includes W, Mo, Ti, Ta, Al, TaN, TiN, TiC, WN, MoN, MoC, Co, Ni, Cu, Ru, Pd, Pt, Ir, IrOx, graphene, MnO2, Li, RuOx, ITO, SrRuOx, metal oxides, graphitic carbon, alkali metals, low work function metals, transition metal oxides, Co oxide, LiCoO2, NaCoO2, transition metal disulfides, spinel oxides, LiMn2O4, LiNiMnO4, a conductive polymer, or a conductive metal.
Example 9 may include the semiconductor device according to Example 1 and/or some other examples herein, wherein the dielectric layer includes Al2O3, HfO2, ZrO2, TiO2, Nb2O5, Ta2O5, SrTiOx, BaTiOx, Ga2O3, Y2O3, rare earth oxides, solid electrolytes, glass electrolytes, ceramic electrolytes, ion-conducting inverse perovskites, Li3ClO, doped Li(3-2x)DxClO, hafnium silicate, zirconium silicate, hafnium dioxide, hafnium zirconate, zirconium dioxide, alumina, titanium oxide, silicon nitride, carbon-doped silicon nitride, silicon carbide, hafnium nitride silicate, high-k dielectric materials, or alloys thereof, where in the doped Li(3-2x)DxClO, D is a divalent cationic dopant.
Example 10 may include the semiconductor device according to Example 9 and/or some other examples herein, wherein the solid electrolyte layer includes an oxide-based or chalcogenide-based layer.
Example 11 may include the semiconductor device according to Example 1 and/or some other examples herein, wherein the capacitor unit further includes an interface layer between the first electrode and the dielectric layer, or between the dielectric layer and the second electrode.
Example 12 may include the semiconductor device according to Example 11 and/or some other examples herein, wherein the interface layer includes a pseudocapacitive layer, and wherein the pseudocapacitive layer includes RuOx, MnOx, VOx, an active redox center material, or a catalytic relay material.
Example 13 may include the semiconductor device according to Example 1 and/or some other examples herein, wherein the first electrode or the second electrode is coupled to a power rail.
Example 14 may include the semiconductor device according to Example 1 and/or some other examples herein, wherein the three-dimensional capacitor is a supercapacitor, an electrostatic double layer capacitor (EDLC), an electrochemical capacitor, a pseudocapacitor, a pseudocapacitor based on a redox faradaic reaction, a lithium-ion capacitor, an electrochemical energy storage device, or a hybrid battery-supercapacitor device.
Example 15 may include the semiconductor device according to Example 1 and/or some other examples herein, wherein the capacitor is located at an interposer coupled to a processor, or at the back of the processor.
Example 16 may include the semiconductor device according to Example 1 and/or some other examples herein, wherein the three-dimensional capacitor has a dielectric breakdown voltage greater than about 1 V and less than about 5 V.
Example 17 may include a method for forming a semiconductor device, the method including: forming a transistor, wherein the transistor includes a channel along a first direction; forming a pillar placed along a second direction orthogonal to the first direction; forming a first electrode surrounding and coupled to the pillar; forming a dielectric layer surrounding the first electrode; and forming a second electrode surrounding the dielectric layer, wherein the first electrode, the dielectric layer, and the second electrode form a capacitor unit around the pillar.
Example 18 may include the method according to Example 17 and/or some other examples herein, wherein the capacitor unit is a first capacitor unit, and the method further includes: forming a second capacitor unit over the first capacitor unit, wherein forming the second capacitor unit includes: forming a first electrode of the second capacitor unit surrounding and coupled to the pillar and above the first capacitor unit; forming a dielectric layer of the second capacitor unit surrounding the first electrode of the second capacitor unit; and forming a second electrode of the second capacitor unit surrounding the dielectric layer of the second capacitor unit.
Example 19 may include the method according to Example 18 and/or some other examples herein, wherein the dielectric layer of the first capacitor unit and the dielectric layer of the second capacitor unit form a continuous dielectric layer that conformally surrounds the pillar and the first electrodes of the first capacitor unit and the second capacitor unit, and the second electrode of the first capacitor unit and the second electrode of the second capacitor unit form a continuous electrode.
Example 20 may include the method according to Example 17 and/or some other examples herein, further including: forming an interface layer between the first electrode and the dielectric layer, or between the dielectric layer and the second electrode.
Example 21 may include a computing device including: a transistor including a channel along a first direction in a semiconductor device; and a three-dimensional capacitor coupled to the transistor, wherein the three-dimensional capacitor includes a pillar and one or more capacitor units stacked around the pillar, the pillar placed along a second direction orthogonal to the first direction, and wherein a capacitor unit of the one or more capacitor units includes: a first electrode surrounding and coupled to the pillar, a dielectric layer surrounding the first electrode, and a second electrode surrounding the dielectric layer.
Example 22 may include the computing device according to Example 21 and/or some other examples herein, wherein the transistor is part of a processor.
Example 23 may include the computing device according to Example 21 and/or some other examples herein, wherein the three-dimensional capacitor is coupled to the transistor through a power rail located at a back end of line of the semiconductor device.
Example 24 may include the computing device according to Example 21 and/or some other examples herein, wherein the three-dimensional capacitor is located in an interposer.
Example 25 may include the computing device according to Example 21 and/or some other examples herein, wherein the computing device is a wearable device or a mobile computing device, and the wearable device or mobile computing device includes an antenna coupled with one or more of a memory device, a touch screen controller, a display, a battery, a processor, an audio codec, a video codec, a power amplifier, a global positioning system (GPS) device, a compass, a Geiger counter, an accelerometer, a gyroscope, a speaker, or a camera.
The various embodiments may include any suitable combination of the above-described embodiments, including alternative (or) implementations of embodiments that are described above in conjunctive form (and) (for example, "and" may be "and/or").
In addition, some embodiments may include one or more articles of manufacture (e.g., non-transitory computer-readable media) having instructions stored thereon that, when executed, cause the actions described above. In addition, some embodiments may include devices or systems having any suitable means for carrying out the various operations of the above-described embodiments.
The above description of the illustrated embodiments, including what is described in the abstract, is not intended to be exhaustive or to limit the embodiments of the present disclosure to the precise forms disclosed. Although specific embodiments and examples are described herein for illustrative purposes, those skilled in the relevant art will recognize that various equivalent modifications are possible within the scope of the present disclosure.
These modifications can be made to the embodiments of the present disclosure in light of the above detailed description. The terms used in the appended claims should not be construed to limit the various embodiments of the present disclosure to the specific embodiments disclosed in the specification and claims. Rather, the scope is determined entirely by the appended claims, which are to be interpreted in accordance with established principles of claim interpretation. |
The techniques of this disclosure include deferred batching of incremental constant loads. Graphics APIs include the ability to use lightweight constants for use by shaders. A buffer that contains a snapshot of the current lightweight constants is allocated by a graphics processing unit (GPU) driver. This snapshot may provide a complete set of state to serve as a starting point. From then on, updates to the lightweight constants may be appended to this buffer in an incremental fashion, by inserting the update and increasing the size of the buffer, by a command processor on the GPU. This captures the incremental nature of the updates but removes the need for issuing them on every draw call; instead, the incremental updates may be batch processed when a live draw call is encountered. |
WHAT IS CLAIMED IS:
1. A method of operating a graphics processing unit (GPU), the method comprising: determining, during a binning operation, that a first command in a command buffer is configured to update one or more constant values in a state buffer, the state buffer comprising a copy of a current constant state; appending one or more constant updates from the first command to the state buffer based on the first command without updating the one or more constant values until a draw operation; determining that a second command in the command buffer is configured to perform the draw operation; determining whether the second command is a visible draw operation; and when the second command is the visible draw operation, batch processing the one or more constant updates in the state buffer.
2. The method of claim 1, further comprising: when the second command is the visible draw operation, loading the processed one or more constant updates to a hardware state via an indirect pointer.
3. The method of claim 1, wherein processing the one or more constant updates comprises processing the one or more constant updates in the order appended to the state buffer.
4. The method of claim 1, further comprising: when the second command is not the visible draw operation, processing a next command in the command buffer without updating the one or more constant values.
5. The method of claim 1, further comprising increasing the size of the state buffer based on appending the one or more constant values.
6. The method of claim 1, further comprising, when the second command is not the visible draw operation, bypassing the processing of the one or more constant updates.
7. The method of claim 1, wherein batch processing the one or more constant updates in the state buffer is based on the determination that the second command is the visible draw operation.
8. The method of claim 1, wherein the binning operation comprises a binning operation of a first tile in a plurality of tiles of an image to be rendered by the GPU.
9. The method of claim 8, further comprising: appending one or more constant updates from one or more third commands to the state buffer without updating the one or more constant values; and at the completion of the binning operation of the first tile in the plurality of tiles, discarding the appended one or more constant updates that have not been batch processed.
10. The method of claim 1, further comprising: determining, prior to retrieving the second command and during the binning operation, that a third command in the command buffer is configured to update the one or more constant values in the state buffer; bypassing updating the one or more constant values; and appending one or more constant updates from the third command to the state buffer based on the third command.
11. The method of claim 1, wherein batch processing comprises, only when the second command is the visible draw operation, batch processing the one or more constant updates in the state buffer.
12. The method of claim 1, wherein the one or more constant values comprise constant values mapped directly to a hardware resource on the GPU.
13.
An apparatus for processing data, the apparatus comprising: a graphics processing unit (GPU), the GPU comprising a command buffer, a state buffer, and a command processor, wherein the command processor is configured to: determine, during a binning operation, that a first command in the command buffer is configured to update one or more constant values in the state buffer, the state buffer comprising a copy of a current constant state; append one or more constant updates from the first command to the state buffer based on the first command without updating the one or more constant values until a draw operation; determine that a second command in the command buffer is configured to perform the draw operation; determine whether the second command is a visible draw operation; and when the second command is the visible draw operation, batch process the one or more constant updates in the state buffer.
14. The apparatus of claim 13, wherein the command processor is further configured to, when the second command is the visible draw operation, load the processed one or more constant updates to a hardware state via an indirect pointer.
15. The apparatus of claim 13, wherein the command processor configured to process the one or more constant updates comprises the command processor configured to process the one or more constant updates in the order appended to the state buffer.
16. The apparatus of claim 13, wherein the command processor is further configured to, when the second command is not the visible draw operation, process a next command in the command buffer without an update of the one or more constant values.
17. The apparatus of claim 13, further comprising a central processing unit (CPU) comprising a GPU driver, the GPU driver configured to increase the size of the state buffer based on appending the one or more constant values.
18. The apparatus of claim 13, wherein the command processor is further configured to, when the second command is not the visible draw operation, bypass the processing of the one or more constant updates.
19. The apparatus of claim 13, wherein batch processing the one or more constant updates in the state buffer is based on the determination that the second command is the visible draw operation.
20. The apparatus of claim 13, wherein the binning operation comprises a binning operation of a first tile in a plurality of tiles of an image to be rendered by the GPU.
21. The apparatus of claim 20, wherein the command processor is further configured to: append one or more constant updates from one or more third commands to the state buffer without updating the one or more constant values; and at the completion of the binning operation of the first tile in the plurality of tiles, discard the appended one or more constant updates that have not been batch processed.
22. The apparatus of claim 13, wherein the command processor is further configured to: determine, prior to retrieving the second command and during the binning operation, that a third command in the command buffer is configured to update the one or more constant values in the state buffer; bypass the update of the one or more constant values; and append one or more constant updates from the third command to the state buffer based on the third command.
23. The apparatus of claim 13, wherein the command processor configured to batch process comprises the command processor configured to, only when the second command is the visible draw operation, batch process the one or more constant updates in the state buffer.
24.
The apparatus of claim 13, wherein the one or more constant values comprise constant values mapped directly to a hardware resource on the GPU.
25. An apparatus configured to operate a graphics processing unit (GPU), the apparatus comprising: means for determining, during a binning operation, that a first command in a command buffer is configured to update one or more constant values in a state buffer, the state buffer comprising a copy of a current constant state; means for appending one or more constant updates from the first command to the state buffer based on the first command without updating the one or more constant values until a draw operation; means for determining that a second command in the command buffer is configured to perform the draw operation; means for determining whether the second command is a visible draw operation; and means for batch processing the one or more constant updates in the state buffer when the second command is the visible draw operation.
26. The apparatus of claim 25, further comprising: means for loading the processed one or more constant updates to a hardware state via an indirect pointer when the second command is the visible draw operation.
27. The apparatus of claim 25, further comprising: means for processing a next command in the command buffer without updating the one or more constant values when the second command is not the visible draw operation.
28. The apparatus of claim 25, wherein the means for batch processing the one or more constant updates in the state buffer is based on the determination that the second command is the visible draw operation.
29. The apparatus of claim 25, wherein the one or more constant values comprise constant values mapped directly to a hardware resource on the GPU.
30. A non-transitory computer-readable storage medium including instructions stored thereon that, when executed, cause at least one processor to: determine, during a binning operation, that a first command in a command buffer is configured to update one or more constant values in a state buffer, the state buffer comprising a copy of a current constant state; append one or more constant updates from the first command to the state buffer based on the first command without updating the one or more constant values until a draw operation; determine that a second command in the command buffer is configured to perform the draw operation; determine whether the second command is a visible draw operation; and when the second command is the visible draw operation, batch process the one or more constant updates in the state buffer. |
DEFERRED BATCHING OF INCREMENTAL CONSTANT LOADS
TECHNICAL FIELD
[0001] The present disclosure relates to graphics processing.
BACKGROUND
[0002] Computing devices often utilize a graphics processing unit (GPU) to accelerate the rendering of graphics data for display. Such computing devices may include, e.g., computer workstations, mobile phones such as so-called smartphones, embedded systems, personal computers, tablet computers, and video game consoles. GPUs typically execute a graphics processing pipeline that includes a plurality of processing stages which operate together to execute graphics processing commands. A host central processing unit (CPU) may control the operation of the GPU by issuing one or more graphics processing commands to the GPU via, e.g., an application programming interface (API). Modern-day CPUs are typically capable of concurrently executing multiple applications, each of which may need to utilize the GPU during execution.
SUMMARY
[0003] The techniques of this disclosure include deferred batching of incremental constant loads. Specifically, graphics APIs include lightweight constants (also known as push constants) for use by shaders. A buffer is allocated by a graphics driver on a central processing unit (CPU), where the buffer contains a snapshot of the current lightweight constants. The snapshot may provide a complete set of state to serve as a starting point. From then on, updates to the lightweight constants may be appended to this buffer, by the graphics driver, in an incremental fashion by inserting the update and increasing the size of the buffer, without needing to perform the updates at the time an update command is received. This may effectively capture the incremental nature of the updates, but removes the need for issuing them on every draw call; instead, the incremental updates may be batch processed when a live (e.g., visible) draw call is encountered.
For example, processing time is not wasted on immediately performing the updates, especially if the updates are for pixels that are ultimately not visible.
[0004] In one example of this disclosure, a method of operating a graphics processing unit (GPU) includes: determining, during a binning operation, that a first command in a command buffer is configured to update one or more constant values in a state buffer, the state buffer comprising a copy of a current constant state; appending one or more constant updates from the first command to the state buffer based on the first command without updating the one or more constant values until a draw operation; determining that a second command in the command buffer is configured to perform the draw operation; determining whether the second command is a visible draw operation; and when the second command is the visible draw operation, batch processing the one or more constant updates in the state buffer.
[0005] In another example, an apparatus for processing data includes: a graphics processing unit (GPU), the GPU comprising a command buffer, a state buffer, and a command processor, wherein the command processor is configured to: determine, during a binning operation, that a first command in the command buffer is configured to update one or more constant values in the state buffer, the state buffer comprising a copy of a current constant state; append one or more constant updates from the first command to the state buffer based on the first command without updating the one or more constant values until a draw operation; determine that a second command in the command buffer is configured to perform the draw operation; determine whether the second command is a visible draw operation; and when the second command is the visible draw operation, batch process the one or more constant updates in the state buffer.
[0006] In another example, an apparatus configured to operate a graphics processing unit (GPU) includes: means for determining, during a binning operation, that a first command in a command buffer is configured to update one or more constant values in a state buffer, the state buffer comprising a copy of a current constant state; means for appending one or more constant updates from the first command to the state buffer based on the first command without updating the one or more constant values until a draw operation; means for determining that a second command in the command buffer is configured to perform the draw operation; means for determining whether the second command is a visible draw operation; and means for batch processing the one or more constant updates in the state buffer when the second command is the visible draw operation.
[0007] In another example, a non-transitory computer-readable storage medium includes instructions stored thereon that, when executed, cause at least one processor to: determine, during a binning operation, that a first command in a command buffer is configured to update one or more constant values in a state buffer, the state buffer comprising a copy of a current constant state; append one or more constant updates from the first command to the state buffer based on the first command without updating the one or more constant values until a draw operation; determine that a second command in the command buffer is configured to perform the draw operation; determine whether the second command is a visible draw operation; and when the second command is the visible draw operation,
batch process the one or more constant updates in the state buffer.
[0008] The details of one or more aspects of the present disclosure are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the present disclosure will be apparent from the description and drawings, and from the claims.
BRIEF DESCRIPTION OF DRAWINGS
[0009] FIG. 1 is a block diagram illustrating an example computing device that may be configured to implement one or more aspects of this disclosure.
[0010] FIG. 2 is a conceptual diagram illustrating an exemplary operation of the command buffer 46 and state buffer 48 of FIG. 1, according to aspects of this disclosure.
[0011] FIG. 3 is a flowchart illustrating an example method of deferred batching of constants according to aspects of the present disclosure.
[0012] FIG. 4 is a flowchart illustrating an example method according to aspects of the present disclosure.
DETAILED DESCRIPTION
[0013] Graphics application programming interfaces (APIs), such as the Khronos Group™ Vulkan® API, include lightweight constants for use by shaders. A shader is a program that executes on a graphics processing unit (GPU) and causes the GPU to perform various operations. Lightweight constants, also called push constants, may include constants that are mapped directly to registers, without overhead for a GPU driver, executing on a central processing unit (CPU), to write to these constants. These constants are accessible by shaders and include, in some examples, uniform values that are stored within the command buffer and may be accessed from the shaders similar to a single global uniform buffer.
[0014] When rendering graphics data, various graphics processing techniques perform graphics processing in two passes. A first pass is referred to as a binning pass, in which a GPU determines which primitives belong to which bin (also called a tile) and which primitives are visible. In a second pass, referred to as a rendering pass, the GPU renders each bin sequentially based on the determination of which primitives belong to which bin and the visibility of the primitives.
[0015] Lightweight constants may be designed to be incremental in nature and may be difficult and inefficient to use in a binning architecture, because the stream of incremental changes may have to be executed for each bin, regardless of whether draws in the bin are visible (e.g., live) or not (e.g., dead). In a binning architecture, an image frame is divided into bins, and the GPU processes each bin. A live draw in a bin is a draw call that renders pixels that are visible, and a dead draw in a bin is a draw call that renders pixels that are not visible.
[0016] The techniques of this disclosure include a graphics processing unit (GPU) that may group register and other state writes and load them to the hardware via an indirect pointer. In an exemplary binning architecture, this indirect pointer load may be deferred until a visible draw call is encountered in the command stream. Deferred batching of incremental constant loads may leverage this functionality to allow accumulation of incremental constant writes on top of a snapshot of the current lightweight constant state. A buffer may be allocated that contains a snapshot of the current lightweight constants. In one example, the graphics driver places all register write commands in a large command buffer. In another example, the graphics driver can allocate smaller, individual buffers that can be loaded with a subset of the register state (e.g., grouped by function). The graphics driver then places an indirect pointer to these smaller buffers in the main command buffer. When the command processor (CP) in the GPU hardware consumes the main command buffer, the CP stores these indirect pointers until a live draw call is encountered, at which point the register programming in them is executed. For dead draw calls, the only overhead is storing away the indirect pointers.
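The store-pointers-then-execute behavior just described can be summarized with a short model. The following is a minimal C++ sketch under the assumptions above, not the actual command processor implementation; the type and function names (IndirectStateGroup, CommandProcessorModel, and so on) are hypothetical.

    #include <cstdint>
    #include <vector>

    // A grouped set of register writes, referenced indirectly from the main
    // command buffer as packed (address, data) dword pairs.
    struct IndirectStateGroup {
        const uint32_t* commands;
        uint32_t        dwordCount;
    };

    class CommandProcessorModel {
    public:
        // Indirect state load: just store the pointer; nothing is executed yet.
        void onIndirectStateLoad(const IndirectStateGroup& group) {
            pending_.push_back(group);
        }

        // Draw call: dead draws cost only the stored pointers; live draws
        // execute all deferred register programming before drawing.
        void onDraw(bool visibleInCurrentBin) {
            if (!visibleInCurrentBin) {
                return;
            }
            for (const IndirectStateGroup& group : pending_) {
                executeRegisterWrites(group);
            }
            pending_.clear();
        }

    private:
        void executeRegisterWrites(const IndirectStateGroup& group) {
            for (uint32_t i = 0; i + 1 < group.dwordCount; i += 2) {
                writeRegister(group.commands[i], group.commands[i + 1]);
            }
        }
        void writeRegister(uint32_t address, uint32_t data) {
            (void)address; (void)data; // stand-in for the hardware register write
        }
        std::vector<IndirectStateGroup> pending_;
    };

The design keeps the per-dead-draw cost to a pointer push, which is the property the disclosure relies on when many draws in a bin are not visible.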
[0017] From then on, updates to the lightweight constants may be appended to this buffer in an incremental fashion by inserting the update and increasing the size of the buffer. This may effectively capture the incremental nature of the updates, but may remove the need for issuing them on every draw call, whether live (the draw call will be visible) or dead (the draw call will not be visible).
[0018] The techniques of the present disclosure may represent a significant advantage over alternatives such as capturing a snapshot of the entire lightweight constant state for each draw call, or processing constant updates in an immediate fashion, where they are executed for every draw regardless of whether a particular draw is visible. These alternatives may incur unnecessary processing for draw calls that do not affect what is displayed.
[0019] FIG. 1 is a block diagram illustrating an example computing device 10 that may be configured to implement one or more aspects of this disclosure. Computing device 10 may be a computing device including, but not limited to, video devices, media players, set-top boxes, wireless handsets such as mobile telephones and so-called smartphones, personal digital assistants (PDAs), desktop computers, laptop computers, gaming consoles, video conferencing units, tablet computing devices, and the like.
[0020] In the example of FIG. 1, computing device 10 includes central processing unit (CPU) 12, GPU 14, and system memory 16. Computing device 10 also includes transceiver module 19, user interface 20, and display 21. It should be understood, however, that other examples of computing device 10 may include more, fewer, or an alternative arrangement of components than those shown.
[0021] For example, computing device 10 may include a speaker and a microphone, neither of which are shown in FIG. 1, to effectuate telephonic communications in examples where computing device 10 is a mobile wireless telephone, or a speaker where computing device 10 is a media player. Computing device 10 may also include a video camera. In another example, certain units such as transceiver module 19 or a display processor associated with display 21 may be part of the same integrated circuit (IC) as CPU 12 and/or GPU 14, may be external to the IC or ICs that include CPU 12 and/or GPU 14, or may be formed in an IC that is external to the IC that includes CPU 12 and/or GPU 14.
[0022] CPU 12 may comprise a general-purpose or a special-purpose processor that controls operation of computing device 10. For example, CPU 12 may include one or more processors, such as one or more microprocessors, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), digital signal processors (DSPs), or other equivalent integrated or discrete logic circuitry. As described in greater detail below, CPU 12 may issue one or more graphics rendering commands to GPU 14 to cause GPU 14 to render graphics data.
[0023] GPU 14 may include a programmable pipeline of processing components having a highly parallel structure that provides efficient processing of complex graphics-related operations. GPU 14 may include one or more processors, such as one or more microprocessors, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), digital signal processors (DSPs), or other equivalent integrated or discrete logic circuitry. GPU 14 may also include one or more processor cores, such that GPU 14 may be referred to as a multi-core processor. GPU 14 may, in some instances, be integrated into a motherboard of computing device 10. In other instances, GPU 14 may be present on a graphics card that is installed in a port in the motherboard of computing device 10, or may be otherwise incorporated within a peripheral device configured to interoperate with computing device 10.
[0024] GPU 14 may output rendered data to system memory 16, e.g., frame buffer 18 of system memory 16. System memory 16 may store an operating system (not shown) that controls the operation of components of computing device 10. System memory 16 may also be used by software or applications (as described below) executed by computing device 10 to store information during program execution. System memory 16 may include a computer-readable storage medium or computer-readable storage device. In some examples, system memory 16 may include one or more of a short-term memory or a long-term memory. System memory 16 may include, for example, random access memories (RAM), dynamic random access memories (DRAM), static random access memories (SRAM), cache memory, magnetic hard discs, optical discs, flash memories, or forms of electrically programmable memories (EPROM) or electrically erasable and programmable memories (EEPROM).
[0025] Frame buffer 18 stores destination pixels for GPU 14. Each destination pixel may be associated with a unique screen pixel location. In some examples, frame buffer 18 may store color components and a destination alpha value for each destination pixel. For example, frame buffer 18 may store Red, Green, Blue, Alpha (RGBA) components for each pixel, where the "RGB" components correspond to color values and the "A" component corresponds to a destination alpha value (e.g., a transparency value that may be used in compositing, which may also be referred to as opacity). Although frame buffer 18 and system memory 16 are illustrated as being separate memory units, in other examples, frame buffer 18 may be part of system memory 16.
[0026] Transceiver module 19 may include circuitry to allow wireless or wired communication between computing device 10 and another device or a network. Transceiver module 19 may include modulators, demodulators, amplifiers, and other such circuitry for wired or wireless communication.
[0027] User interface 20 may allow a user to provide input to computing device 10. Examples of user interface 20 include, but are not limited to, a trackball, a mouse, a keyboard, and other types of input devices. User interface 20 may also be a touch screen and may be incorporated as a part of display 21.
[0028] Display 21 may display image content generated by GPU 14, e.g., such as rendered graphics data from frame buffer 18. Display 21 may be a liquid crystal display (LCD), an organic light emitting diode (OLED) display, a cathode ray tube (CRT) display, a plasma display, or another type of display device.
Display 21 may include a display processor that retrieves images from system memory 16 and outputs values that cause the pixels of display 21 to illuminate to display the image.
[0029] In operation, CPU 12 may execute one or more software applications 24. Examples of software applications 24 include applications that utilize the functionality of GPU 14. For example, software applications 24 may include a graphical user interface (GUI) application, an operating system, a portable mapping application, a computer-aided design program for engineering or artistic applications, a video game application, or another type of software application that uses 2D or 3D graphics.
[0030] Software applications 24 may include one or more drawing instructions that instruct GPU 14 to render a graphical user interface (GUI) and/or a graphics scene. For example, the drawing instructions may include instructions that define a set of one or more graphics primitives to be rendered by GPU 14. In some examples, the drawing instructions may, collectively, define all or part of a plurality of windowing surfaces used in a GUI. In additional examples, the drawing instructions may, collectively, define all or part of a graphics scene that includes one or more graphics objects within a model space or world space defined by the application.
[0031] Software applications 24 may use graphics application programming interface (API) 26 to invoke GPU driver 28. Example graphics APIs include the Khronos Group™ Vulkan® API, the Open Graphics Library (OpenGL®) API, the Open Graphics Library Embedded Systems (OpenGL ES) API, the Direct3D API, the X3D API, the RenderMan API, the WebGL API, the Open Computing Language (OpenCL™), RenderScript or other heterogeneous computing APIs, or any other public or proprietary standard graphics or compute API. Graphics API 26 may support lightweight or push constants. Lightweight constants map directly to a hardware resource and thus avoid the memory allocation and tracking required with a normal constant buffer. They include a bank of values writable via the API and accessible in shaders. Push constants allow the application to set values used in shaders without creating buffers or modifying and binding descriptor sets for each update. Lightweight constants may include floating point values. In other examples, lightweight constants include integer values.
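As a concrete illustration of lightweight constants, the following fragment shows how an application might supply push constants through the Vulkan API, which exposes vkCmdPushConstants for this purpose. The PushData layout and the recordTintedDraw helper are hypothetical, and an already-created command buffer and pipeline layout are assumed; this is a usage sketch, not part of the disclosed driver technique itself.

    #include <vulkan/vulkan.h>

    // Hypothetical push-constant block; it must match the shader-side
    // `layout(push_constant) uniform Push { vec4 tint; }` declaration.
    struct PushData {
        float tint[4];
    };

    // Record a push-constant update followed by a draw. No buffer allocation
    // or descriptor set binding is required for the constants themselves.
    void recordTintedDraw(VkCommandBuffer cmd, VkPipelineLayout layout,
                          const PushData& data, uint32_t vertexCount)
    {
        vkCmdPushConstants(cmd, layout, VK_SHADER_STAGE_FRAGMENT_BIT,
                           /*offset=*/0, sizeof(PushData), &data);
        vkCmdDraw(cmd, vertexCount, /*instanceCount=*/1,
                  /*firstVertex=*/0, /*firstInstance=*/0);
    }

Each such vkCmdPushConstants call is exactly the kind of small, incremental constant write that the deferred batching technique described below accumulates rather than executes immediately.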
[0032] GPU driver 28 may issue one or more commands to GPU 14 for rendering one or more graphics primitives into displayable graphics images. For example, software applications 24 may invoke GPU driver 28, via graphics API 26, to provide primitive definitions to GPU 14. In some instances, GPU driver 28 may provide primitive definitions to GPU 14 in the form of a list of drawing primitives, e.g., triangles, rectangles, triangle fans, triangle strips, etc. The primitive definitions may include vertex specifications that specify one or more vertices associated with the primitives to be rendered. The vertex specifications may include positional coordinates for each vertex and, in some instances, other attributes associated with the vertex, such as, e.g., color coordinates, normal vectors, and texture coordinates. The primitive definitions may also include primitive type information (e.g., triangle, rectangle, triangle fan, triangle strip, etc.), scaling information, rotation information, and the like. Hence, based on the instructions issued by software applications 24 to GPU driver 28, GPU driver 28 may formulate one or more commands that specify one or more operations for GPU 14 to perform in order to render the primitive.
[0033] In some examples, GPU driver 28 may include a compiler configured to compile the commands as one or more shader programs, and to download the compiled shader programs to GPU 14. The compiled shader programs may include one or more instructions that control the operation of shader units 32 within GPU 14. The shader programs may be written in a high-level shading language, such as, e.g., the OpenGL Shading Language (GLSL), the High-Level Shading Language (HLSL), the C for Graphics (Cg) shading language, an OpenCL C kernel, etc.
[0034] GPU 14 includes shader units 32 for executing the shader programs and may perform a variety of shading operations for rendering graphics. For example, shader units 32 (also called shader cores) may execute the shader programs to implement a variety of shader stages (which may collectively be referred to as a shader pipe) of a graphics processing pipeline. The shader programs (or simply shaders) may include vertex shader programs that may be executed by shader units 32 to perform the functions of a vertex shader stage, hull shader programs that may be executed by shader units 32 to perform the functions of a hull shader stage, domain shader programs that may be executed by shader units 32 to perform the functions of a domain shader stage, geometry shader programs that may be executed by shader units 32 to perform the functions of a geometry shader stage, and/or pixel shader programs that may be executed by shader units 32 to perform the functions of a pixel shader stage.
[0035] In the example of FIG. 1, shader units 32 each have shader processors 34. Shader processors 34 may include a plurality of processing elements for operating on multiple vertices or pixels in a parallel manner. For example, shader processors 34 may each include one or more components for fetching and decoding operations, one or more arithmetic logic units (ALUs) for carrying out arithmetic calculations, one or more memories, caches, and registers.
[0036] GPU 14 includes command processor 52. Command processor 52 may fetch instructions from locations in system memory 16 identified by GPU driver 28 and store those instructions in command buffer 46. Command processor 52 may process the instructions found in command buffer 46.
[0037] GPU 14 includes graphics memory 42, which contains constant buffer 44, command buffer 46, and state buffer 48. GPU 14 also includes register file 50. In some examples, command buffer 46 is located within system memory 16. Command buffer 46 may store commands to be processed by command processor 52 and executed by GPU 14, including at shader units 32, such as draw commands. GPU driver 28 stores commands in system memory 16. GPU driver 28 then instructs GPU 14 when to retrieve the commands, from where to retrieve the commands, and when to execute the commands. GPU driver 28 may instruct GPU 14 to retrieve commands and store these commands in command buffer 46. State buffer 48 comprises commands that either write registers or write resource descriptors (used for textures, samplers, shaders, etc.). State buffer 48 contains commands to set the current state of the hardware (e.g., GPU 14). In some examples, state buffer 48 does not include commands that perform actions (e.g., draws, etc.). State buffer 48 may store a copy of the state of all or part of register file 50. The copy of the state of all or part of register file 50 may include sections corresponding to the lightweight constants stored in register file 50. Register file 50 may store one or more lightweight constants accessible by shader units 32.
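The snapshot-plus-append organization of state buffer 48 can be modeled as follows. This is a minimal driver-side C++ sketch under the assumptions above; the ConstantWrite and StateBufferModel names are hypothetical, and a real driver would manage the allocation in GPU-accessible memory rather than in a std::vector.

    #include <cstdint>
    #include <vector>

    // One hardware command to load a single lightweight constant:
    // an (address, data) pair, as described for the "Constant X" entries.
    struct ConstantWrite {
        uint32_t registerAddress; // which constant register to load
        uint32_t value;           // the value to load into it
    };

    struct StateBufferModel {
        std::vector<ConstantWrite> entries; // snapshot first, appends after
        size_t snapshotSize = 0;            // entries in the initial snapshot
        size_t capacity = 0;                // pre-allocated room, possibly unused

        // Append an incremental update without touching the hardware state;
        // the valid-data size simply grows by one entry.
        bool appendUpdate(uint32_t registerAddress, uint32_t value) {
            if (entries.size() >= capacity) {
                // Out of room: a real driver would allocate a fresh state
                // buffer seeded with a new snapshot and begin again.
                return false;
            }
            entries.push_back({registerAddress, value});
            return true;
        }
    };

Because the appended entries use the same (address, data) encoding as the snapshot, the command processor can consume the whole buffer in order without distinguishing snapshot entries from deferred updates.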
State buffer 48 may store a copy of the state of all or part of register file 50. The copy of the state of all or part of register file 50 may include sections corresponding to the lightweight constants stored in register file 50. Register file 50 may store one or more lightweight constants accessible by shader units 32. [0038] GPU driver 28 may allocate memory in state buffer 48 for the initial snapshot of constant data, and GPU driver 28 may allocate additional memory for updates. This additional memory may be allocated but remain unused. State buffer 48 may receive and store a copy of the state of lightweight constants in GPU 14, from register file 50, and may also store constant updates that have not been processed. For example, command processor 52 may determine that a command in command buffer 46 is configured to update one or more constant values. When a command stored in command buffer 46 that alters the state of a lightweight constant is encountered, e.g., by shader processors 34 of shader units 32, GPU driver 28 may increase the size of the used memory in state buffer 48 (as opposed to, e.g., the physical size of the total allocation), and the update may be appended to state buffer 48 for later processing. When GPU driver 28 appends constants to state buffer 48, GPU driver 28 may increase the size of the valid data (by, e.g., the size of the constant update command(s)). At some point, the extra room may be used up, and GPU driver 28 may create a new state buffer and begin the process again. Command processor 52 may determine that a command in command buffer 46 is a draw command. When the draw command is encountered, e.g., by shader processors 34 of shader units 32, a determination is made as to whether the command is live (e.g., visible) or dead (e.g., not visible). Command processor 52 may determine whether the command is live or dead by reading the visibility stream for the current bin. [0039] In some examples, the scene to be drawn is broken up into multiple bins, and rendered one bin at a time in an on-chip memory buffer, e.g., in graphics memory 42, to save memory bandwidth. When drawing a scene, GPU driver 28 sends the rendering commands to GPU 14 in multiple passes. The first pass is a "visibility" pass. GPU 14 does not draw the scene, but instead computes which draw commands and triangles are visible in each bin. This information is stored in a per-bin visibility stream in graphics memory 42. Once the visibility pass is complete, GPU driver 28 then sends the rendering commands again, one pass per bin. Command processor 52 reads the visibility stream for the current bin as it processes command buffer 46. Only the draw commands and triangles that are visible in the current bin are executed on each pass. [0040] If the command is live, command processor 52 may process all or substantially all of the deferred constant updates (in the order they were received in the stack in state buffer 48). The processing of the updates may be performed in one or more batches. Following processing of the constant updates, the updated values may be committed by loading them to a hardware state (e.g., register file 50, via, e.g., an indirect pointer), and a new snapshot may be saved in state buffer 48 without any appended updates. If the draw command is dead, the constant update(s) may be appended to state buffer 48 for processing at a potentially later time (e.g., without updating the constant values).
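The live/dead handling just described amounts to a small decision procedure: append on update, flush only on a live draw. The C++ sketch below, reusing the hypothetical StateBuffer and RegisterFile types from the previous sketch, is one purely illustrative way to express it; it is not the patent's implementation:

// Append updates when a constant-update command is seen; the used size of the
// state buffer grows, but nothing is written to the register file yet.
void onConstantUpdate(StateBuffer& sb, const std::vector<ConstantUpdate>& updates) {
    for (const ConstantUpdate& u : updates) {
        sb.pendingUpdates.push_back(u);
        ++sb.used;                             // grow the valid-data size only
    }
}

// On a draw, consult the visibility result for the current bin.  Only a live
// (visible) draw forces the deferred updates to be batch processed.
void onDraw(StateBuffer& sb, RegisterFile& rf, bool liveInCurrentBin) {
    if (!liveInCurrentBin) {
        return;                                // dead draw: leave updates deferred
    }
    // Live draw: process deferred updates in the order they were appended.
    for (const ConstantUpdate& u : sb.pendingUpdates) {
        rf.constants[u.address] = u.data;      // load to hardware state
    }
    sb.pendingUpdates.clear();
    sb.snapshot = rf.constants;                // fresh snapshot, no appended updates
    sb.used = sb.snapshot.size();
    // ... the draw itself would be issued here ...
}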
In an example, if the draw command is determined to be live, but there have been no updates to constant values needed to process the draw command, command processor 52 of GPU 14 may not process constant updates (e.g., may bypass the processing of the constant updates) and may allow them to continue to accumulate in state buffer 48. [0041] Because the updating of the constants is deferred, in the example where there is no live draw call that uses these lightweight constant values, command processor 52 of GPU 14 may never need to process the foregoing updates in state buffer 48, saving processing resources. These unprocessed updates may be discarded or overwritten. [0042] FIG. 2 is a conceptual diagram illustrating an exemplary operation of the command buffer 46 and state buffer 48 of FIG. 1 in greater detail. In the example of FIG. 2, command buffer 46 may be populated with one or more commands 60A-60H (collectively "commands 60") to be executed on GPU 14. In a binning architecture, each bin (or tile) may process draw calls and constant updates independently. The disclosed techniques may allow processing of constant updates to occur only when a live draw command in the current bin requires them. [0043] GPU 14 (e.g., via command processor 52) may process the next command in command buffer 46, command 60A. Command 60A includes a deferred state load mechanism that, when processed, loads a snapshot of lightweight constant values from register file 50, including constants 0-7. The size field may include the number of constants (e.g., dwords) in the state buffer 48 and, as shown, is equal to eight. Further, command 60A includes an address of where the deferred state constants will be loaded. As shown in FIG. 2, this may be an address of state buffer 48. GPU 14 may then process the next command in command buffer 46, command 60B. Command 60B, when processed, is a draw command to render a graphical element. Command processor 52 of GPU 14 may determine whether the draw command is live (e.g., visible) or dead (e.g., not visible). GPU 14 may determine that there are no deferred constant updates to process. [0044] In an example, state buffer 48 only includes lightweight constants. Other state information may be stored in other state buffers. In such an example, lightweight constants are not mixed with other state information in a single state buffer, as doing so may complicate the append mechanism. Each of the entries in state buffer 48 labeled "Constant X" in FIG. 2 includes a hardware command to load that constant, an (address, data) pair. Command processor 52 executes these commands in order as it processes state buffer 48. In other examples, state information may be mixed in one or more buffers. [0045] GPU 14 may proceed to the next command in command buffer 46, command 60C. Command 60C includes a deferred state load mechanism that, when processed, increases the size of state buffer 48 to eleven (from eight as updated by command 60A). A pointer to (or address of) state buffer 48 remains unchanged. GPU 14 may then process the next command in command buffer 46, command 60D. Command 60D, when processed, is a draw command to render a graphical element. GPU 14 may determine whether draw command 60D is live (e.g., visible) or dead (e.g., not visible). If draw command 60D is live, GPU 14 may process the update commands by altering the values of the constant values in register file 50. In some examples, GPU 14 may reload a snapshot of the state of the lightweight constants.
If the draw call is dead, GPU 14 may append updates to constants 1, 3, and 6 to state buffer 48. [0046] GPU 14 may process the next command in command buffer 46, command 60E. Command 60E includes a deferred state load mechanism that, when processed, increases the size of state buffer 48 to fourteen (from eleven as updated by command 60C). A pointer to (or address of) state buffer 48 remains unchanged. GPU 14 may then process the next command in command buffer 46, command 60F. Command 60F, when processed, is a draw command to render a graphical element. GPU 14 may determine whether draw command 60F is live (e.g., visible) or dead (e.g., not visible). If draw command 60F is live, GPU 14 may process the update command(s) by altering the values of the constant values in register file 50. In some examples, GPU 14 may reload a snapshot of the state of the lightweight constants. If the draw call is dead, GPU 14 may append updates to constants 1, 2, and 7 to state buffer 48. [0047] GPU 14 may process the next command in command buffer 46, command 60G. Command 60G includes a deferred state load mechanism that, when processed, increases the size of state buffer 48 to eighteen (from fourteen as updated by command 60E). A pointer to (or address of) state buffer 48 remains unchanged. GPU 14 may then process the next command in command buffer 46, command 60H. Command 60H, when processed, is a draw command to render a graphical element. GPU 14 may determine whether draw command 60H is live (e.g., visible) or dead (e.g., not visible). If draw command 60H is live, GPU 14 may process the update command(s) by altering the values of the constant values in register file 50. In some examples, GPU 14 may reload a snapshot of the state of the lightweight constants. If the draw call is dead, GPU 14 may append updates to constants 3, 4, 5, and 6 to state buffer 48. [0048] As shown in FIG. 2, upon reaching a live draw command, GPU 14 may perform batched constant updates. For example, if draw 1 (command 60D) and draw 2 (command 60F) are not live, each of the constants to be updated is stored in state buffer 48 and remains deferred. Then, if draw 3 (command 60H) is live, GPU 14 may be configured to process each update (e.g., updates 1-3) sequentially, beginning with update 1, and store the results, e.g., in register file 50 (e.g., loaded into a hardware state via an indirect pointer). Thus, in the example shown in FIG. 2, constants 1, 3, and 6 will each be updated two separate times. In other examples, multiple updates for a single constant are combined, which allows the constant to be updated once. A clean snapshot of the lightweight constants may be retrieved from register file 50 into state buffer 48. If draw 3 (command 60H) is dead and is the final draw command for, e.g., the bin, all of the updates (e.g., updates 1-3) may be discarded without further processing. [0049] In some examples, updates to constants are interspersed with both live and dead draw commands in the same bin. In such examples, deferred constant updates may be processed multiple times for the bin as live draw commands are encountered. In such examples, there may be deferred constant updates remaining after the last live draw command is executed. This occurs, for example, where constant updates are added to the state buffer followed by one or more dead draw commands. Even in these examples, the remaining deferred constant updates may not be processed and may be discarded. [0050]
FIG. 3 is a flowchart illustrating an example method of deferred batching of constants according to aspects of the present disclosure. [0051] Command processor 52 of GPU 14 may process the next command in command buffer 46 (300). If the next command is an update to a lightweight constant value (302, constant update branch), command processor 52 of GPU 14 may review the update command in command buffer 46 (304). GPU 14 may append the one or more constant updates in the constant update command to state buffer 48 (306). Command processor 52 of GPU 14 may determine whether the current command is the last command in command buffer 46 (e.g., for the current bin) (316) and continue. [0052] If the next command is a draw command (302, draw command branch), command processor 52 may review the draw command in command buffer 46 (308). Command processor 52 of GPU 14 may determine whether the draw command is a visible (e.g., live) draw command. If the command is not a visible draw command (310, no branch), command processor 52 of GPU 14 may determine whether the current command is the last command in command buffer 46 (e.g., for the current bin) (316) and continue. If the command is a visible draw command (310, yes branch), command processor 52 of GPU 14 may batch process the one or more constant updates in state buffer 48 (312). The resulting updated values may be loaded to a hardware state (e.g., in register file 50) via, e.g., an indirect pointer. State buffer 48 may be updated with a new snapshot from register file 50 (314). Command processor 52 of GPU 14 may determine whether the current command is the last command in command buffer 46 (e.g., for the current bin) (316) and continue. [0053] If the next command is neither a constant update nor a draw command (302, no branch), command processor 52 of GPU 14 may determine whether the current command is the last command in command buffer 46 (e.g., for the current bin) (316). If the command is not the last command (316, no branch), command processor 52 may review the next command in command buffer 46 (300). If the command is the last command (316, yes branch), the method ends (318). After the method ends (318), GPU 14 may process another bin in an image. Therefore, there may be examples where deferred updates were appended to state buffer 48 (306) but never batch processed (312) because command processor 52 did not encounter a live draw command. In such examples, GPU 14 saves the processing time that would otherwise have been spent on the appended but unprocessed constant updates. [0054] FIG. 4 is a flowchart illustrating an example method according to aspects of the present disclosure. [0055] Command processor 52 of GPU 14 may be configured to determine that a first command in command buffer 46 during a binning operation is configured to update constant values (400). The command may include instructions to update the value of a constant, e.g., a lightweight constant. Rather than performing the update immediately and updating snapshot values in state buffer 48 or in register file 50, GPU 14 may append one or more of the constant updates from the first command to state buffer 48 (402). Appending the constant updates may include increasing the size of the state buffer based on the number of constant values updated. This operation may enlarge the size of state buffer 48 by the commensurate number of updates. [0056] Command processor 52 may determine that a second command in command buffer 46 is configured to perform a draw operation (404).
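Before continuing with steps 406-410 of FIG. 4, the per-bin loop of FIG. 3 (steps 300-318) can be summarized as a self-contained C++ sketch. This is an illustrative rendering only, with hypothetical names throughout; it is not the patented hardware behavior. It compiles as an ordinary program and prints the value committed by the live draw:

#include <cstdint>
#include <cstdio>
#include <vector>

struct Update { uint32_t address, data; };                // (address, data) pair
enum class Kind { ConstantUpdate, Draw, Other };
struct Command {
    Kind kind;
    std::vector<Update> updates;                          // for ConstantUpdate
    bool visible;                                         // for Draw, per bin
};

// Steps 300-318 of FIG. 3 for one bin: append constant updates (306), batch
// process them only when a visible draw is reached (312), then snapshot (314).
void processBin(const std::vector<Command>& commandBuffer,
                std::vector<uint32_t>& registerFile,      // hardware state
                std::vector<Update>& stateBuffer) {       // deferred updates
    for (const Command& cmd : commandBuffer) {            // (300)
        if (cmd.kind == Kind::ConstantUpdate) {           // (302/304)
            stateBuffer.insert(stateBuffer.end(),
                               cmd.updates.begin(), cmd.updates.end()); // (306)
        } else if (cmd.kind == Kind::Draw && cmd.visible) {             // (308/310)
            for (const Update& u : stateBuffer)                          // (312)
                registerFile[u.address] = u.data;
            stateBuffer.clear();                          // fresh snapshot (314)
        }                                                 // dead draw: stay deferred
    }                                                     // (316/318)
    // Updates still in stateBuffer here were never needed by a live draw;
    // they may be discarded without ever being processed.
}

int main() {
    std::vector<uint32_t> registerFile(8, 0);
    std::vector<Update> stateBuffer;
    std::vector<Command> commands = {
        {Kind::ConstantUpdate, {{1, 42}, {3, 7}}, false},
        {Kind::Draw, {}, false},                          // dead draw: defer
        {Kind::Draw, {}, true},                           // live draw: flush
    };
    processBin(commands, registerFile, stateBuffer);
    std::printf("constant 1 = %u\n", registerFile[1]);    // prints 42
}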
GPU 14 may determine whether the draw command is a visible draw operation or is non-visible (406). If the draw command is visible, GPU 14 may batch process the one or more constant updates in state buffer 48 (408). The batch processing may be performed in the order the updates were added to the state buffer. GPU 14 may load the processed one or more constant updates to a hardware state (e.g., in register file 50) via an indirect pointer (410). If the draw command is not visible, GPU 14 may bypass (e.g., skip the processing of) the constant updates in state buffer 48. At the completion of processing the first bin (or tile) of the image, GPU 14 may discard appended but not batch-processed constant updates. [0057] It should be understood that the techniques shown in FIGS. 3 and 4 are provided for purposes of illustration only. In other examples, the process may include more, fewer, or an alternative arrangement of steps than those shown. For example, as described above, constant updates may not be processed for every bin. [0058] In one or more examples, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media may include computer data storage media or communication media, including any medium that facilitates transfer of a computer program from one place to another. Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code, and/or data structures for implementation of the techniques described in this disclosure. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, cache memory, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. [0059] The code may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the terms "processor" and "processing unit," as used herein, may refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described herein.
In addition, in some aspects, the functionality described herein may be provided within dedicated hardware and/or software modules configured for encoding and decoding, or incorporated in a combined codec. Also, the techniques could be fully implemented in one or more circuits or logic elements.[0060] The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an integrated circuit (IC) or a set of ICs (i.e., a chip set). Various components, modules or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but do not necessarily require realization by different hardware units. Rather, as described above, various units may be combined in a codec hardware unit or provided by a collection of interoperative hardware units, including one or more processors as described above, in conjunction with suitable software and/or firmware.[0061] Various aspects of the disclosure have been described. These and other embodiments are within the scope of the following claims. |
The subject of the present disclosure is 'Varied Ball Ball-Grid Array (BGA) Package'. Embodiments disclosed herein include electronic packages. In an embodiment, the electronic package comprises a first substrate; a second substrate; and an array of interconnects electrically coupling the first substrate to the second substrate. In an embodiment, the array of interconnects comprises first interconnects, wherein the first interconnects have a first volume and a first material composition, and second interconnects, wherein the second interconnects have a second volume and a second material composition, and wherein the first volume is different than the second volume and/or the first material composition is different than the second material composition. |
1. An electronic package, comprising: a first substrate; a second substrate; and an array of interconnects electrically coupling the first substrate to the second substrate, wherein the array of interconnects comprises: first interconnects, wherein the first interconnects have a first volume and a first material composition; and second interconnects, wherein the second interconnects have a second volume and a second material composition, and wherein the first volume is different from the second volume and/or the first material composition is different from the second material composition.
2. The electronic package of claim 1, wherein the number of first interconnects is greater than the number of second interconnects.
3. The electronic package of claim 2, wherein the number of first interconnects is at least one hundred times the number of second interconnects.
4. The electronic package of claim 1, 2, or 3, wherein one or more corner positions in the array of interconnects are each populated with a second interconnect.
5. The electronic package of claim 1, 2, or 3, wherein one or more center positions in the array of interconnects are each populated with a second interconnect.
6. The electronic package of claim 1, 2, or 3, wherein the second interconnects are located at non-functionally critical (NCTF) pins, redundant power pins, redundant ground pins, or no-connect locations of the electronic package.
7. The electronic package of claim 1, 2, or 3, wherein the second interconnects each comprise: a core; and solder surrounding the core.
8. The electronic package of claim 7, wherein the core is a metallic material.
9. The electronic package of claim 7, wherein the core comprises copper or nickel.
10. The electronic package of claim 7, wherein the core is a polymer material.
11. The electronic package of claim 1, 2, or 3, wherein the first substrate is a package substrate, and wherein the second substrate is a board.
12. The electronic package of claim 1, 2, or 3, wherein the first substrate is a package substrate, and wherein the second substrate is an interposer.
13. The electronic package of claim 1, 2, or 3, wherein the first substrate is a die, and wherein the second substrate is a package substrate.
14. An electronic package, comprising: a package substrate having a first surface and a second surface opposite the first surface; a die attached to the first surface of the package substrate; an array of pads on the second surface of the package substrate; and a plurality of solder balls, wherein each solder ball is on one of the pads in the array of pads, and wherein the plurality of solder balls comprises: first solder balls; and second solder balls different from the first solder balls.
15. The electronic package of claim 14, wherein the second solder balls each comprise: a core; and solder surrounding the core.
16. The electronic package of claim 15, wherein the thickness of the solder surrounding the core is non-uniform.
17. The electronic package of claim 16, wherein the solder surrounding the core includes a fillet.
18. The electronic package of claim 14, 15, 16, or 17, wherein the height of the first solder balls is greater than the height of the second solder balls.
19. The electronic package of claim 14, 15, 16, or 17, wherein one or more corner pads of the array of pads are each covered by one of the second solder balls.
20. The electronic package of claim 14, 15, 16, or 17, wherein one or more center pads of the array of pads are each covered by one of the second solder balls.
21. The electronic package of claim 14, 15, 16, or 17, wherein the first solder balls have a first volume and a first material composition, wherein the second solder balls have a second volume and a second material composition, and wherein the first volume is different from the second volume and/or the first material composition is different from the second material composition.
22. An electronic system, comprising: a first substrate; a second substrate attached to the first substrate through a first-level array of interconnects; and a third substrate attached to the second substrate through a second-level array of interconnects, wherein at least one of the first-level array of interconnects and the second-level array of interconnects comprises: first interconnects; and second interconnects, wherein the number of first interconnects is greater than the number of second interconnects.
23. The electronic system of claim 22, wherein the first substrate is a die, wherein the second substrate is a package substrate, and wherein the third substrate is a printed circuit board (PCB).
24. The electronic system of claim 22 or 23, wherein the second interconnects each comprise: a core; and solder surrounding the core.
25. The electronic system of claim 22 or 23, wherein the second interconnects are located at corner positions and/or center positions of the first-level array of interconnects and/or the second-level array of interconnects. |
Varying Ball Grid Array (BGA) Package

Technical Field

Embodiments of the present disclosure relate to semiconductor devices, and more particularly to an interconnect architecture including first solder balls and second solder balls to provide improved yield during assembly.

Background

During the surface mount technology (SMT) process, the dynamic warpage of a flip chip ball grid array (BGA) package causes various defects. FIG. 1 is a cross-sectional illustration of a BGA package 100, which shows examples of some of the typical defects. The BGA package 100 may include a board 105, in which the package substrate 115 is attached to the board 105 through interconnections 125 between the pads 107 and 117. The die 120 may be attached to the package substrate 115. As shown, the warpage of the package substrate 115 may cause several defects. Defect 126 shows solder ball bridging (SBB). SBB defects occur when the solder balls are compressed, causing the interconnect width to extend beyond tolerance and coalesce with adjacent interconnects. Compression may result from increased warpage due to thinner substrate layer counts, from package shape changes due to the presence of large and thick reinforcements, and/or from increased per-bump weight due to increased die size and package form factor. Warpage can also cause other defects. For example, the defect 127 is a head-on-pillow (HoP) defect caused by the solder ball failing to coalesce with the solder paste. Defect 128 is a non-contact open (NCO), and defect 129 is a non-wet open (NWO). Warpage can be addressed by adding a reinforcement to the package substrate. However, reinforcements (and their necessary keep-out zone) occupy valuable real estate on the top surface of the package. To address warpage, stencil design optimization that customizes the paste volume to control the solder volume at different positions has been proposed. However, stencil designs are approaching the printing-process limits for further reducing paste volume to prevent SBB. Another proposed option is to use a land side component (LSC) as a stand-off on the back side of the package during SMT to prevent SBB. However, the choice of LSC height is limited, and the LSC is usually either too short to function as a stand-off or too tall, causing NCO defects. Also, LSC height varies greatly between suppliers and is not easily controllable. Another option is to use copper bumps or pillars on the motherboard to act as a stand-off during SMT. However, this requires SMT process changes to attach the pillars using pick-and-place equipment, and the pillars occupy limited package and motherboard area.

Description of the Drawings
FIG. 1 is a cross-sectional illustration of an electronic package having multiple defects caused by warpage of the package substrate.

FIG. 2A is a cross-sectional illustration of a package substrate having first solder balls and second solder balls, according to an embodiment.

FIG. 2B is a cross-sectional illustration of a package substrate attached to a board, according to an embodiment.

FIG. 2C is a cross-sectional illustration of an electronic package having interconnections formed by first solder balls and second solder balls, according to an embodiment.

FIG. 3A is a cross-sectional illustration of a first solder ball and a second solder ball after ball attach (BA) reflow, according to an embodiment.

FIG. 3B is a cross-sectional illustration of a first interconnection and a second interconnection after SMT, according to an embodiment.

FIG. 4A is a plan view illustration of a package substrate showing a pin diagram with second solder balls at the corners of the package substrate, according to an embodiment.

FIG. 4B is a plan view illustration of a package substrate showing a pin diagram with second solder balls at the corners and the center of the package substrate, according to an embodiment.

FIG. 4C is a plan view illustration of a package substrate showing a pin diagram with second solder balls at the corners, interior, and center of the package substrate, according to an embodiment.

FIG. 4D is a plan view illustration of a package substrate showing a pin diagram with second solder balls along the perimeter of the package substrate and at the center of the package substrate, according to an embodiment.

FIG. 5A is a cross-sectional illustration of a first solder ball and a second solder ball, according to an embodiment, wherein the first solder ball and the second solder ball have different material compositions.

FIG. 5B is a cross-sectional illustration of a first solder ball and a second solder ball, according to an embodiment, wherein the first solder ball and the second solder ball have different volumes.

FIG. 5C is a cross-sectional illustration of a first solder ball and a second solder ball, according to an embodiment, wherein the second solder ball includes a core.

FIG. 5D is a cross-sectional illustration of a first solder ball and a second solder ball, according to an embodiment, wherein the second solder ball includes a core and a solder composition different from that of the first solder ball.

FIG. 6 is a cross-sectional illustration of an electronic system including first-level interconnections and second-level interconnections, according to an embodiment, wherein the first-level interconnections and the second-level interconnections each include first solder balls and second solder balls.

FIG. 7 is a schematic diagram of a computing device built in accordance with an embodiment.

Detailed Description

Described herein are electronic packages having an interconnect architecture that includes first solder balls and second solder balls to provide improved yield during the assembly process, according to various embodiments. In the following description, terms commonly employed by those skilled in the art to convey the substance of their work to others skilled in the art will be used to describe various aspects of illustrative implementations. However, it will be apparent to those skilled in the art that the present invention may be practiced with only some of the described aspects.
For purposes of explanation, specific numbers, materials, and configurations are set forth in order to provide a thorough understanding of the illustrative implementations. However, it will be apparent to those skilled in the art that the present invention may be practiced without these specific details. In other instances, well-known features are omitted or simplified in order not to obscure the illustrative implementations. Various operations will be described as multiple discrete operations, in turn, in a manner that is most helpful in understanding the present invention; however, the order of description should not be construed to imply that these operations are necessarily order dependent. In particular, these operations need not be performed in the order of presentation. As noted above, warpage is a significant problem in the assembly of packages. In particular, warpage of the package substrate causes various interconnection defects during the surface mount technology (SMT) process. One of the typical defects observed is solder ball bridging (SBB), which occurs when a solder ball collapses beyond tolerance. Accordingly, embodiments disclosed herein use a variable ball architecture to reduce or eliminate SMT defects. Embodiments include the use of first solder balls for most interconnections and second solder balls that are strategically placed to minimize defects. The second solder balls may differ from the first solder balls in volume and/or composition. Composition differences can include different solders (for different reflow temperatures) and the use of cored solder balls (i.e., cores that remain substantially solid during reflow and are surrounded by reflowable solder). A second solder ball with different solder metallurgy, and therefore different melting and collapse behavior during SMT reflow, functions as a collapse limiter that maintains head-on-pillow (HoP), non-contact open (NCO), and non-wet open (NWO) margins while providing improved SBB margin. The use of such a variable ball architecture provides significant SMT benefits. For example, yield is improved, and the process is more resistant to package warpage changes caused by assembly/test variation and handling. In addition, fewer solder paste stencil modifications are required to provide a high-yield process. In embodiments that utilize cored solder balls, the size of the core can be selected to provide a stand-off with tight tolerances. Changing the size of the core can also be used to control the stand-off height required by different package architectures. In addition, as long as reliability and performance requirements are met, the use of cored solder balls does not occupy additional package or motherboard area, because the cored balls can still serve an electrical function. Alternatively, cored solder balls may be placed at non-functionally critical (NCTF) locations, redundant power/ground locations, and/or no-connect locations designated for each package. In addition, due to the improved warpage tolerance, the embodiments disclosed herein can eliminate the need for a reinforcement to control package warpage (or relax the reinforcement size) in order to address the SMT challenge. Referring now to FIGS. 2A-2C, according to an embodiment, a series of cross-sectional illustrations depicting a process for attaching a first substrate to a second substrate using a variable ball architecture is shown.
In certain embodiments, the process may be an SMT process. That is, the first substrate may be a package substrate, and the second substrate may be a board (e.g., a printed circuit board (PCB)). However, it will be understood that a similar attachment process using an array of solder interconnects with a variable ball architecture can be used to attach any two substrates together. Referring now to FIG. 2A, according to an embodiment, a cross-sectional illustration of the first substrate 215 is shown. In certain embodiments, the first substrate 215 may be an organic packaging substrate. For example, the first substrate 215 may include a plurality of laminated organic layers with (or without) a core. In an embodiment, conductive features (not shown) may be embedded in the first substrate 215 to provide electrical coupling between the pads 217 and the component 220. For example, the component 220 may be a semiconductor die (e.g., a processor die, a memory die, etc.). Although a single component 220 is shown on the first substrate 215, it will be understood that any number of components 220 may be coupled to the first substrate 215. In an embodiment, the first substrate 215 may exhibit warpage. For example, the corners of the first substrate 215 may be bent away from the component 220. That is, the surface of the first substrate 215 where the pads 217 are located may be concave, and the surface of the first substrate 215 where the component 220 is located may be convex. In an embodiment, a solder ball 231/232 may be positioned on each of the pads 217. Embodiments include varied solder balls 231/232. For example, either a first solder ball 231 or a second solder ball 232 may be provided on each of the pads 217. The first solder balls 231 are different from the second solder balls 232. For example, the first solder balls 231 may have a first volume and a first composition, and the second solder balls 232 may have a second volume and a second composition. In an embodiment, the first volume is different from the second volume, the first composition is different from the second composition, or the first volume and the second volume are different from each other and the first composition and the second composition are different from each other. In the embodiment shown in FIG. 2A, the first solder ball 231 has a uniform composition, and the second solder ball 232 is a cored solder ball. That is, the second solder ball 232 includes a core 233 and solder 234 surrounding the core 233. The core 233 may be a material that remains substantially solid during a reflow in which the solder 234 melts. In an embodiment, the core 233 may be a metallic material, such as copper or nickel. Other embodiments may include a core 233 that is a polymer material. The use of a polymer core 233 can improve the compliance of the second solder ball 232 and allow for improved reliability. Since the core 233 does not substantially melt, the core 233 can function as a stand-off during the SMT process (or any other reflow process). Therefore, the diameter of the core 233 can be selected to provide a desired stand-off height during the SMT process. In an embodiment, the core 233 may have a diameter of approximately 300 µm or less, approximately 250 µm or less, or approximately 100 µm or less. In an embodiment, a pick-and-place tool may be used to attach the first solder balls 231 and the second solder balls 232 to the pads 217 of the first substrate 215.
In a particular embodiment, the pick-and-place tool is configured to dispense both the first solder balls 231 and the second solder balls 232. After ball attach of the first solder balls 231 and the second solder balls 232, a ball attach reflow may be performed. In an embodiment, the second solder balls 232 may be selectively placed at locations that will provide improved assembly yield. Generally, the second solder balls 232 can be placed at locations with a high risk of SBB. For example, a high risk of SBB typically occurs at the corners of the first substrate 215 and at the center of the first substrate 215. However, depending on the shape of the first substrate 215 and the dynamic warpage behavior during attachment to the second substrate, other positions for the second solder balls 232 may be selected. A more detailed description of where the second solder balls 232 can be located is provided below with respect to FIGS. 4A-4D. Referring now to FIG. 2B, according to an embodiment, a cross-sectional illustration of the electronic package 200 is shown. The electronic package 200 is shown during the attach process, with the solder balls 231/232 of the first substrate 215 brought into contact with the second substrate 205, as indicated by the arrows. In particular, the solder balls 231/232 are brought into contact with the solder paste 208 on the pads 207 of the second substrate 205. In an embodiment, the second substrate 205 may be a board (e.g., a motherboard, a PCB, etc.). In such embodiments, the attachment process may be an SMT process. In some embodiments, the attach process may also include the application of force (e.g., applied by the pick-and-place tool used to attach the first substrate 215 to the second substrate 205). The application of force can be used to reduce warpage during the SMT process. Without the second solder balls 232, the application of force could cause the solder balls 231 to collapse out of tolerance and produce SBB defects. In the embodiments disclosed herein, however, the additional application of force can be accommodated because the second solder balls 232 provide a uniform stand-off height and prevent the solder balls 231/232 from collapsing excessively. Referring now to FIG. 2C, according to an embodiment, a cross-sectional illustration of the electronic package 200 after the reflow of the attach process has been completed is shown. In an embodiment, the reflow process coalesces the first solder balls 231 and the paste 208 to form first interconnections 225, and coalesces the solder 234 of the second solder balls 232 and the paste 208 to form second interconnections 235. Since the first interconnections 225 and the second interconnections 235 are formed with different solder balls 231/232, the first interconnections 225 are different from the second interconnections 235. For example, where the second solder ball 232 is a cored solder ball, the entire volume of the first interconnection 225 may be substantially solder, while the volume of the second interconnection 235 includes the core 233 and the solder 234. Referring now to FIG. 3A, according to an embodiment, an enlarged cross-sectional illustration of the profiles of the first solder ball 331 and the second solder ball 332 after ball attach reflow is shown. The first solder ball 331 and the second solder ball 332 may be reflowed on pads 317 on the first substrate 315.
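Before examining the reflowed profiles in detail, the geometry implied by a cored ball can be summarized. The relations below are an illustrative idealization only (a spherical ball of outer radius $R$ with a spherical core of radius $r < R$); they are not taken from the patent:

\[
V_{\text{plain}} = \tfrac{4}{3}\pi R^{3},
\qquad
V_{\text{solder,cored}} = \tfrac{4}{3}\pi\left(R^{3} - r^{3}\right),
\]

so two balls of equal outer radius can still present different solder volumes. And because the core does not melt, the joint height after reflow satisfies, approximately,

\[
H_{\text{joint}} \gtrsim 2r = D_{\text{core}},
\]

i.e., under this idealization the core diameter sets a floor on the stand-off height regardless of the applied force.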
In the illustrated embodiment, the second solder ball 332 is a cored solder ball that includes a core 333 and solder 334 surrounding the core 333. As shown, after reflow, the profile of the second solder ball 332 is clearly distinguishable from the profile of the first solder ball 331. One difference between the first solder ball 331 and the second solder ball 332 may be that the first solder ball 331 has a first height H1 that is greater than the second height H2 of the second solder ball 332. In an embodiment, the first solder ball 331 may have a substantially circular cross-section with a flat bottom abutting the pad 317. In contrast, the second solder ball 332 may have a fillet 336 extending from its circular top surface to the pad 317. Therefore, the solder 334 may have a non-uniform thickness around the core 333. For example, a first thickness T1 of the solder 334 above the core 333 may be smaller than a second thickness T2 of the solder 334 at the sides of the core 333. In certain embodiments, the first thickness T1 may be approximately 1 µm or more. Referring now to FIG. 3B, according to an embodiment, an enlarged cross-sectional illustration of the first interconnection 325 and the second interconnection 335 after reflow attaching the first substrate 315 to the second substrate 305 is shown. In an embodiment, the first interconnection 325 may have a substantially uniform composition (e.g., solder), and the second interconnection 335 may have a composition including the solder 334 surrounding the core 333. In an embodiment, the second interconnection 335 may include solder 334 that completely surrounds all surfaces of the core 333. However, in other embodiments, portions of the core 333 may directly contact the pad 317 of the first substrate 315 and/or the pad 307 of the second substrate 305. Since the core 333 does not melt during reflow, the core 333 can provide a highly uniform stand-off height between the first substrate 315 and the second substrate 305. Therefore, the core 333 in the second interconnection 335 can prevent excessive collapse of the second solder ball 332 (and the first solder balls 331), which might otherwise cause SBB defects. Referring now to FIGS. 4A-4D, according to embodiments, plan view illustrations of exemplary pin diagrams of the first substrate 415 are shown. It will be understood that the number of pins in the pin diagrams shown in FIGS. 4A-4D is reduced in order to simplify these diagrams. That is, embodiments may include an array of solder balls that includes hundreds or thousands of solder balls, depending on the specific package. In the illustrated embodiments, the number of first solder balls 431 is greater than the number of second solder balls 432. Depending on the package architecture, the number of first solder balls 431 may be tens, hundreds, or thousands of times the number of second solder balls 432. The second solder balls 432 are different from the first solder balls 431. For example, the second solder balls 432 may be cored solder balls, may have a different volume than the first solder balls 431, or may include a different solder composition than the first solder balls. Referring now to FIG. 4A, according to an embodiment, a plan view illustration of a pin diagram of the first substrate 415 is shown. In the illustrated embodiment, the second solder balls 432 are located close to the corners of the array of first solder balls 431.
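The corner placement of FIG. 4A (and, by extension, the center and perimeter variants of FIGS. 4B-4D discussed below) can be expressed as a simple pad-map computation. The C++ sketch below is illustrative only — the predicate names and the grid abstraction are hypothetical, not the patent's method — and the prose discussion of FIG. 4A continues after it:

#include <cstddef>

// Hypothetical predicates deciding which pads in a rows x cols pad array
// receive a second (e.g., cored) solder ball rather than a first solder ball.
// Assumes rows >= 4 and cols >= 4.
bool isCorner(std::size_t r, std::size_t c, std::size_t rows, std::size_t cols) {
    return (r == 0 || r == rows - 1) && (c == 0 || c == cols - 1);   // FIG. 4A
}
bool isCenter(std::size_t r, std::size_t c, std::size_t rows, std::size_t cols) {
    // Pads nearest the array middle (FIG. 4B); need not be exactly centered.
    return (r == rows / 2 || r == rows / 2 - 1) &&
           (c == cols / 2 || c == cols / 2 - 1);
}
bool isPerimeter(std::size_t r, std::size_t c, std::size_t rows, std::size_t cols) {
    return r == 0 || r == rows - 1 || c == 0 || c == cols - 1;        // FIG. 4D
}

// Second balls go to high-SBB-risk sites (here, corners and center, as in
// FIG. 4B); all remaining pads receive first solder balls.
bool useSecondBall(std::size_t r, std::size_t c, std::size_t rows, std::size_t cols) {
    return isCorner(r, c, rows, cols) || isCenter(r, c, rows, cols);
}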
In particular, the second solder balls 432 are shown at the absolute corners of the array of first solder balls 431. However, it will be understood that the second solder balls 432 may be located anywhere in the corner regions of the array of first solder balls 431. That is, a second solder ball 432 need not be at an absolute corner of the array of first solder balls 431. In the illustrated embodiment, a second solder ball 432 is located at each corner of the array of solder balls 431. However, it will be appreciated that in some embodiments, not all corner regions need include a second solder ball 432. In addition, the array of first solder balls 431 in FIG. 4A includes four corner regions. However, it will be understood that the array may have a shape other than rectangular, and may therefore include more than four corner regions. In such embodiments, there may be more than four second solder balls 432. In addition, although a single second solder ball 432 is located in each corner region, it will be understood that in some embodiments, a plurality of second solder balls 432 may be located in one or more of the corner regions of the array of first solder balls 431. Referring now to FIG. 4B, according to an additional embodiment, a plan view illustration of the pin diagram of the first substrate 415 is shown. As shown, the second solder balls 432 may be placed in the middle and corner regions of the array of first solder balls 431. For example, second solder balls 432C are in the corner regions, and second solder balls 432M are in the middle of the array of first solder balls 431. Although four middle second solder balls 432M are shown, it will be understood that any number (i.e., one or more) of middle second solder balls 432M may be included in the array of first solder balls 431. In addition, although referred to as "middle" second solder balls 432M, it will be understood that the middle second solder balls 432M may merely be close to the middle of the array, and need not be precisely centered in the middle of the array of first solder balls 431. In addition, although both middle second solder balls 432M and corner second solder balls 432C are shown, it will be understood that in some embodiments, the corner second solder balls 432C may optionally be omitted, and only the middle second solder balls 432M may be included. Referring now to FIG. 4C, according to an additional embodiment, a plan view illustration of the pin diagram of the first substrate 415 is shown. The pin diagram in FIG. 4C may be substantially similar to the pin diagram in FIG. 4B, except that inner second solder balls 432I are also included. The inner second solder balls 432I can be placed at positions in the array that are more susceptible to SBB defects. For example, the inner corners of the array of first solder balls 431 may be populated with inner second solder balls 432I. Although four inner second solder balls 432I are shown, it will be understood that embodiments may include any number (i.e., one or more) of inner second solder balls 432I. Referring now to FIG. 4D, according to an additional embodiment, a plan view illustration of the pin diagram of the first substrate 415 is shown. The pin diagram in FIG. 4D may be substantially similar to the pin diagram in FIG. 4B, except that the entire outer perimeter of the array of first solder balls 431 is populated with outer second solder balls 432O. Although the entire outer perimeter includes outer second solder balls 432O in FIG. 4D, it will be understood that any portion of the outer perimeter of the array of first solder balls 431 may be populated with outer second solder balls 432O.
In the embodiments shown in FIGS. 4A-4D, the second solder balls 432 are positioned in the array of first solder balls 431 without regard to changes in electrical properties. As long as the second solder balls 432 can meet reliability and performance requirements, the positioning and number of the second solder balls 432 are not limited. That is, in some embodiments, the second solder balls 432 may have sufficiently similar electrical properties to the first solder balls 431 that they are interchangeable. Alternatively, when the second solder balls 432 do not have the same reliability and/or performance characteristics as the first solder balls 431, the second solder balls 432 may be placed at the non-functionally critical (NCTF) locations, redundant power/ground locations, and/or no-connect locations designated for each package. Referring now to FIGS. 5A-5D, a series of cross-sectional illustrations depicting first solder balls 531 and second solder balls 532 is shown, in accordance with various embodiments. Referring now to FIG. 5A, according to an embodiment, a cross-sectional illustration of a first solder ball 531 and a second solder ball 532 on pads 517 on the first substrate 515 is shown. The first solder ball 531 may have a first radius R1, and the second solder ball 532 may have a second radius R2 that is substantially the same as the first radius R1. In an embodiment, the first solder ball 531 may have a first material composition, and the second solder ball 532 may have a second material composition different from the first material composition. Therefore, the reflow temperature (and collapse behavior) of the first solder ball 531 may be different from the reflow temperature (and collapse behavior) of the second solder ball 532. In a specific embodiment, the first solder ball 531 may be a SnAgCu (SAC) solder, and the second solder ball 532 may be a low temperature solder (e.g., a SnBi solder, etc.). Referring now to FIG. 5B, according to different embodiments, a cross-sectional illustration of the first solder ball 531 and the second solder ball 532 is shown. In the illustrated embodiment, the first solder ball 531 and the second solder ball 532 may have different compositions and different volumes. For example, the first solder ball 531 may have a volume defined by a first radius R1 that is greater than the second radius R2 of the second solder ball 532. Although shown as having a different composition (i.e., different shading), the first solder ball 531 may optionally have the same composition as the second solder ball 532, as long as there is at least some difference (e.g., a different volume). Referring now to FIG. 5C, according to an embodiment, a cross-sectional illustration of a first solder ball 531 and a second solder ball 532 is shown. In an embodiment, the first solder ball 531 may have a composition that is substantially all solder, and the second solder ball 532 may be a cored solder ball including a core 533 and solder 534 surrounding the core 533. In an embodiment, the core 533 may be a non-solder material. For example, the core 533 may include copper, nickel, a polymer material, or a polymer material coated with copper or nickel. In an embodiment, the solder 534 surrounding the core 533 may have substantially the same solder composition as the solder of the first solder ball 531. Referring now to FIG. 5D, according to an embodiment, a cross-sectional illustration of a first solder ball 531 and a second solder ball 532 is shown.
In an embodiment, the second solder ball 532 in FIG. 5D may be substantially similar to the second solder ball 532 in FIG. 5C, except that the solder 534 has a material composition different from that of the solder of the first solder ball 531. Referring now to FIG. 6, according to an embodiment, a cross-sectional illustration of an electronic system 670 is shown. The electronic system 670 may include multiple substrates coupled together by different levels of interconnection. For example, the electronic system 670 may include a first substrate 605, a second substrate 615, and a third substrate 620. First-level interconnects 661 may couple the third substrate 620 to the second substrate 615, and second-level interconnects 662 may couple the second substrate 615 to the first substrate 605. In an embodiment, each interconnection level 661/662 may include a variable interconnect architecture. For example, the first interconnection level 661 includes first interconnections 691 and second interconnections 692, and the second interconnection level 662 includes third interconnections 681 and fourth interconnections 682. The first interconnections 691 are different from the second interconnections 692, and the third interconnections 681 are different from the fourth interconnections 682. In a particular embodiment, the third substrate 620 is a semiconductor die, the second substrate 615 is a package substrate, and the first substrate 605 is a board. However, it will be understood that the variable solder ball/interconnect architecture can be used for any interconnect architecture in many different electronic systems. For example, die-to-package-substrate attachment (e.g., first level interconnects (FLI), such as first interconnection level 661), die-to-chip attachment (e.g., a package-on-interposer (PoINT) architecture), die-to-die attachment (e.g., a logic-to-memory interconnect (LMI) or a memory-to-memory interconnect (MMI)), or die-to-interposer attachment (e.g., what is sometimes called a "2.5D stacking" architecture) may use a variable solder ball/interconnect architecture. Although a list of different architectures for which this type of variable solder ball/interconnect architecture may be beneficial is provided, it will be understood that this type of interconnect architecture is suitable for many different electronic system architectures that include solder balls or can be modified to use solder balls. In the above description, the cored solder ball has been described as having a non-melting core on which reflowable solder is provided. However, it will be understood that in some embodiments, the reflowable solder may be modified. For example, during one or more reflow processes, the solder around the core may be transformed into an intermetallic compound (IMC) by diffusion with the core and/or pads. In such embodiments, the core may be covered by a low temperature solder (LTS) (e.g., Sn-Bi or Sn-In) or a standard lead-free solder (e.g., SAC, SnAg, or SnCu), where all of the solder is completely converted to IMC after ball attach reflow or during the initial SMT process. Alternative embodiments may include a high temperature solder (e.g., Sn-Sb) that will form a joint through solid-state diffusion, but will not melt during a 260°C reflow.
The rigid IMC-Cu structure (or high melting point solder) will keep the package substrate firmly attached to the board, and the package shape will remain synchronized with the board shape. At high temperatures (e.g., 200°C to 260°C), the IMC-covered joints will prevent the package from bending away from the board into a concave shape, thereby preventing NCO defects in the corner regions and SBB defects in the center. In this way, yield is improved. FIG. 7 shows a computing device 700 according to an implementation of the present invention. The computing device 700 houses a board 702. The board 702 may include a number of components, including but not limited to a processor 704 and at least one communication chip 706. The processor 704 is physically and electrically coupled to the board 702. In some implementations, the at least one communication chip 706 is also physically and electrically coupled to the board 702. In further implementations, the communication chip 706 is part of the processor 704. Depending on its applications, the computing device 700 may include other components that may or may not be physically and electrically coupled to the board 702. These other components include, but are not limited to, volatile memory (e.g., DRAM), non-volatile memory (e.g., ROM), flash memory, a graphics processor, a digital signal processor, a crypto processor, a chipset, an antenna, a display, a touchscreen display, a touchscreen controller, a battery, an audio codec, a video codec, a power amplifier, a global positioning system (GPS) device, a compass, an accelerometer, a gyroscope, a speaker, a camera, and a mass storage device (such as a hard disk drive, compact disk (CD), digital versatile disk (DVD), and so forth). The communication chip 706 enables wireless communication for the transfer of data to and from the computing device 700. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communication channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non-solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not. The communication chip 706 may implement any of a number of wireless standards or protocols, including but not limited to Wi-Fi (IEEE 802.11 family), WiMAX (IEEE 802.16 family), IEEE 802.20, Long Term Evolution (LTE), Ev-DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, Bluetooth, derivatives thereof, and any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. The computing device 700 may include a plurality of communication chips 706. For instance, a first communication chip 706 may be dedicated to shorter-range wireless communications, such as Wi-Fi and Bluetooth, and a second communication chip 706 may be dedicated to longer-range wireless communications, such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, and others. The processor 704 of the computing device 700 includes an integrated circuit die packaged within the processor 704. In some implementations of the invention, the integrated circuit die of the processor 704 may be part of an electronic package that includes a variable interconnect architecture, in accordance with embodiments described herein. The term "processor" may refer to any device or portion of a device that processes electronic data from registers and/or memory to transform that electronic data into other electronic data that may be stored in registers and/or memory. The communication chip 706 also includes an integrated circuit die packaged within the communication chip 706.
According to another implementation of the present invention, the integrated circuit die of the communication chip 706 may be part of an electronic package that includes a variable interconnect architecture in accordance with the embodiments described herein.

The above description of illustrated implementations of the invention, including what is described in the Abstract, is not intended to be exhaustive or to limit the invention to the precise forms disclosed. While specific implementations of, and examples for, the invention are described herein for illustrative purposes, various equivalent modifications are possible within the scope of the invention, as those skilled in the relevant art will recognize.

These modifications may be made to the invention in light of the above detailed description. The terms used in the following claims should not be construed to limit the invention to the specific implementations disclosed in the specification and the claims. Rather, the scope of the invention is to be determined entirely by the following claims, which are to be construed in accordance with established doctrines of claim interpretation.

Example 1: An electronic package comprising: a first substrate; a second substrate; and an array of interconnects electrically coupling the first substrate to the second substrate, wherein the array of interconnects includes: a first interconnect, wherein the first interconnect has a first volume and a first material composition; and a second interconnect, wherein the second interconnect has a second volume and a second material composition, and wherein the first volume is different from the second volume and/or the first material composition is different from the second material composition.

Example 2: The electronic package of Example 1, wherein the number of first interconnects is greater than the number of second interconnects.

Example 3: The electronic package of Example 2, wherein the number of first interconnects is at least one hundred times the number of second interconnects.

Example 4: The electronic package of Examples 1-3, wherein one or more corner positions in the array of interconnects are each populated by one of the second interconnects.

Example 5: The electronic package of Examples 1-4, wherein one or more center positions in the array of interconnects are each populated by one of the second interconnects.

Example 6: The electronic package of Examples 1-5, wherein the second interconnects are located at non-critical to function (NCTF) pins, redundant power pins, redundant ground pins, or no-connect locations of the electronic package.

Example 7: The electronic package of Examples 1-6, wherein the second interconnects each include: a core; and solder surrounding the core.

Example 8: The electronic package of Example 7, wherein the core is a metallic material.

Example 9: The electronic package of Example 8, wherein the core includes copper or nickel.

Example 10: The electronic package of Example 7, wherein the core is a polymer material.

Example 11: The electronic package of Examples 1-10, wherein the first substrate is a package substrate, and wherein the second substrate is a board.

Example 12: The electronic package of Examples 1-10, wherein the first substrate is a package substrate, and wherein the second substrate is an interposer.

Example 13: The electronic package of Examples 1-10, wherein the first substrate is a die, and wherein the second substrate is a package substrate.

Example 14: An
electronic package comprising: a package substrate having a first surface and a second surface opposite the first surface; a die attached to the first surface of the package substrate; an array of pads on the second surface; and a plurality of solder balls, wherein each solder ball is on one of the pads in the array of pads, and wherein the plurality of solder balls includes: a first solder ball; and a second solder ball that is different from the first solder ball.

Example 15: The electronic package of Example 14, wherein the second solder balls each include: a core; and solder around the core.

Example 16: The electronic package of Example 15, wherein a thickness of the solder around the core is non-uniform.

Example 17: The electronic package of Example 16, wherein the solder around the core includes a fillet.

Example 18: The electronic package of Examples 14-17, wherein a height of the first solder ball is greater than a height of the second solder ball.

Example 19: The electronic package of Examples 14-18, wherein one or more corner pads of the array of pads are each covered by one of the second solder balls.

Example 20: The electronic package of Examples 14-19, wherein one or more center pads of the array of pads are each covered by one of the second solder balls.

Example 21: The electronic package of Examples 14-20, wherein the first solder ball has a first volume and a first material composition, wherein the second solder ball has a second volume and a second material composition, and wherein the first volume is different from the second volume and/or the first material composition is different from the second material composition.

Example 22: An electronic system, comprising: a first substrate; a second substrate attached to the first substrate by a first level interconnect array; and a third substrate attached to the second substrate by a second level interconnect array, wherein at least one of the first level interconnect array and the second level interconnect array includes: first interconnects; and second interconnects, wherein the number of first interconnects is greater than the number of second interconnects.

Example 23: The electronic system of Example 22, wherein the first substrate is a die, wherein the second substrate is a package substrate, and wherein the third substrate is a printed circuit board (PCB).

Example 24: The electronic system of Example 22 or Example 23, wherein the second interconnects each include: a core; and solder around the core.

Example 25: The electronic system of Examples 22-23, wherein the second interconnects are located at corner positions and/or center positions of the first level interconnect array and/or the second level interconnect array. |
Techniques for bifurcated memory management for memory elements are disclosed. In one aspect, a memory element includes a self-managed portion and a portion that is managed by a remote host. Software that needs low latency access may be stored in the portion of the memory element that is managed by the remote host, and other software may be stored in the portion of the memory element that is managed by the memory element. By providing such bifurcated memory management of the memory element, a relatively inexpensive memory element may be used to store software while at the same time allowing low latency (albeit at low throughput) access to sensitive software elements with minimal bus logic. |
1. A method of controlling a memory element, comprising:
generating, at a host, a first instruction for a managed portion of a memory element;
associating the first instruction with a logical address of the memory element;
communicating, using a bus protocol, the first instruction to the managed portion of the memory element over a communications bus;
generating, at the host, a second instruction for an unmanaged portion of the memory element;
associating the second instruction with a physical address of the memory element; and
communicating, using the bus protocol, the second instruction to the unmanaged portion of the memory element over the communications bus.

2. The method of claim 1, further comprising: executing a flash translation layer (FTL) at the host for the unmanaged portion of the memory element.

3. The method of claim 1, further comprising: allowing the memory element to execute an FTL for the managed portion of the memory element.

4. The method of claim 1, wherein generating the first instruction for the managed portion of the memory element comprises: generating a second instruction for registering with the A command.

5. The method of claim 1, further comprising: storing a range of physical addresses of the unmanaged portion of the memory element at the host.

6. The method of claim 1, further comprising: storing a range of logical addresses of the managed portion of the memory element at the host.

7. The method of claim 1, wherein generating the first instruction at the host comprises: generating a read or write access instruction.

8. A method of operating a memory element, comprising:
receiving a first instruction from a host over a communications bus using a bus protocol, the first instruction including a logical address of a managed portion of a memory element; and
receiving a second instruction from the host over the communications bus using the bus protocol, the second instruction including a physical address of an unmanaged portion of the memory element.

9. The method of claim 8, wherein the memory element comprises a NAND flash memory element.

10. The method of claim 8, further comprising: executing a flash translation layer (FTL) at the memory element.

11. The method of claim 8, wherein receiving the second instruction comprises receiving from the host an FTL command for the unmanaged portion of the memory element.

12. The method of claim 8, wherein the first instruction comprises a read or write access instruction.

13. A memory element comprising:
a controller;
a first storage space configured to be managed by the controller; and
a second storage space configured to be managed by a host remote from the memory element.

14. The memory element of claim 13, wherein the memory element is a NAND flash memory element.

15. The memory element of claim 13, further comprising a bus interface configured to couple to a communications bus.

16. The memory element of claim 15, wherein the bus interface is configured to pass instructions from the host directly to the second storage space.

17. The memory element of claim 13, wherein the controller is configured to receive a first instruction from the host over a communications bus using a bus protocol, the first instruction including a logical address of the first storage space.

18. The memory element of claim 17, wherein the controller is further configured to execute a flash translation layer (FTL) for the first storage space.

19. The memory element of claim 17, wherein the controller is further configured to translate the logical address into a physical address of the first storage space of the memory element.

20. A
host comprising:
a bus interface configured to be coupled to a communications bus;
a transceiver operatively coupled to the bus interface; and
a controller operatively coupled to the transceiver, the controller configured to:
generate a first instruction for a managed portion of a memory element;
associate the first instruction with a logical address of the memory element;
instruct the transceiver to communicate, using a bus protocol, the first instruction to the managed portion of the memory element over the communications bus;
generate a second instruction for an unmanaged portion of the memory element;
associate the second instruction with a physical address of the memory element; and
instruct the transceiver to communicate, using the bus protocol, the second instruction to the unmanaged portion of the memory element over the communications bus.

21. The host of claim 20, wherein the controller is further configured to execute a flash translation layer (FTL) for the second instruction.

22. The host of claim 20, wherein the first instruction comprises one of a read or write access instruction.

23. The host of claim 20, wherein the controller is further configured to allow the memory element to execute an FTL for the managed portion of the memory element.

24. The host of claim 20, wherein the first instruction for the managed portion of the memory element comprises an instruction for a NAND flash memory element. |
Bifurcated Memory Management for Memory Elements

Priority Claim

This application claims priority to U.S. Patent Application Serial No. 14/621,874, filed on February 13, 2015, entitled "BIFURCATED MEMORY MANAGEMENT FOR MEMORY ELEMENTS," which is incorporated by reference herein in its entirety.

Background

I. Field of the Disclosure

The technology of this disclosure relates generally to memory elements, and more particularly to the management of memory elements.

II. Background

Computing devices rely on memory and the software stored therein to perform many functions. A mobile computing device, such as a smart phone, is one example of a computing device that uses software stored in memory to perform many functions. One such function is the control of wireless modems that enable wireless communication. Although this functionality could be implemented strictly in hardware, such an implementation may be unnecessarily complicated, difficult to upgrade, difficult to test, and space intensive. As such, a certain amount of the functionality is instantiated through software, and the device must have appropriate memory to store the software.

As noted above, one such functionality typically implemented in software is the functionality of the wireless modem(s) of a mobile computing device. While different segments of the industry may refer to such software by different terms, as used herein, such software is referred to as modem subsystem (MSS) code. MSS code is relatively large and must be accessed with low latency with some regularity. In some devices, the MSS code may be stored in dynamic random access memory (DRAM). However, DRAM is relatively expensive, and this expense makes devices commercially unattractive in the highly competitive mobile computing device market.

Some designers may move MSS code to remote memory elements, such as NAND memory elements. However, NAND memory elements frequently have too much latency to be practical. Accordingly, designers need improved techniques that allow low latency, relatively inexpensive access to software such as MSS code.

Summary of the Disclosure

Aspects disclosed in the detailed description include bifurcated memory management for memory elements. In particular, various exemplary aspects of the present disclosure propose a memory element that includes a self-managed portion and a portion that is managed by a remote host. Software that requires low latency access may be stored in the portion of the memory element that is managed by the remote host, and other software and/or additional files (e.g., media files, user settings) may be stored in the self-managed portion of the memory element. By providing such bifurcated memory management of the memory element, relatively inexpensive memory elements may be used to store the software while still allowing low latency (albeit low throughput) access to sensitive software elements with minimal bus logic.

In this regard, in one aspect, a method of controlling a memory element is disclosed. The method includes generating, at a host, a first instruction for a managed portion of a memory element. The method further includes associating the first instruction with a logical address of the memory element. The method also includes communicating the first instruction to the managed portion of the memory element over a communications bus using a bus protocol. The method also includes generating, at the host, a second instruction for an unmanaged portion of the memory element.
The method further includes associating the second instruction with a physical address of the memory element. The method also includes communicating, using the bus protocol, the second instruction to the unmanaged portion of the memory element over the communications bus.

In another aspect, a method of operating a memory element is disclosed. The method includes receiving a first instruction from a host over a communications bus using a bus protocol, the first instruction including a logical address of a managed portion of a memory element. The method also includes receiving a second instruction from the host over the communications bus using the bus protocol, the second instruction including a physical address of an unmanaged portion of the memory element.

In another aspect, a memory element is disclosed. The memory element includes a controller. The memory element also includes a first storage space configured to be managed by the controller. The memory element also includes a second storage space configured to be managed by a host remote from the memory element.

In another aspect, a host is disclosed. The host includes a bus interface configured to couple to a communications bus. The host further includes a transceiver operatively coupled to the bus interface. The host also includes a controller operatively coupled to the transceiver. The controller is configured to generate a first instruction for a managed portion of a memory element. The controller is also configured to associate the first instruction with a logical address of the memory element. The controller is further configured to instruct the transceiver to communicate, using a bus protocol, the first instruction to the managed portion of the memory element over the communications bus. The controller is also configured to generate a second instruction for an unmanaged portion of the memory element. The controller is also configured to associate the second instruction with a physical address of the memory element. The controller is further configured to instruct the transceiver to communicate, using the bus protocol, the second instruction to the unmanaged portion of the memory element over the communications bus.

Brief Description of the Drawings

FIG. 1 is a block diagram of a system with a host and a memory system according to an exemplary aspect of the present disclosure;

FIG. 2 is a signal flow diagram between the host and the memory system of FIG. 1;

FIG. 3 is a flowchart illustrating an exemplary process for using a managed portion of the memory system according to an exemplary aspect of the present disclosure;

FIG. 4 is a flowchart illustrating an exemplary process for using an unmanaged portion of the memory system according to an exemplary aspect of the present disclosure; and

FIG. 5 is a block diagram of an exemplary processor-based system that may include the host and the memory system of FIG. 1.

Detailed Description

With reference now to the drawing figures, several exemplary aspects of the present disclosure are described. The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects.

Aspects disclosed in this detailed description include bifurcated memory management for memory elements. In particular, various exemplary aspects of the present disclosure propose a memory element that includes a self-managed portion and a portion that is managed by a remote host.
Software that requires low latency access may be stored in the portion of the memory element that is managed by the remote host, and other software and/or additional files (e.g., media files, user settings) may be stored in the self-managed portion of the memory element. By providing such bifurcated memory management of the memory element, relatively inexpensive memory elements may be used to store the software while still allowing low latency (albeit low throughput) access to sensitive software elements with minimal bus logic.

In this regard, FIG. 1 is a block diagram of a system 10 with a host 12 and a memory system 14 according to an exemplary aspect of the present disclosure. The host 12 may communicate with the memory system 14 over a communications bus 16. In an exemplary aspect, the memory system 14 is a flash memory device. The memory system 14 may include a random access memory (RAM) portion 18 and a memory element 20. In an exemplary aspect, the memory element 20 is a NAND memory element and has two portions. The first portion is a managed portion 22, and the second portion is an unmanaged portion 24 (also labeled "un" in FIG. 1). The managed portion 22 is managed by a microprocessor (uP) 26 within the memory system 14, and the unmanaged portion 24 is managed by the host 12. In contrast to a fully internally managed memory system, various exemplary aspects of the present disclosure provide low latency for the unmanaged portion 24. Likewise, in contrast to a fully host-managed memory system, various exemplary aspects of the present disclosure provide high throughput and device management for the mass storage space. In addition, the burden on the host 12 is limited because only the unmanaged portion 24 is allocated to the host 12 for management.

With continued reference to FIG. 1, in an exemplary aspect, the size and parameters of the unmanaged portion 24 are set when the memory system 14 is integrated into a device (e.g., a mobile phone) that includes the host 12 and the memory system 14. In an exemplary aspect, the managed portion 22 is larger than the unmanaged portion 24. In another exemplary aspect, the unmanaged portion 24 is only large enough to accommodate modem subsystem (MSS) code. Thus, most of the memory system 14 is managed by the microprocessor 26 and is unaffected by the unmanaged portion 24. In addition, the host 12 is responsible for managing the unmanaged portion 24 through flash translation layer (FTL) software running thereon. In use, there are two types of read/write access commands. When the host 12 wishes to read from or write to the unmanaged portion 24, the host 12 uses a physical address, which helps ensure low latency. When the host 12 wishes to read from or write to the managed portion 22, the host 12 uses a logical address, and the microprocessor 26 provides address translation to translate the logical address received from the host 12 into a physical address of the managed portion 22. In an exemplary aspect, the host 12 may encode a command on the communications bus 16 in a manner that tells the memory system 14 whether the command uses a logical address or a physical address. In addition, the microprocessor 26 executes the FTL for the managed portion 22.
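To make the logical-versus-physical command encoding concrete, the following C sketch shows one plausible host-side representation. It is illustrative only and not part of the disclosed aspects: the mem_cmd structure, the addr_space flag, and the helper functions are hypothetical names assumed for this example.

```c
#include <stdint.h>

/* Hypothetical command encoding: a single flag tells the memory system
 * whether the address field is logical (managed portion, translated by
 * the memory element's microprocessor) or physical (unmanaged portion,
 * used as-is for low latency). */
enum addr_space {
    ADDR_LOGICAL  = 0,  /* managed portion: element translates address */
    ADDR_PHYSICAL = 1,  /* unmanaged portion: host supplies raw address */
};

enum cmd_op { CMD_READ, CMD_WRITE };

struct mem_cmd {
    uint8_t  op;          /* CMD_READ or CMD_WRITE */
    uint8_t  addr_space;  /* ADDR_LOGICAL or ADDR_PHYSICAL */
    uint64_t addr;        /* interpreted according to addr_space */
    uint32_t len;         /* transfer length in bytes */
};

/* Build a low-latency read of the unmanaged portion (e.g., MSS code)
 * using a physical address. */
static inline struct mem_cmd make_unmanaged_read(uint64_t phys, uint32_t len)
{
    struct mem_cmd c = { CMD_READ, ADDR_PHYSICAL, phys, len };
    return c;
}

/* Build a read of the managed portion using a logical address; the
 * memory element's FTL translates it before accessing the media. */
static inline struct mem_cmd make_managed_read(uint64_t logical, uint32_t len)
{
    struct mem_cmd c = { CMD_READ, ADDR_LOGICAL, logical, len };
    return c;
}
```

Because both command types travel over the same bus with the same protocol, only this one flag (however it is actually encoded on the wire) is needed to bifurcate the two management domains.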
FIG. 2 is a signal flow diagram between the host 12 and the memory system 14 of FIG. 1. In this regard, the host 12 generates a first instruction 30 having a logical address and sends it to the memory system 14 over the communications bus 16 using the bus protocol. The first instruction 30 and the logical address are received by the microprocessor 26. The microprocessor 26 needs the logical address converted, so the microprocessor 26 sends an address lookup 32 to a conversion table 28, which responds to the microprocessor 26 with a physical address 34. The microprocessor 26 provides an instruction 36 to the managed portion 22 using the obtained physical address. Note that the instruction may be a read or a write instruction. The managed portion 22 provides an output 38 to the microprocessor 26, which communicates an output 40 back to the host 12 over the communications bus 16 using the same bus protocol as the first instruction 30. When the microprocessor 26 and the managed portion 22 are not busy servicing read/write requests, the microprocessor 26 may send an FTL function (FUNC) 42 to the managed portion 22 using a known physical address. The managed portion 22 responds with an FTL FUNC response (RES) 44.

With continued reference to FIG. 2, the host 12 generates a second instruction having a physical address. Note that "first" and "second" are used herein to assist in distinguishing between instructions and are not intended to suggest a particular chronological order. Like the first instruction 30, the second instruction may be an instruction for reading from or writing to the memory system 14, although the second instruction reads from or writes to the unmanaged portion 24 rather than the managed portion 22. The host 12 sends the instruction 46 to the unmanaged portion 24, and the unmanaged portion 24 responds with an output 48. It is noted that the instruction 46 is sent on the same communications bus 16 using the same bus protocol as the first instruction 30. When the host 12 and the unmanaged portion 24 are not busy servicing read/write requests, the host 12 may send an FTL FUNC 50 to the unmanaged portion 24. The unmanaged portion 24 responds with an FTL FUNC RES 52.

FIG. 3 is a flowchart illustrating an exemplary process 60 for using the managed portion 22 of the memory system 14 of FIG. 1 according to an exemplary aspect of the present disclosure. The process 60 begins with the host 12 storing the logical addresses of the managed portion 22 (block 62). Note that this storing may be done when the device is integrated. The host 12 then generates an instruction (e.g., a first instruction) for the managed portion 22 of the memory element 20 (block 64). The host 12 associates the instruction with the appropriate logical address (block 66). The host 12 then communicates the instruction to the microprocessor 26 over the communications bus 16 using the bus protocol (block 68). The memory element 20, and in particular the microprocessor 26, receives the instruction having the logical address of the managed portion 22 (block 70). The microprocessor 26 converts the logical address to a physical address by consulting the conversion table 28 of FIG. 1 (block 72). The microprocessor 26 then sends the first instruction to the managed portion 22 using the obtained physical address, and the instruction is executed (block 74). When the microprocessor 26 and the managed portion 22 are not busy executing read/write instructions, the microprocessor 26 runs the FTL as needed (block 76). The process 60 allows for high throughput and device management for the high-capacity storage space while imposing relatively small management requirements on the host 12. However, these advantages come at the cost of relatively high latency.
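As an illustration of the lookup performed via the conversion table 28, the C sketch below shows a minimal logical-to-physical page mapping of the kind an FTL maintains for the managed portion. It is a simplified assumption for this example (a real FTL also handles wear leveling and garbage collection), and the names l2p_table and translate are hypothetical.

```c
#include <stdint.h>

#define PAGE_SIZE     4096u
#define MANAGED_PAGES 1024u          /* illustrative size only */
#define INVALID_PPN   0xFFFFFFFFu    /* marks an unmapped logical page */

/* Conversion table 28: logical page number -> physical page number. */
static uint32_t l2p_table[MANAGED_PAGES];

/* Translate a logical byte address into a physical byte address, as in
 * FIG. 2 (address lookup 32 returning physical address 34). Returns
 * UINT64_MAX if no valid mapping exists. */
static uint64_t translate(uint64_t logical_addr)
{
    uint64_t lpn    = logical_addr / PAGE_SIZE;
    uint64_t offset = logical_addr % PAGE_SIZE;

    if (lpn >= MANAGED_PAGES || l2p_table[lpn] == INVALID_PPN)
        return UINT64_MAX;  /* fault: caller must handle */

    return (uint64_t)l2p_table[lpn] * PAGE_SIZE + offset;
}
```

Every managed-portion access pays for this indirection, which is one source of the extra latency noted for process 60; unmanaged-portion accesses skip it entirely.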
Similarly, FIG. 4 is a flowchart illustrating an exemplary process 80 for using the unmanaged portion 24 of the memory system 14 of FIG. 1 according to an exemplary aspect of the present disclosure. The process 80 begins with storing the physical addresses of the unmanaged portion 24 at the host 12 (block 82). Note that this storing may be done when the device is integrated. The host 12 generates an instruction (e.g., a second instruction) for the unmanaged portion 24 of the memory element 20 (block 84). The host 12 associates a physical address of the unmanaged portion 24 with the instruction (block 86). The host 12 communicates the instruction to the unmanaged portion 24 over the same communications bus 16 using the same bus protocol as instructions sent to the managed portion 22 (block 88). The memory element 20 receives the instruction having the physical address of the unmanaged portion 24 (block 90). The unmanaged portion 24 executes the instruction (block 92). When the host 12 and the unmanaged portion 24 are not executing instructions, the host 12 may run the FTL as needed (block 94).
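The host-side counterpart of process 80 can be sketched as follows, building on the hypothetical mem_cmd encoding shown earlier. The unmanaged_region record, bus_send(), and all other names here are assumptions for illustration, not part of the disclosed design; bus_send() stands in for whatever bus driver actually moves commands over the communications bus 16.

```c
#include <stdbool.h>
#include <stdint.h>

/* Physical address range of the unmanaged portion, recorded at the host
 * when the device is integrated (block 82 of FIG. 4). */
struct unmanaged_region {
    uint64_t phys_base;
    uint64_t phys_size;
};

static const struct unmanaged_region g_unmanaged = {
    .phys_base = 0x0,        /* illustrative values only */
    .phys_size = 8u << 20,   /* e.g., just large enough for MSS code */
};

static bool in_unmanaged(uint64_t phys, uint32_t len)
{
    return phys >= g_unmanaged.phys_base &&
           phys + len <= g_unmanaged.phys_base + g_unmanaged.phys_size;
}

/* Hypothetical bus driver entry point (same bus and protocol as the
 * managed-portion traffic); struct mem_cmd as in the earlier sketch. */
extern int bus_send(const struct mem_cmd *cmd);

/* Blocks 84-88: generate the instruction, associate it with a physical
 * address, and transfer it to the unmanaged portion. */
static int unmanaged_read(uint64_t phys, uint32_t len)
{
    if (!in_unmanaged(phys, len))
        return -1;  /* address is not in the host-managed region */

    struct mem_cmd c = make_unmanaged_read(phys, len);
    return bus_send(&c);
}
```

Because the host already holds the physical address range, no translation step sits between the request and the media, which is what gives process 80 its low latency; in exchange, the host must also run the FTL housekeeping for this region itself (block 94).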
Bifurcated memory management for memory elements according to aspects disclosed herein may be provided in or integrated into any processor-based device. Non-limiting examples include a set-top box, an entertainment unit, a navigation device, a communications device, a fixed location data unit, a mobile location data unit, a mobile phone, a cellular phone, a computer, a portable computer, a desktop computer, a personal digital assistant (PDA), a monitor, a computer monitor, a television, a tuner, a radio, a satellite radio, a music player, a digital music player, a portable music player, a digital video player, a video player, a digital video disc (DVD) player, and a portable digital video player.

In this regard, FIG. 5 illustrates an example of a processor-based system 100 that may employ the host 12 and the memory system 14 illustrated in FIG. 1. In this example, the processor-based system 100 includes one or more central processing units (CPUs) 102, each including one or more processors 104. The CPU(s) 102 may be the host 12. The CPU(s) 102 may have cache memory 106 coupled to the processor(s) 104 for rapid access to temporarily stored data and/or for storage of logical and physical addresses. The CPU(s) 102 are coupled to the system bus 108, which may couple the devices included in the processor-based system 100 to one another. The system bus 108 may be or may include the communications bus 16 illustrated in FIG. 1. As is well known, the CPU(s) 102 communicate with these other devices by exchanging address, control, and data information over the system bus 108. For example, the CPU(s) 102 may communicate instructions to the memory system 110, which may be the memory system 14 of FIG. 1.

Other devices may be connected to the system bus 108 or other buses. As illustrated in FIG. 5, these devices may include, as examples, the memory system 110, one or more input devices 112, one or more output devices 114, one or more network interface devices 116, and one or more display controllers 118. The input device(s) 112 may include any type of input device, including but not limited to input keys, switches, voice processors, etc. The output device(s) 114 may include any type of output device, including but not limited to audio, video, other visual indicators, etc. The network interface device(s) 116 may be any device configured to allow exchange of data to and from a network 120. The network 120 may be any type of network, including but not limited to a wired or wireless network, a private or public network, a local area network (LAN), a wireless local area network (WLAN), a wide area network (WAN), a Bluetooth™ network, and the Internet. The network interface device(s) 116 may be configured to support any type of communications protocol desired.

The CPU(s) 102 may also be configured to access the display controller(s) 118 over the system bus 108 to control information sent to one or more displays 122. The display controller(s) 118 send the information to be displayed to the display(s) 122 via one or more video processors 124, which process the information to be displayed into a format suitable for the display(s) 122. The display(s) 122 may include any type of display, including but not limited to a cathode ray tube (CRT), a liquid crystal display (LCD), a plasma display, a light emitting diode (LED) display, etc.

Those of skill in the art will further appreciate that the various illustrative logical blocks, modules, circuits, and algorithms described in connection with the aspects disclosed herein may be implemented as electronic hardware, as instructions stored in memory or in another computer-readable medium and executed by a processor or other processing device, or as combinations of both. The devices described herein may be employed in any circuit, hardware component, integrated circuit (IC), or IC chip, as examples. Memory disclosed herein may be any type and size of memory and may be configured to store any type of information desired. To clearly illustrate this interchangeability, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. How such functionality is implemented depends upon the particular application, design choices, and/or design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.

The various illustrative logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented or performed with a processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. The processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration).

The aspects disclosed herein may be embodied in hardware and in instructions that are stored in hardware, and may reside, for example, in random access memory (RAM), flash memory, read only memory (ROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), registers, a hard disk, a removable disk, a CD-ROM, or any other form of computer-readable medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC.
The ASIC may reside in a remote station. In the alternative, the processor and the storage medium may reside as discrete components in a remote station, a base station, or a server.

It is also noted that the operational steps described in any of the exemplary aspects herein are described to provide examples and discussion. The operations described may be performed in numerous different orders other than the illustrated order. Furthermore, operations described in a single operational step may actually be performed in a number of different steps. Additionally, one or more operational steps discussed in the exemplary aspects may be combined. It is to be understood that the operational steps illustrated in the flowchart diagrams may be subject to numerous different modifications, as will be readily apparent to one of skill in the art. Those of skill in the art will also understand that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.

The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples and designs described herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein. |
Embodiments of the disclosure are in the field of advanced integrated circuit structure fabrication and, in particular, 10 nanometer node and smaller integrated circuit structure fabrication and the resulting structures. In an example, an integrated circuit structure includes a fin having a lower fin portion and an upper fin portion. An insulating structure is directly adjacent to sidewalls of the lower fin portion of the fin. A first gate electrode is over the upper fin portion and over a first portion of the insulating structure. A second gate electrode is over the upper fin portion and over a second portion of the insulating structure. A first dielectric spacer is along a sidewall of the first gate electrode. A second dielectric spacer is along a sidewall of the second gate electrode, the second dielectric spacer continuous with the first dielectric spacer over a third portion of the insulating structure between the first gate electrode and the second gate electrode. |
1. An integrated circuit structure comprising:
a fin comprising silicon, the fin having a lower fin portion and an upper fin portion;
an insulating structure directly adjacent to sidewalls of the lower fin portion of the fin;
a first gate electrode over the upper fin portion and over a first portion of the insulating structure;
a second gate electrode over the upper fin portion and over a second portion of the insulating structure;
a first dielectric spacer along a sidewall of the first gate electrode; and
a second dielectric spacer along a sidewall of the second gate electrode, the second dielectric spacer continuous with the first dielectric spacer over a third portion of the insulating structure between the first gate electrode and the second gate electrode.

2. The integrated circuit structure of claim 1, wherein the first dielectric spacer and the second dielectric spacer comprise silicon and nitrogen.

3. The integrated circuit structure of claim 1, further comprising:
embedded source or drain structures on opposite sides of the first gate electrode and on opposite sides of the second gate electrode.

4. The integrated circuit structure of claim 1, wherein the insulating structure comprises a first insulating layer, a second insulating layer directly on the first insulating layer, and a dielectric fill material laterally directly adjacent to the second insulating layer.

5. The integrated circuit structure of claim 4, wherein the first insulating layer is an undoped insulating layer comprising nitrogen and oxygen.

6. The integrated circuit structure of claim 4, wherein the second insulating layer comprises silicon and nitrogen.

7. The integrated circuit structure of claim 4, wherein the dielectric fill material comprises silicon and oxygen.

8. An integrated circuit structure comprising:
a first fin comprising silicon, the first fin having a lower fin portion and an upper fin portion;
a second fin comprising silicon, the second fin having a lower fin portion and an upper fin portion;
an insulating structure directly adjacent to sidewalls of the lower fin portion of the first fin and directly adjacent to sidewalls of the lower fin portion of the second fin;
a gate electrode over the upper fin portion of the first fin, over the upper fin portion of the second fin, and over a first portion of the insulating structure;
a first dielectric spacer along a sidewall of the upper fin portion of the first fin; and
a second dielectric spacer along a sidewall of the upper fin portion of the second fin, the second dielectric spacer continuous with the first dielectric spacer over a second portion of the insulating structure between the first fin and the second fin.

9. The integrated circuit structure of claim 8, wherein the first dielectric spacer and the second dielectric spacer comprise silicon and nitrogen.

10. The integrated circuit structure of claim 8, further comprising:
embedded source or drain structures on opposite sides of the gate electrode, the embedded source or drain structures having a bottom surface, along sidewalls of the upper fin portions of the first fin and the second fin, lower than a top surface of the first dielectric spacer and the second dielectric spacer, and the embedded source or drain structures having a top surface, along the sidewalls of the upper fin portions of the first fin and the second fin, higher than the top surface of the first dielectric spacer and the second dielectric spacer.

11. The integrated circuit
structure of claim 8, wherein the insulating structure comprises a first insulating layer, a second insulating layer directly on the first insulating layer, and a dielectric fill material laterally directly adjacent to the second insulating layer.

12. The integrated circuit structure of claim 11, wherein the first insulating layer is an undoped insulating layer comprising nitrogen and oxygen.

13. The integrated circuit structure of claim 11, wherein the second insulating layer comprises silicon and nitrogen.

14. The integrated circuit structure of claim 11, wherein the dielectric fill material comprises silicon and oxygen.

15. A method of fabricating an integrated circuit structure, the method comprising:
forming a fin comprising silicon, the fin having a lower fin portion and an upper fin portion;
forming an insulating structure directly adjacent to sidewalls of the lower fin portion of the fin;
forming a first gate structure and a second gate structure over the upper fin portion and over a first portion and a second portion of the insulating structure, respectively;
forming a dielectric material conformal with the upper fin portion of the fin, conformal with the first gate structure and the second gate structure, and conformal with a third portion of the insulating structure between the first gate structure and the second gate structure;
forming a hard mask material over the dielectric material;
recessing the hard mask material to expose the portions of the dielectric material conformal with the upper fin portion of the fin and conformal with the first gate structure and the second gate structure, the recessed hard mask material covering the portion of the dielectric material conformal with the third portion of the insulating structure between the first gate structure and the second gate structure; and
anisotropically etching the dielectric material and subsequently removing the recessed hard mask material to form a first dielectric spacer along a sidewall of the first gate structure and a second dielectric spacer along a sidewall of the second gate structure, the second dielectric spacer continuous with the first dielectric spacer over the third portion of the insulating structure between the first gate structure and the second gate structure.

16. The method of claim 15, wherein recessing the hard mask material comprises wet etching the hard mask material.

17. The method of claim 15, wherein recessing the hard mask material comprises using an ashing, dry etch, or plasma etch process.

18. The method of claim 15, wherein forming the hard mask material comprises forming a carbon-based hard mask material.

19. The method of claim 15, wherein the first gate structure and the second gate structure are dummy gate structures, the method further comprising:
replacing the first gate structure and the second gate structure with a permanent gate dielectric and gate electrode stack.

20. The method of claim 15, further comprising:
forming embedded source or drain structures on opposite sides of the first gate structure and on opposite sides of the second gate structure. |
Continuous Gate and Fin Spacers for Advanced Integrated Circuit Structure Fabrication

Cross-Reference to Related Applications

The present application claims the benefit of a U.S. Provisional Application.

Technical Field

Embodiments of the present disclosure are in the field of advanced integrated circuit structure fabrication and, in particular, 10 nanometer node and smaller integrated circuit structure fabrication and the resulting structures.

Background

The scaling of features in integrated circuits has been a driving force behind an ever-growing semiconductor industry for the past several decades. Scaling to smaller and smaller features enables increased densities of functional units on the limited real estate of semiconductor chips. For example, shrinking transistor size allows for the incorporation of an increased number of memory or logic devices on a chip, lending to the fabrication of products with increased capacity. The drive for ever-more capacity, however, is not without issue. The necessity to optimize the performance of each device becomes increasingly significant.

Variability in conventional and currently known fabrication processes may limit the possibility to further extend them into the 10 nanometer node or sub-10 nanometer node range. Consequently, fabrication of the functional components needed for future technology nodes may require the introduction of new methodologies or the integration of new technologies in current fabrication processes, or in place of current fabrication processes.

Brief Description of the Drawings

FIG. 1A illustrates a cross-sectional view of a starting structure following deposition, but prior to patterning, of a hard mask material layer formed over an interlayer dielectric (ILD) layer.

FIG. 1B illustrates a cross-sectional view of the structure of FIG. 1A following patterning of the hard mask layer by pitch halving.

FIG. 2A is a schematic of a pitch quartering approach used to fabricate semiconductor fins, in accordance with an embodiment of the present disclosure.

FIG. 2B illustrates a cross-sectional view of semiconductor fins fabricated using a pitch quartering approach, in accordance with an embodiment of the present disclosure.

FIG. 3A is a schematic of a merged fin pitch quartering approach used to fabricate semiconductor fins, in accordance with an embodiment of the present disclosure.

FIG. 3B illustrates a cross-sectional view of semiconductor fins fabricated using a merged fin pitch quartering approach, in accordance with an embodiment of the present disclosure.

FIGS. 4A-4C illustrate cross-sectional views representing various operations in a method of fabricating a plurality of semiconductor fins, in accordance with an embodiment of the present disclosure.

FIG.
5A illustrates a cross-sectional view of a pair of semiconductor fins separated by a three-layer trench isolation structure, in accordance with an embodiment of the present disclosure.

FIG. 5B illustrates a cross-sectional view of another pair of semiconductor fins separated by another three-layer trench isolation structure, in accordance with another embodiment of the present disclosure.

FIGS. 6A-6D illustrate cross-sectional views of various operations in the fabrication of a three-layer trench isolation structure, in accordance with an embodiment of the present disclosure.

FIGS. 7A-7E illustrate angled three-dimensional cross-sectional views of various operations in a method of fabricating an integrated circuit structure, in accordance with an embodiment of the present disclosure.

FIGS. 8A-8F illustrate slightly projected cross-sectional views, taken along the a-a' line of FIG. 7E, of various operations in a method of fabricating an integrated circuit structure, in accordance with an embodiment of the present disclosure.

FIG. 9A illustrates a slightly projected cross-sectional view, taken along the a-a' line of FIG. 7E, of an integrated circuit structure including a permanent gate stack and epitaxial source or drain regions, in accordance with an embodiment of the present disclosure.

FIG. 9B illustrates a cross-sectional view, taken along the b-b' line of FIG. 7E, of an integrated circuit structure including epitaxial source or drain regions and a multi-layer trench isolation structure, in accordance with an embodiment of the present disclosure.

FIG. 10 illustrates a cross-sectional view, taken at a source or drain location, of an integrated circuit structure, in accordance with an embodiment of the present disclosure.

FIG. 11 illustrates a cross-sectional view, taken at a source or drain location, of another integrated circuit structure, in accordance with another embodiment of the present disclosure.

FIGS. 12A-12D illustrate cross-sectional views, taken at source or drain locations, representing various operations in a method of fabricating an integrated circuit structure, in accordance with an embodiment of the present disclosure.

FIGS. 13A and 13B illustrate plan views representing various operations in a method of patterning fins with multi-gate spacing for forming local isolation structures, in accordance with an embodiment of the present disclosure.

FIGS. 14A-14D illustrate plan views representing various operations in another method of patterning fins with single-gate spacing for forming local isolation structures, in accordance with another embodiment of the present disclosure.

FIG. 15 illustrates a cross-sectional view of an integrated circuit structure having fins with multi-gate spacing for local isolation, in accordance with an embodiment of the present disclosure.

FIG. 16A illustrates a cross-sectional view of an integrated circuit structure having fins with single-gate spacing for local isolation, in accordance with another embodiment of the present disclosure.

FIG.
16B illustrates a cross-sectional view showing locations where a fin isolation structure may be formed in place of a gate electrode, in accordance with an embodiment of the present disclosure.

FIGS. 17A-17C illustrate various depth possibilities for fin cuts made using a fin trim isolation approach, in accordance with an embodiment of the present disclosure.

FIG. 18 illustrates a plan view, and a corresponding cross-sectional view taken along the a-a' axis, showing possible options for the depth of a local fin cut location within a fin relative to the depth of a wider fin cut location, in accordance with an embodiment of the present disclosure.

FIGS. 19A and 19B illustrate cross-sectional views of various operations in a method of selecting fin end stressor locations at ends of fins having a wide cut, in accordance with an embodiment of the present disclosure.

FIGS. 20A and 20B illustrate cross-sectional views of various operations in a method of selecting fin end stressor locations at ends of fins having a local cut, in accordance with an embodiment of the present disclosure.

FIGS. 21A-21M illustrate cross-sectional views of various operations in a method of fabricating an integrated circuit structure having differentiated fin end dielectric plugs, in accordance with an embodiment of the present disclosure.

FIGS. 22A-22D illustrate cross-sectional views of exemplary structures of a PMOS fin end stressor dielectric plug, in accordance with an embodiment of the present disclosure.

FIG. 23A illustrates a cross-sectional view of another semiconductor structure having fin end stress-inducing features, in accordance with another embodiment of the present disclosure.

FIG. 23B illustrates a cross-sectional view of another semiconductor structure having fin end stress-inducing features, in accordance with another embodiment of the present disclosure.

FIG. 24A illustrates a perspective view of a fin having uniaxial tensile stress, in accordance with an embodiment of the present disclosure.

FIG. 24B illustrates a perspective view of a fin having uniaxial compressive stress, in accordance with an embodiment of the present disclosure.

FIGS. 25A and 25B illustrate plan views of various operations in a method of patterning fins with single-gate spacing for forming a local isolation structure at a selected gate line cut location, in accordance with an embodiment of the present disclosure.

FIGS. 26A-26C illustrate cross-sectional views of various possibilities for dielectric plugs at multi-cut and fin trim isolation (FTI) local fin cut locations, and at multi-cut only locations, for various regions of the structure of FIG. 25B, in accordance with an embodiment of the present disclosure.

FIG. 27A illustrates a plan view, and corresponding cross-sectional views, of an integrated circuit structure having a gate line cut with a dielectric plug that extends into the dielectric spacers of the gate line, in accordance with an embodiment of the present disclosure.

FIG. 27B illustrates a plan view, and corresponding cross-sectional views, of an integrated circuit structure having a gate line cut with a dielectric plug that extends beyond the dielectric spacers of the gate line, in accordance with another embodiment of the present disclosure.

FIGS. 28A-28F illustrate cross-sectional views of various operations in a method of fabricating an integrated circuit structure having a gate line cut with a dielectric plug, in accordance with another embodiment of the present disclosure, the dielectric plug having
an upper portion extending beyond the dielectric spacers of the gate line and a lower portion extending into the dielectric spacers of the gate line.

FIGS. 29A-29C illustrate plan views and corresponding cross-sectional views of an integrated circuit structure having residual dummy gate material at a portion of a bottom of a permanent gate stack, in accordance with an embodiment of the present disclosure.

FIGS. 30A-30D illustrate cross-sectional views of various operations in a method of fabricating an integrated circuit structure having residual dummy gate material at a portion of a bottom of a permanent gate stack, in accordance with another embodiment of the present disclosure.

FIG. 31A illustrates a cross-sectional view of a semiconductor device having a ferroelectric or antiferroelectric gate dielectric structure, in accordance with an embodiment of the present disclosure.

FIG. 31B illustrates a cross-sectional view of another semiconductor device having a ferroelectric or antiferroelectric gate dielectric structure, in accordance with another embodiment of the present disclosure.

FIG. 32A illustrates a plan view of a plurality of gate lines over a pair of semiconductor fins, in accordance with an embodiment of the present disclosure.

FIG. 32B illustrates a cross-sectional view taken along the a-a' axis of FIG. 32A, in accordance with an embodiment of the present disclosure.

FIG. 33A illustrates a cross-sectional view of a pair of NMOS devices having differentiated voltage thresholds based on modulated doping, and a pair of PMOS devices having differentiated voltage thresholds based on modulated doping, in accordance with an embodiment of the present disclosure.

FIG. 33B illustrates a cross-sectional view of a pair of NMOS devices having differentiated voltage thresholds based on differentiated gate electrode structures, and a pair of PMOS devices having differentiated voltage thresholds based on differentiated gate electrode structures, in accordance with another embodiment of the present disclosure.

FIG. 34A illustrates a cross-sectional view of three NMOS devices having differentiated voltage thresholds based on differentiated gate electrode structures and modulated doping, and three PMOS devices having differentiated voltage thresholds based on differentiated gate electrode structures and modulated doping, in accordance with an embodiment of the present disclosure.

FIG. 34B illustrates a cross-sectional view of three NMOS devices having differentiated voltage thresholds based on differentiated gate electrode structures and modulated doping, and three PMOS devices having differentiated voltage thresholds based on differentiated gate electrode structures and modulated doping, in accordance with another embodiment of the present disclosure.

FIGS. 35A-35D illustrate cross-sectional views of various operations in a method of fabricating NMOS devices having differentiated voltage thresholds based on differentiated gate electrode structures, in accordance with an embodiment of the present disclosure.

FIGS. 36A-36D illustrate cross-sectional views of various operations in a method of fabricating PMOS devices having differentiated voltage thresholds based on differentiated gate electrode structures, in accordance with an embodiment of the present disclosure.

FIG.
37 illustrates a cross-sectional view of an integrated circuit structure having a P/N junction, in accordance with an embodiment of the present disclosure.

FIGS. 38A-38H illustrate cross-sectional views of various operations in a method of fabricating an integrated circuit structure using a dual metal gate replacement gate process flow, in accordance with an embodiment of the present disclosure.

FIGS. 39A-39H illustrate cross-sectional views representing various operations in a method of fabricating a dual silicide based integrated circuit, in accordance with an embodiment of the present disclosure.

FIG. 40A illustrates a cross-sectional view of an integrated circuit structure having trench contacts for an NMOS device, in accordance with an embodiment of the present disclosure.

FIG. 40B illustrates a cross-sectional view of an integrated circuit structure having trench contacts for a PMOS device, in accordance with another embodiment of the present disclosure.

FIG. 41A illustrates a cross-sectional view of a semiconductor device having conductive contacts on a source or drain region, in accordance with an embodiment of the present disclosure.

FIG. 41B illustrates a cross-sectional view of another semiconductor device having conductive contacts on an elevated source or drain region, in accordance with an embodiment of the present disclosure.

FIG. 42 illustrates a plan view of a plurality of gate lines over a pair of semiconductor fins, in accordance with an embodiment of the present disclosure.

FIGS. 43A-43C illustrate cross-sectional views, taken along the a-a' line of FIG. 42, of various operations in a method of fabricating an integrated circuit structure, in accordance with an embodiment of the present disclosure.

FIG. 44 illustrates a cross-sectional view, taken along the b-b' line of FIG. 42, of an integrated circuit structure, in accordance with an embodiment of the present disclosure.

FIGS. 45A and 45B illustrate a plan view and corresponding cross-sectional view, respectively, of an integrated circuit structure including trench contact plugs having a hard mask material thereon, in accordance with an embodiment of the present disclosure.

FIGS. 46A-46D illustrate cross-sectional views representing various operations in a method of fabricating an integrated circuit structure including trench contact plugs having a hard mask material thereon, in accordance with an embodiment of the present disclosure.

FIG. 47A illustrates a plan view of a semiconductor device having a gate contact disposed over an inactive portion of a gate electrode.

FIG. 47B illustrates a cross-sectional view of a non-planar semiconductor device having a gate contact disposed over an inactive portion of a gate electrode.

FIG. 48A illustrates a plan view of a semiconductor device having a gate contact via disposed over an active portion of a gate electrode, in accordance with an embodiment of the present disclosure.
Figure 48B illustrates a cross-sectional view of a non-planar semiconductor device having a gate contact via disposed over an active portion of a gate electrode, in accordance with an embodiment of the present disclosure.
Figures 49A-49D illustrate cross-sectional views showing various operations in a method of fabricating a semiconductor structure having a gate contact structure disposed over an active portion of a gate, in accordance with an embodiment of the present disclosure.
Figure 50 illustrates a plan view and corresponding cross-sectional views of an integrated circuit structure having trench contacts including an overlying insulating cap layer, in accordance with an embodiment of the present disclosure.
Figures 51A-51F illustrate cross-sectional views of various integrated circuit structures having trench contacts including an overlying insulating cap layer and having gate stacks including an overlying insulating cap layer, in accordance with an embodiment of the present disclosure.
Figure 52A shows a plan view of another semiconductor device having a gate contact via disposed over an active portion of a gate, in accordance with another embodiment of the present disclosure.
Figure 52B illustrates a plan view of another semiconductor device having trench contact vias that couple pairs of trench contacts, in accordance with another embodiment of the present disclosure.
Figures 53A-53E illustrate cross-sectional views showing various operations in a method of fabricating an integrated circuit structure with a gate stack having an overlying insulating cap layer, in accordance with an embodiment of the present disclosure.
Figure 54 is a schematic illustration of a pitch quartering scheme for trenches used to fabricate interconnect structures, in accordance with an embodiment of the present disclosure.
Figure 55A shows a cross-sectional view of a metallization layer fabricated using a pitch quartering scheme, in accordance with an embodiment of the present disclosure.
Figure 55B illustrates a cross-sectional view of a metallization layer fabricated using a pitch halving scheme over a metallization layer fabricated using a pitch quartering scheme, in accordance with an embodiment of the present disclosure.
Figure 56A illustrates a cross-sectional view of an integrated circuit structure in which a metallization layer having one metal line composition is over a metallization layer having a different metal line composition, in accordance with an embodiment of the present disclosure.
Figure 56B illustrates a cross-sectional view of an integrated circuit structure in which a metallization layer having one metal line composition is coupled to a metallization layer having a different metal line composition, in accordance with an embodiment of the present disclosure.
Figures 57A-57C illustrate cross-sectional views of individual interconnect lines having various liner and conductive cap structural arrangements, in accordance with an embodiment of the present disclosure.
Figure 58 illustrates a cross-sectional view of an integrated circuit structure in which four metallization layers having one metal line composition and pitch are over two metallization layers having a different metal line composition and a smaller pitch, in accordance with an embodiment of the present disclosure.
Figures 59A-59D illustrate cross-sectional views of various interconnect line and via arrangements having a bottom conductive layer, in accordance with an embodiment of the present disclosure.
Figures 60A-60D illustrate cross-sectional views of structural arrangements of a recessed line topography for a BEOL metallization layer, in accordance with an embodiment of the present disclosure.
Figures 61A-61D illustrate cross-sectional views of structural arrangements of a stepped topography for a BEOL metallization layer, in accordance with an embodiment of the present disclosure.
Figure 62A shows a plan view of a metallization layer, and corresponding cross-sectional views taken along the a-a' axis of the plan view, in accordance with an embodiment of the present disclosure.
Figure 62B shows a cross-sectional view of a line end or plug, in accordance with an embodiment of the present disclosure.
Figure 62C shows another cross-sectional view of a line end or plug, in accordance with an embodiment of the present disclosure.
Figures 63A-63F show plan and corresponding cross-sectional views representing various operations in a plug last processing scheme, in accordance with an embodiment of the present disclosure.
Figure 64A shows a cross-sectional view of a conductive line plug having a seam therein, in accordance with an embodiment of the present disclosure.
Figure 64B illustrates a cross-sectional view of a stack of metallization layers including a conductive line plug at a lower metal line location, in accordance with an embodiment of the present disclosure.
Figure 65 shows a first view of a cell layout for a memory cell.
Figure 66 shows a first view of a cell layout for a memory cell having internal node jumpers, in accordance with an embodiment of the present disclosure.
Figure 67 shows a second view of the cell layout for the memory cell.
Figure 68 shows a second view of the cell layout for the memory cell having internal node jumpers, in accordance with an embodiment of the present disclosure.
Figure 69 shows a third view of the cell layout for the memory cell.
Figure 70 illustrates a third view of the cell layout for the memory cell having internal node jumpers, in accordance with an embodiment of the present disclosure.
Figures 71A and 71B illustrate a bit cell layout and a schematic diagram, respectively, for a six transistor (6T) static random access memory (SRAM), in accordance with an embodiment of the present disclosure.
Figure 72 shows cross-sectional views of two different layouts for the same standard cell, in accordance with an embodiment of the present disclosure.
Figure 73 shows a plan view of four different cell arrangements of even (E) or odd (O) designation, in accordance with an embodiment of the present disclosure.
Figure 74 shows a plan view of a block-level multi-grid, in accordance with an embodiment of the present disclosure.
Figure 75 illustrates an exemplary acceptable (pass) layout based on standard cells having different versions, in accordance with an embodiment of the present disclosure.
Figure 76 illustrates an exemplary unacceptable (fail) layout based on standard cells having different versions, in accordance with an embodiment of the present disclosure.
Figure 77 illustrates another exemplary acceptable (pass) layout based on standard cells having different versions, in accordance with an embodiment of the present disclosure.
Figure 78 shows a partially cut plan view and a corresponding cross-sectional view of a fin-based thin film resistor structure, in accordance with an embodiment of the present disclosure, wherein the cross-sectional view is taken along the a-a' axis of the partially cut plan view.
Figures 79-83 illustrate plan and corresponding cross-sectional views representing various operations in a method of fabricating a fin-based thin film resistor structure, in accordance with an embodiment of the present disclosure.
Figure 84 shows a plan view of a fin-based thin film resistor structure having various exemplary locations for anode or cathode electrode contacts, in accordance with an embodiment of the present disclosure.
Figures 85A-85D illustrate plan views of various fin geometries for fabricating fin-based precision resistors, in accordance with an embodiment of the present disclosure.
Figure 86 shows a cross-sectional view of a lithographic mask structure, in accordance with an embodiment of the present disclosure.
Figure 87 shows a computing device, in accordance with an embodiment of the present disclosure.
Figure 88 illustrates an interposer that includes one or more embodiments of the present disclosure.
Figure 89 is an isometric view of a mobile computing platform employing an IC fabricated in accordance with one or more processes described herein, or including one or more features described herein, in accordance with an embodiment of the present disclosure.
Figure 90 shows a cross-sectional view of a flip-chip mounted die, in accordance with an embodiment of the present disclosure.

Detailed Description

The fabrication of advanced integrated circuit structures is described. In the following description, numerous specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent to those skilled in the art that embodiments of the present disclosure may be practiced without these specific details. In other instances, well-known features, such as integrated circuit design layouts, are not described in detail in order to avoid unnecessarily obscuring embodiments of the present disclosure. Furthermore, it is to be understood that the various embodiments shown in the drawings are illustrative representations and are not necessarily drawn to scale.

The following detailed description is merely illustrative in nature and is not intended to limit the embodiments of the subject matter or the application and uses of such embodiments. As used herein, the word "exemplary" means "serving as an example, instance, or illustration." Any embodiment described herein as exemplary is not necessarily to be construed as preferred or advantageous over other embodiments. Furthermore, there is no intention to be bound by any expressed or implied theory presented herein.

This description includes references to "one embodiment" or "an embodiment." The appearances of the phrases "in one embodiment" or "in an embodiment" are not necessarily referring to the same embodiment. Particular features, structures, or characteristics may be combined in any suitable manner consistent with the present disclosure.

Terminology. The following paragraphs provide definitions or context for terms found in this disclosure (including the appended claims):

"Comprising." This term is open-ended. As used in the appended claims, this term does not exclude additional structures or operations.

"Configured to." Various units or components may be described or claimed as being "configured to" perform one or more tasks. In such contexts, "configured to" is used to connote structure by indicating that the unit or component includes structure that performs those tasks during operation. As such, the unit or component can be said to be configured to perform the task even when the specified unit or component is not currently operational (e.g., is not turned on or active).
Reciting that a unit or circuit or component is "configured to" perform one or more tasks is expressly intended not to invoke 35 U.S.C. § 112, sixth paragraph, for that unit or component.

"First," "Second," etc. As used herein, these terms are used as labels for the nouns that they precede, and do not imply any type of ordering (e.g., spatial, temporal, logical, etc.).

"Coupled." The following description refers to elements or nodes or features being "coupled" together. As used herein, "coupled" means that one element or node or feature is directly or indirectly joined to (or directly or indirectly communicates with) another element or node or feature, and not necessarily mechanically.

In addition, certain terminology is used in the following description for the purpose of reference only, and thus such terminology is not intended to be limiting. For example, terms such as "upper," "lower," "above," and "below" refer to directions in the drawings to which reference is made. Terms such as "front," "back," "rear," "side," "outside," and "inside" describe the orientation or location, or both, of portions of a component within a consistent but arbitrary frame of reference, which is made clear by reference to the text and the associated drawings describing the component under discussion. Such terminology may include the words specifically mentioned above, derivatives thereof, and words of similar import.

"Inhibit." As used herein, inhibit is used to describe a reducing or minimizing effect. When a component or feature is described as inhibiting an action, motion, or condition, it may completely prevent that result or outcome or future state. Additionally, "inhibit" can also refer to a reduction or lessening of the outcome, performance, or effect that might otherwise occur. Accordingly, when a component, element, or feature is referred to as inhibiting a result or state, it need not completely prevent or eliminate the result or state.

Embodiments described herein may be directed to front-end-of-line (FEOL) semiconductor processing and structures. FEOL is the first portion of integrated circuit (IC) fabrication, in which the individual devices (e.g., transistors, capacitors, resistors, etc.) are patterned in a semiconductor substrate or layer. FEOL generally covers everything up to, but not including, the deposition of metal interconnect layers. Following the final FEOL operation, the result is typically a wafer with isolated transistors (e.g., without any wires).

Embodiments described herein may be directed to back-end-of-line (BEOL) semiconductor processing and structures. BEOL is the second portion of IC fabrication, in which the individual devices (e.g., transistors, capacitors, resistors, etc.) are interconnected with wiring on the wafer, such as by one or more metallization layers. BEOL includes contacts, insulating layers (dielectrics), metal levels, and bonding sites for chip-to-package connections. In the BEOL portion of the fabrication stage, contacts (pads), interconnects, vias, and dielectric structures are formed. For modern IC processes, more than 10 metal layers may be added in the BEOL.

The embodiments described below may be applicable to FEOL processing and structures, BEOL processing and structures, or both FEOL and BEOL processing and structures. In particular, although an exemplary processing scheme may be illustrated using a FEOL processing scenario, such approaches may also be applicable to BEOL processing.
Likewise, although an exemplary processing scheme may be illustrated using a BEOL processing scenario, such approaches may also be applicable to FEOL processing.

Pitch division processing and patterning schemes may be implemented to enable the embodiments described herein, or may be included as part of the embodiments described herein. Pitch division patterning typically refers to pitch halving, pitch quartering, and the like. Pitch division schemes may be applicable to FEOL processing, BEOL processing, or both FEOL (device) and BEOL (metallization) processing. In accordance with one or more embodiments described herein, lithography is first implemented to print unidirectional lines (e.g., either strictly unidirectional or predominantly unidirectional) at a pre-defined pitch. Pitch division processing is then implemented as a technique to increase line density.

In an embodiment, the term "grating structure" for fins, gate lines, metal lines, ILD lines, or hard mask lines is used herein to refer to a tight-pitch grating structure. In one such embodiment, the tight pitch is not achievable directly through a selected lithography. For example, a pattern based on a selected lithography may first be formed, but the pitch may be halved by the use of spacer mask patterning, as is known in the art. Furthermore, the initial pitch may be quartered by a second round of spacer mask patterning. Accordingly, the grating-like patterns described herein may have metal lines, ILD lines, or hard mask lines spaced at a substantially consistent pitch and having a substantially consistent width. For example, in some embodiments the pitch variation is within ten percent and the width variation is within ten percent, and in some embodiments the pitch variation is within five percent and the width variation is within five percent. The pattern may be fabricated by a pitch halving or pitch quartering, or other pitch division, approach. In an embodiment, the grating is not necessarily single-pitch.

In a first example, pitch halving can be implemented to double the line density of a fabricated grating structure. Figure 1A shows a cross-sectional view of a starting structure following deposition, but prior to patterning, of a hard mask material layer formed over an interlayer dielectric (ILD) layer. Figure 1B shows a cross-sectional view of the structure of Figure 1A following patterning of the hard mask layer by pitch halving.

Referring to Figure 1A, a starting structure 100 has a hard mask material layer 104 formed on an interlayer dielectric (ILD) layer 102. A patterned mask 106 is disposed above the hard mask material layer 104. The patterned mask 106 has spacers 108 formed along sidewalls of its features (lines), on the hard mask material layer 104.

Referring to Figure 1B, the hard mask material layer 104 is patterned in a pitch halving approach. Specifically, the patterned mask 106 is first removed. The resulting pattern of the spacers 108 has double the density, or half the pitch, of the features of the mask 106. The pattern of the spacers 108 is transferred, e.g., by an etch process, to the hard mask material layer 104 to form a patterned hard mask 110, as is depicted in Figure 1B. In one such embodiment, the patterned hard mask 110 is formed with a grating pattern having unidirectional lines. The grating pattern of the patterned hard mask 110 may be a tight-pitch grating pattern. For example, the tight pitch may not be achievable directly through a selected lithography technique.
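As an aside, the line-density arithmetic behind such pitch division schemes is simple to state explicitly. The following sketch is purely illustrative and is not part of the disclosed embodiments: the function name and the 80 nanometer starting pitch are assumptions introduced here (a placeholder for a lithographically printed grating pitch), not values taken from this disclosure.

```python
# Illustrative only: pitch after n rounds of spacer-based pitch division.
# The 80 nm starting pitch is an assumed placeholder, not a value from this disclosure.

def divided_pitch(printed_pitch_nm: float, rounds: int) -> float:
    """Each round of spacer mask patterning halves the pitch (doubles line density)."""
    return printed_pitch_nm / (2 ** rounds)

printed = 80.0                    # assumed single-exposure pitch, in nanometers
print(divided_pitch(printed, 1))  # pitch halving (one spacer round):    40.0
print(divided_pitch(printed, 2))  # pitch quartering (two spacer rounds): 20.0
```

In the 193i + P/n designation discussed below, the density multiplication factor n corresponds to 2 raised to the number of spacer rounds in this sketch.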
Further, although not shown, the initial pitch may be quartered by a second round of spacer mask patterning. Accordingly, the grating-like pattern of the patterned hard mask 110 of Figure 1B may have hard mask lines spaced at a constant pitch relative to one another and having a constant width. The dimensions achieved may be far smaller than the critical dimension of the lithographic technique employed.

Accordingly, for front-end-of-line (FEOL) or back-end-of-line (BEOL), or both, integration schemes, a blanket film may be patterned using lithography and etch processing, which may involve, for example, spacer-based double patterning (SBDP) or pitch halving, or spacer-based quadruple patterning (SBQP) or pitch quartering. It is to be appreciated that other pitch division approaches may also be implemented. In any case, in an embodiment, a gridded layout may be fabricated by a selected lithography approach, such as 193 nm immersion lithography (193i). Pitch division may be implemented to increase the density of lines in the gridded layout by a factor of n. Gridded layout formation with 193i lithography plus pitch division by a factor of n may be designated as 193i + P/n pitch division. In one such embodiment, 193 nm immersion scaling can be extended for many generations with cost-effective pitch division.

In the fabrication of integrated circuit devices, multi-gate transistors, such as tri-gate transistors, have become more prevalent as device dimensions continue to scale down. Tri-gate transistors are generally fabricated on either bulk silicon substrates or silicon-on-insulator substrates. In some instances, bulk silicon substrates are preferred due to their lower cost and compatibility with the existing high-yielding bulk silicon substrate infrastructure.

However, scaling multi-gate transistors has not been without consequence. As the dimensions of these fundamental building blocks of microelectronic circuitry are reduced, and as the sheer number of fundamental building blocks fabricated in a given region is increased, the constraints on the semiconductor processes used to fabricate these building blocks have become overwhelming.

In accordance with one or more embodiments of the present disclosure, a pitch quartering approach is implemented for patterning a semiconductor layer to form semiconductor fins. In one or more embodiments, a fused fin pitch quartering approach is implemented.

Figure 2A is a schematic illustration of a pitch quartering approach 200 used to fabricate semiconductor fins, in accordance with an embodiment of the present disclosure. Figure 2B shows a cross-sectional view of semiconductor fins fabricated using a pitch quartering approach, in accordance with an embodiment of the present disclosure.

Referring to Figure 2A, at operation (a), a photoresist layer (PR) is patterned to form photoresist features 202. The photoresist features 202 may be patterned using standard lithographic processing techniques, such as 193 nm immersion lithography. At operation (b), a material layer, such as an insulating layer or a dielectric hard mask layer, is patterned using the photoresist features 202 to form first backbone (BB1) features 204. First spacer (SP1) features 206 are then formed adjacent the sidewalls of the first backbone features 204. At operation (c), the first backbone features 204 are removed to leave only the first spacer features 206 remaining. Prior to or during removal of the first backbone features 204, the first spacer features 206 may be thinned to form thinned first spacer features 206', as is depicted in Figure 2A.
Depending on the spacing and dimensions required for the BB2 features (208, described below), the thinning may be performed (as shown) either before or after removal of BB1 (features 204). At operation (d), a material layer, such as an insulating layer or a dielectric hard mask layer, is patterned using the first spacer features 206 or the thinned first spacer features 206' to form second backbone (BB2) features 208. Second spacer (SP2) features 210 are then formed adjacent the sidewalls of the second backbone features 208. At operation (e), the second backbone features 208 are removed to leave only the second spacer features 210 remaining. A semiconductor layer may then be patterned using the remaining second spacer features 210 to provide a plurality of semiconductor fins having a pitch that is one quarter of the pitch of the initially patterned photoresist features 202. As an example, referring to Figure 2B, a plurality of semiconductor fins 250, such as silicon fins formed from a bulk silicon layer, is formed using the second spacer features 210 as a mask for patterning (e.g., dry or plasma etch patterning). In the example of Figure 2B, the plurality of semiconductor fins 250 all have essentially the same pitch and spacing.

It is to be appreciated that the spacing of the initially patterned photoresist features may be modified to alter the structural result of the pitch quartering process. In an example, Figure 3A is a schematic illustration of a fused fin pitch quartering approach 300 used to fabricate semiconductor fins, in accordance with an embodiment of the present disclosure. Figure 3B illustrates a cross-sectional view of semiconductor fins fabricated using a fused fin pitch quartering approach, in accordance with an embodiment of the present disclosure.

Referring to Figure 3A, at operation (a), a photoresist layer (PR) is patterned to form photoresist features 302. The photoresist features 302 may be patterned using standard lithographic processing techniques, such as 193 nm immersion lithography, but with a spacing that would ultimately violate the design rules otherwise needed to produce a uniformly spaced, pitch-divided pattern (e.g., a spacing referred to as a sub-design-rule space). At operation (b), a material layer, such as an insulating layer or a dielectric hard mask layer, is patterned using the photoresist features 302 to form first backbone (BB1) features 304. First spacer (SP1) features 306 are then formed adjacent the sidewalls of the first backbone features 304. However, in contrast to the scheme illustrated in Figure 2A, some of the adjacent first spacer features 306 are fused spacer features as a result of the tighter spacing of the photoresist features 302. At operation (c), the first backbone features 304 are removed to leave only the first spacer features 306 remaining. Some of the first spacer features 306 may be thinned, either before or after removal of the first backbone features 304, to form thinned first spacer features 306', as is depicted in Figure 3A. At operation (d), a material layer, such as an insulating layer or a dielectric hard mask layer, is patterned using the first spacer features 306 and the thinned first spacer features 306' to form second backbone (BB2) features 308. Second spacer (SP2) features 310 are then formed adjacent the sidewalls of the second backbone features 308. However, where a BB2 feature 308 is a fused feature, such as the center BB2 feature 308 of Figure 3A, second spacer features are not formed within the fused region. At operation (e), the second backbone features 308 are removed to leave only the second spacer features 310 remaining.
A semiconductor layer may then be patterned using the remaining second spacer features 310 to provide a plurality of semiconductor fins having a pitch that is one quarter of the pitch of the initially patterned photoresist features 302. As an example, referring to Figure 3B, a plurality of semiconductor fins 350, such as silicon fins formed from a bulk silicon layer, is formed using the second spacer features 310 as a mask for patterning (e.g., dry or plasma etch patterning). However, in the example of Figure 3B, the plurality of semiconductor fins 350 has a varying pitch and spacing. Such a fused fin spacer patterning approach may be implemented to essentially eliminate fins from certain locations of an otherwise uniform fin pattern. Thus, fusing the first spacer features 306 at selected locations allows six or four fins to be fabricated from two first backbone features 304, which would ordinarily yield eight fins, as described in association with Figures 2A and 2B. In one example, the fins within a group are at a tighter pitch than would normally be allowed by forming fins at an even pitch and then cutting out unwanted fins, although the latter approach may still be implemented in accordance with embodiments described herein.

In an exemplary embodiment, referring to Figure 3B, an integrated circuit structure includes a first plurality of semiconductor fins 352 having a longest dimension along a first direction (y, into the page). Adjacent individual semiconductor fins 353 of the first plurality of semiconductor fins 352 are spaced apart from one another by a first amount (S1) in a second direction (x), orthogonal to the first direction. A second plurality of semiconductor fins 354 has a longest dimension along the first direction y. Adjacent individual semiconductor fins 355 of the second plurality of semiconductor fins 354 are spaced apart from one another by the first amount (S1) in the second direction. The nearest semiconductor fins 356 and 357 of the first plurality of semiconductor fins 352 and the second plurality of semiconductor fins 354, respectively, are spaced apart from one another by a second amount (S2) in the second direction x. In an embodiment, the second amount S2 is greater than the first amount S1 but less than twice the first amount S1. In another embodiment, the second amount S2 is more than twice the first amount S1.

In one embodiment, the first plurality of semiconductor fins 352 and the second plurality of semiconductor fins 354 comprise silicon. In one embodiment, the first plurality of semiconductor fins 352 and the second plurality of semiconductor fins 354 are continuous with an underlying single-crystal silicon substrate. In one embodiment, individual fins of the first plurality of semiconductor fins 352 and the second plurality of semiconductor fins 354 have sidewalls that taper outwardly, in the second direction x, from the top of the fin to the bottom of the fin. In one embodiment, the first plurality of semiconductor fins 352 has exactly five semiconductor fins, and the second plurality of semiconductor fins 354 has exactly five semiconductor fins.
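The recited spacing relationships can be summarized as simple inequalities. The sketch below is offered only as an illustrative restatement of the geometry described above; the function name and the sample values are introduced here for illustration and are not taken from this disclosure.

```python
# Illustrative restatement (names and sample values assumed) of the recited
# relationships between the intra-group fin spacing S1 and the gap S2 between
# the nearest fins of the two pluralities.

def classify_gap(s1: float, s2: float) -> str:
    if s2 <= s1:
        return "uniform spacing (no fin eliminated between the groups)"
    if s2 < 2 * s1:
        return "S1 < S2 < 2*S1, as in one recited embodiment"
    return "S2 > 2*S1, as in another recited embodiment"

print(classify_gap(s1=20.0, s2=30.0))  # S1 < S2 < 2*S1, as in one recited embodiment
print(classify_gap(s1=20.0, s2=50.0))  # S2 > 2*S1, as in another recited embodiment
```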
In another exemplary embodiment, referring to Figures 3A and 3B, a method of fabricating an integrated circuit structure includes forming a first primary backbone structure 304 (left-hand BB1) and a second primary backbone structure 304 (right-hand BB1). Primary spacer structures 306 are formed adjacent the sidewalls of the first primary backbone structure 304 (left-hand BB1) and the second primary backbone structure 304 (right-hand BB1), the primary spacer structures 306 being merged between the first primary backbone structure 304 (left-hand BB1) and the second primary backbone structure 304 (right-hand BB1). The first primary backbone structure (left-hand BB1) and the second primary backbone structure (right-hand BB1) are removed, and first, second, third, and fourth secondary backbone structures 308 are then provided, the second and third secondary backbone structures being fused (e.g., the middle pair of secondary backbone structures 308). Secondary spacer structures 310 are formed adjacent the sidewalls of the first, second, third, and fourth secondary backbone structures 308. The first, second, third, and fourth secondary backbone structures 308 are then removed. A semiconductor material is then patterned using the secondary spacer structures 310 to form semiconductor fins 350 in the semiconductor material.

In one embodiment, the first primary backbone structure and the second primary backbone structure are patterned with a sub-design-rule spacing between the first primary backbone structure 304 (left-hand BB1) and the second primary backbone structure 304 (right-hand BB1). In one embodiment, the semiconductor material comprises silicon. In one embodiment, individual semiconductor fins of the semiconductor fins 350 have sidewalls that taper outwardly, in the second direction x, from the top of the individual semiconductor fin to the bottom of the individual semiconductor fin. In one embodiment, the semiconductor fins 350 are continuous with an underlying single-crystal silicon substrate. In one embodiment, patterning the semiconductor material using the secondary spacer structures 310 includes forming a first plurality of semiconductor fins 352 having a longest dimension along the first direction y, wherein adjacent individual semiconductor fins of the first plurality of semiconductor fins 352 are spaced apart from one another by a first amount S1 in a second direction x, orthogonal to the first direction y, and forming a second plurality of semiconductor fins 354 having a longest dimension along the first direction y, wherein adjacent individual semiconductor fins of the second plurality of semiconductor fins 354 are spaced apart from one another by the first amount S1 in the second direction x. The nearest semiconductor fins 356 and 357 of the first plurality of semiconductor fins 352 and the second plurality of semiconductor fins 354, respectively, are spaced apart from one another by a second amount S2 in the second direction x. In an embodiment, the second amount S2 is greater than the first amount S1. In one such embodiment, the second amount S2 is less than twice the first amount S1. In another such embodiment, the second amount S2 is greater than twice the first amount S1 but less than three times the first amount S1. In an embodiment, as shown in Figure 3B, the first plurality of semiconductor fins 352 has exactly five semiconductor fins, and the second plurality of semiconductor fins 354 has exactly five semiconductor fins.

In another aspect, it is to be appreciated that a fin trim process, in which selected fins are removed, may be implemented as an alternative to the fused fin approach. Fins may be trimmed (removed) during hard mask patterning or by physically removing the fin.
As an example of the latter approach, Figures 4A-4C illustrate cross-sectional views representing various operations in a method of fabricating a plurality of semiconductor fins, in accordance with an embodiment of the present disclosure.

Referring to Figure 4A, a patterned hard mask layer 402 is formed above a semiconductor layer 404, such as a bulk single-crystal silicon layer. Referring to Figure 4B, fins 406 are then formed in the semiconductor layer 404, e.g., by a dry or plasma etch process. Referring to Figure 4C, select fins 406 are removed, e.g., using a masking and etch process. In the illustrated example, one of the fins 406 is removed, and a residual fin stub 408 may remain. In such a "fin trim last" approach, the hard mask 402 is patterned as a whole grating structure, without removal or modification of individual features, and the total number of fins is not altered until after the fins are fabricated.

In another aspect, multi-layer trench isolation regions, which may be referred to as shallow trench isolation (STI) structures, may be implemented between semiconductor fins. In an embodiment, a multi-layer STI structure is formed between silicon fins formed in a bulk silicon substrate to define the sub-fin regions of the silicon fins.

It may be desirable to use bulk silicon for fin-based or tri-gate transistors. However, a concern with such devices is that the region below the active silicon fin portion (the sub-fin region below the gate-controlled region, or HSi) is poorly gate controlled or not gate controlled at all. Thus, if source or drain regions are at or below the HSi point, leakage paths may exist through the sub-fin region. It may be the case that such leakage paths in the sub-fin region should be controlled for proper device operation.

One approach to addressing the above issue involves a well implant operation, in which the sub-fin region is heavily doped (e.g., to well above 2E18 atoms/cm3), cutting off sub-fin leakage but also significantly doping the fin itself. The addition of halo implants further increases the doping of the fin, such that the fin ends up doped at a high level (e.g., greater than approximately 1E18 atoms/cm3).

Another approach provides sub-fin doping without delivering the same level of doping to the HSi portion of the fin. Such a process may involve selective doping of the sub-fin region of tri-gate or FinFET transistors fabricated on bulk silicon wafers, for example, by out-diffusion from a doped glass layer along the sub-fin sidewalls. Selective doping of the sub-fin region of a tri-gate or FinFET transistor can alleviate sub-fin leakage while keeping the fin doping low. Incorporating a solid dopant source (e.g., p-type and n-type doped oxides, nitrides, or carbides) into the transistor process flow, with subsequent recessing of that source from the fin sidewalls, delivers well doping into the sub-fin region while keeping the fin body relatively undoped.

Thus, a process scheme may include the use of a solid-source doped layer (e.g., a boron-doped oxide) deposited on the fins after the fin etch. Later, following trench fill and polish, the doped layer is recessed along with the trench fill material to define the fin height (HSi) for the device. This operation removes the doped layer from the fin sidewalls above HSi. The doped layer therefore remains only along the fin sidewalls in the sub-fin region, ensuring precise control of doping placement.
Upon a drive-in anneal, the high doping is confined to the sub-fin region, transitioning rapidly to low doping in the adjacent region of the fin above HSi (which forms the channel region of the transistor). Typically, a borosilicate glass (BSG) layer is implemented for NMOS fin doping, and a phosphosilicate glass (PSG) or arsenic silicate glass (AsSG) layer is implemented for PMOS fin doping. In one example, a P-type solid dopant source layer is a BSG layer having a boron concentration in the range of approximately 0.1-10% by weight. In another example, an N-type solid dopant source layer is a PSG layer or an AsSG layer having a phosphorus or arsenic concentration, respectively, in the range of approximately 0.1-10% by weight. A silicon nitride cap layer may be included on the doped layer, and a silicon dioxide or silicon oxide fill material may then be included on the silicon nitride cap layer.

According to another embodiment of the present disclosure, for relatively thin fins (e.g., fins having a width of less than approximately 20 nanometers), sub-fin leakage is sufficiently low that an undoped or only lightly doped silicon oxide or silicon dioxide film can be formed directly adjacent the fins, with a silicon nitride layer formed on the undoped or lightly doped silicon oxide or silicon dioxide film, and a silicon dioxide or silicon oxide fill material included on the silicon nitride cap layer. It is to be appreciated that doping of the sub-fin regions, such as halo doping, may also be performed with such structures.

Figure 5A illustrates a cross-sectional view of a pair of semiconductor fins separated by a three-layer trench isolation structure, in accordance with an embodiment of the present disclosure.

Referring to Figure 5A, an integrated circuit structure includes a fin 502, such as a silicon fin. The fin 502 has a lower fin portion (sub-fin) 502A and an upper fin portion 502B (HSi). A first insulating layer 504 is directly on the sidewalls of the lower fin portion 502A of the fin 502. A second insulating layer 506 is directly on the first insulating layer 504, the first insulating layer 504 being directly on the sidewalls of the lower fin portion 502A of the fin 502. A dielectric fill material 508 is laterally directly adjacent the second insulating layer 506, which is directly on the first insulating layer 504, which in turn is directly on the sidewalls of the lower fin portion 502A of the fin 502.

In an embodiment, the first insulating layer 504 is an undoped insulating layer including silicon and oxygen, such as a silicon oxide or silicon dioxide insulating layer. In an embodiment, the first insulating layer 504 includes silicon and oxygen and has no other atomic species having an atomic concentration greater than 1E15 atoms per cubic centimeter. In an embodiment, the first insulating layer 504 has a thickness in the range of 0.5-2 nanometers.

In an embodiment, the second insulating layer 506 includes silicon and nitrogen, such as a stoichiometric Si3N4 silicon nitride insulating layer, a silicon-rich silicon nitride insulating layer, or a silicon-poor silicon nitride insulating layer. In an embodiment, the second insulating layer 506 has a thickness in the range of 2-5 nanometers.

In an embodiment, the dielectric fill material 508 includes silicon and oxygen, such as a silicon oxide or silicon dioxide insulating layer.
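For reference, the three-layer isolation stack just described can be restated as data. The sketch below is merely an illustrative restatement of the Figure 5A stack with the recited thickness windows; the class and field names are introduced here for illustration and are not part of this disclosure.

```python
# Illustrative restatement (class and names assumed) of the three-layer trench
# isolation stack of Figure 5A, with the recited thickness windows.
from dataclasses import dataclass

@dataclass
class TrenchIsolationStack:
    liner_nm: float              # first insulating layer 504: undoped silicon oxide, 0.5-2 nm
    nitride_nm: float            # second insulating layer 506: silicon nitride, 2-5 nm
    fill: str = "silicon oxide"  # dielectric fill material 508

    def within_recited_ranges(self) -> bool:
        """Check the two recited thickness windows."""
        return 0.5 <= self.liner_nm <= 2.0 and 2.0 <= self.nitride_nm <= 5.0

print(TrenchIsolationStack(liner_nm=1.0, nitride_nm=3.0).within_recited_ranges())  # True
```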
In an embodiment, a gate electrode is ultimately formed over the top of, and laterally adjacent the sidewalls of, the upper fin portion 502B of the fin 502.

It is to be appreciated that the upper fin portions of semiconductor fins may be eroded or consumed during processing. Additionally, the trench isolation structures between the fins may also be eroded to have a non-planar topography, or may be formed with a non-planar topography in the first instance. As an example, Figure 5B illustrates a cross-sectional view of another pair of semiconductor fins separated by another three-layer trench isolation structure, in accordance with another embodiment of the present disclosure.

Referring to Figure 5B, an integrated circuit structure includes a first fin 552, such as a silicon fin. The first fin 552 has a lower fin portion 552A and an upper fin portion 552B, and a shoulder feature 554 at the region between the lower fin portion 552A and the upper fin portion 552B. A second fin 562, such as a second silicon fin, has a lower fin portion 562A and an upper fin portion 562B, and a shoulder feature 564 at the region between the lower fin portion 562A and the upper fin portion 562B. A first insulating layer 574 is directly on the sidewalls of the lower fin portion 552A of the first fin 552 and directly on the sidewalls of the lower fin portion 562A of the second fin 562. The first insulating layer 574 has a first end 574A substantially coplanar with the shoulder feature 554 of the first fin 552, and a second end 574B substantially coplanar with the shoulder feature 564 of the second fin 562. A second insulating layer 576 is directly on the first insulating layer 574, the first insulating layer 574 being directly on the sidewalls of the lower fin portion 552A of the first fin 552 and directly on the sidewalls of the lower fin portion 562A of the second fin 562.

A dielectric fill material 578 is laterally directly adjacent the second insulating layer 576, which is directly on the first insulating layer 574, the first insulating layer 574 being directly on the sidewalls of the lower fin portion 552A of the first fin 552 and directly on the sidewalls of the lower fin portion 562A of the second fin 562. In an embodiment, the dielectric fill material 578 has an upper surface 578A, where a portion of the upper surface 578A of the dielectric fill material 578 is below at least one of the shoulder features 554 of the first fin 552 and below at least one of the shoulder features 564 of the second fin 562, as is depicted in Figure 5B.

In an embodiment, the first insulating layer 574 is an undoped insulating layer including silicon and oxygen, such as a silicon oxide or silicon dioxide insulating layer. In an embodiment, the first insulating layer 574 includes silicon and oxygen and has no other atomic species having an atomic concentration greater than 1E15 atoms per cubic centimeter. In an embodiment, the first insulating layer 574 has a thickness in the range of 0.5-2 nanometers.

In an embodiment, the second insulating layer 576 includes silicon and nitrogen, such as a stoichiometric Si3N4 silicon nitride insulating layer, a silicon-rich silicon nitride insulating layer, or a silicon-poor silicon nitride insulating layer. In an embodiment, the second insulating layer 576 has a thickness in the range of 2-5 nanometers.

In an embodiment, the dielectric fill material 578 includes silicon and oxygen, such as a silicon oxide or silicon dioxide insulating layer.
In an embodiment, a gate electrode is ultimately formed over the top of, and laterally adjacent the sidewalls of, the upper fin portion 552B of the first fin 552, and over the top of, and laterally adjacent the sidewalls of, the upper fin portion 562B of the second fin 562. The gate electrode is also over the dielectric fill material 578 between the first fin 552 and the second fin 562.

Figures 6A-6D illustrate cross-sectional views of various operations in a method of fabricating a three-layer trench isolation structure, in accordance with an embodiment of the present disclosure.

Referring to Figure 6A, a method of fabricating an integrated circuit structure includes forming a fin 602, such as a silicon fin. A first insulating layer 604 is formed directly on the fin 602 and conformal with the fin 602, as shown in Figure 6B. In an embodiment, the first insulating layer 604 includes silicon and oxygen and has no other atomic species having an atomic concentration greater than 1E15 atoms per cubic centimeter.

Referring to Figure 6C, a second insulating layer 606 is formed directly on the first insulating layer 604 and conformal with the first insulating layer 604. In an embodiment, the second insulating layer 606 includes silicon and nitrogen. A dielectric fill material 608 is then formed directly on the second insulating layer 606, as shown in Figure 6D.

In an embodiment, the method further involves recessing the dielectric fill material 608, the first insulating layer 604, and the second insulating layer 606 to provide a fin 602 having an exposed upper fin portion 602A (e.g., corresponding to upper fin portion 502B, 552B, or 562B of Figures 5A and 5B). The resulting structure may be as described in association with Figure 5A or Figure 5B. In one embodiment, recessing the dielectric fill material 608, the first insulating layer 604, and the second insulating layer 606 involves the use of a wet etch process. In another embodiment, recessing the dielectric fill material 608, the first insulating layer 604, and the second insulating layer 606 involves the use of a plasma etch or dry etch process.

In an embodiment, the first insulating layer 604 is formed using a chemical vapor deposition process. In an embodiment, the second insulating layer 606 is formed using a chemical vapor deposition process. In an embodiment, the dielectric fill material 608 is formed using a spin-on process. In one such embodiment, the dielectric fill material 608 is a spin-on material and is exposed to a steam treatment, e.g., prior to or following a recess etch process, to provide a cured material including silicon and oxygen. In an embodiment, a gate electrode is ultimately formed over the top of, and laterally adjacent the sidewalls of, the upper fin portion of the fin 602.

In another aspect, gate sidewall spacer material may be retained over certain trench isolation regions as a protection against erosion of the trench isolation regions during subsequent processing operations. For example, Figures 7A-7E illustrate angled three-dimensional cross-sectional views of various operations in a method of fabricating an integrated circuit structure, in accordance with an embodiment of the present disclosure.

Referring to Figure 7A, a method of fabricating an integrated circuit structure includes forming a fin 702, such as a silicon fin. The fin 702 has a lower fin portion 702A and an upper fin portion 702B.
An insulating structure 704 is formed directly adjacent the sidewalls of the lower fin portion 702A of the fin 702. A gate structure 706 is formed over the upper fin portion 702B and over the insulating structure 704. In an embodiment, the gate structure 706 is a placeholder or dummy gate structure including a sacrificial gate dielectric layer 706A, a sacrificial gate 706B, and a hard mask 706C. A dielectric material 708 is formed conformal with the upper fin portion 702B of the fin 702, conformal with the gate structure 706, and conformal with the insulating structure 704.

Referring to Figure 7B, a hard mask material 710 is formed over the dielectric material 708. In an embodiment, the hard mask material 710 is a carbon-based hard mask material formed using a spin-on process.

Referring to Figure 7C, the hard mask material 710 is recessed to form a recessed hard mask material 712 and to expose the portions of the dielectric material 708 conformal with the upper fin portion 702B of the fin 702 and conformal with the gate structure 706. The recessed hard mask material 712 covers the portions of the dielectric material 708 conformal with the insulating structure 704. In an embodiment, the hard mask material 710 is recessed using a wet etch process. In another embodiment, the hard mask material 710 is recessed using an ash, dry etch, or plasma etch process.

Referring to Figure 7D, the dielectric material 708 is anisotropically etched to form a patterned dielectric material 714 along the sidewalls of the gate structure 706 (as dielectric spacers 714A), along the sidewalls of the upper fin portion 702B of the fin 702, and over the insulating structure 704.

Referring to Figure 7E, the recessed hard mask material 712 is removed from the structure of Figure 7D. In an embodiment, the gate structure 706 is a dummy gate structure, and subsequent processing includes replacing the gate structure 706 with a permanent gate dielectric and gate electrode stack. In an embodiment, further processing includes forming embedded source or drain structures on opposite sides of the gate structure 706, as described in greater detail below.

Referring again to Figure 7E, in an embodiment, an integrated circuit structure 700 includes a first fin (left-hand 702), such as a first silicon fin, the first fin having a lower fin portion 702A and an upper fin portion 702B. The integrated circuit structure also includes a second fin (right-hand 702), such as a second silicon fin, the second fin having a lower fin portion 702A and an upper fin portion 702B. An insulating structure 704 is directly adjacent the sidewalls of the lower fin portion 702A of the first fin and directly adjacent the sidewalls of the lower fin portion 702A of the second fin. A gate electrode 706 is over the upper fin portion 702B of the first fin (left-hand 702), over the upper fin portion 702B of the second fin (right-hand 702), and over a first portion 704A of the insulating structure 704. A first dielectric spacer 714B is along the sidewall of the upper fin portion 702B of the first fin (left-hand 702), and a second dielectric spacer 714C is along the sidewall of the upper fin portion 702B of the second fin (right-hand 702).
The second dielectric spacer 714C is continuous with the first dielectric spacer 714B over a second portion 704B of the insulating structure 704 between the first fin (left-hand 702) and the second fin (right-hand 702).

In an embodiment, the first and second dielectric spacers 714B and 714C include silicon and nitrogen, such as a stoichiometric Si3N4 silicon nitride material, a silicon-rich silicon nitride material, or a silicon-poor silicon nitride material.

In an embodiment, the integrated circuit structure 700 further includes embedded source or drain structures on opposite sides of the gate electrode 706, the source or drain structures having a bottom surface below the top surfaces of the first and second dielectric spacers 714B and 714C along the sidewalls of the upper fin portions 702B of the first and second fins 702, and having a top surface above the top surfaces of the first and second dielectric spacers 714B and 714C along the sidewalls of the upper fin portions 702B of the first and second fins 702, as described below in association with Figure 9B. In an embodiment, the insulating structure 704 includes a first insulating layer, a second insulating layer directly on the first insulating layer, and a dielectric fill material laterally directly adjacent the second insulating layer, as also described below in association with Figure 9B.

Figures 8A-8F illustrate slightly projected cross-sectional views, taken along line a-a' of Figure 7E, of various operations in a method of fabricating an integrated circuit structure, in accordance with an embodiment of the present disclosure.

Referring to Figure 8A, a method of fabricating an integrated circuit structure includes forming a fin 702, such as a silicon fin. The fin 702 has a lower fin portion (not visible in Figure 8A) and an upper fin portion 702B. An insulating structure 704 is formed directly adjacent the sidewalls of the lower fin portion of the fin 702. A pair of gate structures 706 is formed over the upper fin portion 702B and over the insulating structure 704. It is to be appreciated that the views shown in Figures 8A-8F are slightly projected to show the portions of the upper fin portion 702B and the insulating structure 704 that are in front of the gate structures 706 (out of the page), with the gate structures 706 slightly into the page. In an embodiment, the gate structures 706 are placeholder or dummy gate structures including a sacrificial gate dielectric layer 706A, a sacrificial gate 706B, and a hard mask 706C.

Referring to Figure 8B, which corresponds to the process operation described in association with Figure 7A, a dielectric material 708 is formed conformal with the upper fin portion 702B of the fin 702, conformal with the gate structures 706, and conformal with the exposed portions of the insulating structure 704.

Referring to Figure 8C, which corresponds to the process operation described in association with Figure 7B, a hard mask material 710 is formed over the dielectric material 708. In an embodiment, the hard mask material 710 is a carbon-based hard mask material formed using a spin-on process.

Referring to Figure 8D, which corresponds to the process operation described in association with Figure 7C, the hard mask material 710 is recessed to form a recessed hard mask material 712 and to expose the portions of the dielectric material 708 conformal with the upper fin portion 702B of the fin 702 and conformal with the gate structures 706. The recessed hard mask material 712 covers the portions of the dielectric material 708 conformal with the insulating structure 704.
In an embodiment, the hard mask material 710 is recessed using a wet etch process. In another embodiment, the hard mask material 710 is recessed using an ash, dry etch, or plasma etch process.

Referring to Figure 8E, which corresponds to the process operation described in association with Figure 7D, the dielectric material 708 is anisotropically etched to form a patterned dielectric material 714 along the sidewalls of the gate structures 706 (as portions 714A), along portions of the sidewalls of the upper fin portion 702B of the fin 702, and over the insulating structure 704.

Referring to Figure 8F, which corresponds to the process operation described in association with Figure 7E, the recessed hard mask material 712 is removed from the structure of Figure 8E. In an embodiment, the gate structures 706 are dummy gate structures, and subsequent processing includes replacing the gate structures 706 with permanent gate dielectric and gate electrode stacks. In an embodiment, further processing includes forming embedded source or drain structures on opposite sides of the gate structures 706, as described in greater detail below.

Referring again to Figure 8F, in an embodiment, an integrated circuit structure 700 includes a fin 702, such as a silicon fin, the fin 702 having a lower fin portion (not visible in Figure 8F) and an upper fin portion 702B. An insulating structure 704 is directly adjacent the sidewalls of the lower fin portion of the fin 702. A first gate electrode (left-hand 706) is over the upper fin portion 702B and over a first portion 704A of the insulating structure 704. A second gate electrode (right-hand 706) is over the upper fin portion 702B and over a second portion 704A' of the insulating structure 704. A first dielectric spacer (the right-hand spacer 714A of the left-hand gate 706) is along a sidewall of the first gate electrode (left-hand 706), and a second dielectric spacer (the left-hand spacer 714A of the right-hand gate 706) is along a sidewall of the second gate electrode (right-hand 706), the second dielectric spacer being continuous with the first dielectric spacer over a third portion 704A" of the insulating structure 704 between the first gate electrode (left-hand 706) and the second gate electrode (right-hand 706).

Figure 9A shows a slightly projected cross-sectional view, taken along line a-a' of Figure 7E, of an integrated circuit structure including a permanent gate stack and epitaxial source or drain regions, in accordance with an embodiment of the present disclosure. Figure 9B illustrates a cross-sectional view, taken along line b-b' of Figure 7E, of an integrated circuit structure including epitaxial source or drain regions and a multi-layer trench isolation structure, in accordance with an embodiment of the present disclosure.

Referring to Figures 9A and 9B, in an embodiment, the integrated circuit structure includes embedded source or drain structures 910 on opposite sides of the gate electrode 706. The embedded source or drain structures 910 have a bottom surface 910A below the top surfaces 990 of the first and second dielectric spacers 714B and 714C along the sidewalls of the upper fin portions 702B of the first and second fins 702. The embedded source or drain structures 910 have a top surface 910B above the top surfaces of the first and second dielectric spacers 714B and 714C along the sidewalls of the upper fin portions 702B of the first and second fins 702.

In an embodiment, the gate stack 706 is a permanent gate stack 920.
In one such embodiment, the permanent gate stack 920 includes a gate dielectric layer 922, a first gate layer 924, such as a work-function gate layer, and a gate fill material 926, as shown in Figure 9A. In one embodiment, where the permanent gate structure 920 is over the insulating structure 704, the permanent gate structure 920 is formed on a residual polysilicon portion 930, which may be a remnant of a replacement gate process involving a sacrificial polysilicon gate electrode.

In an embodiment, the insulating structure 704 includes a first insulating layer 902, a second insulating layer 904 directly on the first insulating layer 902, and a dielectric fill material 906 laterally directly adjacent the second insulating layer 904. In one embodiment, the first insulating layer 902 is an undoped insulating layer including silicon and oxygen. In one embodiment, the second insulating layer 904 includes silicon and nitrogen. In one embodiment, the dielectric fill material 906 includes silicon and oxygen.

In another aspect, embedded epitaxial source or drain regions are implemented as source or drain structures for semiconductor fins. As an example, Figure 10 illustrates a cross-sectional view, taken at a source or drain location, of an integrated circuit structure, in accordance with an embodiment of the present disclosure.

Referring to Figure 10, an integrated circuit structure 1000 includes a P-type device, such as a P-type metal oxide semiconductor (PMOS) device. The integrated circuit structure 1000 also includes an N-type device, such as an N-type metal oxide semiconductor (NMOS) device.

The PMOS device of Figure 10 includes a first plurality of semiconductor fins 1002, such as silicon fins formed from a bulk silicon substrate 1001. At the source or drain location, the upper portions of the fins 1002 have been removed, and the same or a different semiconductor material has been grown to form source or drain structures 1004. It is to be appreciated that, in cross-sectional views taken on either side of a gate electrode, the source or drain structures 1004 would appear the same, e.g., they would look essentially the same on the source side as on the drain side. In an embodiment, as depicted, the source or drain structures 1004 have portions below the upper surface of an insulating structure 1006 and portions above the upper surface of the insulating structure 1006. In an embodiment, as depicted, the source or drain structures 1004 are strongly faceted. In an embodiment, conductive contacts 1008 are formed over the source or drain structures 1004. However, in one such embodiment, the strong faceting and relatively wide growth of the source or drain structures 1004 inhibits, at least to some extent, good coverage by the conductive contacts 1008.

The NMOS device of Figure 10 includes a second plurality of semiconductor fins 1052, such as silicon fins formed from the bulk silicon substrate 1001. At the source or drain location, the upper portions of the fins 1052 have been removed, and the same or a different semiconductor material has been grown to form source or drain structures 1054. It is to be appreciated that, in cross-sectional views taken on either side of a gate electrode, the source or drain structures 1054 would appear the same, e.g., they would look essentially the same on the source side as on the drain side. In an embodiment, as depicted, the source or drain structures 1054 have portions below the upper surface of the insulating structure 1006 and portions above the upper surface of the insulating structure 1006.
In an embodiment, as shown, the source or drain structures 1054 exhibit weaker faceting relative to the source or drain structures 1004. In an embodiment, conductive contacts 1058 are formed over the source or drain structures 1054. In one such embodiment, the weaker faceting and resulting narrower growth of the source or drain structures 1054 (compared to the source or drain structures 1004) enhance good coverage by the conductive contacts 1058.

The shape of the source or drain structure of a PMOS device can be altered to improve the contact area with the overlying contact. For example, FIG. 11 illustrates a cross-sectional view of another integrated circuit structure taken at a source or drain location, in accordance with an embodiment of the present disclosure.

Referring to FIG. 11, integrated circuit structure 1100 includes a P-type semiconductor (e.g., PMOS) device. The PMOS device includes a first fin 1102, such as a silicon fin. A first epitaxial source or drain structure 1104 is embedded in the first fin 1102. In one embodiment, although not shown, the first epitaxial source or drain structure 1104 is at a first side of a first gate electrode (which may be formed over an upper fin portion of the fin 1102 including a channel portion), and a second epitaxial source or drain structure is embedded in the first fin 1102 at a second side of the first gate electrode opposite the first side. In an embodiment, the first epitaxial source or drain structure 1104 and the second epitaxial source or drain structure include silicon and germanium and have a profile 1105. In one embodiment, the profile is a matchstick profile, as shown in FIG. 11. A first conductive electrode 1108 is over the first epitaxial source or drain structure 1104.

Referring again to FIG. 11, in an embodiment, integrated circuit structure 1100 also includes an N-type semiconductor (e.g., NMOS) device. The NMOS device includes a second fin 1152, such as a silicon fin. A third epitaxial source or drain structure 1154 is embedded in the second fin 1152. In one embodiment, although not shown, the third epitaxial source or drain structure 1154 is at a first side of a second gate electrode (which may be formed over an upper fin portion of the fin 1152 including a channel portion), and a fourth epitaxial source or drain structure is embedded in the second fin 1152 at a second side of the second gate electrode opposite the first side. In an embodiment, the third epitaxial source or drain structure 1154 and the fourth epitaxial source or drain structure include silicon and have substantially the same profile 1105 as the first and second epitaxial source or drain structures. A second conductive electrode 1158 is over the third epitaxial source or drain structure 1154.

In an embodiment, the first epitaxial source or drain structure 1104 has weak faceting. In an embodiment, the first epitaxial source or drain structure 1104 has a height of approximately 50 nanometers and a width in the range of 30-35 nanometers. In one such embodiment, the third epitaxial source or drain structure 1154 also has a height of approximately 50 nanometers and a width in the range of 30-35 nanometers.

In an embodiment, the germanium concentration of the first epitaxial source or drain structure 1104 is graded from approximately 20% at the bottom 1104A of the first epitaxial source or drain structure 1104 to approximately 45% at the top 1104B of the first epitaxial source or drain structure 1104.
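By way of illustration only, the graded germanium profile described above (approximately 20% germanium at the bottom 1104A rising to approximately 45% at the top 1104B over a height of approximately 50 nanometers) can be sketched with a simple linear grading model in Python. The linear form of the grade and the sampling step are assumptions made for illustration, not a statement of the actual epitaxial growth recipe.

    def ge_fraction(height_nm, total_height_nm=50.0, ge_bottom=0.20, ge_top=0.45):
        """Assumed linear germanium grade at a given height above the
        bottom 1104A of the epitaxial source or drain structure."""
        if not 0.0 <= height_nm <= total_height_nm:
            raise ValueError("height lies outside the structure")
        t = height_nm / total_height_nm
        return ge_bottom + t * (ge_top - ge_bottom)

    # Sample the assumed profile every 10 nm from bottom (0 nm) to top (50 nm).
    for h in range(0, 51, 10):
        print(f"{h:2d} nm: {ge_fraction(h) * 100:.1f}% Ge")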
In an embodiment, the first epitaxial source or drain structure 1104 is doped with boron atoms. In one such embodiment, the third epitaxial source or drain structure 1154 is doped with phosphorus or arsenic atoms.

FIGS. 12A-12D illustrate cross-sectional views taken at a source or drain location and representing various operations in the fabrication of an integrated circuit structure, in accordance with an embodiment of the present disclosure.

Referring to FIG. 12A, a method of fabricating an integrated circuit structure includes forming a fin 1202, such as a silicon fin formed from a silicon substrate 1201. The fin 1202 has a lower fin portion 1202A and an upper fin portion 1202B. In an embodiment, although not shown, at a location into the page, a gate electrode is formed over a portion of the upper fin portion 1202B of the fin 1202. Such a gate electrode has a first side opposite a second side and defines source or drain locations on the first and second sides. For illustrative purposes, the cross-sectional views of FIGS. 12A-12D are taken at one of the source or drain locations at one side of the gate electrode.

Referring to FIG. 12B, the source or drain locations of the fin 1202 are recessed to form a recessed fin portion 1206. The recessed source or drain locations of the fin 1202 can be at the first side of the gate electrode and at the second side of the gate electrode. Referring to both FIGS. 12A and 12B, in an embodiment, dielectric spacers 1204 are formed along the sidewalls of a portion of the fin 1202, for example, at one side of the gate structure. In one such embodiment, recessing the fin 1202 involves recessing the fin 1202 below the top surfaces 1204A of the dielectric spacers 1204.

Referring to FIG. 12C, an epitaxial source or drain structure 1208 is formed over the recessed fin portion 1206, for example, at a first side of the gate electrode. In one such embodiment, a second epitaxial source or drain structure is formed on a second portion of the recessed fin 1206 at the second side of such a gate electrode. In an embodiment, the epitaxial source or drain structure 1208 includes silicon and germanium and has a matchstick profile, as shown in FIG. 12C. In an embodiment, the dielectric spacers 1204 are retained along a lower portion 1208A of the sidewalls of the epitaxial source or drain structure 1208, as shown.

Referring to FIG. 12D, a conductive electrode 1210 is formed over the epitaxial source or drain structure 1208. In an embodiment, the conductive electrode 1210 includes a conductive barrier layer 1210A and a conductive fill material 1210B. In one embodiment, the conductive electrode 1210 follows the contour of the epitaxial source or drain structure 1208, as shown. In other embodiments, the upper portion of the epitaxial source or drain structure 1208 is etched during fabrication of the conductive electrode 1210.

In another aspect, fin trim isolation (FTI) and single-gate spacing for isolated fins are described. Non-planar transistors that utilize fins of semiconductor material protruding from a substrate surface employ gate electrodes that wrap two, three, or even all sides of the fin (i.e., double-gate, tri-gate, and nanowire transistors). Typically, source and drain regions are then formed in the fin, or as regrown portions of the fin, on either side of the gate electrode.
In order to isolate the source or drain regions of a first non-planar transistor from the source or drain regions of an adjacent second non-planar transistor, a gap or space may be formed between the two adjacent fins. Such isolation gaps typically require some sort of masked etch. Once isolated, the gate stack is then patterned over the individual fins, typically again using some sort of masked etch (e.g., a line etch or an opening etch, depending on the particular implementation).

One potential problem with the fin isolation technique described above is that the gate is not self-aligned with the ends of the fin, and the registration of the gate stack pattern to the semiconductor fin pattern depends on the overlay of the two patterns. As such, lithographic overlay tolerances are added to the sizing of the semiconductor fins and the isolation gaps, with the fins made longer, and the isolation gaps made larger, than is otherwise required for a given level of transistor functionality. Therefore, device architectures and fabrication techniques that reduce this over-sizing offer highly advantageous improvements in transistor density.

Another potential problem with the fin isolation technique described above is that the stress in the semiconductor fin desired for improved carrier mobility can be lost from the channel region of the transistor, because the unconstrained fin end surfaces left during manufacturing allow the fin strain to relax. Thus, device architectures and fabrication techniques that maintain higher levels of the desired fin stress offer advantageous improvements in non-planar transistor performance.

Through-gate fin isolation architectures and techniques are described herein in accordance with embodiments of the present disclosure. In the illustrated exemplary embodiments, non-planar transistors in a microelectronic device, such as an integrated circuit (IC), are isolated from each other in a manner that is self-aligned to the gate electrodes of the transistors. Although embodiments of the present disclosure are applicable to almost any IC employing non-planar transistors, exemplary ICs include, but are not limited to, microprocessor cores including logic and memory (SRAM) portions, RFICs (e.g., wireless ICs including digital baseband and analog front end modules), and power ICs.

In an embodiment, isolation regions are used to electrically isolate the ends of adjacent semiconductor fins from one another, using only one patterned mask level to position the isolation regions relative to the gate electrodes. In an embodiment, a plurality of sacrificial placeholder strips are formed at a fixed pitch using a single mask, with a first subset of the placeholder strips defining the positions or dimensions of the isolation regions and a second subset of the placeholder strips defining the positions or dimensions of the gate electrodes. In some embodiments, the first subset of the placeholder strips is removed and isolation cuts are made into the semiconductor fins through the openings obtained by removing the first subset, while the second subset of the placeholder strips is replaced with non-sacrificial gate electrode stacks. Because a subset of the placeholders used for gate electrode replacement is used to form the isolation regions, the method and resulting architecture are referred to herein as "through-gate" isolation.
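The single-mask bookkeeping described above, in which placeholder strips on a fixed pitch are partitioned into an isolation subset and a gate electrode subset, can be sketched in Python as follows. The pitch value and the choice of which strips become isolation regions are hypothetical and serve only to illustrate that all isolation-to-gate spacings fall on the same grid.

    PITCH_NM = 50  # hypothetical placeholder strip pitch

    def strip_centers(num_strips, pitch_nm=PITCH_NM):
        """Centers of sacrificial placeholder strips formed at a fixed pitch."""
        return [i * pitch_nm for i in range(num_strips)]

    def partition(num_strips, isolation_indices):
        """Split the strips into the subset replaced by isolation regions and
        the subset replaced by permanent gate electrode stacks."""
        centers = strip_centers(num_strips)
        isolation = [centers[i] for i in isolation_indices]
        gates = [c for i, c in enumerate(centers) if i not in isolation_indices]
        return isolation, gates

    isolation, gates = partition(num_strips=6, isolation_indices={0, 3})
    # Because every strip sits on one grid, each isolation-to-gate spacing is
    # an integer multiple of the pitch, with no added overlay tolerance.
    assert all((g - i) % PITCH_NM == 0 for g in gates for i in isolation)
    print("isolation centers:", isolation, "gate centers:", gates)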
For example, one or more of the through-gate isolation embodiments described herein can achieve higher transistor densities and higher levels of favorable transistor channel stress.

Greater transistor density can be achieved by defining the isolation after placing or defining the gate electrodes, since the fin isolation can then be ideally sized and placed relative to the gate electrodes, making the gate-electrode-to-isolation-region spacing an integer multiple of the minimum feature pitch of a single masking level. In other embodiments, where the semiconductor fins are lattice mismatched to the substrate on which the fins are disposed, a greater degree of strain is maintained by defining the isolation after placing or defining the gate electrodes. For such embodiments, other features of the transistor formed prior to defining the ends of the fin (e.g., the gate electrode and added source or drain material) help mechanically maintain the fin strain after the isolation cuts are made in the fin.

To provide further context, transistor scaling can benefit from denser packing of cells within a chip. Currently, most cells are separated from their neighboring cells by two or more dummy gates having fins buried beneath them. The cells are isolated by etching the fins below the two or more dummy gates that would otherwise connect one cell to another. If the number of dummy gates separating adjacent cells can be reduced from two or more to one, significant scaling can be achieved. As mentioned above, one solution requires two or more dummy gates, where the fins below the two or more dummy gates are etched during fin patterning. A potential problem with this approach is that the dummy gates consume chip area that could otherwise be used for cells. In an embodiment, the approaches described herein enable the separation of adjacent cells using only a single dummy gate.

In an embodiment, the fin trim isolation approach is implemented as a self-aligned patterning scheme. Here, the fin below a single gate is etched away, so adjacent cells can be separated by a single dummy gate. Advantages of this approach can include saving chip area and allowing greater computational power for a given area. This approach may also allow fin trimming to be performed at sub-fin-pitch distances.

FIGS. 13A and 13B illustrate plan views representing various operations in a method of patterning fins with multi-gate spacing for forming local isolation structures, in accordance with an embodiment of the present disclosure.

Referring to FIG. 13A, a plurality of fins 1302 is shown having lengths along a first direction 1304. A grid 1306 is shown along a second direction 1308 orthogonal to the first direction 1304, with a spacing 1307 between the grid lines defining locations for ultimately forming a plurality of gate lines.

Referring to FIG. 13B, portions of the plurality of fins 1302 are cut (e.g., by an etch process) to leave fins 1310 having cuts 1312 therein. The isolation structure ultimately formed in a cut 1312 thus has a dimension that spans more than a single gate line, for example, the dimension of three gate lines 1306. Accordingly, gate structures ultimately formed at positions along the gate lines 1306 are formed at least partially over the isolation structure formed in the cut 1312.
Thus, the cut 1312 is a relatively wide fin cut.

FIGS. 14A-14D illustrate plan views representing various operations in a method of patterning fins with single-gate spacing for forming local isolation structures, in accordance with another embodiment of the present disclosure.

Referring to FIG. 14A, a method of fabricating an integrated circuit structure includes forming a plurality of fins 1402, the individual fins of the plurality of fins 1402 having their longest dimension along a first direction 1404. A plurality of gate structures 1406 is over the plurality of fins 1402, the individual gate structures of the gate structures 1406 having their longest dimension along a second direction 1408 orthogonal to the first direction 1404. In an embodiment, the gate structures 1406 are sacrificial or dummy gate lines fabricated, for example, from polysilicon. In one embodiment, the plurality of fins 1402 are silicon fins and are continuous with a portion of an underlying silicon substrate.

Referring to FIG. 14B, dielectric material structures 1410 are formed between adjacent ones of the plurality of gate structures 1406.

Referring to FIG. 14C, a portion 1412 of one of the plurality of gate structures 1406 is removed to expose a portion 1414 of each of the plurality of fins 1402. In an embodiment, removing the portion 1412 of the one of the plurality of gate structures 1406 involves using a lithographic window 1416 that is wider than the width 1418 of the portion 1412 of the one of the plurality of gate structures 1406.

Referring to FIG. 14D, the exposed portion 1414 of each of the plurality of fins 1402 is removed to form a cut region 1420. In an embodiment, the exposed portions 1414 of each of the plurality of fins 1402 are removed using a dry or plasma etch process. In an embodiment, removing the exposed portions 1414 of each of the plurality of fins 1402 involves etching to a depth that is less than the height of the plurality of fins 1402. In one such embodiment, the depth is greater than the depth of the source or drain regions of the plurality of fins 1402. In an embodiment, the depth is deeper than the depth of the active portions of the plurality of fins 1402 to provide an isolation margin. In an embodiment, the exposed portions 1414 of each of the plurality of fins 1402 are removed without etching, or substantially etching, the source or drain regions (e.g., epitaxial source or drain regions) of the plurality of fins 1402. In one such embodiment, the exposed portions 1414 of each of the plurality of fins 1402 are removed without laterally etching, or substantially laterally etching, the source or drain regions (e.g., epitaxial source or drain regions) of the plurality of fins 1402.

In an embodiment, the cut region 1420 is ultimately filled with an insulating layer, for example, in the locations of the removed portions 1414 of each of the plurality of fins 1402. Exemplary insulating layers, or "poly cut" or "plug" structures, are described below. However, in other embodiments, the cut region 1420 is only partially filled with the insulating layer, and a conductive structure is then formed therein. Such conductive structures can be used as local interconnects. In an embodiment, prior to filling the cut region 1420 with an insulating layer, or with an insulating layer that houses a local interconnect structure, dopants from a solid-source dopant layer may be implanted into, or driven through the cut region 1420 into, portions of the one or more fins.
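The depth relationships described above for the cut region 1420 (shallower than the fin height, but deeper than the source or drain regions and deeper than the active fin portion for isolation margin) can be captured as a simple consistency check. The following Python sketch, with hypothetical dimensions, is illustrative only.

    def valid_cut_depth(cut_depth_nm, fin_height_nm, source_drain_depth_nm, active_depth_nm):
        """Check the fin cut depth constraints described above: less than the
        fin height, but greater than the source or drain depth and greater
        than the active fin depth (isolation margin)."""
        return (source_drain_depth_nm < cut_depth_nm < fin_height_nm
                and cut_depth_nm > active_depth_nm)

    # Hypothetical dimensions in nanometers, for illustration only.
    print(valid_cut_depth(cut_depth_nm=80, fin_height_nm=100,
                          source_drain_depth_nm=40, active_depth_nm=60))  # True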
FIG. 15 illustrates a cross-sectional view of an integrated circuit structure having a fin with multi-gate spacing for local isolation, in accordance with an embodiment of the present disclosure.

Referring to FIG. 15, a silicon fin 1502 has a first fin portion 1504 laterally adjacent to a second fin portion 1506. The first fin portion 1504 is separated from the second fin portion 1506 by a wider cut 1508, for example, as described in connection with FIGS. 13A and 13B, the wider cut 1508 having a width X. A dielectric fill material 1510 is formed in the wider cut 1508 and electrically isolates the first fin portion 1504 from the second fin portion 1506. A plurality of gate lines 1512 is over the silicon fin 1502, where each of the gate lines can include a gate dielectric and gate electrode stack 1514, a dielectric cap layer 1516, and sidewall spacers 1518. Two of the gate lines (the two gate lines 1512 on the left) occupy the wider cut 1508, such that the first fin portion 1504 is effectively separated from the second fin portion 1506 by two dummy or passive gates.

In contrast, fin portions can be separated by a single gate distance. By way of example, FIG. 16A illustrates a cross-sectional view of an integrated circuit structure having a fin with single-gate spacing for local isolation, in accordance with another embodiment of the present disclosure.

Referring to FIG. 16A, a silicon fin 1602 has a first fin portion 1604 laterally adjacent to a second fin portion 1606. The first fin portion 1604 is separated from the second fin portion 1606 by a narrower cut 1608, for example, as described in connection with FIGS. 14A-14D, the narrower cut 1608 having a width Y, where Y is less than the X of FIG. 15. A dielectric fill material 1610 is formed in the narrower cut 1608 and electrically isolates the first fin portion 1604 from the second fin portion 1606. A plurality of gate lines 1612 is over the silicon fin 1602, where each of the gate lines can include a gate dielectric and gate electrode stack 1614, a dielectric cap layer 1616, and sidewall spacers 1618. The dielectric fill material 1610 occupies the location where a single gate line was previously located, such that the first fin portion 1604 is separated from the second fin portion 1606 by a single "plugged" gate line. In one embodiment, residual spacer material 1620 remains on the sidewalls of the location of the removed gate line portion, as shown. It will be appreciated that the fin 1602 may also be isolated from fins in other areas by two or more passive gate lines (e.g., region 1622 having three passive gate lines) over a cut fabricated by an earlier, wider fin-cutting process, as described above.

Referring again to FIG. 16A, integrated circuit structure 1600 includes the fin 1602, such as a silicon fin. The fin 1602 has its longest dimension along a first direction 1650. An isolation structure 1610 separates a first upper portion 1604 of the fin 1602 from a second upper portion 1606 of the fin 1602 along the first direction 1650. The isolation structure 1610 has a center 1611 along the first direction 1650.

A first gate structure 1612A is over the first upper portion 1604 of the fin 1602, the first gate structure 1612A having its longest dimension along a second direction 1652 (e.g., into the page) orthogonal to the first direction 1650. The center 1613A of the first gate structure 1612A is spaced apart from the center 1611 of the isolation structure 1610 by a pitch along the first direction 1650.
A second gate structure 1612B is over the first upper portion 1604 of the fin 1602, the second gate structure 1612B having its longest dimension along the second direction 1652. The center 1613B of the second gate structure 1612B is spaced apart from the center 1613A of the first gate structure 1612A by the pitch along the first direction 1650. A third gate structure 1612C is over the second upper portion 1606 of the fin 1602, the third gate structure 1612C having its longest dimension along the second direction 1652. The center 1613C of the third gate structure 1612C is spaced apart from the center 1611 of the isolation structure 1610 by the pitch along the first direction 1650. In an embodiment, the isolation structure 1610 has a top that is substantially coplanar with the tops of the first gate structure 1612A, the second gate structure 1612B, and the third gate structure 1612C, as shown.

In an embodiment, each of the first gate structure 1612A, the second gate structure 1612B, and the third gate structure 1612C includes a gate electrode 1660 on, and between the sidewalls of, a high-k gate dielectric layer 1662, as shown for the exemplary third gate structure 1612C. In one such embodiment, each of the first gate structure 1612A, the second gate structure 1612B, and the third gate structure 1612C further includes an insulating cap 1616 on the gate electrode 1660 and on the sidewalls of the high-k gate dielectric layer 1662.

In an embodiment, the integrated circuit structure 1600 further includes a first epitaxial semiconductor region 1664A on the first upper portion 1604 of the fin 1602 between the first gate structure 1612A and the isolation structure 1610. A second epitaxial semiconductor region 1664B is on the first upper portion 1604 of the fin 1602 between the first gate structure 1612A and the second gate structure 1612B. A third epitaxial semiconductor region 1664C is on the second upper portion 1606 of the fin 1602 between the third gate structure 1612C and the isolation structure 1610. In one embodiment, the first 1664A, second 1664B, and third 1664C epitaxial semiconductor regions include silicon and germanium. In another embodiment, the first 1664A, second 1664B, and third 1664C epitaxial semiconductor regions include silicon.

In an embodiment, the isolation structure 1610 induces a stress on the first upper portion 1604 of the fin 1602 and on the second upper portion 1606 of the fin 1602. In one embodiment, the stress is a compressive stress. In one embodiment, the stress is a tensile stress. In other embodiments, the isolation structure 1610 is a partially filling insulating layer, and a conductive structure is then formed therein. Such conductive structures can be used as local interconnects. In an embodiment, prior to forming the isolation structure 1610 with an insulating layer, or with an insulating layer that houses a local interconnect structure, dopants from a solid-source dopant layer are implanted or driven into the partially cut portions of the one or more fins.
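The center-to-center relationships described above for FIG. 16A amount to the isolation structure 1610 occupying one slot of the gate line grid. The following Python sketch records that bookkeeping; the pitch value is a hypothetical number chosen only for illustration.

    PITCH_NM = 54  # hypothetical gate line pitch

    # One grid slot per structure: the isolation structure 1610 (center 1611)
    # replaces a single gate line between gate structures 1612C and 1612A.
    centers = {
        "1613C": 0 * PITCH_NM,  # third gate structure 1612C
        "1611":  1 * PITCH_NM,  # isolation structure 1610
        "1613A": 2 * PITCH_NM,  # first gate structure 1612A
        "1613B": 3 * PITCH_NM,  # second gate structure 1612B
    }

    assert centers["1611"] - centers["1613C"] == PITCH_NM
    assert centers["1613A"] - centers["1611"] == PITCH_NM
    assert centers["1613B"] - centers["1613A"] == PITCH_NM
    print(centers)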
In another aspect, it will be appreciated that an isolation structure, such as the isolation structure 1610 described above, may be formed in place of an active gate electrode at a local fin cut location or at a wider fin cut location. Moreover, such local or wider fin cut locations can be formed to depths that vary relative to one another within the fins. In a first example, FIG. 16B illustrates a cross-sectional view showing locations where fin isolation structures may be formed in place of gate electrodes, in accordance with an embodiment of the present disclosure.

Referring to FIG. 16B, a fin 1680, such as a silicon fin, is formed over a substrate 1682 and may be continuous with the substrate 1682. The fin 1680 has a fin end, or wide fin cut, 1684, which may be formed, for example, when the fin is patterned, in a fin trim process such as described above. The fin 1680 also has a local cut 1686, where a portion of the fin 1680 has been removed, for example, using a fin trim isolation approach in which a dummy gate is replaced with a dielectric plug, as described above. An active gate electrode 1688 is formed over the fin and, for illustrative purposes, is shown slightly in front of the fin 1680, with the fin 1680 in the background, where the dashed lines represent regions covered in the front view. A dielectric plug 1690 can be formed at the fin end or wide fin cut 1684 instead of using an active gate at such a location. Additionally or alternatively, a dielectric plug 1692 can be formed at the local cut 1686 instead of using an active gate at such a location. It will be appreciated that epitaxial source or drain regions 1694 are also shown at the locations of the fin 1680 between the active gate electrode 1688 and the plugs 1690 or 1692. Moreover, in an embodiment, the surface roughness of the fin end at the local cut 1686 is rougher than that of the fin end at the wide cut location, as shown in FIG. 16B.

FIGS. 17A-17C illustrate various depth possibilities for fin cuts made using fin trim isolation, in accordance with an embodiment of the present disclosure.

Referring to FIG. 17A, a semiconductor fin 1700, such as a silicon fin, is formed over a lower substrate 1702 and may be continuous with the lower substrate 1702. The fin 1700 has a lower fin portion 1700A and an upper fin portion 1700B, as defined by the height of the insulating structure 1704 relative to the fin 1700. A local fin isolation cut 1706A divides the fin 1700 into a first fin portion 1710 and a second fin portion 1712. In the example of FIG. 17A, as indicated along the a-a' axis, the depth of the local fin isolation cut 1706A is the entire depth of the fin 1700, extending to the substrate 1702.

Referring to FIG. 17B, in a second example, as shown along the a-a' axis, the depth of the local fin isolation cut 1706B is deeper than the entire depth of the fin 1700, that is, the cut 1706B extends into the lower substrate 1702.

Referring to FIG. 17C, in a third example, as shown along the a-a' axis, the depth of the local fin isolation cut 1706C is less than the entire depth of the fin 1700 but deeper than the upper surface of the isolation structure 1704. Referring again to FIG. 17C, in a fourth example, as shown along the a-a' axis, the depth of the local fin isolation cut 1706D is less than the entire depth of the fin 1700 and is substantially level with the upper surface of the isolation structure 1704.
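The four depth cases of FIGS. 17A-17C can be summarized by comparing the cut depth against the fin depth and against the level of the upper surface of the isolation structure 1704. The following Python sketch, with hypothetical depths measured downward from the top of the fin, is illustrative only.

    def classify_cut(cut_depth, fin_depth, isolation_top_depth):
        """Classify a fin trim isolation cut by depth, mirroring the four
        examples of FIGS. 17A-17C."""
        if cut_depth > fin_depth:
            return "deeper than the fin, extending into the substrate (1706B)"
        if cut_depth == fin_depth:
            return "entire fin depth, stopping at the substrate (1706A)"
        if cut_depth > isolation_top_depth:
            return "partial, below the isolation structure's upper surface (1706C)"
        return "partial, about level with the isolation structure's upper surface (1706D)"

    # Hypothetical values: a 120 nm deep fin whose insulating structure's
    # upper surface sits 45 nm below the fin top.
    for depth in (130, 120, 80, 45):
        print(depth, "->", classify_cut(depth, fin_depth=120, isolation_top_depth=45))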
FIG. 18 illustrates a plan view, and a corresponding cross-sectional view taken along the a-a' axis, showing possible options for the depth of local fin cut locations within a fin relative to the depth of wider fin cut locations, in accordance with an embodiment of the present disclosure.

Referring to FIG. 18, first and second semiconductor fins 1800 and 1802, such as silicon fins, have upper fin portions 1800B and 1802B extending above an insulating structure 1804. Both fins 1800 and 1802 have fin ends, or wide fin cuts, 1806, which may be formed, for example, when the fins are patterned, in a fin trim process such as described above. Both fins 1800 and 1802 also have local cuts 1808, where a portion of the fin 1800 or 1802 has been removed, for example, using a fin trim isolation approach in which a dummy gate is replaced with a dielectric plug, as described above. In an embodiment, the surface roughness of the ends of the fins 1800 and 1802 at the local cuts 1808 is rougher than that of the fin ends at the wide fin cuts 1806, as shown in FIG. 18.

Referring to the cross-sectional view of FIG. 18, lower fin portions 1800A and 1802A can be seen below the height of the insulating structure 1804. Also seen in the cross-sectional view is a residual portion 1810 of a fin that was removed during the fin trimming performed prior to forming the insulating structure 1804, as described above. Although shown as protruding above the substrate, the residual portion 1810 can also be at the level of the substrate, or extend into the substrate, as shown by the additional exemplary wide cut depth 1820. It will be appreciated that the wide cuts 1806 of the fins 1800 and 1802 can also be at the level described for the cut depth 1820, an example of which is shown. The local cuts 1808 can have exemplary depths corresponding to the depths described in association with FIGS. 17A-17C, as shown.

Referring collectively to FIGS. 16A, 16B, 17A-17C, and 18, in accordance with an embodiment of the present disclosure, an integrated circuit structure includes a fin including silicon, the fin having a top and sidewalls, where the top has the longest dimension along a first direction. A first isolation structure separates a first end of a first portion of the fin from a first end of a second portion of the fin along the first direction. The first isolation structure has a width along the first direction. The first end of the first portion of the fin has a surface roughness. A gate structure includes a gate electrode over the top of a region of the first portion of the fin and laterally adjacent to the sidewalls of the region. The gate structure has a width along the first direction, and the center of the gate structure is spaced apart from the center of the first isolation structure by a distance along the first direction. A second isolation structure is over a second end of the first portion of the fin, the second end being opposite the first end. The second isolation structure has a width along the first direction, and the second end of the first portion of the fin has a surface roughness that is less than the surface roughness of the first end of the first portion of the fin. The center of the second isolation structure is spaced apart from the center of the gate structure by a distance along the first direction.
In one embodiment, the first and second epitaxial semiconductor regions have a width in a second direction orthogonal to the first direction, and a width in the second direction is greater than a first portion of the fin below the gate structure in a second direction The width is wider, for example, as described in conjunction with the epitaxial features shown in Figures 11 and 12D, which have a wider width than the fin portion in which the epitaxial features are grown, for example, in the perspective views shown in Figures 11 and 12D. . In one embodiment, the gate structure further includes a high-k dielectric layer between the gate electrode and the first portion of the fin and along the sidewall of the gate electrode.Referring collectively to FIGS. 16A, 16B, 17A-17C, and 18, in accordance with another embodiment of the present disclosure, an integrated circuit structure includes a fin including silicon having a top and sidewalls Where the top has the longest dimension in one direction. The first isolation structure separates the first end of the first portion of the fin from the first end portion of the second portion of the fin in the direction. The first end of the first portion of the fin has a depth. The gate structure includes a gate electrode over the top of the region of the first portion of the fin and laterally adjacent the sidewall of the region. The second isolation structure is above the second end of the first portion of the fin, the second end being opposite the first end. The second end of the first portion of the fin has a different depth than the depth of the first end of the first portion of the fin.In one embodiment, the second end of the first portion of the fin has a depth that is less than the depth of the first end of the first portion of the fin. In one embodiment, the second end of the first portion of the fin has a depth that is greater than the depth of the first end of the first portion of the fin. In one embodiment, the first isolation structure has a width in the direction and the gate structure has a width along the direction. The second isolation structure has a width in the direction. In one embodiment, the center of the gate structure is spaced apart from the center of the first isolation structure by a spacing in the direction, and the center of the second isolation structure is spaced apart from the center of the gate structure by the spacing.Referring collectively to FIGS. 16A, 16B, 17A-17C, and 18, in accordance with another embodiment of the present disclosure, an integrated circuit structure includes a first fin, the first fin including silicon, the first fin The article has a top portion and a side wall, wherein the top portion has the longest dimension in one direction, and the discontinuity is along the direction of the first end of the first portion of the first fin and the second portion of the fin One end is partially open. The first portion of the first fin has a second end opposite the first end, and the first end of the first portion of the fin has a depth. The integrated circuit structure further includes a second fin comprising silicon, the second fin having a top and a sidewall, wherein the top has the longest dimension in the direction. The integrated circuit structure also includes remaining or residual fin portions between the first fin and the second fin. 
The residual fin portion has a top and sidewalls, where the top has the longest dimension along the direction, and the top is not coplanar with the depth of the first end of the first portion of the fin.

In one embodiment, the first end of the first portion of the fin has a depth that is lower than the top of the remaining or residual fin portion. In one embodiment, the second end of the first portion of the fin has a depth that is coplanar with the depth of the first end of the first portion of the fin. In one embodiment, the second end of the first portion of the fin has a depth that is lower than the depth of the first end of the first portion of the fin. In one embodiment, the second end of the first portion of the fin has a depth that is higher than the depth of the first end of the first portion of the fin. In one embodiment, the first end of the first portion of the fin has a depth that is higher than the top of the remaining or residual fin portion. In one embodiment, the second end of the first portion of the fin has a depth that is coplanar with the depth of the first end of the first portion of the fin. In one embodiment, the second end of the first portion of the fin has a depth that is lower than the depth of the first end of the first portion of the fin. In one embodiment, the second end of the first portion of the fin has a depth that is higher than the depth of the first end of the first portion of the fin. In one embodiment, the second end of the first portion of the fin has a depth that is coplanar with the top of the residual fin portion. In one embodiment, the second end of the first portion of the fin has a depth that is lower than the top of the residual fin portion. In one embodiment, the second end of the first portion of the fin has a depth that is higher than the top of the residual fin portion.

In another aspect, a dielectric plug formed in the location of a local fin cut or a wide fin cut can be tuned to provide a particular stress to the fin or fin portion. In such embodiments, the dielectric plug can be referred to as a fin end stressor.

One or more embodiments relate to the fabrication of fin-based semiconductor devices. Performance improvements for such devices can be achieved through channel stress induced by a poly plug fill process. Embodiments can include using the material properties of a poly plug fill process to induce mechanical stress in a metal oxide semiconductor field effect transistor (MOSFET) channel. The induced stress can increase the mobility and drive current of the transistor. Moreover, the plug fill approach described herein may allow any seam or void formation during deposition to be eliminated.

To provide context, the unique material properties of a plug fill that acts on the adjacent fin can induce stress in the channel. In accordance with one or more embodiments, the stress in the channel is modulated to benefit both NMOS and PMOS transistors by adjusting the composition, deposition, and post-processing conditions of the plug fill material. Moreover, such plugs can extend deeper into the fin and substrate than other common stressor techniques, such as epitaxial source or drain structures. The nature of the plug fill used to achieve this effect also eliminates seams or voids during deposition and alleviates some of the defect modes encountered during processing.
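To rough out the mobility benefit noted above, a first-order piezoresistive model can be used, in which the fractional mobility change is approximately proportional to the channel stress. The following Python sketch is a back-of-the-envelope illustration; the coefficient and stress values are placeholders of roughly the right order of magnitude for silicon, not measured values for these devices.

    def mobility_gain(stress_pa, pi_per_pa):
        """First-order piezoresistive estimate: delta_mu / mu ~= pi * sigma.
        Signs are chosen so that a positive result is an enhancement."""
        return pi_per_pa * stress_pa

    # Hypothetical inputs: ~500 MPa of channel stress and a piezoresistive
    # coefficient of ~50e-11 per Pa (order of magnitude for silicon).
    gain = mobility_gain(stress_pa=500e6, pi_per_pa=50e-11)
    print(f"estimated mobility enhancement: {gain * 100:.0f}%")  # ~25%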
To provide further context, there is currently no intentional stress engineering for gate (poly) plugs. Stress enhancement from conventional stressors, such as epitaxial source or drain structures, dummy poly gate removal, stress liners, and the like, unfortunately tends to diminish as device pitch shrinks. In order to address one or more of the above problems, in accordance with one or more embodiments of the present disclosure, an additional source of stress is incorporated into the transistor structure. Another possible benefit of this process may be the elimination of seams or voids in the plug, which are common in other chemical vapor deposition processes.

FIGS. 19A and 19B illustrate cross-sectional views of various operations in a method of selecting fin end stressor locations at the ends of a fin having a wide cut, such as a wide cut made during the fin trimming described above, in accordance with an embodiment of the present disclosure.

Referring to FIG. 19A, a fin 1900, such as a silicon fin, is formed over a substrate 1902 and may be continuous with the substrate 1902. The fin 1900 has fin ends, or wide fin cuts, 1904, which may be formed, for example, when the fin is patterned, in a fin trim process such as described above. An active gate electrode location 1906 and dummy gate electrode locations 1908 are formed over the fin 1900 and, for illustrative purposes, are shown slightly in front of the fin 1900, with the fin 1900 in the background, where the dashed lines represent regions covered in the front view. It will be appreciated that epitaxial source or drain regions 1910 are also shown at the locations of the fin 1900 between the gate locations 1906 and 1908. In addition, an interlayer dielectric material 1912 is included at the locations of the fin 1900 between the gate locations 1906 and 1908.

Referring to FIG. 19B, the gate placeholders or dummy gates at the locations 1908 are removed to expose the fin ends at the wide fin cuts 1904. This removal creates openings 1920 in which dielectric plugs, such as fin end stressor dielectric plugs, can ultimately be formed.

FIGS. 20A and 20B illustrate cross-sectional views of various operations in a method of selecting a fin end stressor location at a fin end having a local cut made as part of a fin trim isolation process, such as described above, in accordance with an embodiment of the present disclosure.

Referring to FIG. 20A, a fin 2000, such as a silicon fin, is formed over a substrate 2002 and may be continuous with the substrate 2002. The fin 2000 has a local cut 2004, where a portion of the fin 2000 has been removed, for example, using a fin trim isolation approach in which a dummy gate is removed and the fin is etched at the local location, as described above. An active gate electrode location 2006 and a dummy gate electrode location 2008 are formed over the fin 2000 and, for illustrative purposes, are shown slightly in front of the fin 2000, with the fin 2000 in the background, where the dashed lines represent regions covered in the front view. It will be appreciated that epitaxial source or drain regions 2010 are also shown at the locations of the fin 2000 between the gate locations 2006 and 2008. Furthermore, an interlayer dielectric material 2012 is included at the locations of the fin 2000 between the gate locations 2006 and 2008.

Referring to FIG. 20B, the gate placeholder structure or dummy gate at location 2008 is removed to expose the fin ends at the local cut 2004.
This removal creates an opening 2020 in which a dielectric plug, such as a fin end stressor dielectric plug, can ultimately be formed.

FIGS. 21A-21M illustrate cross-sectional views of various operations in a method of fabricating an integrated circuit structure having differentiated fin end dielectric plugs, in accordance with an embodiment of the present disclosure.

Referring to FIG. 21A, a starting structure 2100 includes an NMOS region and a PMOS region. The NMOS region of the starting structure 2100 includes a first fin 2102, such as a first silicon fin, formed over a substrate 2104 and continuous with the substrate 2104. The first fin 2102 has a fin end 2106, which may be formed by a local or wide fin cut. A first active gate electrode location 2108 and a first dummy gate electrode location 2110 are formed over the first fin 2102 and, for illustrative purposes, are shown slightly in front of the first fin 2102, with the first fin 2102 in the background, where the dashed lines represent regions covered in the front view. Also shown, at the locations of the first fin 2102 between the gate locations 2108 and 2110, are epitaxial N-type source or drain regions 2112, such as epitaxial silicon source or drain structures. Additionally, an interlayer dielectric material 2114 is included at the locations of the first fin 2102 between the gate locations 2108 and 2110.

The PMOS region of the starting structure 2100 includes a second fin 2122, such as a second silicon fin, formed over the substrate 2104 and continuous with the substrate 2104. The second fin 2122 has a fin end 2126, which may be formed by a local or wide fin cut. A second active gate electrode location 2128 and a second dummy gate electrode location 2130 are formed over the second fin 2122 and, for illustrative purposes, are shown slightly in front of the second fin 2122, with the second fin 2122 in the background, where the dashed lines represent regions covered in the front view. Also shown, at the locations of the second fin 2122 between the gate locations 2128 and 2130, are epitaxial P-type source or drain regions 2132, such as epitaxial silicon germanium source or drain structures. Additionally, an interlayer dielectric material 2134 is included at the locations of the second fin 2122 between the gate locations 2128 and 2130.

Referring to FIG. 21B, the first and second dummy gate electrodes at locations 2110 and 2130, respectively, are removed. Upon removal, the fin end 2106 of the first fin 2102 and the fin end 2126 of the second fin 2122 are exposed. This removal also creates openings 2116 and 2136, respectively, in which dielectric plugs, such as fin end stressor dielectric plugs, may ultimately be formed.

Referring to FIG. 21C, a material liner 2140 is formed conformally over the structure of FIG. 21B. In an embodiment, the material liner includes silicon and nitrogen, such as a silicon nitride material liner.

Referring to FIG. 21D, a protective cap 2142, such as a metal nitride layer, is formed on the structure of FIG. 21C.

Referring to FIG. 21E, a hard mask material 2144, such as a carbon-based hard mask material, is formed over the structure of FIG. 21D. A lithographic mask or mask stack 2146 is formed over the hard mask material 2144.

Referring to FIG. 21F, the portions of the hard mask material 2144 and the portions of the protective cap 2142 in the PMOS region are removed from the structure of FIG. 21E.
The lithographic mask or mask stack 2146 is also removed.

Referring to FIG. 21G, a second material liner 2148 is formed conformally over the structure of FIG. 21F. In an embodiment, the second material liner includes silicon and nitrogen, such as a second silicon nitride material liner. In an embodiment, the second material liner 2148 has a different stress state, to tune the stress in the exposed plug.

Referring to FIG. 21H, a second hard mask material 2150, such as a second carbon-based hard mask material, is formed over the structure of FIG. 21G and is then recessed into the opening 2136 in the PMOS region of the structure.

Referring to FIG. 21I, the second material liner 2148 is etched to remove the second material liner 2148 from the NMOS region of the structure of FIG. 21H and to recess the second material liner 2148 in the PMOS region of the structure.

Referring to FIG. 21J, the hard mask material 2144, the protective cap 2142, and the second hard mask material 2150 are removed from the structure of FIG. 21I. This removal leaves two different fill structures in the opening 2116 and the opening 2136, respectively.

Referring to FIG. 21K, an insulating fill material 2152 is formed in the openings 2116 and 2136 of the structure of FIG. 21J and is planarized. In an embodiment, the insulating fill material 2152 is a flowable oxide material, such as a flowable silicon oxide or silicon dioxide material.

Referring to FIG. 21L, the insulating fill material 2152 is recessed into the openings 2116 and 2136 of the structure of FIG. 21K to form a recessed insulating fill material 2154. In an embodiment, a steam oxidation process is performed as part of the recess process, or a steam oxidation process is performed after the recess process, to cure the recessed insulating fill material 2154. In one such embodiment, the recessed insulating fill material 2154 shrinks, inducing tensile stress on the fins 2102 and 2122. However, the PMOS region has less of the tensile-stress-inducing material than the NMOS region.

Referring to FIG. 21M, a third material liner 2156 is formed over the structure of FIG. 21L. In an embodiment, the third material liner 2156 includes silicon and nitrogen, such as a third silicon nitride material liner. In an embodiment, the third material liner 2156 prevents the recessed insulating fill material 2154 from being etched away during a subsequent source or drain contact etch.
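The differentiated fills of FIGS. 21G-21M can be quantified in a rough way: because the PMOS opening 2136 retains the second material liner 2148 while the NMOS opening 2116 does not, less of the shrinking, tensile-stress-inducing flowable oxide fits in the PMOS plug. The following Python sketch estimates that difference; all dimensions are hypothetical.

    def fill_width(opening_width_nm, liner_2140_nm, liner_2148_nm=0.0):
        """Lateral width left for the flowable oxide fill 2154 after the
        conformal liners are deposited on both sidewalls of the opening."""
        return opening_width_nm - 2.0 * (liner_2140_nm + liner_2148_nm)

    # Hypothetical dimensions: 24 nm openings, a 3 nm first liner 2140, and a
    # 4 nm second liner 2148 present only in the PMOS opening 2136.
    nmos_fill = fill_width(24.0, liner_2140_nm=3.0)
    pmos_fill = fill_width(24.0, liner_2140_nm=3.0, liner_2148_nm=4.0)
    print(f"NMOS oxide width: {nmos_fill} nm, PMOS oxide width: {pmos_fill} nm")
    # More shrinking oxide in the NMOS plug is consistent with the PMOS
    # region having less tensile-stress-inducing material.
    assert nmos_fill > pmos_fill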
FIGS. 22A-22D illustrate cross-sectional views of exemplary structures for a PMOS fin end stressor dielectric plug, in accordance with an embodiment of the present disclosure.

Referring to FIG. 22A, the opening 2136 in the PMOS region of the structure 2100 includes the material liner 2140 along the sidewalls of the opening 2136. The second material liner 2148 is conformal with the lower portion of the material liner 2140 but is recessed relative to the upper portion of the material liner 2140. The recessed insulating fill material 2154 is within the second material liner 2148 and has an upper surface that is coplanar with the upper surface of the second material liner 2148. A third material liner 2156 is within the upper portion of the material liner 2140 and on the upper surface of the insulating fill material 2154 and the upper surface of the second material liner 2148. The third material liner 2156 has a seam 2157, for example, as an artifact of the deposition process used to form the third material liner 2156.

Referring to FIG. 22B, the opening 2136 in the PMOS region of the structure 2100 includes the material liner 2140 along the sidewalls of the opening 2136. The second material liner 2148 is conformal with the lower portion of the material liner 2140 but is recessed relative to the upper portion of the material liner 2140. The recessed insulating fill material 2154 is within the second material liner 2148 and has an upper surface that is coplanar with the upper surface of the second material liner 2148. A third material liner 2156 is within the upper portion of the material liner 2140 and on the upper surface of the insulating fill material 2154 and the upper surface of the second material liner 2148. The third material liner 2156 has no seam.

Referring to FIG. 22C, the opening 2136 in the PMOS region of the structure 2100 includes the material liner 2140 along the sidewalls of the opening 2136. The second material liner 2148 is conformal with the lower portion of the material liner 2140 but is recessed relative to the upper portion of the material liner 2140. The recessed insulating fill material 2154 is within, and over, the second material liner 2148, and has an upper surface above the upper surface of the second material liner 2148. A third material liner 2156 is within the upper portion of the material liner 2140 and on the upper surface of the insulating fill material 2154. The third material liner 2156 is shown without a seam, but in other embodiments, the third material liner 2156 has a seam.

Referring to FIG. 22D, the opening 2136 in the PMOS region of the structure 2100 includes the material liner 2140 along the sidewalls of the opening 2136. The second material liner 2148 is conformal with the lower portion of the material liner 2140 but is recessed relative to the upper portion of the material liner 2140. The recessed insulating fill material 2154 is within the second material liner 2148 and has an upper surface that is recessed below the upper surface of the second material liner 2148. A third material liner 2156 is within the upper portion of the material liner 2140 and on the upper surface of the insulating fill material 2154 and the upper surface of the second material liner 2148. The third material liner 2156 is shown without a seam, but in other embodiments, the third material liner 2156 has a seam.
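The four plug variants of FIGS. 22A-22D differ only in where the upper surface of the fill 2154 sits relative to the recessed second liner 2148 and in whether the capping liner 2156 has a seam. One hedged way to tabulate the depicted options in Python:

    from dataclasses import dataclass

    @dataclass
    class PlugVariant:
        """One FIG. 22A-22D option: liner 2140 along the opening sidewalls,
        recessed liner 2148, fill 2154, and capping liner 2156. The seam flag
        records only the depicted case; the text notes other embodiments."""
        figure: str
        fill_2154_surface: str
        liner_2156_seam: bool

    VARIANTS = [
        PlugVariant("22A", "coplanar with liner 2148", True),   # seam 2157
        PlugVariant("22B", "coplanar with liner 2148", False),
        PlugVariant("22C", "above liner 2148, fill also over its top", False),
        PlugVariant("22D", "recessed below liner 2148", False),
    ]

    for v in VARIANTS:
        seam = "with a seam" if v.liner_2156_seam else "seam-free"
        print(f"FIG. {v.figure}: fill 2154 {v.fill_2154_surface}; liner 2156 {seam}")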
Referring collectively to FIGS. 19A, 19B, 20A, 20B, 21A-21M, and 22A-22D, in accordance with an embodiment of the present disclosure, an integrated circuit structure includes a fin, such as a silicon fin, having a top and sidewalls. The top has the longest dimension along a direction. A first isolation structure is over a first end of the fin. A gate structure includes a gate electrode over the top of a region of the fin and laterally adjacent to the sidewalls of the region. The gate structure is spaced apart from the first isolation structure along the direction. A second isolation structure is over a second end of the fin, the second end being opposite the first end. The second isolation structure is spaced apart from the gate structure along the direction. Both the first isolation structure and the second isolation structure include a first dielectric material (e.g., material liner 2140) laterally surrounding a recessed second dielectric material (e.g., second material liner 2148) that is different from the first dielectric material. The recessed second dielectric material laterally surrounds at least a portion of a third dielectric material (e.g., recessed insulating fill material 2154) that is different from the first and second dielectric materials.

In one embodiment, both the first isolation structure and the second isolation structure further include a fourth dielectric material (e.g., third material liner 2156) laterally surrounded by an upper portion of the first dielectric material, the fourth dielectric material being on the upper surface of the third dielectric material. In one such embodiment, the fourth dielectric material is further on the upper surface of the second dielectric material. In another such embodiment, the fourth dielectric material has a generally vertical center seam. In another such embodiment, the fourth dielectric material has no seam.

In one embodiment, the third dielectric material has an upper surface that is coplanar with the upper surface of the second dielectric material. In one embodiment, the third dielectric material has an upper surface below the upper surface of the second dielectric material. In one embodiment, the third dielectric material has an upper surface above the upper surface of the second dielectric material, with the third dielectric material further extending over the upper surface of the second dielectric material. In one embodiment, the first and second isolation structures induce a compressive stress on the fin. In one such embodiment, the gate electrode is a P-type gate electrode.

In one embodiment, the first isolation structure has a width along the direction, the gate structure has a width along the direction, and the second isolation structure has a width along the direction. In one such embodiment, the center of the gate structure is spaced apart from the center of the first isolation structure by a spacing along the direction, and the center of the second isolation structure is spaced apart from the center of the gate structure by the same spacing. In one embodiment, both the first and second isolation structures are in corresponding trenches in an interlayer dielectric layer.

In one such embodiment, a first source or drain region is between the gate structure and the first isolation structure, and a second source or drain region is between the gate structure and the second isolation structure. In one such embodiment, the first and second source or drain regions are embedded source or drain regions including silicon and germanium. In one such embodiment, the gate structure further includes a high-k dielectric layer between the gate electrode and the fin and along the sidewalls of the gate electrode.

In another aspect, the depths of individual dielectric plugs can vary within a semiconductor structure, or within an architecture formed on a common substrate. As an example, FIG. 23A illustrates a cross-sectional view of another semiconductor structure having fin end stress inducing features, in accordance with another embodiment of the present disclosure. Referring to FIG. 23A, a shallow dielectric plug 2308A and a pair of deep dielectric plugs 2308B and 2308C are included. In one such embodiment, as shown, the shallow dielectric plug 2308A is at a depth substantially equal to the depth of the semiconductor fin 2302 within the substrate 2304, while the deep dielectric plugs 2308B and 2308C are at depths below the depth of the semiconductor fin 2302 within the substrate 2304.
Referring again to FIG. 23A, such an arrangement can achieve stress amplification for fin trim isolation (FTI) devices, in which trenches are etched deeper into the substrate 2304 to provide isolation between adjacent fins 2302. Such an approach can be implemented to increase the density of transistors on a chip. In an embodiment, the effect of the stress induced on the transistor by the plug is amplified in an FTI transistor because stress transfer occurs both in the fin and in the substrate just below the transistor.

In another aspect, the width or amount of the tensile-stress-inducing oxide layer included in a dielectric plug can vary within a semiconductor structure, or within an architecture formed on a common substrate, for example, depending on whether the device is a PMOS device or an NMOS device. By way of example, FIG. 23B illustrates a cross-sectional view of another semiconductor structure having fin end stress inducing features, in accordance with another embodiment of the present disclosure. Referring to FIG. 23B, in a particular embodiment, an NMOS device includes more of the tensile-stress-inducing oxide layer 2350 than a corresponding PMOS device.

Referring again to FIG. 23B, in an embodiment, a differentiated plug fill is implemented to induce the appropriate stress in NMOS and PMOS devices. For example, NMOS plugs 2308D and 2308E have a larger volume, and greater width, of the tensile-stress-inducing oxide layer 2350 than PMOS plugs 2308F and 2308G. The plug fill can be patterned to induce different stresses in NMOS and PMOS devices. For example, the PMOS devices can be opened using lithographic patterning (e.g., widening the dielectric plug trenches of the PMOS devices), at which point different fill options can be implemented to differentiate the plug fill of the NMOS devices from the plug fill of the PMOS devices. In an exemplary embodiment, reducing the volume of flowable oxide in the plugs of a PMOS device can reduce the induced tensile stress. In one such embodiment, the compressive stress may be derived primarily from, for example, compressively stressed source and drain regions. In other embodiments, the use of different plug liners or different fill materials provides tunable stress control.
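The differentiated plug fill of FIG. 23B can be reduced to a simple recipe selection by device polarity: more tensile-stress-inducing oxide for NMOS plugs, less for PMOS plugs. The following Python sketch is illustrative only; the widths and the two-way recipe split are hypothetical, and in practice the liner and fill choices described above provide finer control.

    def plug_fill_recipe(device_type):
        """Pick an illustrative plug fill recipe by device polarity, echoing
        FIG. 23B: NMOS plugs receive a wider tensile-inducing oxide layer 2350
        than PMOS plugs. Widths are hypothetical."""
        if device_type == "NMOS":
            return {"tensile_oxide_width_nm": 12, "target_channel_stress": "tensile"}
        if device_type == "PMOS":
            return {"tensile_oxide_width_nm": 4, "target_channel_stress": "compressive"}
        raise ValueError(f"unknown device type: {device_type}")

    for dev in ("NMOS", "PMOS"):
        print(dev, plug_fill_recipe(dev))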
In another aspect, there may be a relationship between the locations where gate line cuts (multi-cuts) are made and the locations where fin trim isolation (FTI) partial fin cuts are made. In an embodiment, an FTI partial cut is made only at a location where a multi-cut is made. However, in one such embodiment, an FTI cut is not necessarily made at every location where a multi-cut is made.

FIGS. 25A and 25B illustrate plan views representing various operations in a method of patterning fins for forming partial isolation structures in select gate line cut locations, in accordance with an embodiment of the present disclosure.

Referring to FIG. 25A, a method of fabricating an integrated circuit structure includes forming a plurality of fins 2502, individual fins of the plurality of fins 2502 having the longest dimension in a first direction 2504. A plurality of gate structures 2506 is over the plurality of fins 2502, individual gate structures of the plurality of gate structures 2506 having the longest dimension in a second direction 2508 orthogonal to the first direction 2504. In an embodiment, the gate structures 2506 are sacrificial or dummy gate lines, for example, fabricated from polysilicon. In one embodiment, the plurality of fins 2502 are silicon fins and are continuous with a portion of an underlying silicon substrate.

Referring again to FIG. 25A, a dielectric material structure 2510 is formed between adjacent ones of the plurality of gate structures 2506. Portions 2512 and 2513 of two of the plurality of gate structures 2506 are removed to expose portions of each of the plurality of fins 2502. In an embodiment, removing the portions 2512 and 2513 of the two gate structures involves using a lithographic window wider than the width of each of the portions 2512 and 2513 of the gate structures 2506. The exposed portion of each of the plurality of fins 2502 at location 2512 is removed to form a cut region 2520. In an embodiment, the exposed portion of each of the plurality of fins 2502 is removed using a dry or plasma etch process. However, the exposed portion of each of the plurality of fins 2502 at location 2513 is masked from being removed. In an embodiment, region 2512/2520 represents both a multi-cut and an FTI partial fin cut, while location 2513 represents only a multi-cut.

Referring to FIG. 25B, the location 2512/2520 of the multi-cut and FTI partial fin cut, and the location 2513 of the multi-cut only, are filled with an insulating structure 2530, such as a dielectric plug. Exemplary insulating structures, or "multi-cut" or "plug" structures, are described below.
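The relationship between multi-cut locations and FTI partial fin cut locations described above amounts to a simple containment rule. The sketch below is illustrative only; the location labels reuse the figure's reference numerals as hypothetical identifiers.

# Illustrative rule: every FTI partial fin cut location must also be a
# multi-cut (gate line cut) location, but not every multi-cut location
# receives an FTI fin cut.

multi_cut_locations = {"2512", "2513"}  # gate line cut locations
fti_fin_cut_locations = {"2512"}        # subset where the fin is also cut

# Containment rule: FTI cuts occur only where multi-cuts exist.
assert fti_fin_cut_locations <= multi_cut_locations

for loc in sorted(multi_cut_locations):
    kind = ("multi-cut + FTI partial fin cut"
            if loc in fti_fin_cut_locations else "multi-cut only")
    print(f"location {loc}: {kind}")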
FIGS. 26A-26C illustrate cross-sectional views of various possibilities for the dielectric plugs at the multi-cut and FTI partial fin cut location, and at the multi-cut-only location, for various regions of the structure of FIG. 25B, in accordance with an embodiment of the present disclosure.

Referring to FIG. 26A, a cross-sectional view of portion 2600A of dielectric plug 2530 at location 2513 is shown along the a-a' axis of the structure of FIG. 25B. Portion 2600A of dielectric plug 2530 is on uncut fin 2502 and between dielectric material structures 2510.

Referring to FIG. 26B, a cross-sectional view of portion 2600B of dielectric plug 2530 at location 2512 is shown along the b-b' axis of the structure of FIG. 25B. Portion 2600B of dielectric plug 2530 is at cut fin location 2520 and between dielectric material structures 2510.

Referring to FIG. 26C, a cross-sectional view of portion 2600C of dielectric plug 2530 at location 2512 is shown along the c-c' axis of the structure of FIG. 25B. Portion 2600C of dielectric plug 2530 is on trench isolation structure 2602, between fins 2502 and between dielectric material structures 2510. In the exemplary embodiment shown, the trench isolation structure 2602 includes a first insulating layer 2602A, a second insulating layer 2602B, and an insulating fill material 2602C on the second insulating layer 2602B.

Referring collectively to FIGS. 25A, 25B, and 26A-26C, in accordance with an embodiment of the present disclosure, a method of fabricating an integrated circuit structure includes forming a plurality of fins, individual fins of the plurality of fins having the longest dimension in a first direction. A plurality of gate structures is formed over the plurality of fins, individual gate structures of the plurality of gate structures having the longest dimension in a second direction orthogonal to the first direction. A dielectric material structure is formed between adjacent ones of the plurality of gate structures. A portion of a first gate structure of the plurality of gate structures is removed to expose a first portion of each of the plurality of fins. A portion of a second gate structure of the plurality of gate structures is removed to expose a second portion of each of the plurality of fins. The exposed first portion of each of the plurality of fins is removed, but the exposed second portion of each of the plurality of fins is not removed. A first insulating structure is formed in the location of the removed first portions of the plurality of fins. A second insulating structure is formed in the location of the removed portion of the second of the plurality of gate structures.

In one embodiment, removing the portions of the first and second gate structures of the plurality of gate structures involves using a lithographic window wider than the width of each of the portions of the first and second gate structures. In one embodiment, removing the exposed first portion of each of the plurality of fins involves etching to a depth less than the height of the plurality of fins. In one such embodiment, the depth is greater than the depth of the source or drain regions in the plurality of fins. In one embodiment, the plurality of fins are silicon fins and are continuous with a portion of an underlying silicon substrate.
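The depth constraints just described, in which a partial fin cut goes below the source or drain regions but not through the full fin height, can be restated as a small consistency check. The dimensions below are hypothetical placeholders, not values from this disclosure.

# Illustrative consistency check (hypothetical dimensions): an FTI partial
# fin cut etches below the source/drain regions but not through the fin.

fin_height_nm = 100.0         # total fin height (hypothetical)
source_drain_depth_nm = 40.0  # depth of source or drain regions (hypothetical)
fti_cut_depth_nm = 70.0       # FTI partial fin cut depth (hypothetical)

assert fti_cut_depth_nm < fin_height_nm, "partial cut must be less than fin height"
assert fti_cut_depth_nm > source_drain_depth_nm, "cut must pass below the S/D regions"
print(f"partial cut at {fti_cut_depth_nm} nm isolates S/D ({source_drain_depth_nm} nm) "
      f"within a {fin_height_nm} nm fin")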
Referring collectively to FIGS. 16A, 25A, 25B, and 26A-26C, in accordance with another embodiment of the present disclosure, an integrated circuit structure includes a fin including silicon, the fin having the longest dimension in a first direction. An isolation structure is over an upper portion of the fin, the isolation structure having a center in the first direction. A first gate structure is over the upper portion of the fin, the first gate structure having the longest dimension in a second direction orthogonal to the first direction. The center of the first gate structure is spaced apart from the center of the isolation structure by a pitch in the first direction. A second gate structure is over the upper portion of the fin, the second gate structure having the longest dimension in the second direction. The center of the second gate structure is spaced apart from the center of the first gate structure by the pitch in the first direction. A third gate structure is over the upper portion of the fin, on a side of the isolation structure opposite the first and second gate structures, the third gate structure having the longest dimension in the second direction. The center of the third gate structure is spaced apart from the center of the isolation structure by the pitch in the first direction.

In one embodiment, each of the first gate structure, the second gate structure, and the third gate structure includes a gate electrode on and between sidewalls of a high-k gate dielectric layer. In one such embodiment, each of the first gate structure, the second gate structure, and the third gate structure further includes an insulating cap on the gate electrode and on the sidewalls of the high-k gate dielectric layer.

In one embodiment, a first epitaxial semiconductor region is on the upper portion of the fin between the first gate structure and the isolation structure. A second epitaxial semiconductor region is on the upper portion of the fin between the first gate structure and the second gate structure. A third epitaxial semiconductor region is on the upper portion of the fin between the third gate structure and the isolation structure. In one such embodiment, the first, second, and third epitaxial semiconductor regions include silicon and germanium. In another such embodiment, the first, second, and third epitaxial semiconductor regions include silicon.

Referring collectively to FIGS. 16A, 25A, 25B, and 26A-26C, in accordance with another embodiment of the present disclosure, an integrated circuit structure includes a shallow trench isolation (STI) structure between a pair of semiconductor fins, the STI structure having the longest dimension in a first direction. An isolation structure is on the STI structure, the isolation structure having a center in the first direction. A first gate structure is on the STI structure, the first gate structure having the longest dimension in a second direction orthogonal to the first direction. The center of the first gate structure is spaced apart from the center of the isolation structure by a pitch in the first direction. A second gate structure is on the STI structure, the second gate structure having the longest dimension in the second direction. The center of the second gate structure is spaced apart from the center of the first gate structure by the pitch in the first direction. A third gate structure is on the STI structure, on a side of the isolation structure opposite the first and second gate structures, the third gate structure having the longest dimension in the second direction.
The center of the third gate structure is spaced apart from the center of the isolation structure by the pitch in the first direction.

In one embodiment, each of the first gate structure, the second gate structure, and the third gate structure includes a gate electrode on and between sidewalls of a high-k gate dielectric layer. In one such embodiment, each of the first gate structure, the second gate structure, and the third gate structure further includes an insulating cap on the gate electrode and on the sidewalls of the high-k gate dielectric layer. In one embodiment, the pair of semiconductor fins is a pair of silicon fins.

In another aspect, whether for a multi-cut together with an FTI partial fin cut or for a multi-cut only, the insulating structure or dielectric plug filling the cut location can extend laterally into, or even beyond, the dielectric spacers corresponding to the cut gate line.

In a first example, in which the shape of the trench contacts is not affected by the multi-cut dielectric plug, FIG. 27A illustrates a plan view and corresponding cross-sectional views of an integrated circuit structure having a gate line cut filled by a dielectric plug that extends into the dielectric spacers of the gate line, in accordance with an embodiment of the present disclosure.

Referring to FIG. 27A, integrated circuit structure 2700A includes a first silicon fin 2702 having the longest dimension along a first direction 2703. A second silicon fin 2704 has the longest dimension along the first direction 2703. An insulator material 2706 is between the first silicon fin 2702 and the second silicon fin 2704. A gate line 2708 is over the first silicon fin 2702 and over the second silicon fin 2704 along a second direction 2709, the second direction 2709 orthogonal to the first direction 2703. Gate line 2708 has a first side 2708A and a second side 2708B, and has a first end 2708C and a second end 2708D. Gate line 2708 has a discontinuity 2710 over the insulator material 2706 between the first end 2708C and the second end 2708D of the gate line 2708. The discontinuity 2710 is filled by a dielectric plug 2712.

A trench contact 2714 is over the first silicon fin 2702 and over the second silicon fin 2704 along the second direction 2709, at the first side 2708A of the gate line 2708. The trench contact 2714 is continuous over the insulator material 2706 at a location 2715 laterally adjacent to the dielectric plug 2712. A dielectric spacer 2716 is laterally between the trench contact 2714 and the first side 2708A of the gate line 2708. The dielectric spacer 2716 is continuous along the first side 2708A of the gate line 2708 and the dielectric plug 2712. The dielectric spacer 2716 has a width (W2) laterally adjacent to the dielectric plug 2712 that is thinner than its width (W1) laterally adjacent to the first side 2708A of the gate line 2708.

In one embodiment, a second trench contact 2718 is over the first silicon fin 2702 and over the second silicon fin 2704 along the second direction 2709, at the second side 2708B of the gate line 2708. The second trench contact 2718 is continuous over the insulator material 2706 at a location 2719 laterally adjacent to the dielectric plug 2712. In one such embodiment, a second dielectric spacer 2720 is laterally between the second trench contact 2718 and the second side 2708B of the gate line 2708. The second dielectric spacer 2720 is continuous along the second side 2708B of the gate line 2708 and the dielectric plug 2712.
The second dielectric spacer 2720 has a width laterally adjacent to the dielectric plug 2712 that is thinner than its width laterally adjacent to the second side 2708B of the gate line 2708.

In one embodiment, gate line 2708 includes a high-k gate dielectric layer 2722, a gate electrode 2724, and a dielectric cap layer 2726. In one embodiment, dielectric plug 2712 includes the same material as dielectric spacer 2716 but is distinct from dielectric spacer 2716. In another embodiment, dielectric plug 2712 includes a different material than dielectric spacer 2716.

In a second example, in which the shape of the trench contacts is affected by the multi-cut dielectric plug, FIG. 27B illustrates a plan view and corresponding cross-sectional views of an integrated circuit structure having a gate line cut filled by a dielectric plug that extends beyond the dielectric spacers of the gate line, in accordance with another embodiment of the present disclosure.

Referring to FIG. 27B, integrated circuit structure 2700B includes a first silicon fin 2752 having the longest dimension along a first direction 2753. A second silicon fin 2754 has the longest dimension along the first direction 2753. An insulator material 2756 is between the first silicon fin 2752 and the second silicon fin 2754. A gate line 2758 is over the first silicon fin 2752 and the second silicon fin 2754 along a second direction 2759, the second direction 2759 orthogonal to the first direction 2753. Gate line 2758 has a first side 2758A and a second side 2758B, and has a first end 2758C and a second end 2758D. Gate line 2758 has a discontinuity 2760 over the insulator material 2756 between the first end 2758C and the second end 2758D of the gate line 2758. The discontinuity 2760 is filled by a dielectric plug 2762.

A trench contact 2764 is over the first silicon fin 2752 and the second silicon fin 2754 along the second direction 2759, at the first side 2758A of the gate line 2758. The trench contact 2764 is continuous over the insulator material 2756 at a location 2765 laterally adjacent to the dielectric plug 2762. A dielectric spacer 2766 is laterally between the trench contact 2764 and the first side 2758A of the gate line 2758. The dielectric spacer 2766 is along the first side 2758A of the gate line 2758 but not along the dielectric plug 2762, resulting in a discontinuous dielectric spacer 2766. The trench contact 2764 has a width (W1) laterally adjacent to the dielectric plug 2762 that is thinner than its width (W2) laterally adjacent to the dielectric spacer 2766.

In one embodiment, a second trench contact 2768 is over the first silicon fin 2752 and over the second silicon fin 2754 along the second direction 2759, at the second side 2758B of the gate line 2758. The second trench contact 2768 is continuous over the insulator material 2756 at a location 2769 laterally adjacent to the dielectric plug 2762. In one such embodiment, a second dielectric spacer 2770 is laterally between the second trench contact 2768 and the second side 2758B of the gate line 2758. The second dielectric spacer 2770 is along the second side 2758B of the gate line 2758 but not along the dielectric plug 2762, resulting in a discontinuous dielectric spacer 2770. The second trench contact 2768 has a width laterally adjacent to the dielectric plug 2762 that is thinner than its width laterally adjacent to the second dielectric spacer 2770.

In one embodiment, the gate line 2758 includes a high-k gate dielectric layer 2772, a gate electrode 2774, and a dielectric cap layer 2776.
In one embodiment, dielectric plug 2762 includes the same material as dielectric spacer 2766 but is distinct from dielectric spacer 2766. In another embodiment, dielectric plug 2762 includes a different material than dielectric spacer 2766.

In a third example, in which the dielectric plug at the multi-cut location tapers from the top of the plug to the bottom of the plug, FIGS. 28A-28F illustrate cross-sectional views of various operations in a method of fabricating an integrated circuit structure having a gate line cut filled by a dielectric plug with an upper portion that extends beyond the dielectric spacers of the gate line and a lower portion that extends into the dielectric spacers of the gate line, in accordance with another embodiment of the present disclosure.

Referring to FIG. 28A, a plurality of gate lines 2802 is formed over a structure 2804, such as over a trench isolation structure between semiconductor fins. In one embodiment, each of the gate lines 2802 is a sacrificial or dummy gate line, for example, having a dummy gate electrode 2806 and a dielectric cap 2808. Such portions of the sacrificial or dummy gate lines can be replaced later in a replacement gate process, for example, after the dielectric plug described below is formed. Dielectric spacers 2810 are along the sidewalls of the gate lines 2802. A dielectric material 2812, such as an interlayer dielectric, is between the gate lines 2802. A mask 2814 is formed and lithographically patterned to expose a portion of one of the gate lines 2802.

Referring to FIG. 28B, the center gate line 2802 is removed using an etch process while the mask 2814 is in place. The mask 2814 is then removed. In an embodiment, the etch process erodes portions of the dielectric spacers 2810 of the removed gate line 2802, forming reduced dielectric spacers 2816. In addition, an upper portion of the dielectric material 2812 exposed by the mask 2814 is etched during the etch process, forming an eroded dielectric material portion 2818. In a particular embodiment, residual dummy gate material 2820, such as residual polysilicon, remains in the structure as an artifact of an incomplete etch process.

Referring to FIG. 28C, a hard mask 2822 is formed over the structure of FIG. 28B. The hard mask 2822 can be conformal with the upper portion of the structure of FIG. 28B, and in particular with the eroded dielectric material portion 2818.

Referring to FIG. 28D, the residual dummy gate material 2820 is removed, e.g., using an etch process that may be chemically similar to the etch process used to remove the center gate line of gate lines 2802. In an embodiment, the hard mask 2822 protects the eroded dielectric material portion 2818 from further erosion during removal of the residual dummy gate material 2820.

Referring to FIG. 28E, the hard mask 2822 is removed. In one embodiment, the hard mask 2822 is removed without eroding, or without substantially eroding, the eroded dielectric material portion 2818.

Referring to FIG. 28F, a dielectric plug 2830 is formed in the opening of the structure of FIG. 28E. An upper portion of the dielectric plug 2830 is over the eroded dielectric material portion 2818 and, for example, effectively extends beyond the initial spacers 2810. A lower portion of the dielectric plug 2830 is adjacent to the reduced dielectric spacers 2816 and, for example, effectively extends into but not beyond the initial spacers 2810. As a result, the dielectric plug 2830 has a tapered profile, as shown in FIG. 28F.
It will be appreciated that the dielectric plug 2830 can be fabricated using the materials and processes described above for the other multi-cut or FTI plugs or fin end stressor structures.

In another aspect, a portion of a placeholder or dummy gate structure can be maintained over a trench isolation region beneath a permanent gate structure, as a protective structure that prevents the trench isolation region from being eroded during the replacement gate process. For example, FIGS. 29A-29C illustrate a plan view and corresponding cross-sectional views of an integrated circuit structure having residual dummy gate material at a portion of the bottom of a permanent gate stack, in accordance with an embodiment of the present disclosure.

Referring to FIGS. 29A-29C, the integrated circuit structure includes a fin 2902, such as a silicon fin, protruding from a semiconductor substrate 2904. Fin 2902 has a lower fin portion 2902B and an upper fin portion 2902A. The upper fin portion 2902A has a top 2902C and sidewalls 2902D. An isolation structure 2906 surrounds the lower fin portion 2902B. The isolation structure 2906 includes an insulating material 2906C having a top surface 2907. A semiconductor material 2908 is on a portion of the top surface 2907 of the insulating material 2906C. The semiconductor material 2908 is separate from the fin 2902.

A gate dielectric layer 2910 is over the top 2902C of the upper fin portion 2902A and laterally adjacent to the sidewalls 2902D of the upper fin portion 2902A. The gate dielectric layer 2910 is further over the semiconductor material 2908 on the portion of the top surface 2907 of the insulating material 2906C. An intervening additional gate dielectric layer 2911, such as an oxidized portion of fin 2902, may be between the gate dielectric layer 2910 and the top 2902C of the upper fin portion 2902A and laterally adjacent to the sidewalls 2902D of the upper fin portion 2902A. A gate electrode 2912 is over the gate dielectric layer 2910, over the top 2902C of the upper fin portion 2902A and laterally adjacent to the sidewalls 2902D of the upper fin portion 2902A. The gate electrode 2912 is further over the gate dielectric layer 2910 on the semiconductor material 2908 on the portion of the top surface 2907 of the insulating material 2906C. A first source or drain region 2916 is adjacent to a first side of the gate electrode 2912, and a second source or drain region 2918 is adjacent to a second side of the gate electrode 2912, the second side opposite the first side. In the exemplary embodiment shown, the isolation structure 2906 includes a first insulating layer 2906A, a second insulating layer 2906B, and the insulating material 2906C.

In one embodiment, the semiconductor material 2908 on the portion of the top surface 2907 of the insulating material 2906C is or includes polysilicon. In one embodiment, the top surface 2907 of the insulating material 2906C has a concave depression, and, as shown, the semiconductor material 2908 is recessed in the concave depression. In one embodiment, the isolation structure 2906 includes a second insulating material (2906A or 2906B, or both 2906A/2906B) along the bottom and sidewalls of the insulating material 2906C. In one such embodiment, a portion of the second insulating material (2906A or 2906B, or both 2906A/2906B) along the sidewalls of the insulating material 2906C has a top surface above the uppermost surface of the insulating material 2906C, as shown.
In one embodiment, the top surface of the second insulating material (2906A or 2906B, or both 2906A/2906B) is above or coplanar with the uppermost surface of the semiconductor material 2908.

In one embodiment, the semiconductor material 2908 on the portion of the top surface 2907 of the insulating material 2906C does not extend beyond the gate dielectric layer 2910. That is, viewed from a plan-view perspective, the location of the semiconductor material 2908 is confined to the area covered by the gate stack 2912/2910. In one embodiment, a first dielectric spacer 2920 is along the first side of the gate electrode 2912, and a second dielectric spacer 2922 is along the second side of the gate electrode 2912. In one such embodiment, the gate dielectric layer 2910 further extends along sidewalls of the first dielectric spacer 2920 and the second dielectric spacer 2922, as shown in FIG. 29B.

In one embodiment, gate electrode 2912 includes a conformal conductive layer 2912A (e.g., a work function layer). In one such embodiment, the work function layer 2912A includes titanium and nitrogen. In another embodiment, the work function layer 2912A includes titanium, aluminum, carbon, and nitrogen. In one embodiment, gate electrode 2912 further includes a conductive fill metal layer 2912B over the work function layer 2912A. In one such embodiment, the conductive fill metal layer 2912B includes tungsten. In a particular embodiment, the conductive fill metal layer 2912B includes 95 or greater atomic percent tungsten and 0.1 to 2 atomic percent fluorine. In one embodiment, an insulating cap 2924 is on the gate electrode 2912 and may extend over the gate dielectric layer 2910, as shown in FIG. 29B.

FIGS. 30A-30D illustrate cross-sectional views of various operations in a method of fabricating an integrated circuit structure having residual dummy gate material at a portion of the bottom of a permanent gate stack, in accordance with another embodiment of the present disclosure. The views are taken along a portion of the a-a' axis of the structure of FIG. 29C.

Referring to FIG. 30A, a method of fabricating an integrated circuit structure includes forming a fin 3000 from a semiconductor substrate 3002. The fin 3000 has a lower fin portion 3000A and an upper fin portion 3000B. The upper fin portion 3000B has a top 3000C and sidewalls 3000D. An isolation structure 3004 surrounds the lower fin portion 3000A. The isolation structure 3004 includes an insulating material 3004C having a top surface 3005. A placeholder gate electrode 3006 is over the top 3000C of the upper fin portion 3000B and laterally adjacent to the sidewalls 3000D of the upper fin portion 3000B. The placeholder gate electrode 3006 includes a semiconductor material.

Although not shown from the perspective of FIG. 30A (the locations are shown in FIG. 29C), a first source or drain region may be formed adjacent to a first side of the placeholder gate electrode 3006, and a second source or drain region may be formed adjacent to a second side of the placeholder gate electrode 3006, the second side opposite the first side. Additionally, gate dielectric spacers may be formed along the sidewalls of the placeholder gate electrode 3006, and an interlayer dielectric (ILD) layer may be formed laterally adjacent to the placeholder gate electrode 3006.

In one embodiment, the placeholder gate electrode 3006 is or includes polysilicon.
In one embodiment, the top surface 3005 of the insulating material 3004C of the isolation structure 3004 has a concave depression, as shown, and a portion of the placeholder gate electrode 3006 is recessed in the concave depression. In one embodiment, the isolation structure 3004 includes a second insulating material (3004A or 3004B, or both 3004A/3004B) along the bottom and sidewalls of the insulating material 3004C, as shown. In one such embodiment, the second insulating material (3004A or 3004B, or both 3004A/3004B) has, along a portion of the sidewalls of the insulating material 3004C, a top surface above at least a portion of the top surface 3005 of the insulating material 3004C. In one embodiment, the top surface of the second insulating material (3004A or 3004B, or both 3004A/3004B) is above the lowest surface of the portion of the placeholder gate electrode 3006.

Referring to FIG. 30B, the placeholder gate electrode 3006 is etched away from the top 3000C and the sidewalls 3000D of the upper fin portion 3000B, for example, along direction 3008 of FIG. 30A. The etch process can be referred to as a replacement gate process. In an embodiment, the etch of the replacement gate process is not carried to completion, and a portion 3012 of the placeholder gate electrode 3006 is left on at least a portion of the top surface 3005 of the insulating material 3004C of the isolation structure 3004.

Referring to FIGS. 30A and 30B, in an embodiment, an oxidized portion 3010 of the upper fin portion 3000B, formed prior to formation of the placeholder gate electrode 3006, is retained during the etch process, as shown. However, in another embodiment, a placeholder gate dielectric layer is formed prior to formation of the placeholder gate electrode 3006, and the placeholder gate dielectric layer is removed after etching the placeholder gate electrode.

Referring to FIG. 30C, a gate dielectric layer 3014 is formed over the top 3000C of the upper fin portion 3000B and laterally adjacent to the sidewalls 3000D of the upper fin portion 3000B. In one embodiment, the gate dielectric layer 3014 is formed on the oxidized portion 3010 of the upper fin portion 3000B, over the top 3000C of the upper fin portion 3000B and laterally adjacent to the sidewalls 3000D of the upper fin portion 3000B, as shown. In another embodiment, in the case where the oxidized portion 3010 of the upper fin portion 3000B is removed after etching the placeholder gate electrode, the gate dielectric layer 3014 is formed directly on the upper fin portion 3000B, over the top 3000C of the upper fin portion 3000B and laterally adjacent to the sidewalls 3000D of the upper fin portion 3000B. In either case, in an embodiment, the gate dielectric layer 3014 is further formed over the portion 3012 of the placeholder gate electrode 3006 on the portion of the top surface 3005 of the insulating material 3004C of the isolation structure 3004.

Referring to FIG. 30D, a permanent gate electrode 3016 is formed over the gate dielectric layer 3014, over the top 3000C of the upper fin portion 3000B and laterally adjacent to the sidewalls 3000D of the upper fin portion 3000B. The permanent gate electrode 3016 is further over the gate dielectric layer 3014 on the portion 3012 of the placeholder gate electrode 3006 over the portion of the top surface 3005 of the insulating material 3004C.

In one embodiment, forming the permanent gate electrode 3016 includes forming a work function layer 3016A. In one such embodiment, the work function layer 3016A includes titanium and nitrogen.
In another such embodiment, the work function layer 3016A includes titanium, aluminum, carbon, and nitrogen. In one embodiment, forming the permanent gate electrode 3016 further includes forming a conductive fill metal layer 3016B over the work function layer 3016A. In one such embodiment, forming the conductive fill metal layer 3016B includes forming a tungsten-containing film by atomic layer deposition (ALD) using a tungsten hexafluoride (WF6) precursor. In an embodiment, an insulating gate cap layer 3018 is formed over the permanent gate electrode 3016.
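As an illustrative aside, the number of ALD cycles needed for such a fill can be budgeted from an assumed growth per cycle. The growth-per-cycle and target thickness below are hypothetical placeholders, not parameters of this disclosure.

# Illustrative ALD budgeting sketch (hypothetical numbers): estimate the
# number of WF6-based ALD cycles needed for a target tungsten thickness.
import math

target_thickness_nm = 15.0   # desired conductive fill thickness (hypothetical)
growth_per_cycle_nm = 0.05   # assumed ALD growth per cycle (hypothetical)

cycles = math.ceil(target_thickness_nm / growth_per_cycle_nm)
print(f"~{cycles} ALD cycles for {target_thickness_nm} nm of W "
      f"at {growth_per_cycle_nm} nm/cycle")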
In another aspect, some embodiments of the present disclosure include an amorphous high-k layer in the gate dielectric structure beneath the gate electrode. In other embodiments, a partially or fully crystalline high-k layer is included in the gate dielectric structure. In one embodiment including a partially or fully crystalline high-k layer, the gate dielectric structure is a ferroelectric (FE) gate dielectric structure. In another embodiment including a partially or fully crystalline high-k layer, the gate dielectric structure is an antiferroelectric (AFE) gate dielectric structure.

In an embodiment, described herein are approaches in which ferroelectric or antiferroelectric gate oxides are used to increase the charge in the channel of a device and to improve its subthreshold behavior. Ferroelectric and antiferroelectric gate oxides increase the channel charge to achieve higher currents and also enable steeper turn-on behavior.

To provide context, ferroelectric and antiferroelectric (FE or AFE) materials based on hafnium or zirconium (Hf or Zr) can typically be made much thinner than ferroelectric materials such as lead zirconate titanate (PZT), making them compatible with highly scaled logic technology. FE or AFE materials have two features that improve the performance of logic transistors: (1) higher charge in the channel, achieved by FE or AFE polarization, and (2) steeper turn-on behavior, due to sharp FE or AFE transitions. Such properties can improve transistor performance by increasing current and reducing subthreshold swing (SS).

FIG. 31A shows a cross-sectional view of a semiconductor device having a ferroelectric or antiferroelectric gate dielectric structure, in accordance with an embodiment of the present disclosure.

Referring to FIG. 31A, integrated circuit structure 3100 includes a gate structure 3102 over a substrate 3104. In one embodiment, the gate structure 3102 is on or over a semiconductor channel structure 3106 that includes a single crystalline material, such as single crystalline silicon. Gate structure 3102 includes a gate dielectric over the semiconductor channel structure 3106 and a gate electrode over the gate dielectric. The gate dielectric includes a ferroelectric or antiferroelectric polycrystalline material layer 3102A. The gate electrode has a conductive layer 3102B on the ferroelectric or antiferroelectric polycrystalline material layer 3102A. The conductive layer 3102B includes a metal and may be a barrier layer, a work function layer, or a templating layer that enhances the crystallinity of the FE or AFE layer. One or more gate fill layers 3102C are on or over the conductive layer 3102B. A source region 3108 and a drain region 3110 are on opposite sides of the gate structure 3102.

A source or drain contact 3112 is electrically coupled to the source region 3108 and the drain region 3110 at locations 3149 and is spaced apart from the gate structure 3102 by an interlayer dielectric layer 3114 or gate dielectric spacers 3116. In the example of FIG. 31A, the source region 3108 and the drain region 3110 are regions of the substrate 3104. In an embodiment, the source or drain contact 3112 includes a barrier layer 3112A and a conductive trench fill material 3112B. In one embodiment, the ferroelectric or antiferroelectric polycrystalline material layer 3102A extends along the dielectric spacers 3116, as shown in FIG. 31A.

In an embodiment, and throughout the present disclosure, the ferroelectric or antiferroelectric polycrystalline material layer 3102A is a ferroelectric polycrystalline material layer. In one embodiment, the ferroelectric polycrystalline material layer is an oxide including Zr and Hf with a Zr:Hf ratio of 50:50, or with more Zr. The ferroelectric effect can increase as the orthorhombic crystallinity increases. In one embodiment, the ferroelectric polycrystalline material layer has at least 80% orthorhombic crystallinity.

In an embodiment, and throughout the present disclosure, the ferroelectric or antiferroelectric polycrystalline material layer 3102A is an antiferroelectric polycrystalline material layer. In one embodiment, the antiferroelectric polycrystalline material layer is an oxide including Zr and Hf with a Zr:Hf ratio of 80:20, or with more Zr, even up to 100% Zr, i.e., ZrO2. In one embodiment, the antiferroelectric polycrystalline material layer has at least 80% tetragonal crystallinity.

In an embodiment, and throughout the present disclosure, the gate dielectric of the gate structure 3102 further includes an amorphous dielectric layer 3103 between the ferroelectric or antiferroelectric polycrystalline material layer 3102A and the semiconductor channel structure 3106, for example, a native silicon oxide layer, a high-k dielectric (HfOx, Al2O3, etc.), or a combination of an oxide and a high-k dielectric. In an embodiment, and throughout the present disclosure, the ferroelectric or antiferroelectric polycrystalline material layer 3102A has a thickness in the range of 1 nanometer to 8 nanometers. In an embodiment, and throughout the present disclosure, the ferroelectric or antiferroelectric polycrystalline material layer 3102A has a grain size of substantially 20 nanometers or greater.

In an embodiment, after depositing the ferroelectric or antiferroelectric polycrystalline material layer 3102A, e.g., by atomic layer deposition (ALD), a metal-containing layer (e.g., layer 3102B, such as 5-10 nm of titanium nitride, tantalum nitride, or tungsten) is formed on the ferroelectric or antiferroelectric polycrystalline material layer 3102A. An anneal is then performed. In one embodiment, the anneal is performed for a duration in the range of 1 millisecond to 30 minutes. In one embodiment, the anneal is performed at a temperature in the range of 500-1100 degrees Celsius.
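The composition, crystallinity, and thickness criteria above can be summarized in a small classifier. The sketch below merely restates the ranges given in this section; the sample film values are hypothetical.

# Illustrative classifier restating the FE/AFE criteria described above:
# FE: Zr:Hf of 50:50 or more Zr, with >= 80% orthorhombic crystallinity;
# AFE: Zr:Hf of 80:20 or more Zr (up to pure ZrO2), with >= 80% tetragonal
# crystallinity; layer thickness in the 1-8 nm range in either case.

def classify_film(zr_fraction, ortho_pct, tetra_pct, thickness_nm):
    """zr_fraction is Zr/(Zr+Hf); percentages are crystallinity fractions."""
    if not 1.0 <= thickness_nm <= 8.0:
        return "outside the described thickness range"
    if zr_fraction >= 0.8 and tetra_pct >= 80.0:
        return "antiferroelectric (AFE) candidate"
    if zr_fraction >= 0.5 and ortho_pct >= 80.0:
        return "ferroelectric (FE) candidate"
    return "neither FE nor AFE per the described criteria"

# Hypothetical sample films.
print(classify_film(zr_fraction=0.5, ortho_pct=85.0, tetra_pct=5.0, thickness_nm=3.0))
print(classify_film(zr_fraction=1.0, ortho_pct=0.0, tetra_pct=90.0, thickness_nm=4.0))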
FIG. 31B shows a cross-sectional view of another semiconductor device having a ferroelectric or antiferroelectric gate dielectric structure, in accordance with another embodiment of the present disclosure.

Referring to FIG. 31B, integrated circuit structure 3150 includes a gate structure 3152 over a substrate 3154. In one embodiment, the gate structure 3152 is on or over a semiconductor channel structure 3156 that includes a single crystalline material, such as single crystalline silicon. Gate structure 3152 includes a gate dielectric over the semiconductor channel structure 3156 and a gate electrode over the gate dielectric. The gate dielectric includes a ferroelectric or antiferroelectric polycrystalline material layer 3152A, and may also include an amorphous oxide layer 3153. The gate electrode has a conductive layer 3152B on the ferroelectric or antiferroelectric polycrystalline material layer 3152A. The conductive layer 3152B includes a metal and may be a barrier layer or a work function layer. One or more gate fill layers 3152C are on or over the conductive layer 3152B. An elevated source region 3158 and an elevated drain region 3160 (e.g., regions of a semiconductor material different from the semiconductor channel structure 3156) are on opposite sides of the gate structure 3152. A source or drain contact 3162 is electrically coupled to the source region 3158 and the drain region 3160 at locations 3199 and is spaced apart from the gate structure 3152 by either or both of an interlayer dielectric layer 3164 and gate dielectric spacers 3166. In an embodiment, the source or drain contact 3162 includes a barrier layer 3162A and a conductive trench fill material 3162B. In one embodiment, the ferroelectric or antiferroelectric polycrystalline material layer 3152A extends along the dielectric spacers 3166, as shown in FIG. 31B.

FIG. 32A shows a plan view of a plurality of gate lines over a pair of semiconductor fins, in accordance with another embodiment of the present disclosure.

Referring to FIG. 32A, a plurality of active gate lines 3204 is formed over a plurality of semiconductor fins 3200. Dummy gate lines 3206 are at the ends of the plurality of semiconductor fins 3200. The spacings 3208 between the gate lines 3204/3206 are locations where trench contacts can be positioned to provide conductive contact to source or drain regions (e.g., source or drain regions 3251, 3252, 3253, and 3254). In an embodiment, the pattern of the plurality of gate lines 3204/3206, or the pattern of the plurality of semiconductor fins 3200, is described as a grid structure. In one embodiment, the grid-like pattern includes the plurality of gate lines 3204/3206 spaced at a constant pitch and having a constant width, or the plurality of semiconductor fins 3200 spaced at a constant pitch and having a constant width, or both.

FIG. 32B illustrates a cross-sectional view taken along the a-a' axis of FIG. 32A, in accordance with an embodiment of the present disclosure.

Referring to FIG. 32B, a plurality of active gate lines 3264 is formed over a semiconductor fin 3262 formed over a substrate 3260. Dummy gate lines 3266 are at the ends of the semiconductor fin 3262. A dielectric layer 3270 is outside of the dummy gate lines 3266. A trench contact material 3297 is between the active gate lines 3264, and between the dummy gate lines 3266 and the active gate lines 3264. Embedded source or drain structures 3268 are in the semiconductor fin 3262 between the active gate lines 3264, and between the dummy gate lines 3266 and the active gate lines 3264.

The active gate lines 3264 include a gate dielectric structure 3272, a work function gate electrode portion 3274 and a fill gate electrode portion 3276, and a dielectric cap layer 3278. Dielectric spacers 3280 are along the sidewalls of the active gate lines 3264 and the dummy gate lines 3266. In an embodiment, the gate dielectric structure 3272 includes a ferroelectric or antiferroelectric polycrystalline material layer 3298.
In one embodiment, the gate dielectric structure 3272 further includes an amorphous oxide layer 3299.

In another aspect, devices of the same conductivity type (e.g., N-type or P-type) can have differentiated gate electrode stacks for that same conductivity type. For comparison purposes, however, devices of the same conductivity type may instead have differentiated voltage thresholds (VT) based on modulated doping.

FIG. 33A shows a cross-sectional view of an NMOS device pair having differentiated voltage thresholds based on modulated doping, and of a PMOS device pair having differentiated voltage thresholds based on modulated doping, in accordance with an embodiment of the present disclosure.

Referring to FIG. 33A, a first NMOS device 3302 is adjacent to a second NMOS device 3304 over a semiconductor active region 3300 (e.g., over a silicon fin or substrate). The first NMOS device 3302 and the second NMOS device 3304 each include a gate dielectric layer 3306, a first gate electrode conductive layer 3308 (e.g., a work function layer), and a gate electrode conductive fill 3310. In an embodiment, the first gate electrode conductive layers 3308 of the first NMOS device 3302 and the second NMOS device 3304 have the same material and the same thickness, and thus the same work function. However, the first NMOS device 3302 has a lower VT than the second NMOS device 3304. In one such embodiment, the first NMOS device 3302 is referred to as a "standard VT" device, and the second NMOS device 3304 is referred to as a "high VT" device. In an embodiment, the differentiated VT is achieved by using modulated doping, or differential implant doping, at regions 3312 of the first NMOS device 3302 and the second NMOS device 3304.

Referring again to FIG. 33A, a first PMOS device 3322 is adjacent to a second PMOS device 3324 over a semiconductor active region 3320 (e.g., over a silicon fin or substrate). The first PMOS device 3322 and the second PMOS device 3324 each include a gate dielectric layer 3326, a first gate electrode conductive layer 3328 (e.g., a work function layer), and a gate electrode conductive fill 3330. In an embodiment, the first gate electrode conductive layers 3328 of the first PMOS device 3322 and the second PMOS device 3324 have the same material and the same thickness, and thus the same work function. However, the first PMOS device 3322 has a higher VT than the second PMOS device 3324. In one such embodiment, the first PMOS device 3322 is referred to as a "standard VT" device, and the second PMOS device 3324 is referred to as a "low VT" device. In an embodiment, the differentiated VT is achieved by using modulated doping, or differential implant doping, at regions 3332 of the first PMOS device 3322 and the second PMOS device 3324.
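A rough sense of how channel doping shifts VT can be taken from the textbook long-channel MOSFET threshold expression. The sketch below uses that standard relation with hypothetical doping levels and oxide thickness; it is not a model supplied by this disclosure.

# Illustrative textbook estimate: long-channel NMOS threshold shift from
# channel doping alone (flat-band voltage held fixed). Hypothetical values.
import math

Q = 1.602e-19              # C, elementary charge
EPS_SI = 11.7 * 8.854e-12  # F/m, silicon permittivity
EPS_OX = 3.9 * 8.854e-12   # F/m, oxide permittivity
KT_Q = 0.0259              # V, thermal voltage at room temperature
NI = 1.5e16                # 1/m^3, intrinsic carrier density of silicon

def vt_doping_term(na_per_m3: float, tox_m: float) -> float:
    """2*phi_F plus the depletion-charge term of the standard VT expression."""
    phi_f = KT_Q * math.log(na_per_m3 / NI)
    cox = EPS_OX / tox_m
    q_dep = math.sqrt(2.0 * Q * EPS_SI * na_per_m3 * 2.0 * phi_f)
    return 2.0 * phi_f + q_dep / cox

tox = 2e-9                            # 2 nm equivalent oxide thickness (hypothetical)
std_vt = vt_doping_term(1e23, tox)    # 1e17 cm^-3 channel doping (hypothetical)
high_vt = vt_doping_term(5e23, tox)   # 5e17 cm^-3 channel doping (hypothetical)
print(f"VT shift from heavier channel doping: {high_vt - std_vt:+.3f} V")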
FIG. 33B illustrates a cross-sectional view of an NMOS device pair having differentiated voltage thresholds based on differentiated gate electrode structures, and of a PMOS device pair having differentiated voltage thresholds based on differentiated gate electrode structures, in accordance with another embodiment of the present disclosure.

Referring to FIG. 33B, a first NMOS device 3352 is adjacent to a second NMOS device 3354 over a semiconductor active region 3350 (e.g., over a silicon fin or substrate). Both the first NMOS device 3352 and the second NMOS device 3354 include a gate dielectric layer 3356. However, the first NMOS device 3352 and the second NMOS device 3354 have structurally different gate electrode stacks. In particular, the first NMOS device 3352 includes a first gate electrode conductive layer 3358, such as a first work function layer, and a gate electrode conductive fill 3360. The second NMOS device 3354 includes a second gate electrode conductive layer 3359, such as a second work function layer, the first gate electrode conductive layer 3358, and the gate electrode conductive fill 3360. The first NMOS device 3352 has a lower VT than the second NMOS device 3354. In one such embodiment, the first NMOS device 3352 is referred to as a "standard VT" device, and the second NMOS device 3354 is referred to as a "high VT" device. In an embodiment, the differentiated VT is achieved by using differentiated gate stacks for devices of the same conductivity type.

Referring again to FIG. 33B, a first PMOS device 3372 is adjacent to a second PMOS device 3374 over a semiconductor active region 3370 (e.g., over a silicon fin or substrate). Both the first PMOS device 3372 and the second PMOS device 3374 include a gate dielectric layer 3376. However, the first PMOS device 3372 and the second PMOS device 3374 have structurally different gate electrode stacks. In particular, the first PMOS device 3372 includes a gate electrode conductive layer 3378A (e.g., a work function layer) having a first thickness, and a gate electrode conductive fill 3380. The second PMOS device 3374 includes a gate electrode conductive layer 3378B having a second thickness, and the gate electrode conductive fill 3380. In one embodiment, the gate electrode conductive layer 3378A and the gate electrode conductive layer 3378B have the same composition, but the thickness (the second thickness) of the gate electrode conductive layer 3378B is greater than the thickness (the first thickness) of the gate electrode conductive layer 3378A. The first PMOS device 3372 has a higher VT than the second PMOS device 3374. In one such embodiment, the first PMOS device 3372 is referred to as a "standard VT" device, and the second PMOS device 3374 is referred to as a "low VT" device. In an embodiment, the differentiated VT is achieved by using differentiated gate stacks for devices of the same conductivity type.

Referring again to FIG. 33B, in accordance with an embodiment of the present disclosure, an integrated circuit structure includes a fin (e.g., a silicon fin, such as 3350). It will be appreciated that the fin has a top (as shown) and sidewalls (into and out of the page). A gate dielectric layer 3356 is over the top of the fin and laterally adjacent to the sidewalls of the fin. An N-type gate electrode of device 3354 is over the gate dielectric layer 3356, over the top of the fin and laterally adjacent to the sidewalls of the fin. The N-type gate electrode includes a P-type metal layer 3359 on the gate dielectric layer 3356 and an N-type metal layer 3358 on the P-type metal layer 3359. It will be appreciated that a first N-type source or drain region may be adjacent to a first side of the gate electrode (e.g., into the page), and a second N-type source or drain region may be adjacent to a second side of the gate electrode (e.g., out of the page), the second side opposite the first side.

In one embodiment, the P-type metal layer 3359 includes titanium and nitrogen, and the N-type metal layer 3358 includes titanium, aluminum, carbon, and nitrogen. In one embodiment, the P-type metal layer 3359 has a thickness in the range of 2-12 angstroms, and in a particular embodiment, the P-type metal layer 3359 has a thickness in the range of 2-4 angstroms.
In one embodiment, the N-type gate electrode further includes a conductive fill metal layer 3360 on the N-type metal layer 3358. In one such embodiment, the conductive fill metal layer 3360 includes tungsten. In a particular embodiment, the conductive fill metal layer 3360 includes 95 or greater atomic percent tungsten and 0.1 to 2 atomic percent fluorine.

Referring again to FIG. 33B, in accordance with another embodiment of the present disclosure, an integrated circuit structure includes a first N-type device 3352 having a voltage threshold (VT), the first N-type device 3352 having a first gate dielectric layer 3356 and a first N-type metal layer 3358 on the first gate dielectric layer 3356. The integrated circuit structure also includes a second N-type device 3354 having a voltage threshold (VT), the second N-type device 3354 having a second gate dielectric layer 3356, a P-type metal layer 3359 on the second gate dielectric layer 3356, and a second N-type metal layer 3358 on the P-type metal layer 3359.

In one embodiment, the VT of the second N-type device 3354 is higher than the VT of the first N-type device 3352. In one embodiment, the first N-type metal layer 3358 and the second N-type metal layer 3358 have the same composition. In one embodiment, the first N-type metal layer 3358 and the second N-type metal layer 3358 have the same thickness. In one embodiment, the N-type metal layers 3358 include titanium, aluminum, carbon, and nitrogen, and the P-type metal layer 3359 includes titanium and nitrogen.

Referring again to FIG. 33B, in accordance with another embodiment of the present disclosure, an integrated circuit structure includes a first P-type device 3372 having a voltage threshold (VT), the first P-type device 3372 having a first gate dielectric layer 3376 and a first P-type metal layer 3378A on the first gate dielectric layer 3376. The first P-type metal layer 3378A has a thickness. A second P-type device 3374 is also included and has a voltage threshold (VT). The second P-type device 3374 has a second gate dielectric layer 3376 and a second P-type metal layer 3378B on the second gate dielectric layer 3376. The second P-type metal layer 3378B has a thickness greater than the thickness of the first P-type metal layer 3378A.

In one embodiment, the VT of the second P-type device 3374 is lower than the VT of the first P-type device 3372. In one embodiment, the first P-type metal layer 3378A and the second P-type metal layer 3378B have the same composition. In one embodiment, both the first P-type metal layer 3378A and the second P-type metal layer 3378B include titanium and nitrogen. In one embodiment, the first P-type metal layer 3378A has a thickness less than the work function saturation thickness of the material of the first P-type metal layer 3378A. In one embodiment, although not shown, the second P-type metal layer 3378B includes a first metal film (e.g., from a second deposition) on a second metal film (e.g., from a first deposition), with a seam between the first metal film and the second metal film.
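The thickness dependence can be pictured with a toy saturating work-function model: below the saturation thickness, a thin P-type metal only partially imposes its work function on the stack. The functional form and numbers below are illustrative assumptions, not values taken from this disclosure.

# Toy model (illustrative assumptions): the effective work function of an
# N-metal stack with a thin P-type metal underlayer interpolates between
# the two work functions, saturating with P-layer thickness.
import math

WF_N = 4.2    # eV, assumed N-type metal work function
WF_P = 4.9    # eV, assumed P-type metal work function
T_SAT = 12.0  # angstroms, assumed work function saturation thickness

def effective_wf(p_thickness_a: float) -> float:
    """Exponential-saturation interpolation toward the P-metal work function."""
    weight = 1.0 - math.exp(-p_thickness_a / (T_SAT / 3.0))
    return WF_N + (WF_P - WF_N) * weight

for t in (0.0, 2.0, 4.0, 12.0):
    wf = effective_wf(t)
    # For NMOS, a higher effective work function raises VT by roughly the
    # same amount in volts (first-order band alignment argument).
    print(f"P-metal {t:4.1f} A: effective WF ~ {wf:.2f} eV "
          f"(VT shift ~ {wf - WF_N:+.2f} V vs. no P-layer)")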
Referring again to FIG. 33B, in accordance with another embodiment of the present disclosure, an integrated circuit structure includes a first N-type device 3352 having a first gate dielectric layer 3356 and a first N-type metal layer 3358 on the first gate dielectric layer 3356. A second N-type device 3354 has a second gate dielectric layer 3356, a first P-type metal layer 3359 on the second gate dielectric layer 3356, and a second N-type metal layer 3358 on the first P-type metal layer 3359. A first P-type device 3372 has a third gate dielectric layer 3376 and a second P-type metal layer 3378A on the third gate dielectric layer 3376. The second P-type metal layer 3378A has a thickness. A second P-type device 3374 has a fourth gate dielectric layer 3376 and a third P-type metal layer 3378B on the fourth gate dielectric layer 3376. The third P-type metal layer 3378B has a thickness greater than the thickness of the second P-type metal layer 3378A.

In one embodiment, the first N-type device 3352 has a voltage threshold (VT), the second N-type device 3354 has a voltage threshold (VT), and the VT of the second N-type device 3354 is higher than the VT of the first N-type device 3352. In one embodiment, the first P-type device 3372 has a voltage threshold (VT), the second P-type device 3374 has a voltage threshold (VT), and the VT of the second P-type device 3374 is lower than the VT of the first P-type device 3372. In one embodiment, the third P-type metal layer 3378B includes a first metal film on a second metal film, with a seam between the first metal film and the second metal film.

It will be appreciated that more than two VT types for devices of the same conductivity type may be included in the same structure (e.g., on the same die). In a first example, FIG. 34A illustrates a cross-sectional view of three NMOS devices having differentiated voltage thresholds based on differentiated gate electrode structures and modulated doping, and of three PMOS devices having differentiated voltage thresholds based on differentiated gate electrode structures and modulated doping, in accordance with an embodiment of the present disclosure.

Referring to FIG. 34A, a first NMOS device 3402 is adjacent to a second NMOS device 3404 and a third NMOS device 3403 over a semiconductor active region 3400 (e.g., over a silicon fin or substrate). The first NMOS device 3402, the second NMOS device 3404, and the third NMOS device 3403 each include a gate dielectric layer 3406. The first NMOS device 3402 and the third NMOS device 3403 have structurally identical or similar gate electrode stacks. However, the second NMOS device 3404 has a gate electrode stack that is structurally different from those of the first NMOS device 3402 and the third NMOS device 3403. In particular, the first NMOS device 3402 and the third NMOS device 3403 include a first gate electrode conductive layer 3408 (e.g., a first work function layer) and a gate electrode conductive fill 3410. The second NMOS device 3404 includes a second gate electrode conductive layer 3409 (e.g., a second work function layer), the first gate electrode conductive layer 3408, and the gate electrode conductive fill 3410. The first NMOS device 3402 has a lower VT than the second NMOS device 3404. In one such embodiment, the first NMOS device 3402 is referred to as a "standard VT" device, and the second NMOS device 3404 is referred to as a "high VT" device. In an embodiment, the differentiated VT is achieved by using differentiated gate stacks for devices of the same conductivity type.
In an embodiment, the third NMOS device 3403 has a VT different from the VTs of the first NMOS device 3402 and the second NMOS device 3404, even though the gate electrode structure of the third NMOS device 3403 is the same as the gate electrode structure of the first NMOS device 3402. In one embodiment, the VT of the third NMOS device 3403 is between the VT of the first NMOS device 3402 and the VT of the second NMOS device 3404. In an embodiment, the differentiated VT between the third NMOS device 3403 and the first NMOS device 3402 is achieved by using modulated doping, or differential implant doping, at region 3412 of the third NMOS device 3403. In one such embodiment, the third N-type device 3403 has a channel region with a dopant concentration different from the dopant concentration of the channel region of the first N-type device 3402.

Referring again to FIG. 34A, a first PMOS device 3422 is adjacent to a second PMOS device 3424 and a third PMOS device 3423 over a semiconductor active region 3420 (e.g., over a silicon fin or substrate). The first PMOS device 3422, the second PMOS device 3424, and the third PMOS device 3423 each include a gate dielectric layer 3426. The first PMOS device 3422 and the third PMOS device 3423 have structurally identical or similar gate electrode stacks. However, the second PMOS device 3424 has a gate electrode stack that is structurally different from those of the first PMOS device 3422 and the third PMOS device 3423. In particular, the first PMOS device 3422 and the third PMOS device 3423 include a gate electrode conductive layer 3428A (e.g., a work function layer) having a first thickness, and a gate electrode conductive fill 3430. The second PMOS device 3424 includes a gate electrode conductive layer 3428B having a second thickness, and the gate electrode conductive fill 3430. In one embodiment, the gate electrode conductive layer 3428A and the gate electrode conductive layer 3428B have the same composition, but the thickness (the second thickness) of the gate electrode conductive layer 3428B is greater than the thickness (the first thickness) of the gate electrode conductive layer 3428A. In an embodiment, the first PMOS device 3422 has a higher VT than the second PMOS device 3424. In one such embodiment, the first PMOS device 3422 is referred to as a "standard VT" device, and the second PMOS device 3424 is referred to as a "low VT" device. In an embodiment, the differentiated VT is achieved by using differentiated gate stacks for devices of the same conductivity type. In an embodiment, the third PMOS device 3423 has a VT different from the VTs of the first PMOS device 3422 and the second PMOS device 3424, even though the gate electrode structure of the third PMOS device 3423 is the same as the gate electrode structure of the first PMOS device 3422. In one embodiment, the VT of the third PMOS device 3423 is between the VT of the first PMOS device 3422 and the VT of the second PMOS device 3424. In an embodiment, the differentiated VT between the third PMOS device 3423 and the first PMOS device 3422 is achieved by using modulated doping, or differential implant doping, at region 3432 of the third PMOS device 3423. In one such embodiment, the third P-type device 3423 has a channel region with a dopant concentration different from the dopant concentration of the channel region of the first P-type device 3422.
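The three-flavor NMOS arrangement of FIG. 34A, combining gate-stack differentiation with channel-doping modulation, can be summarized as below. The VT values are hypothetical placeholders chosen only to show the ordering described above.

# Illustrative summary (hypothetical VT values): three NMOS VT flavors on
# one die, combining gate-stack differentiation and doping modulation.

nmos_devices = [
    # (device, gate stack, channel doping, hypothetical VT in volts)
    ("3402 standard VT", "N-metal only (3408)",          "standard",  0.25),
    ("3403 mid VT",      "N-metal only (3408)",          "modulated", 0.32),
    ("3404 high VT",     "P-metal (3409) under N-metal", "standard",  0.40),
]

for name, stack, doping, vt in sorted(nmos_devices, key=lambda d: d[3]):
    print(f"{name:18s} | stack: {stack:30s} | doping: {doping:9s} | VT ~ {vt:.2f} V")

# The third device's VT lies between the other two, per the text above.
vts = [d[3] for d in nmos_devices]
assert vts[0] < vts[1] < vts[2]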
In a second example, FIG. 34B illustrates a cross-sectional view of three NMOS devices and three PMOS devices having differentiated gate electrode structures and modulated doping, in accordance with another embodiment of the present disclosure. The three NMOS devices have differentiated voltage thresholds, and the three PMOS devices have differentiated voltage thresholds, based on the differentiated gate electrode structures and the modulated doping.

Referring to FIG. 34B, a first NMOS device 3452 is adjacent to a second NMOS device 3454 and a third NMOS device 3453 over a semiconductor active region 3450 (e.g., over a silicon fin or substrate). The first NMOS device 3452, the second NMOS device 3454, and the third NMOS device 3453 include a gate dielectric layer 3456. The second NMOS device 3454 and the third NMOS device 3453 have structurally identical or similar gate electrode stacks. However, the first NMOS device 3452 has a gate electrode stack that is structurally different from those of the second NMOS device 3454 and the third NMOS device 3453. In particular, the first NMOS device 3452 includes a first gate electrode conductive layer 3458 (e.g., a first work function layer) and a gate electrode conductive fill 3460. The second NMOS device 3454 and the third NMOS device 3453 include a second gate electrode conductive layer 3459 (e.g., a second work function layer), the first gate electrode conductive layer 3458, and the gate electrode conductive fill 3460. The first NMOS device 3452 has a lower VT than the second NMOS device 3454. In one such embodiment, the first NMOS device 3452 is referred to as a "standard VT" device and the second NMOS device 3454 is referred to as a "high VT" device. In an embodiment, the differentiated VT is achieved by using a differentiated gate stack for devices of the same conductivity type. In an embodiment, the third NMOS device 3453 has a VT different from the VT of the first NMOS device 3452 and the VT of the second NMOS device 3454, even though the gate electrode structure of the third NMOS device 3453 is the same as the gate electrode structure of the second NMOS device 3454. In one embodiment, the VT of the third NMOS device 3453 is between the VT of the first NMOS device 3452 and the VT of the second NMOS device 3454. In an embodiment, the differentiated VT between the third NMOS device 3453 and the second NMOS device 3454 is achieved by using modulated doping or differential implant doping at region 3462 of the third NMOS device 3453. In one such embodiment, the third N-type device 3453 has a channel region having a dopant concentration that is different from the dopant concentration of the channel region of the second N-type device 3454.

Referring again to FIG. 34B, a first PMOS device 3472 is adjacent to a second PMOS device 3474 and a third PMOS device 3473 over a semiconductor active region 3470 (e.g., over a silicon fin or substrate). The first PMOS device 3472, the second PMOS device 3474, and the third PMOS device 3473 include a gate dielectric layer 3476. The second PMOS device 3474 and the third PMOS device 3473 have structurally identical or similar gate electrode stacks. However, the first PMOS device 3472 has a gate electrode stack that is structurally different from those of the second PMOS device 3474 and the third PMOS device 3473. In particular, the first PMOS device 3472 includes a gate electrode conductive layer 3478A (e.g., a work function layer) having a first thickness, and a gate electrode conductive fill 3480.
The second PMOS device 3474 and the third PMOS device 3473 include a gate electrode conductive layer 3478B having a second thickness, and the gate electrode conductive fill 3480. In one embodiment, the gate electrode conductive layer 3478A and the gate electrode conductive layer 3478B have the same composition, but the thickness (second thickness) of the gate electrode conductive layer 3478B is greater than the thickness (first thickness) of the gate electrode conductive layer 3478A. In an embodiment, the first PMOS device 3472 has a higher VT than the second PMOS device 3474. In one such embodiment, the first PMOS device 3472 is referred to as a "standard VT" device and the second PMOS device 3474 is referred to as a "low VT" device. In an embodiment, the differentiated VT is achieved by using a differentiated gate stack for devices of the same conductivity type. In an embodiment, the third PMOS device 3473 has a VT different from the VT of the first PMOS device 3472 and the VT of the second PMOS device 3474, even though the gate electrode structure of the third PMOS device 3473 is the same as the gate electrode structure of the second PMOS device 3474. In one embodiment, the VT of the third PMOS device 3473 is between the VT of the first PMOS device 3472 and the VT of the second PMOS device 3474. In an embodiment, the differentiated VT between the third PMOS device 3473 and the second PMOS device 3474 is achieved by using modulated doping or differential implant doping at region 3482 of the third PMOS device 3473. In one such embodiment, the third P-type device 3473 has a channel region having a dopant concentration that is different from the dopant concentration of the channel region of the second P-type device 3474.

FIGS. 35A-35D illustrate cross-sectional views of various operations in a method of fabricating NMOS devices having differentiated voltage thresholds based on differentiated gate electrode structures, in accordance with an embodiment of the present disclosure.

Referring to FIG. 35A, in which a "standard VT NMOS" region (STD VT NMOS) and a "high VT NMOS" region (HIGH VT NMOS) are shown as bifurcated on a common substrate, a method of fabricating an integrated circuit structure includes forming a gate dielectric layer 3506 over a first semiconductor fin 3502 and over a second semiconductor fin 3504 (e.g., over first and second silicon fins). A P-type metal layer 3508 is formed on the gate dielectric layer 3506 over the first semiconductor fin 3502 and over the second semiconductor fin 3504.

Referring to FIG. 35B, a portion of the P-type metal layer 3508 is removed from the gate dielectric layer 3506 over the first semiconductor fin 3502, but a portion 3509 of the P-type metal layer 3508 remains on the gate dielectric layer 3506 over the second semiconductor fin 3504.

Referring to FIG. 35C, an N-type metal layer 3510 is formed on the gate dielectric layer 3506 over the first semiconductor fin 3502, and on the portion 3509 of the P-type metal layer on the gate dielectric layer 3506 over the second semiconductor fin 3504. In an embodiment, the subsequent processing includes forming a first N-type device having a voltage threshold (VT) over the first semiconductor fin 3502 and forming a second N-type device having a voltage threshold (VT) over the second semiconductor fin 3504, wherein the VT of the second N-type device is higher than the VT of the first N-type device.
Referring to FIG. 35D, in an embodiment, a conductive fill metal layer 3512 is formed over the N-type metal layer 3510. In one such embodiment, forming the conductive fill metal layer 3512 includes forming a tungsten-containing film by atomic layer deposition (ALD) using a tungsten hexafluoride (WF6) precursor.

FIGS. 36A-36D illustrate cross-sectional views of various operations in a method of fabricating PMOS devices having differentiated voltage thresholds based on differentiated gate electrode structures, in accordance with an embodiment of the present disclosure.

Referring to FIG. 36A, in which a "standard VT PMOS" region (STD VT PMOS) and a "low VT PMOS" region (LOW VT PMOS) are shown as bifurcated on a common substrate, a method of fabricating an integrated circuit structure includes forming a gate dielectric layer 3606 over a first semiconductor fin 3602 and over a second semiconductor fin 3604 (e.g., over first and second silicon fins). A first P-type metal layer 3608 is formed on the gate dielectric layer 3606 over the first semiconductor fin 3602 and over the second semiconductor fin 3604.

Referring to FIG. 36B, a portion of the first P-type metal layer 3608 is removed from the gate dielectric layer 3606 over the first semiconductor fin 3602, but a portion 3609 of the first P-type metal layer 3608 remains on the gate dielectric layer 3606 over the second semiconductor fin 3604.

Referring to FIG. 36C, a second P-type metal layer 3610 is formed on the gate dielectric layer 3606 over the first semiconductor fin 3602, and on the portion 3609 of the first P-type metal layer on the gate dielectric layer 3606 over the second semiconductor fin 3604. In an embodiment, the subsequent processing includes forming a first P-type device having a voltage threshold (VT) over the first semiconductor fin 3602 and forming a second P-type device having a voltage threshold (VT) over the second semiconductor fin 3604, wherein the VT of the second P-type device is lower than the VT of the first P-type device.

In one embodiment, the first P-type metal layer 3608 and the second P-type metal layer 3610 have the same composition. In one embodiment, the first P-type metal layer 3608 and the second P-type metal layer 3610 have the same thickness. In one embodiment, the first P-type metal layer 3608 and the second P-type metal layer 3610 have the same thickness and the same composition. In one embodiment, a seam 3611 is between the first P-type metal layer 3608 and the second P-type metal layer 3610, as shown.

Referring to FIG. 36D, in an embodiment, a conductive fill metal layer 3612 is formed over the second P-type metal layer 3610. In one such embodiment, forming the conductive fill metal layer 3612 includes forming a tungsten-containing film by atomic layer deposition (ALD) using a tungsten hexafluoride (WF6) precursor. In one embodiment, an N-type metal layer 3614 is formed over the second P-type metal layer 3610 prior to forming the conductive fill metal layer 3612, as shown. In one such embodiment, the N-type metal layer 3614 is an artifact of a dual metal gate replacement process.
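The selective remove-then-redeposit sequences of FIGS. 35A-35D and 36A-36D can be viewed as per-fin layer-stack bookkeeping. The sketch below illustrates the NMOS flow of FIGS. 35A-35D under that view; the fin and layer names are illustrative placeholders, and the PMOS flow of FIGS. 36A-36D follows the same pattern with two P-type metal depositions.

```python
# Minimal bookkeeping sketch of the flow in FIGS. 35A-35D.
# Fin and layer names are illustrative placeholders.

stacks = {"fin_std_vt": [], "fin_high_vt": []}

def deposit(layer: str, fins) -> None:
    """Blanket-deposit a layer over the listed fins."""
    for fin in fins:
        stacks[fin].append(layer)

def remove(layer: str, fins) -> None:
    """Selectively remove the topmost occurrence of a layer from the listed fins."""
    for fin in fins:
        assert stacks[fin] and stacks[fin][-1] == layer
        stacks[fin].pop()

deposit("gate_dielectric", ["fin_std_vt", "fin_high_vt"])  # FIG. 35A
deposit("p_type_metal",    ["fin_std_vt", "fin_high_vt"])  # FIG. 35A
remove("p_type_metal",     ["fin_std_vt"])                 # FIG. 35B
deposit("n_type_metal",    ["fin_std_vt", "fin_high_vt"])  # FIG. 35C
deposit("w_fill",          ["fin_std_vt", "fin_high_vt"])  # FIG. 35D

# The high-VT fin retains the P-type metal under the N-type metal.
assert stacks["fin_high_vt"] == ["gate_dielectric", "p_type_metal",
                                 "n_type_metal", "w_fill"]
assert stacks["fin_std_vt"] == ["gate_dielectric", "n_type_metal", "w_fill"]
```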
In another aspect, a metal gate structure for a complementary metal oxide semiconductor (CMOS) semiconductor device is described. In an example, FIG. 37 illustrates a cross-sectional view of an integrated circuit structure having a P/N junction, in accordance with an embodiment of the present disclosure.

Referring to FIG. 37, an integrated circuit structure 3700 includes a semiconductor substrate 3702 having an N-well region 3704 with a first semiconductor fin 3706 protruding therefrom and a P-well region 3708 with a second semiconductor fin 3710 protruding therefrom. The first semiconductor fin 3706 is spaced apart from the second semiconductor fin 3710. In the semiconductor substrate 3702, the N-well region 3704 is directly adjacent to the P-well region 3708. A trench isolation structure 3712 is on the semiconductor substrate 3702, external to and between the first semiconductor fin 3706 and the second semiconductor fin 3710. The first 3706 and second 3710 semiconductor fins extend above the trench isolation structure 3712.

A gate dielectric layer 3714 is over the first 3706 and second 3710 semiconductor fins and over the trench isolation structure 3712. The gate dielectric layer 3714 is continuous between the first 3706 and second 3710 semiconductor fins. A conductive layer 3716 is on the gate dielectric layer 3714 over the first semiconductor fin 3706, but not on the gate dielectric layer 3714 over the second semiconductor fin 3710. In one embodiment, the conductive layer 3716 includes titanium, nitrogen, and oxygen. A P-type metal gate layer 3718 is on the conductive layer 3716 over the first semiconductor fin 3706, but not over the second semiconductor fin 3710. The P-type metal gate layer 3718 is further on the trench isolation structure 3712 between the first semiconductor fin 3706 and the second semiconductor fin 3710, but not necessarily on all of it. An N-type metal gate layer 3720 is over the second semiconductor fin 3710, over the trench isolation structure 3712 between the first semiconductor fin 3706 and the second semiconductor fin 3710, and over the P-type metal gate layer 3718.

In one embodiment, an interlayer dielectric (ILD) layer 3722 is over the trench isolation structure 3712 external to the first semiconductor fin 3706 and the second semiconductor fin 3710. The ILD layer 3722 has an opening 3724 that exposes the first 3706 and second 3710 semiconductor fins. In one such embodiment, the conductive layer 3716, the P-type metal gate layer 3718, and the N-type metal gate layer 3720 are further formed along sidewalls 3726 of the opening 3724, as shown. In a particular embodiment, the conductive layer 3716 has a top surface along the sidewalls 3726 of the opening 3724 that is below a top surface 3719 of the P-type metal gate layer 3718 along the sidewalls 3726 of the opening 3724, and the top surface 3719 of the P-type metal gate layer 3718 is below a top surface 3721 of the N-type metal gate layer 3720, as shown.

In one embodiment, the P-type metal gate layer 3718 includes titanium and nitrogen. In one embodiment, the N-type metal gate layer 3720 includes titanium and aluminum. In one embodiment, a conductive fill metal layer 3730 is over the N-type metal gate layer 3720, as shown. In one such embodiment, the conductive fill metal layer 3730 includes tungsten. In a particular embodiment, the conductive fill metal layer 3730 comprises 95 or greater atomic percent tungsten and 0.1 to 2 atomic percent fluorine. In one embodiment, the gate dielectric layer 3714 includes a layer comprising germanium and oxygen. In one embodiment, a thermal or chemical oxide layer 3732 is between the gate dielectric layer 3714 and portions of the first 3706 and second 3710 semiconductor fins, as shown. In one embodiment, the semiconductor substrate 3702 is a bulk silicon semiconductor substrate.
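The compositional window recited above for the conductive fill metal layer 3730 (95 or greater atomic percent tungsten with 0.1 to 2 atomic percent residual fluorine, consistent with an ALD fill from a WF6 precursor) reduces to a simple range check. The sketch below merely illustrates that window; the function name and sample values are hypothetical.

```python
def fill_within_window(at_pct_w: float, at_pct_f: float) -> bool:
    """Check an ALD tungsten fill composition against the recited window:
    >= 95 atomic percent W and 0.1-2 atomic percent F (residual from WF6)."""
    return at_pct_w >= 95.0 and 0.1 <= at_pct_f <= 2.0

assert fill_within_window(97.5, 0.8)        # typical in-window composition
assert not fill_within_window(92.0, 0.8)    # too little tungsten
assert not fill_within_window(97.5, 3.0)    # too much residual fluorine
```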
Referring now only to the right-hand side of FIG. 37, in accordance with an embodiment of the present disclosure, an integrated circuit structure includes a semiconductor substrate 3702 including an N-well region 3704 having a semiconductor fin 3706 protruding therefrom. A trench isolation structure 3712 is on the semiconductor substrate 3702 around the semiconductor fin 3706. The semiconductor fin 3706 extends above the trench isolation structure 3712. A gate dielectric layer 3714 is over the semiconductor fin 3706. A conductive layer 3716 is on the gate dielectric layer 3714 over the semiconductor fin 3706. In one embodiment, the conductive layer 3716 includes titanium, nitrogen, and oxygen. A P-type metal gate layer 3718 is on the conductive layer 3716 over the semiconductor fin 3706.

In one embodiment, an interlayer dielectric (ILD) layer 3722 is above the trench isolation structure 3712. The ILD layer has an opening that exposes the semiconductor fin 3706. The conductive layer 3716 and the P-type metal gate layer 3718 are further formed along the sidewalls of the opening. In one such embodiment, the conductive layer 3716 has a top surface along the sidewalls of the opening that is lower than the top surface of the P-type metal gate layer 3718 along the sidewalls of the opening. In one embodiment, the P-type metal gate layer 3718 is over the conductive layer 3716. In one embodiment, the P-type metal gate layer 3718 includes titanium and nitrogen. In one embodiment, a conductive fill metal layer 3730 is over the P-type metal gate layer 3718. In one such embodiment, the conductive fill metal layer 3730 includes tungsten. In a particular such embodiment, the conductive fill metal layer 3730 is comprised of 95 or greater atomic percent tungsten and 0.1 to 2 atomic percent fluorine. In one embodiment, the gate dielectric layer 3714 includes a layer having germanium and oxygen.

FIGS. 38A-38H illustrate cross-sectional views of various operations in a method of fabricating an integrated circuit structure using a dual metal gate replacement gate process flow, in accordance with an embodiment of the present disclosure.

Referring to FIG. 38A, which shows an NMOS (N-type) region and a PMOS (P-type) region, a method of fabricating an integrated circuit structure includes forming an interlayer dielectric (ILD) layer 3802 over first 3804 and second 3806 semiconductor fins above a substrate 3800. An opening 3808 is formed in the ILD layer 3802 that exposes the first 3804 and second 3806 semiconductor fins. In one embodiment, the opening 3808 is formed by removing a gate or dummy gate structure initially in a position above the first 3804 and second 3806 semiconductor fins.

A gate dielectric layer 3810 is formed in the opening 3808, over the first 3804 and second 3806 semiconductor fins and over a portion of a trench isolation structure 3812 between the first 3804 and second 3806 semiconductor fins. In one embodiment, the gate dielectric layer 3810 is formed over a thermal or chemical oxide layer 3811, such as a silicon oxide or silicon dioxide layer, formed on the first 3804 and second 3806 semiconductor fins, as shown. In another embodiment, the gate dielectric layer 3810 is formed directly on the first 3804 and second 3806 semiconductor fins.

A conductive layer 3814 is formed on the gate dielectric layer 3810 over the first 3804 and second 3806 semiconductor fins. In one embodiment, the conductive layer 3814 includes titanium, nitrogen, and oxygen.
A P-type metal gate layer 3816 is formed on the conductive layer 3814 over the first semiconductor fin 3804 and over the second semiconductor fin 3806.

Referring to FIG. 38B, a dielectric etch stop layer 3818 is formed over the P-type metal gate layer 3816. In one embodiment, the dielectric etch stop layer 3818 includes a first silicon oxide (e.g., SiO2) layer, an aluminum oxide (e.g., Al2O3) layer on the first silicon oxide layer, and a second silicon oxide (e.g., SiO2) layer on the aluminum oxide layer.

Referring to FIG. 38C, a mask 3820 is formed over the structure of FIG. 38B. The mask 3820 covers the PMOS region and exposes the NMOS region.

Referring to FIG. 38D, the dielectric etch stop layer 3818, the P-type metal gate layer 3816, and the conductive layer 3814 are patterned to provide a patterned dielectric etch stop layer 3819, a patterned P-type metal gate layer 3817, and a patterned conductive layer 3815 over the first semiconductor fin 3804, but not over the second semiconductor fin 3806. In an embodiment, the conductive layer 3814 protects the second semiconductor fin 3806 during the patterning.

Referring to FIG. 38E, the mask 3820 is removed from the structure of FIG. 38D. Referring to FIG. 38F, the patterned dielectric etch stop layer 3819 is removed from the structure of FIG. 38E.

Referring to FIG. 38G, an N-type metal gate layer 3822 is formed over the second semiconductor fin 3806, over the portion of the trench isolation structure 3812 between the first semiconductor fin 3804 and the second semiconductor fin 3806, and over the patterned P-type metal gate layer 3817. In an embodiment, the patterned conductive layer 3815, the patterned P-type metal gate layer 3817, and the N-type metal gate layer 3822 are further formed along sidewalls 3824 of the opening 3808. In one such embodiment, the patterned conductive layer 3815 has a top surface along the sidewalls 3824 of the opening 3808 that is below a top surface of the patterned P-type metal gate layer 3817 along the sidewalls 3824 of the opening 3808, which in turn is below a top surface of the N-type metal gate layer 3822.

Referring to FIG. 38H, a conductive fill metal layer 3826 is formed over the N-type metal gate layer 3822. In one embodiment, the conductive fill metal layer 3826 is formed by depositing a tungsten-containing film by atomic layer deposition (ALD) using a tungsten hexafluoride (WF6) precursor.

In another aspect, a dual silicide structure for a complementary metal oxide semiconductor (CMOS) semiconductor device is described. As an exemplary process flow, FIGS. 39A-39H illustrate cross-sectional views representing various operations in a method of fabricating a dual silicide-based integrated circuit, in accordance with an embodiment of the present disclosure.

Referring to FIG. 39A, in which the NMOS region and the PMOS region are shown as bifurcated on a common substrate, the method of fabricating an integrated circuit structure includes forming a first gate structure 3902, which may include dielectric sidewall spacers 3903, over a first fin 3904, such as a first silicon fin. A second gate structure 3952, which may include dielectric sidewall spacers 3953, is formed over a second fin 3954, such as a second silicon fin. An insulating material 3906 is formed adjacent to the first gate structure 3902 above the first fin 3904 and adjacent to the second gate structure 3952 above the second fin 3954. In one embodiment, the insulating material 3906 is a sacrificial material and is used as a mask during the dual silicide process.
Referring to FIG. 39B, a first portion of the insulating material 3906 is removed from above the first fin 3904, but not from above the second fin 3954, to expose first 3908 and second 3910 source or drain regions of the first fin 3904 adjacent the first gate structure 3902. In an embodiment, the first 3908 and second 3910 source or drain regions are epitaxial regions formed within recessed portions of the first fin 3904, as shown. In one such embodiment, the first 3908 and second 3910 source or drain regions comprise silicon and germanium.

Referring to FIG. 39C, a first metal silicide layer 3912 is formed over the first 3908 and second 3910 source or drain regions of the first fin 3904. In one embodiment, the first metal silicide layer 3912 is formed by depositing a layer comprising nickel and platinum on the structure of FIG. 39B, annealing the layer comprising nickel and platinum, and removing unreacted portions of the layer comprising nickel and platinum.

Referring to FIG. 39D, after forming the first metal silicide layer 3912, a second portion of the insulating material 3906 is removed from over the second fin 3954 to expose third 3958 and fourth 3960 source or drain regions of the second fin 3954 adjacent the second gate structure 3952. In an embodiment, the third 3958 and fourth 3960 source or drain regions are formed within the second fin 3954, such as within the second silicon fin, as shown. However, in another embodiment, the third 3958 and fourth 3960 source or drain regions are epitaxial regions formed within recessed portions of the second fin 3954. In one such embodiment, the third 3958 and fourth 3960 source or drain regions comprise silicon.

Referring to FIG. 39E, a first metal layer 3914 is formed on the structure of FIG. 39D, that is, on the first 3908, second 3910, third 3958, and fourth 3960 source or drain regions. A second metal silicide layer 3962 is then formed over the third 3958 and fourth 3960 source or drain regions of the second fin 3954. For example, the second metal silicide layer 3962 is formed from the first metal layer 3914 using an annealing process. In an embodiment, the composition of the second metal silicide layer 3962 is different from the composition of the first metal silicide layer 3912. In one embodiment, the first metal layer 3914 is or includes a titanium layer. In one embodiment, the first metal layer 3914 is formed as a conformal metal layer, for example, conformal with the open trenches of FIG. 39D, as shown.

Referring to FIG. 39F, in an embodiment, the first metal layer 3914 is recessed to form a U-shaped metal layer 3916 over each of the first 3908, second 3910, third 3958, and fourth 3960 source or drain regions.

Referring to FIG. 39G, in an embodiment, a second metal layer 3918 is formed over the U-shaped metal layer 3916 of the structure of FIG. 39F. In an embodiment, the composition of the second metal layer 3918 is different from the composition of the U-shaped metal layer 3916.

Referring to FIG. 39H, in an embodiment, a third metal layer 3920 is formed over the second metal layer 3918 of the structure of FIG. 39G. In an embodiment, the third metal layer 3920 has the same composition as the U-shaped metal layer 3916.
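The masking logic of the dual silicide flow of FIGS. 39A-39H can be summarized as: expose the PMOS source or drain regions first and form the nickel-platinum silicide, then expose the NMOS source or drain regions and form the titanium silicide. The sketch below is bookkeeping of that ordering only, not a process recipe; the region and film names are illustrative placeholders.

```python
# Ordering sketch for the dual silicide flow of FIGS. 39A-39H.
# Region and film names are illustrative placeholders.

silicide = {"pmos_sd": None, "nmos_sd": None}
exposed = {"pmos_sd": False, "nmos_sd": False}

def open_region(region: str) -> None:
    """Remove the sacrificial insulating material over one device region."""
    exposed[region] = True

def form_silicide(region: str, film: str) -> None:
    """Silicide an exposed source/drain region with the given film."""
    assert exposed[region], "source/drain must be exposed before siliciding"
    silicide[region] = film

open_region("pmos_sd")                   # FIG. 39B
form_silicide("pmos_sd", "NiPtSi(Ge)")   # FIG. 39C: NiPt anneal on SiGe S/D
open_region("nmos_sd")                   # FIG. 39D
form_silicide("nmos_sd", "TiSi")         # FIG. 39E: Ti anneal on Si S/D

# The two junctions end up with chemically distinct silicides.
assert silicide["pmos_sd"] != silicide["nmos_sd"]
```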
Referring again to FIG. 39H, in accordance with an embodiment of the present disclosure, an integrated circuit structure 3900 includes a P-type semiconductor device (PMOS) over a substrate. The P-type semiconductor device includes a first fin 3904, such as a first silicon fin. It will be appreciated that the first fin has a top (shown as 3904A) and sidewalls (into and out of the page). The first gate electrode 3902 includes a first gate dielectric layer over the top 3904A of the first fin 3904 and laterally adjacent the sidewalls of the first fin 3904, and a first gate electrode over the first gate dielectric layer over the top 3904A of the first fin 3904 and laterally adjacent the sidewalls of the first fin 3904. The first gate electrode 3902 has a first side 3902A and a second side 3902B opposite the first side 3902A.

The first 3908 and second 3910 semiconductor source or drain regions are adjacent the first 3902A and second 3902B sides of the first gate electrode 3902, respectively. First 3930 and second 3932 trench contact structures are over the first 3908 and second 3910 semiconductor source or drain regions adjacent the first 3902A and second 3902B sides of the first gate electrode 3902, respectively. The first metal silicide layer 3912 is directly between the first 3930 and second 3932 trench contact structures and the first 3908 and second 3910 semiconductor source or drain regions, respectively.

The integrated circuit structure 3900 includes an N-type semiconductor device (NMOS) over the substrate. The N-type semiconductor device includes a second fin 3954, such as a second silicon fin. It will be appreciated that the second fin has a top (shown as 3954A) and sidewalls (into and out of the page). The second gate electrode 3952 includes a second gate dielectric layer over the top 3954A of the second fin 3954 and laterally adjacent the sidewalls of the second fin 3954, and a second gate electrode over the second gate dielectric layer over the top 3954A of the second fin 3954 and laterally adjacent the sidewalls of the second fin 3954. The second gate electrode 3952 has a first side 3952A and a second side 3952B opposite the first side 3952A.

The third 3958 and fourth 3960 semiconductor source or drain regions are adjacent the first 3952A and second 3952B sides of the second gate electrode 3952, respectively. Third 3970 and fourth 3972 trench contact structures are over the third 3958 and fourth 3960 semiconductor source or drain regions adjacent the first 3952A and second 3952B sides of the second gate electrode 3952, respectively. The second metal silicide layer 3962 is directly between the third 3970 and fourth 3972 trench contact structures and the third 3958 and fourth 3960 semiconductor source or drain regions, respectively. In an embodiment, the first metal silicide layer 3912 includes at least one metal species not included in the second metal silicide layer 3962.

In one embodiment, the second metal silicide layer 3962 includes titanium and silicon, and the first metal silicide layer 3912 includes nickel, platinum, and silicon. In one embodiment, the first metal silicide layer 3912 further includes germanium. In one embodiment, the first metal silicide layer 3912 further includes titanium, for example, incorporated into the first metal silicide layer 3912 during the subsequent formation of the second metal silicide layer 3962 using the first metal layer 3914. In one such embodiment, the silicide layer already formed on the PMOS source or drain regions is further modified by the anneal process used to form the silicide regions on the NMOS source or drain regions. This may result in the silicide layer on the PMOS source or drain regions including a small percentage of the metal used to form the second metal silicide layer.
However, in other embodiments, such a silicide layer already formed on the PMOS source or drain regions is not altered, or is not substantially altered, by the annealing process used to form the silicide regions on the NMOS source or drain regions.

In one embodiment, the first 3908 and second 3910 semiconductor source or drain regions are first and second embedded semiconductor source or drain regions including silicon and germanium. In one such embodiment, the third 3958 and fourth 3960 semiconductor source or drain regions are third and fourth embedded semiconductor source or drain regions including silicon. In another embodiment, the third 3958 and fourth 3960 semiconductor source or drain regions are formed in the fin 3954 and are not embedded epitaxial regions.

In an embodiment, the first 3930, second 3932, third 3970, and fourth 3972 trench contact structures all include a U-shaped metal layer 3916 and a T-shaped metal layer 3918 on and over the U-shaped metal layer 3916. In one embodiment, the U-shaped metal layer 3916 includes titanium and the T-shaped metal layer 3918 includes cobalt. In one embodiment, the first 3930, second 3932, third 3970, and fourth 3972 trench contact structures all include a third metal layer 3920 on the T-shaped metal layer 3918. In one embodiment, the third metal layer 3920 and the U-shaped metal layer 3916 have the same composition. In a particular embodiment, the third metal layer 3920 and the U-shaped metal layer 3916 comprise titanium, and the T-shaped metal layer 3918 comprises cobalt.

In another aspect, a trench contact structure, such as for a source or drain region, is described. In an example, FIG. 40A illustrates a cross-sectional view of an integrated circuit structure having trench contacts for an NMOS device, in accordance with an embodiment of the present disclosure. FIG. 40B illustrates a cross-sectional view of an integrated circuit structure having trench contacts for a PMOS device, in accordance with another embodiment of the present disclosure.

Referring to FIG. 40A, an integrated circuit structure 4000 includes a fin 4002, such as a silicon fin. A gate dielectric layer 4004 is over the fin 4002. A gate electrode 4006 is over the gate dielectric layer 4004. In an embodiment, the gate electrode 4006 includes a conformal conductive layer 4008 and a conductive fill 4010. In an embodiment, a dielectric cap 4012 is over the gate electrode 4006 and over the gate dielectric layer 4004. The gate electrode has a first side 4006A and a second side 4006B opposite the first side 4006A. Dielectric spacers 4013 are along the sidewalls of the gate electrode 4006. In one embodiment, the gate dielectric layer 4004 is further between a first of the dielectric spacers 4013 and the first side 4006A of the gate electrode 4006, and between a second of the dielectric spacers 4013 and the second side 4006B of the gate electrode 4006, as shown. In an embodiment, although not shown, a thin oxide layer, such as a thermal or chemical silicon oxide or silicon dioxide layer, is between the fin 4002 and the gate dielectric layer 4004.

The first 4014 and second 4016 semiconductor source or drain regions are adjacent the first 4006A and second 4006B sides of the gate electrode 4006, respectively. In one embodiment, the first 4014 and second 4016 semiconductor source or drain regions are in the fin 4002, as shown.
However, in another embodiment, the first 4014 and second 4016 semiconductor source or drain regions are embedded epitaxial regions formed in recesses of the fin 4002.

First 4018 and second 4020 trench contact structures are over the first 4014 and second 4016 semiconductor source or drain regions adjacent the first 4006A and second 4006B sides of the gate electrode 4006, respectively. The first 4018 and second 4020 trench contact structures each include a U-shaped metal layer 4022 and a T-shaped metal layer 4024 on and over the U-shaped metal layer 4022. In one embodiment, the U-shaped metal layer 4022 and the T-shaped metal layer 4024 have different compositions. In one such embodiment, the U-shaped metal layer 4022 includes titanium and the T-shaped metal layer 4024 includes cobalt. In one embodiment, the first 4018 and second 4020 trench contact structures each include a third metal layer 4026 on the T-shaped metal layer 4024. In one such embodiment, the third metal layer 4026 and the U-shaped metal layer 4022 have the same composition. In a particular embodiment, the third metal layer 4026 and the U-shaped metal layer 4022 comprise titanium, and the T-shaped metal layer 4024 comprises cobalt.

A first trench contact via 4028 is electrically connected to the first trench contact 4018. In a particular embodiment, the first trench contact via 4028 is over and coupled to the third metal layer 4026 of the first trench contact 4018. The first trench contact via 4028 is further over and in contact with a portion of one of the dielectric spacers 4013, and over and in contact with a portion of the dielectric cap 4012. A second trench contact via 4030 is electrically connected to the second trench contact 4020. In a particular embodiment, the second trench contact via 4030 is over and coupled to the third metal layer 4026 of the second trench contact 4020. The second trench contact via 4030 is further over and in contact with a portion of the other of the dielectric spacers 4013, and over and in contact with another portion of the dielectric cap 4012.

In an embodiment, a metal silicide layer 4032 is directly between the first 4018 and second 4020 trench contact structures and the first 4014 and second 4016 semiconductor source or drain regions, respectively. In one embodiment, the metal silicide layer 4032 comprises titanium and silicon. In certain such embodiments, the first 4014 and second 4016 semiconductor source or drain regions are first and second N-type semiconductor source or drain regions.

Referring to FIG. 40B, an integrated circuit structure 4050 includes a fin 4052, such as a silicon fin. A gate dielectric layer 4054 is over the fin 4052. A gate electrode 4056 is over the gate dielectric layer 4054. In an embodiment, the gate electrode 4056 includes a conformal conductive layer 4058 and a conductive fill 4060. In an embodiment, a dielectric cap 4062 is over the gate electrode 4056 and over the gate dielectric layer 4054. The gate electrode has a first side 4056A and a second side 4056B opposite the first side 4056A. Dielectric spacers 4063 are along the sidewalls of the gate electrode 4056. In one embodiment, the gate dielectric layer 4054 is further between a first of the dielectric spacers 4063 and the first side 4056A of the gate electrode 4056, and between a second of the dielectric spacers 4063 and the second side 4056B of the gate electrode 4056, as shown.
In an embodiment, although not shown, a thin oxide layer, such as a thermal or chemical silicon oxide or silicon dioxide layer, is between the fin 4052 and the gate dielectric layer 4054.

The first 4064 and second 4066 semiconductor source or drain regions are adjacent the first 4056A and second 4056B sides of the gate electrode 4056, respectively. In one embodiment, the first 4064 and second 4066 semiconductor source or drain regions are embedded epitaxial regions formed in recesses 4065 and 4067 of the fin 4052, respectively, as shown. However, in another embodiment, the first 4064 and second 4066 semiconductor source or drain regions are in the fin 4052.

First 4068 and second 4070 trench contact structures are over the first 4064 and second 4066 semiconductor source or drain regions adjacent the first 4056A and second 4056B sides of the gate electrode 4056, respectively. The first 4068 and second 4070 trench contact structures each include a U-shaped metal layer 4072 and a T-shaped metal layer 4074 on and over the U-shaped metal layer 4072. In one embodiment, the U-shaped metal layer 4072 and the T-shaped metal layer 4074 have different compositions. In one such embodiment, the U-shaped metal layer 4072 includes titanium and the T-shaped metal layer 4074 includes cobalt. In one embodiment, the first 4068 and second 4070 trench contact structures each further include a third metal layer 4076 on the T-shaped metal layer 4074. In one such embodiment, the third metal layer 4076 and the U-shaped metal layer 4072 have the same composition. In a particular embodiment, the third metal layer 4076 and the U-shaped metal layer 4072 comprise titanium, and the T-shaped metal layer 4074 comprises cobalt.

A first trench contact via 4078 is electrically connected to the first trench contact 4068. In a particular embodiment, the first trench contact via 4078 is on and coupled to the third metal layer 4076 of the first trench contact 4068. The first trench contact via 4078 is further over and in contact with a portion of one of the dielectric spacers 4063, and over and in contact with a portion of the dielectric cap 4062. A second trench contact via 4080 is electrically connected to the second trench contact 4070. In a particular embodiment, the second trench contact via 4080 is on and coupled to the third metal layer 4076 of the second trench contact 4070. The second trench contact via 4080 is further over and in contact with a portion of the other of the dielectric spacers 4063, and over and in contact with another portion of the dielectric cap 4062.

In an embodiment, a metal silicide layer 4082 is directly between the first 4068 and second 4070 trench contact structures and the first 4064 and second 4066 semiconductor source or drain regions, respectively. In one embodiment, the metal silicide layer 4082 comprises nickel, platinum, and silicon. In certain such embodiments, the first 4064 and second 4066 semiconductor source or drain regions are first and second P-type semiconductor source or drain regions. In one embodiment, the metal silicide layer 4082 also includes germanium. In one embodiment, the metal silicide layer 4082 also includes titanium.
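A compact way to see the NMOS/PMOS asymmetry of FIGS. 40A and 40B is that the trench contact stacks are identical above the silicide, and only the silicide chemistry differs. The sketch below encodes that observation as data; the composition strings are illustrative, not limiting.

```python
# Data sketch of the trench contact stacks of FIGS. 40A (NMOS) and 40B (PMOS).
# Composition strings are illustrative placeholders.

COMMON_STACK = [
    ("u_shaped_layer", "Ti"),   # conformal liner on the silicide
    ("t_shaped_layer", "Co"),   # different composition than the liner
    ("third_layer", "Ti"),      # same composition as the liner
]

def trench_contact(device_type: str):
    """Build the contact stack; only the silicide differs by device type."""
    silicide = "TiSi" if device_type == "NMOS" else "NiPtSi(Ge)"
    return [("metal_silicide", silicide)] + COMMON_STACK

nmos = trench_contact("NMOS")
pmos = trench_contact("PMOS")
assert nmos[1:] == pmos[1:]   # identical above the silicide
assert nmos[0] != pmos[0]     # silicide chemistry differs
```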
One or more embodiments described herein relate to the use of metal chemical vapor deposition for a wrap-around semiconductor contact. Embodiments may be applied to or include one or more of chemical vapor deposition (CVD), plasma enhanced chemical vapor deposition (PECVD), atomic layer deposition (ALD), conductive contact fabrication, or thin films.

Particular embodiments may include fabricating a titanium or similar metal layer using low temperature (e.g., below 500 degrees Celsius, or in the range of 400-500 degrees Celsius) chemical vapor deposition of the contact metal to provide a conformal source or drain contact. Implementing such a conformal source or drain contact can improve three-dimensional (3D) transistor complementary metal oxide semiconductor (CMOS) performance.

To provide context, sputtering can be used to deposit metal onto a semiconductor contact layer. Sputtering is a line-of-sight process, however, and may not be well suited for 3D transistor fabrication. Known sputtering schemes produce poor or incomplete metal-semiconductor junctions on device contact surfaces that are at an angle to the angle of incidence of the deposition.

In accordance with one or more embodiments of the present disclosure, a low temperature chemical vapor deposition process is performed to fabricate a contact metal to provide conformality in three dimensions and to maximize the contact area of the metal-semiconductor junction. The resulting larger contact area can reduce the resistance of the junction. Embodiments can include deposition on a semiconductor surface having a non-flat topography, where the topography of a region refers to the surface shapes and features themselves, and a non-flat topography includes surface shapes and features, or portions of surface shapes and features, that are not entirely flat.

Embodiments described herein may include fabricating a wrap-around contact structure. In one such embodiment, the use of a pure metal conformally deposited onto a transistor source or drain contact by chemical vapor deposition, plasma enhanced chemical vapor deposition, atomic layer deposition, or plasma enhanced atomic layer deposition is described. Such conformal deposition can be used to increase the available area of the metal-semiconductor contact and reduce the electrical resistance, thereby improving the performance of the transistor device. In an embodiment, the lower temperature of the deposition results in a minimized junction resistance per unit area.

It will be appreciated that a variety of integrated circuit structures can be fabricated using an integrated approach involving a metal layer deposition process as described herein. In accordance with an embodiment of the present disclosure, a method of fabricating an integrated circuit structure includes providing a substrate having a feature in a chemical vapor deposition (CVD) chamber having an RF source. The method also includes reacting titanium tetrachloride (TiCl4) and hydrogen (H2) to form a titanium (Ti) layer on the feature of the substrate.

In an embodiment, the titanium layer has a total atomic composition including 98% or more titanium and 0.5-2% chlorine. In an alternative embodiment, a similar process is used to fabricate a high purity metal layer of zirconium (Zr), hafnium (Hf), tantalum (Ta), niobium (Nb), or vanadium (V). In an embodiment, there is relatively little variation in film thickness; for example, in an embodiment, all coverage is greater than 50%, with a nominal value of 70% or greater (i.e., a thickness variation of 30% or less).
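The conformality and purity figures quoted above (coverage greater than 50% with a nominal value of 70% or greater, i.e., a thickness variation of 30% or less, and 98% or more titanium with 0.5-2% chlorine) can be expressed as two small checks. The sketch below is illustrative only; the sample measurements are hypothetical, not data from the disclosure.

```python
def coverage(min_thickness: float, nominal_thickness: float) -> float:
    """Step coverage as the thinnest point relative to the nominal thickness."""
    return min_thickness / nominal_thickness

def meets_conformality(min_t: float, nominal_t: float) -> bool:
    """Thickness variation of 30% or less, i.e., coverage of 70% or greater."""
    return coverage(min_t, nominal_t) >= 0.70

def meets_purity(at_pct_ti: float, at_pct_cl: float) -> bool:
    """Recited CVD Ti window: >= 98% titanium with 0.5-2% chlorine impurity."""
    return at_pct_ti >= 98.0 and 0.5 <= at_pct_cl <= 2.0

# Hypothetical measurements on a non-line-of-sight sidewall:
assert meets_conformality(min_t=3.6, nominal_t=5.0)      # 72% coverage
assert not meets_conformality(min_t=2.5, nominal_t=5.0)  # 50%: too thin
assert meets_purity(at_pct_ti=98.7, at_pct_cl=1.0)
```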
In an embodiment, the thickness measured on silicon (Si) or silicon germanium (SiGe) is greater than the thickness measured on other surfaces, because Si or SiGe reacts during the deposition and accelerates Ti uptake. In an embodiment, the film composition includes approximately 0.5% Cl (or less than 1%) as an impurity and is substantially free of other observed impurities. In an embodiment, the deposition process enables the metal to cover non-line-of-sight surfaces, such as surfaces that would be hidden from the line of sight of a sputter deposition source. Embodiments described herein may be implemented to improve transistor device drive by reducing the external resistance to the current driven through the source and drain contacts.

In accordance with an embodiment of the present disclosure, the feature of the substrate is a source or drain contact trench exposing a semiconductor source or drain structure. The titanium layer (or other high purity metal layer) is a conductive contact layer for the semiconductor source or drain structure. Exemplary such embodiments are described below in conjunction with FIGS. 41A, 41B, 42, 43A-43C, and 44.

FIG. 41A illustrates a cross-sectional view of a semiconductor device having conductive contacts on source or drain regions, in accordance with an embodiment of the present disclosure.

Referring to FIG. 41A, a semiconductor structure 4100 includes a gate structure 4102 over a substrate 4104. The gate structure 4102 includes a gate dielectric layer 4102A, a work function layer 4102B, and a gate fill 4102C. A source region 4108 and a drain region 4110 are on opposite sides of the gate structure 4102. A source or drain contact 4112 is electrically coupled to the source region 4108 and the drain region 4110, and is spaced apart from the gate structure 4102 by one or both of an interlayer dielectric 4114 or gate dielectric spacers 4116. The source region 4108 and the drain region 4110 are regions of the substrate 4104.

In an embodiment, the source or drain contact 4112 includes a high purity metal layer 4112A, for example as described above, and a conductive trench fill material 4112B. In one embodiment, the high purity metal layer 4112A has a total atomic composition including 98% or more titanium. In one such embodiment, the total atomic composition of the high purity metal layer 4112A further includes 0.5-2% chlorine. In an embodiment, the high purity metal layer 4112A has a thickness variation of 30% or less. In an embodiment, the conductive trench fill material 4112B is composed of a conductive material such as, but not limited to, Cu, Al, W, or alloys thereof.

FIG. 41B illustrates a cross-sectional view of another semiconductor device having conductive contacts on elevated source or drain regions, in accordance with an embodiment of the present disclosure.

Referring to FIG. 41B, a semiconductor structure 4150 includes a gate structure 4152 over a substrate 4154. The gate structure 4152 includes a gate dielectric layer 4152A, a work function layer 4152B, and a gate fill 4152C. A source region 4158 and a drain region 4160 are on opposite sides of the gate structure 4152. A source or drain contact 4162 is electrically coupled to the source region 4158 and the drain region 4160, and is spaced apart from the gate structure 4152 by one or both of an interlayer dielectric layer 4164 or gate dielectric spacers 4166. The source region 4158 and the drain region 4160 are epitaxial or embedded material regions formed in etched regions of the substrate 4154. As shown, in an embodiment, the source region 4158 and the drain region 4160 are elevated source and drain regions.
In a specific such embodiment, the elevated source and drain regions are elevated silicon source and drain regions or elevated silicon germanium source and drain regions.

In an embodiment, the source or drain contact 4162 includes a high purity metal layer 4162A, for example as described above, and a conductive trench fill material 4162B. In one embodiment, the high purity metal layer 4162A has a total atomic composition including 98% or more titanium. In one such embodiment, the total atomic composition of the high purity metal layer 4162A further includes 0.5-2% chlorine. In an embodiment, the high purity metal layer 4162A has a thickness variation of 30% or less. In an embodiment, the conductive trench fill material 4162B is composed of a conductive material such as, but not limited to, Cu, Al, W, or alloys thereof.

Thus, referring to FIGS. 41A and 41B collectively, in an embodiment, an integrated circuit structure includes a feature having a surface, namely a source or drain contact trench exposing a semiconductor source or drain structure. The high purity metal layer 4112A or 4162A is on the surface of the source or drain contact trench. It will be appreciated that a contact formation process can involve consumption of the exposed silicon, germanium, or silicon germanium material of the source or drain regions. Such consumption can degrade device performance. In contrast, according to embodiments of the present disclosure, the surface (4149 or 4199) of the semiconductor source (4108 or 4158) or drain (4110 or 4160) structure is not eroded or consumed, or is substantially not eroded or consumed, at the bottom of the source or drain contact trench. In one such embodiment, the lack of erosion or consumption is due to the low temperature deposition of the high purity metal contact layer.

FIG. 42 illustrates a plan view of a plurality of gate lines over a pair of semiconductor fins, in accordance with an embodiment of the present disclosure.

Referring to FIG. 42, a plurality of active gate lines 4204 are formed over a plurality of semiconductor fins 4200. Dummy gate lines 4206 are at the ends of the plurality of semiconductor fins 4200. Spacings 4208 between the gate lines 4204/4206 are locations where trench contacts can be formed as conductive contacts to source or drain regions (e.g., source or drain regions 4251, 4252, 4253, and 4254).

FIGS. 43A-43C illustrate cross-sectional views, taken along the a-a' axis of FIG. 42, representing various operations in a method of fabricating an integrated circuit structure, in accordance with an embodiment of the present disclosure.

Referring to FIG. 43A, a plurality of active gate lines 4304 are formed over a semiconductor fin 4302 formed above a substrate 4300. Dummy gate lines 4306 are at the ends of the semiconductor fin 4302. A dielectric layer 4310 is between the active gate lines 4304, between the dummy gate lines 4306 and the active gate lines 4304, and outside the dummy gate lines 4306. Embedded source or drain structures 4308 are in the semiconductor fin 4302 between the active gate lines 4304 and between the dummy gate lines 4306 and the active gate lines 4304. The active gate lines 4304 include a gate dielectric layer 4312, a work function gate electrode portion 4314 and a fill gate electrode portion 4316, and a dielectric cap layer 4318. Dielectric spacers 4320 are along the sidewalls of the active gate lines 4304 and the dummy gate lines 4306.
Referring to FIG. 43B, portions of the dielectric layer 4310 between the active gate lines 4304 and between the dummy gate lines 4306 and the active gate lines 4304 are removed to provide openings 4330 at locations where trench contacts are to be formed. Removing the portions of the dielectric layer 4310 between the active gate lines 4304 and between the dummy gate lines 4306 and the active gate lines 4304 may result in erosion of the embedded source or drain structures 4308, providing etched embedded source or drain structures 4332 having saddle-shaped upper portions, as shown in FIG. 43B.

Referring to FIG. 43C, trench contacts 4334 are formed in the openings 4330 between the active gate lines 4304 and between the dummy gate lines 4306 and the active gate lines 4304. Each of the trench contacts 4334 can include a metal contact layer 4336 and a conductive fill material 4338.

FIG. 44 illustrates a cross-sectional view, taken along the b-b' axis of FIG. 42, of an integrated circuit structure, in accordance with an embodiment of the present disclosure.

Referring to FIG. 44, a fin 4402 is shown above a substrate 4400. A lower portion of the fin 4402 is surrounded by a trench isolation material 4404. An upper portion of the fin 4402 has been removed to enable embedded source and drain structures 4406 to be grown. A trench contact 4408 is formed in an opening of a dielectric layer 4410 that exposes the embedded source and drain structures 4406. The trench contact includes a metal contact layer 4412 and a conductive fill material 4414. It will be appreciated that, in accordance with an embodiment, the metal contact layer 4412 extends to the top of the trench contact 4408, as shown in FIG. 44. However, in another embodiment, the metal contact layer 4412 does not extend to the top of the trench contact 4408 and is somewhat recessed within the trench contact 4408, for example, similar to the depiction of the metal contact layer 4336 in FIG. 43C.

Thus, referring collectively to FIG. 42, FIGS. 43A-43C, and FIG. 44, in accordance with an embodiment of the present disclosure, an integrated circuit structure includes a semiconductor fin (4200, 4302, 4402) over a substrate (4300, 4400). The semiconductor fin (4200, 4302, 4402) has a top and sidewalls. A gate electrode (4204, 4304) is over the top of, and adjacent the sidewalls of, a portion of the semiconductor fin (4200, 4302, 4402). The gate electrode (4204, 4304) defines a channel region in the semiconductor fin (4200, 4302, 4402). A first semiconductor source or drain structure (4251, 4332, 4406) is at a first end of the channel region on a first side of the gate electrode (4204, 4304); the first semiconductor source or drain structure (4251, 4332, 4406) has a non-flat topography. A second semiconductor source or drain structure (4252, 4332, 4406) is at a second end of the channel region on a second side of the gate electrode (4204, 4304), the second end opposite the first end and the second side opposite the first side. The second semiconductor source or drain structure (4252, 4332, 4406) has a non-flat topography. A metal contact material (4336, 4412) is directly on the first semiconductor source or drain structure (4251, 4332, 4406) and directly on the second semiconductor source or drain structure (4252, 4332, 4406).
The metal contact material (4336, 4412) is conformal with the non-flat topography of the first semiconductor source or drain structure (4251, 4332, 4406) and conformal with the non-flat topography of the second semiconductor source or drain structure (4252, 4332, 4406).

In an embodiment, the metal contact material (4336, 4412) has a total atomic composition including 95% or more of a single metal species. In one such embodiment, the metal contact material (4336, 4412) has a total atomic composition including 98% or more titanium. In a specific such embodiment, the total atomic composition of the metal contact material (4336, 4412) also includes 0.5-2% chlorine. In an embodiment, the metal contact material (4336, 4412) has a thickness variation of 30% or less along the non-flat topography of the first semiconductor source or drain structure (4251, 4332, 4406) and along the non-flat topography of the second semiconductor source or drain structure (4252, 4332, 4406).

In an embodiment, the non-flat topography of the first semiconductor source or drain structure (4251, 4332, 4406) and the non-flat topography of the second semiconductor source or drain structure (4252, 4332, 4406) both include a raised central portion and lower side portions, for example, as shown in FIG. 44. In an embodiment, the non-flat topography of the first semiconductor source or drain structure (4251, 4332, 4406) and the non-flat topography of the second semiconductor source or drain structure (4252, 4332, 4406) both include a saddle-shaped portion, for example, as shown in FIG. 43C.

In an embodiment, the first semiconductor source or drain structure (4251, 4332, 4406) and the second semiconductor source or drain structure (4252, 4332, 4406) both comprise silicon. In an embodiment, the first semiconductor source or drain structure (4251, 4332, 4406) and the second semiconductor source or drain structure (4252, 4332, 4406) further comprise germanium, for example as silicon germanium.

In an embodiment, the metal contact material (4336, 4412) directly on the first semiconductor source or drain structure (4251, 4332, 4406) is further along sidewalls of a trench in a dielectric layer (4320, 4410) overlying a portion of the first semiconductor source or drain structure (4251, 4332, 4406). In one such embodiment, the metal contact material (4336) thins along the sidewalls of the trench from a location (4336A) at the first semiconductor source or drain structure (4332) to a location (4336B) above the first semiconductor source or drain structure (4332), an example of which is shown in FIG. 43C. In an embodiment, a conductive fill material (4338, 4414) is over the metal contact material (4336, 4412) within the trench, as shown in FIGS. 43C and 44.

In an embodiment, the integrated circuit structure further includes a second semiconductor fin having a top and sidewalls (e.g., the upper fin 4200, 4302, 4402 of FIG. 42). The gate electrode (4204, 4304) is also over the top of, and adjacent the sidewalls of, a portion of the second semiconductor fin, the gate electrode defining a channel region in the second semiconductor fin. A third semiconductor source or drain structure (4253, 4332, 4406) is at a first end of the channel region of the second semiconductor fin on the first side of the gate electrode (4204, 4304); the third semiconductor source or drain structure has a non-flat topography.
A fourth semiconductor source or drain structure (4254, 4332, 4406) is at a second end of the channel region of the second semiconductor fin on the second side of the gate electrode (4204, 4304), the second end opposite the first end. The fourth semiconductor source or drain structure (4254, 4332, 4406) has a non-flat topography. The metal contact material (4336, 4412) is directly on the third semiconductor source or drain structure (4253, 4332, 4406) and directly on the fourth semiconductor source or drain structure (4254, 4332, 4406); the metal contact material (4336, 4412) is conformal with the non-flat topography of the third semiconductor source or drain structure (4253, 4332, 4406) and conformal with the non-flat topography of the fourth semiconductor source or drain structure (4254, 4332, 4406). In an embodiment, the metal contact material (4336, 4412) is continuous between the first semiconductor source or drain structure (4251, 4332, left-hand 4406) and the third semiconductor source or drain structure (4253, 4332, right-hand 4406), and is continuous between the second semiconductor source or drain structure (4252) and the fourth semiconductor source or drain structure (4254).

In another aspect, a hard mask material can be used to preserve (prevent erosion of) the dielectric material at trench line locations where a conductive trench contact is interrupted, such as at contact plug locations, and the hard mask material can remain in the final structure. For example, FIGS. 45A and 45B illustrate a plan view and a corresponding cross-sectional view, respectively, of an integrated circuit structure including trench contact plugs having a hard mask material thereon, in accordance with an embodiment of the present disclosure.

Referring to FIGS. 45A and 45B, in an embodiment, an integrated circuit structure 4500 includes a fin 4502A, such as a silicon fin. A plurality of gate structures 4506 are over the fin 4502A. Individual ones of the gate structures 4506 are along a direction 4508 orthogonal to the fin 4502A and have a pair of dielectric sidewall spacers 4510. A trench contact structure 4512 is over the fin 4502A and directly between the dielectric sidewall spacers 4510 of a first pair 4506A/4506B of the gate structures 4506. A contact plug 4514B is over the fin 4502A and directly between the dielectric sidewall spacers 4510 of a second pair 4506B/4506C of the gate structures 4506. The contact plug 4514B includes a lower dielectric material 4516 and an upper hard mask material 4518.

In an embodiment, the lower dielectric material 4516 of the contact plug 4514B comprises silicon and oxygen, such as a silicon oxide or silicon dioxide material. The upper hard mask material 4518 of the contact plug 4514B includes silicon and nitrogen, such as a silicon nitride, silicon-rich nitride, or silicon-depleted nitride material.

In an embodiment, the trench contact structure 4512 includes a lower conductive structure 4520 and a dielectric cap 4522 on the lower conductive structure 4520. In one embodiment, the dielectric cap 4522 of the trench contact structure 4512 has an upper surface that is coplanar with an upper surface of the upper hard mask material 4518 of the contact plug 4514B, as shown.

In an embodiment, individual ones of the plurality of gate structures 4506 include a gate electrode 4524 on a gate dielectric layer 4526. A dielectric cap 4528 is on the gate electrode 4524.
In one embodiment, the dielectric caps 4528 of the individual gate structures of the plurality of gate structures 4506 have an upper surface that is coplanar with the upper surface of the upper hard mask material 4518 of the contact plug 4514B, as shown. In an embodiment, although not shown, a thin oxide layer, such as a thermal or chemical silicon oxide or silicon dioxide layer, is between the fin 4502A and the gate dielectric layer 4526.

Referring again to Figures 45A and 45B, in an embodiment, the integrated circuit structure 4500 includes a plurality of fins 4502, such as a plurality of silicon fins. The individual fins of the plurality of fins 4502 are in a first direction 4504. A plurality of gate structures 4506 is over the plurality of fins 4502. The individual gate structures of the plurality of gate structures 4506 are in a second direction 4508 orthogonal to the first direction 4504. The individual gate structures of the plurality of gate structures 4506 have a pair of dielectric sidewall spacers 4510. A trench contact structure 4512 is over a first fin 4502A of the plurality of fins 4502 and directly between the dielectric sidewall spacers 4510 of a pair of the gate structures 4506. A contact plug 4514A is over a second fin 4502B of the plurality of fins 4502 and directly between the dielectric sidewall spacers 4510 of a pair of the gate structures 4506. As in the cross-sectional view of the contact plug 4514B, the contact plug 4514A includes a lower dielectric material 4516 and an upper hard mask material 4518.

In an embodiment, the lower dielectric material 4516 of the contact plug 4514A comprises silicon and oxygen, such as a silicon oxide or silicon dioxide material. The upper hard mask material 4518 of the contact plug 4514A comprises silicon and nitrogen, such as a silicon nitride, silicon-rich nitride or silicon-depleted nitride material.

In an embodiment, the trench contact structure 4512 includes a lower conductive structure 4520 and a dielectric cap 4522 on the lower conductive structure 4520. In one embodiment, the dielectric cap 4522 of the trench contact structure 4512 has an upper surface that is coplanar with the upper surface of the upper hard mask material 4518 of the contact plug 4514A or 4514B, as shown.

In an embodiment, the individual gate structures of the plurality of gate structures 4506 include a gate electrode 4524 on a gate dielectric layer 4526. A dielectric cap 4528 is on the gate electrode 4524. In one embodiment, the dielectric caps 4528 of the individual gate structures of the plurality of gate structures 4506 have an upper surface that is coplanar with the upper surface of the upper hard mask material 4518 of the contact plug 4514A or 4514B, as shown. In an embodiment, although not shown, a thin oxide layer, such as a thermal or chemical silicon oxide or silicon dioxide layer, is between the fins 4502 and the gate dielectric layer 4526.

One or more embodiments of the present disclosure are directed to a gate-aligned contact process. Such a process can be implemented to form contact structures for semiconductor structure fabrication, e.g., for integrated circuit fabrication. In an embodiment, a contact pattern is formed so as to be aligned to an existing gate pattern. In contrast, other approaches typically involve an additional lithography process with tight registration of a lithographic contact pattern to an existing gate pattern, in combination with a selective contact etch.
For example, another process may include patterning a poly (gate) grid in which the contacts and contact plugs are patterned separately.

In accordance with one or more embodiments described herein, a contact formation method involves forming a contact pattern that is essentially perfectly aligned to an existing gate pattern, while eliminating the use of lithographic operations having an ultra-tight registration budget. In one such embodiment, this approach enables the use of intrinsically highly selective wet etching (e.g., versus dry or plasma etching) to create the contact openings. In an embodiment, the contact pattern is formed by utilizing the existing gate pattern in combination with a contact plug lithography operation. In one such embodiment, this approach enables the elimination of the need for an otherwise critical lithography operation (as used in other approaches) to create the contact pattern. In an embodiment, the trench contact grid is not patterned separately, but is instead formed between poly (gate) lines. For example, in one such embodiment, the trench contact grid is formed after the gate grid is patterned but before the gate grid is cut.

Figures 46A-46D illustrate cross-sectional views representing various operations in a method of fabricating an integrated circuit structure including trench contact plugs having a hard mask material thereon, in accordance with an embodiment of the present disclosure.

Referring to Figure 46A, a method of fabricating an integrated circuit structure includes forming a plurality of fins, the individual fins 4602 of the plurality of fins being in a first direction 4604. The individual fins 4602 of the plurality of fins can include a diffusion region 4606. A plurality of gate structures 4608 is formed over the plurality of fins. The individual gate structures of the plurality of gate structures 4608 are in a second direction 4610 orthogonal to the first direction 4604 (e.g., direction 4610 is into and out of the page). A sacrificial material structure 4612 is formed between a first pair of the gate structures 4608. A contact plug 4614 is between a second pair of the gate structures 4608. The contact plug includes a lower dielectric material 4616. A hard mask material 4618 is on the lower dielectric material 4616.

In an embodiment, the gate structures 4608 include a sacrificial or dummy gate stack and dielectric spacers 4609. The sacrificial or dummy gate stack can be composed of a polycrystalline silicon or silicon nitride pillar, or some other sacrificial material, which can be referred to as a dummy gate material.

Referring to Figure 46B, the sacrificial material structure 4612 is removed from the structure of Figure 46A to form an opening 4620 between the first pair of the gate structures 4608.

Referring to Figure 46C, a trench contact structure 4622 is formed in the opening 4620 between the first pair of the gate structures 4608. Additionally, in an embodiment, the hard mask 4618 of Figures 46A and 46B is planarized as part of forming the trench contact structure 4622. The resulting completed contact plug 4614' includes the lower dielectric material 4616 and an upper hard mask material 4624 formed from the hard mask material 4618.

In an embodiment, the lower dielectric material 4616 of each of the contact plugs 4614' includes silicon and oxygen, and the upper hard mask material 4624 of each of the contact plugs 4614' includes silicon and nitrogen.
In an embodiment, each of the trench contact structures 4622 includes a lower conductive structure 4626 and a dielectric cap 4628 on the lower conductive structure 4626. In one embodiment, the dielectric cap 4628 of the trench contact structure 4622 has an upper surface that is coplanar with an upper surface of the upper hard mask material 4624 of the contact plugs 4614'.

Referring to Figure 46D, the sacrificial or dummy gate stacks of the gate structures 4608 are replaced in a replacement gate process scheme. In such a scheme, the dummy gate material, such as polycrystalline silicon or silicon nitride pillar material, is removed and replaced with a permanent gate electrode material. In one such embodiment, a permanent gate dielectric layer is also formed in the process, as opposed to being carried through from earlier processing.

Thus, permanent gate structures 4630 include a permanent gate dielectric layer 4632 and a permanent gate electrode layer or stack 4634. Additionally, in an embodiment, a top portion of the permanent gate structures 4630 is removed, for example by an etch process, and replaced with a dielectric cap 4636. In an embodiment, the dielectric caps 4636 of the individual permanent gate structures of the permanent gate structures 4630 have an upper surface that is coplanar with the upper surface of the upper hard mask material 4624 of the contact plugs 4614'.

Referring again to Figures 46A-46D, in an embodiment, the replacement gate process is performed after the trench contact structures 4622 are formed, as shown. In other embodiments, however, the replacement gate process is performed prior to forming the trench contact structures 4622.

In another aspect, contact over active gate (COAG) structures and processes are described. One or more embodiments of the present disclosure are directed to semiconductor structures or devices having one or more gate contact structures (e.g., gate contact vias) disposed over active portions of gate electrodes of the semiconductor structures or devices. One or more embodiments of the present disclosure are directed to methods of fabricating semiconductor structures or devices having one or more gate contact structures formed over active portions of gate electrodes of the semiconductor structures or devices. The approaches described herein can be used to reduce standard cell area by enabling gate contact formation over active gate regions. In one or more embodiments, the gate contact structures fabricated to contact the gate electrodes are self-aligned via structures.

In technologies where space and layout constraints are somewhat relaxed compared with current generation space and layout constraints, a contact to a gate structure can be fabricated by making contact to a portion of the gate electrode disposed over an isolation region. As an example, Figure 47A illustrates a plan view of a semiconductor device having a gate contact disposed over an inactive portion of a gate electrode.

Referring to Figure 47A, a semiconductor structure or device 4700A includes a diffusion or active region 4704 disposed in a substrate 4702, and within an isolation region 4706. One or more gate lines (also known as poly lines), such as gate lines 4708A, 4708B and 4708C, are disposed over the diffusion or active region 4704 as well as over a portion of the isolation region 4706. Source or drain contacts (also known as trench contacts), such as contacts 4710A and 4710B, are disposed over source and drain regions of the semiconductor structure or device 4700A.
Trench contact vias 4712A and 4712B provide contact to the trench contacts 4710A and 4710B, respectively. A separate gate contact 4714, and an overlying gate contact via 4716, provide contact to gate line 4708B. In contrast to the source or drain trench contacts 4710A or 4710B, the gate contact 4714 is disposed, from a plan view perspective, over the isolation region 4706, but not over the diffusion or active region 4704. Furthermore, neither the gate contact 4714 nor the gate contact via 4716 is disposed between the source or drain trench contacts 4710A and 4710B.

Figure 47B illustrates a cross-sectional view of a non-planar semiconductor device having a gate contact disposed over an inactive portion of a gate electrode. Referring to Figure 47B, a semiconductor structure or device 4700B, e.g., a non-planar version of the device 4700A of Figure 47A, includes a non-planar diffusion or active region 4704B (e.g., a fin structure) formed from the substrate 4702, and within the isolation region 4706. Gate line 4708B is disposed over the non-planar diffusion or active region 4704B as well as over a portion of the isolation region 4706. As shown, the gate line 4708B includes a gate electrode 4750 and a gate dielectric layer 4752, along with a dielectric cap layer 4754. Also seen from this perspective are the gate contact 4714 and overlying gate contact via 4716, along with an overlying metal interconnect 4760, all of which are disposed in an interlayer dielectric stack or layer 4770. Also seen from the perspective of Figure 47B, the gate contact 4714 is disposed over the isolation region 4706, but not over the non-planar diffusion or active region 4704B.

Referring again to Figures 47A and 47B, the arrangement of the semiconductor structures or devices 4700A and 4700B, respectively, places the gate contact over isolation regions. Such an arrangement wastes layout space. However, placing the gate contact over active regions would require either an extremely tight registration budget, or the gate dimensions would have to increase in order to provide enough space on which to land the gate contact. Furthermore, historically, contact to the gate over diffusion regions has been avoided for risk of drilling through the gate material (e.g., polysilicon) and contacting the underlying active region. One or more embodiments described herein address the above issues by providing feasible approaches, and the resulting structures, for fabricating contact structures that contact portions of a gate electrode formed over a diffusion or active region.

As an example, Figure 48A illustrates a plan view of a semiconductor device having a gate contact via disposed over an active portion of a gate electrode, in accordance with an embodiment of the present disclosure. Referring to Figure 48A, a semiconductor structure or device 4800A includes a diffusion or active region 4804 disposed in a substrate 4802, and within an isolation region 4806. One or more gate lines, such as gate lines 4808A, 4808B and 4808C, are disposed over the diffusion or active region 4804 as well as over a portion of the isolation region 4806. Source or drain contacts, such as contacts 4810A and 4810B, are disposed over source and drain regions of the semiconductor structure or device 4800A. Trench contact vias 4812A and 4812B provide contact to the trench contacts 4810A and 4810B, respectively. A gate contact via 4816, with no intervening separate gate contact layer, provides contact to gate line 4808B.
In contrast to Figure 47A, from a plan view perspective, the gate contact via 4816 is disposed over the diffusion or active region 4804 and between the source or drain contacts 4810A and 4810B.

Figure 48B illustrates a cross-sectional view of a non-planar semiconductor device having a gate contact via disposed over an active portion of a gate electrode, in accordance with an embodiment of the present disclosure. Referring to Figure 48B, a semiconductor structure or device 4800B, e.g., a non-planar version of the device 4800A of Figure 48A, includes a non-planar diffusion or active region 4804B (e.g., a fin structure) formed from the substrate 4802, and within the isolation region 4806. Gate line 4808B is disposed over the non-planar diffusion or active region 4804B as well as over a portion of the isolation region 4806. As shown, the gate line 4808B includes a gate electrode 4850 and a gate dielectric layer 4852, along with a dielectric cap layer 4854. Also seen from this perspective are the gate contact via 4816, along with an overlying metal interconnect 4860, both of which are disposed in an interlayer dielectric stack or layer 4870. Also seen from the perspective of Figure 48B, the gate contact via 4816 is disposed over the non-planar diffusion or active region 4804B.

Thus, referring again to Figures 48A and 48B, in an embodiment, the trench contact vias 4812A, 4812B and the gate contact via 4816 are formed in a same layer and are essentially coplanar. In comparison with Figures 47A and 47B, the contact to the gate line would otherwise include an additional gate contact layer, e.g., one that could run perpendicular to the corresponding gate line. In the structures described in association with Figures 48A and 48B, however, the fabrication of structures 4800A and 4800B, respectively, enables a contact to land directly from the metal interconnect layer onto an active gate portion without shorting to the adjacent source or drain regions. In an embodiment, such an arrangement provides a large area reduction in circuit layout by eliminating the need to extend transistor gates over isolation regions in order to form reliable contacts. As used throughout this document, in an embodiment, reference to an active portion of a gate refers to that portion of a gate line or structure disposed over (from a plan view perspective) an active or diffusion region of an underlying substrate. In an embodiment, reference to an inactive portion of a gate refers to that portion of a gate line or structure disposed over (from a plan view perspective) an isolation region of the underlying substrate.

In an embodiment, the semiconductor structure or device 4800A/4800B is a non-planar device such as, but not limited to, a FinFET or a tri-gate device. In such an embodiment, a corresponding semiconducting channel region is composed of, or is formed in, a three-dimensional body. In one such embodiment, the gate electrode stacks of the gate lines 4808A-4808C surround at least a top surface and a pair of sidewalls of the three-dimensional body. In another embodiment, at least the channel region is made to be a discrete three-dimensional body, such as in a gate-all-around device. In one such embodiment, the gate electrode stacks of the gate lines 4808A-4808C each completely surround the channel region.

More generally, one or more embodiments relate to approaches for, and structures formed by, landing a gate contact via directly on an active transistor gate.
Such approaches eliminate the need to extend a gate line over isolation regions for contact purposes. Such approaches also eliminate the need for a separate gate contact (GCN) layer to conduct signals from a gate line or structure. In an embodiment, eliminating the above features is achieved by recessing the contact metal in a trench contact (TCN) and introducing an additional dielectric material into the process flow (e.g., TILA). The additional dielectric material is included as a trench contact dielectric cap layer having etch characteristics different from those of the gate dielectric material cap layer (e.g., GILA) already used for trench contact alignment in a gate aligned contact process (GAP) processing scheme.

As an exemplary fabrication scheme, Figures 49A-49D illustrate cross-sectional views representing various operations in a method of fabricating a semiconductor structure having a gate contact structure disposed over an active portion of a gate, in accordance with an embodiment of the present disclosure.

Referring to Figure 49A, a semiconductor structure 4900 is provided following trench contact (TCN) formation. It is to be appreciated that the particular arrangement of the structure 4900 is used for illustrative purposes only, and that a variety of possible arrangements may benefit from the embodiments of the disclosure described herein. The semiconductor structure 4900 includes one or more gate stack structures, such as gate stack structures 4908A-4908E, disposed over a substrate 4902. The gate stack structures can include a gate dielectric layer and a gate electrode. Trench contacts, such as trench contacts 4910A-4910C, e.g., contacts to diffusion regions of the substrate 4902, are also included in the structure 4900 and are separated from the gate stack structures 4908A-4908E by dielectric spacers 4920. An insulating cap layer 4922 (e.g., GILA) can be disposed on the gate stack structures 4908A-4908E, as is also shown in Figure 49A. As is also shown in Figure 49A, contact blocking regions or "contact plugs" (e.g., region 4923 fabricated from an interlayer dielectric material) can be included in regions where contact formation is to be blocked.

In an embodiment, providing the structure 4900 involves forming a contact pattern that is essentially perfectly aligned to an existing gate pattern, while eliminating the use of lithographic operations having an ultra-tight registration budget. In one such embodiment, this approach enables the use of intrinsically highly selective wet etching (e.g., versus dry or plasma etching) to create the contact openings. In an embodiment, the contact pattern is formed by utilizing the existing gate pattern in combination with a contact plug lithography operation. In one such embodiment, this approach enables the elimination of the need for an otherwise critical lithography operation (as used in other approaches) to create the contact pattern. In an embodiment, the trench contact grid is not patterned separately, but is instead formed between poly (gate) lines. For example, in one such embodiment, the trench contact grid is formed after the gate grid is patterned but before the gate grid is cut.

Additionally, the gate stack structures 4908A-4908E can be fabricated by a replacement gate process. In such a scheme, the dummy gate material, such as polysilicon or silicon nitride pillar material, can be removed and replaced with a permanent gate electrode material. In one such embodiment, a permanent gate dielectric layer is also formed in the process, as opposed to being carried through from earlier processing.
In an embodiment, the dummy gates are removed by a dry etch or wet etch process. In one embodiment, the dummy gates are composed of polycrystalline silicon or amorphous silicon and are removed with a dry etch process including SF6. In another embodiment, the dummy gates are composed of polycrystalline silicon or amorphous silicon and are removed with a wet etch process including water-based NH4OH or tetraethylammonium hydroxide. In one embodiment, the dummy gates are composed of silicon nitride and are removed with a wet etch including water-based phosphoric acid.

In an embodiment, one or more approaches described herein essentially contemplate a dummy and replacement gate process in combination with a dummy and replacement contact process to arrive at the structure 4900. In one such embodiment, the replacement contact process is performed after the replacement gate process to allow high temperature annealing of at least a portion of the permanent gate stack. For example, in a specific such embodiment, an anneal of at least a portion of the permanent gate structure, e.g., after the gate dielectric layer is formed, is performed at a temperature greater than approximately 600 degrees Celsius. The anneal is performed prior to formation of the permanent contacts.

Referring to Figure 49B, the trench contacts 4910A-4910C of the structure 4900 are recessed relative to the spacers 4920 to provide recessed trench contacts 4911A-4911C having a height below the heights of the spacers 4920 and the insulating cap layer 4922. An insulating cap layer 4924 (e.g., TILA) is then formed on the recessed trench contacts 4911A-4911C. In accordance with an embodiment of the present disclosure, the insulating cap layer 4924 on the recessed trench contacts 4911A-4911C is composed of a material having etch characteristics different from those of the insulating cap layer 4922 on the gate stack structures 4908A-4908E. As will be seen in subsequent processing operations, such a difference can be used to etch one of the layers 4922/4924 selectively with respect to the other.

The trench contacts 4910A-4910C may be recessed by a process selective to the materials of the spacers 4920 and the insulating cap layer 4922. For example, in one embodiment, the trench contacts 4910A-4910C are recessed by an etch process such as a wet etch process or a dry etch process. The insulating cap layer 4924 can be formed by a process suitable to provide a conformal and sealing layer over the exposed portions of the trench contacts 4910A-4910C. For example, in one embodiment, the insulating cap layer 4924 is formed as a conformal layer over the entire structure by a chemical vapor deposition (CVD) process. The conformal layer is then planarized, e.g., by chemical mechanical polishing (CMP), to provide the insulating cap layer 4924 material only over the trench contacts 4910A-4910C, and to re-expose the spacers 4920 and the insulating cap layer 4922.

With respect to suitable material combinations for the insulating cap layers 4922/4924, in one embodiment, one of the pair 4922/4924 is composed of silicon oxide and the other is composed of silicon nitride. In another embodiment, one of the pair 4922/4924 is composed of silicon oxide and the other is composed of carbon-doped silicon nitride. In another embodiment, one of the pair 4922/4924 is composed of silicon oxide and the other is composed of silicon carbide. In another embodiment, one of the pair 4922/4924 is composed of silicon nitride and the other is composed of carbon-doped silicon nitride. In another embodiment, one of the pair 4922/4924 is composed of silicon nitride and the other is composed of silicon carbide. In another embodiment, one of the pair 4922/4924 is composed of carbon-doped silicon nitride and the other is composed of silicon carbide.
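The cap material pairings enumerated above are, in effect, a small compatibility table: either member of a pair may serve as the gate cap or the trench contact cap, so long as the two differ. The sketch below is illustrative bookkeeping only; the list contents come from the embodiments above, while the data structure and names are invented for illustration.

```python
# Enumerates the 4922/4924 insulating cap material pairings described above.
# What matters is that the gate cap (GILA) and trench contact cap (TILA)
# materials differ, so one can be etched selectively with respect to the other.
CAP_MATERIAL_PAIRS = [
    ("silicon oxide", "silicon nitride"),
    ("silicon oxide", "carbon-doped silicon nitride"),
    ("silicon oxide", "silicon carbide"),
    ("silicon nitride", "carbon-doped silicon nitride"),
    ("silicon nitride", "silicon carbide"),
    ("carbon-doped silicon nitride", "silicon carbide"),
]

def valid_pairing(gila: str, tila: str) -> bool:
    """A pairing is usable if it appears in the list, in either order."""
    return (gila, tila) in CAP_MATERIAL_PAIRS or (tila, gila) in CAP_MATERIAL_PAIRS

print(valid_pairing("silicon nitride", "silicon carbide"))  # True
print(valid_pairing("silicon oxide", "silicon oxide"))      # False: no etch contrast
```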
Referring to Figure 49C, a stack of an interlayer dielectric (ILD) 4930 and a hard mask 4932 is formed over the structure of Figure 49B and patterned, e.g., to provide patterned metal (0) trenches 4934.

The interlayer dielectric (ILD) 4930 can be composed of a material suitable to electrically isolate the metal features ultimately formed therein, while maintaining a robust structure through front-end and back-end processing. Furthermore, in an embodiment, the composition of the ILD 4930 is selected to be consistent with the via etch selectivity for patterning the trench contact dielectric cap layer, as described in greater detail below in association with Figure 49D. In one embodiment, the ILD 4930 is composed of a single or several layers of silicon oxide, or a single or several layers of a carbon-doped oxide (CDO) material. In other embodiments, however, the ILD 4930 has a bi-layer composition, with a top portion composed of a material different from a lower bottom portion of the ILD 4930. The hard mask layer 4932 can be composed of a material suitable to behave as a subsequent sacrificial layer. For example, in one embodiment, the hard mask layer 4932 is composed substantially of carbon, e.g., as a layer of cross-linked organic polymer. In other embodiments, a silicon nitride or carbon-doped silicon nitride layer is used as the hard mask 4932. The stack of the interlayer dielectric (ILD) 4930 and the hard mask 4932 can be patterned by lithography and etch processing.

Referring to Figure 49D, via openings 4936 (e.g., VCT) are formed in the interlayer dielectric (ILD) 4930, extending from the metal (0) trenches 4934 to one or more of the recessed trench contacts 4911A-4911C. For example, in Figure 49D, via openings are formed to expose the recessed trench contacts 4911A and 4911C. Forming the via openings 4936 includes etching both the interlayer dielectric (ILD) 4930 and corresponding portions of the insulating cap layer 4924. In one such embodiment, portions of the insulating cap layer 4922 are exposed during the patterning of the interlayer dielectric (ILD) 4930 (e.g., the portions of the insulating cap layer 4922 that are over the gate stack structures 4908B and 4908E are exposed). In that embodiment, the insulating cap layer 4924 is etched selectively with respect to the insulating cap layer 4922 (i.e., without significantly etching or otherwise impacting the insulating cap layer 4922) to form the via openings 4936.

In one embodiment, the via opening pattern is ultimately transferred into the insulating cap layer 4924 (i.e., the trench contact insulating cap layer) by an etch process without etching the insulating cap layer 4922 (i.e., the gate insulating cap layer). The insulating cap layer 4924 (TILA) may be composed of any of the following materials, or combinations thereof: silicon oxide, silicon nitride, silicon carbide, carbon-doped silicon nitride, carbon-doped silicon oxide, amorphous silicon, or various metal oxides and silicides, including zirconium oxide and yttrium oxide, or combinations thereof. The layer can be deposited using any of the following techniques: CVD, ALD, PECVD, PVD, HDP-assisted CVD, or low temperature CVD. A corresponding plasma dry etch has been developed as a combination of chemical and physical sputtering mechanisms. Consistent polymer deposition can be used to control the material removal rate, the etch profile, and the film selectivity. The dry etch is typically generated using a mixture of gases including NF3, CHF3, C4F8, HBr and O2, typically at a pressure in the range of 30-100 mTorr and with a plasma bias of 50-1000 watts. The dry etch can be designed to achieve significant etch selectivity between the cap layer 4924 (TILA) and the cap layer 4922 (GILA), in order to minimize the loss of 4922 (GILA) during the dry etch of 4924 (TILA), thereby forming a contact to the source/drain region of the transistor.
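As a rough, hypothetical illustration of the selectivity requirement just described, the following sketch estimates the GILA loss incurred while clearing a TILA cap. Only the gas set, pressure range and bias range come from the description above; the thicknesses and etch rates are invented for illustration.

```python
# Illustrative TILA-vs-GILA dry etch budget. The process window described above
# uses NF3, CHF3, C4F8, HBr and O2 chemistries at 30-100 mTorr with a plasma
# bias of 50-1000 W; the numeric thicknesses and etch rates below are hypothetical.

tila_thickness_nm = 30.0
tila_rate_nm_per_s = 2.0   # hypothetical etch rate of cap layer 4924 (TILA)
gila_rate_nm_per_s = 0.1   # hypothetical etch rate of cap layer 4922 (GILA)

selectivity = tila_rate_nm_per_s / gila_rate_nm_per_s   # 20:1 TILA over GILA
etch_time_s = tila_thickness_nm / tila_rate_nm_per_s    # 15 s to clear the TILA cap
gila_loss_nm = gila_rate_nm_per_s * etch_time_s         # 1.5 nm of GILA consumed

print(f"selectivity {selectivity:.0f}:1, GILA loss {gila_loss_nm:.1f} nm")
```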
Referring again to Figure 49D, it is to be appreciated that a similar approach can be implemented to fabricate a via opening pattern that is ultimately transferred into the insulating cap layer 4922 (i.e., the gate insulating cap layer) by an etch process without etching the insulating cap layer 4924 (i.e., the trench contact insulating cap layer).

To further exemplify contact over active gate (COAG) technology, Figure 50 illustrates a plan view and corresponding cross-sectional views of an integrated circuit structure having trench contacts that include an overlying insulating cap layer, in accordance with an embodiment of the present disclosure.

Referring to Figure 50, an integrated circuit structure 5000 includes gate lines 5004 over a fin 5002 of a semiconductor substrate, such as a silicon fin. The gate lines 5004 include a gate stack 5005 (e.g., including a gate dielectric layer or stack and a gate electrode on the gate dielectric layer or stack) and a gate insulating cap layer 5006 on the gate stack 5005. Dielectric spacers 5008 are along the sidewalls of the gate stack 5005 and, in an embodiment, along the sidewalls of the gate insulating cap layer 5006, as shown.

Trench contacts 5010 are adjacent the sidewalls of the gate lines 5004, with the dielectric spacers 5008 between the gate lines 5004 and the trench contacts 5010. Individual ones of the trench contacts 5010 include a conductive contact structure 5011 and a trench contact insulating cap layer 5012 on the conductive contact structure 5011.

Referring again to Figure 50, a gate contact via 5014 is formed in an opening of the gate insulating cap layer 5006 and electrically contacts the gate stack 5005. In an embodiment, the gate contact via 5014 electrically contacts the gate stack 5005 at a location over the semiconductor substrate or fin 5002 and laterally between the trench contacts 5010, as shown. In one such embodiment, the trench contact insulating cap layer 5012 on the conductive contact structures 5011 prevents the gate contact via 5014 from gate-to-source shorting or gate-to-drain shorting.

Referring again to Figure 50, trench contact vias 5016 are formed in openings of the trench contact insulating cap layer 5012 and electrically contact the corresponding conductive contact structures 5011. In an embodiment, the trench contact vias 5016 electrically contact the corresponding conductive contact structures 5011 at locations over the semiconductor substrate or fin 5002 and laterally adjacent the gate stacks 5005 of the gate lines 5004, as shown. In one such embodiment, the gate insulating cap layer 5006 on the gate stacks 5005 prevents the trench contact vias 5016 from source-to-gate shorting or drain-to-gate shorting.

It is to be appreciated that differing structural relationships between an insulating gate cap layer and an insulating trench contact cap layer can be fabricated.
By way of example, Figures 51A-51F illustrate cross-sectional views of various integrated circuit structures, each having trench contacts including an overlying insulating cap layer and having gate stacks including an overlying insulating cap layer, in accordance with an embodiment of the present disclosure.

Referring to Figures 51A, 51B and 51C, integrated circuit structures 5100A, 5100B and 5100C, respectively, include a fin 5102, such as a silicon fin. Although depicted as a cross-sectional view, it is to be appreciated that the fin 5102 has a top 5102A and sidewalls (into and out of the page of the perspective shown). First 5104 and second 5106 gate dielectric layers are over the top 5102A of the fin 5102 and laterally adjacent the sidewalls of the fin 5102. First 5108 and second 5110 gate electrodes are over the first 5104 and second 5106 gate dielectric layers, respectively, over the top 5102A of the fin 5102 and laterally adjacent the sidewalls of the fin 5102. The first 5108 and second 5110 gate electrodes each include a conformal conductive layer 5109A (e.g., a work function setting layer) and a conductive fill material 5109B over the conformal conductive layer 5109A. The first 5108 and second 5110 gate electrodes both have a first side 5112 and a second side 5114 opposite the first side 5112. The first 5108 and second 5110 gate electrodes also both have an insulating cap 5116 having a top surface 5118.

A first dielectric spacer 5120 is adjacent the first side 5112 of the first gate electrode 5108. A second dielectric spacer 5122 is adjacent the second side 5114 of the second gate electrode 5110. A semiconductor source or drain region 5124 is adjacent the first 5120 and second 5122 dielectric spacers. A trench contact structure 5126 is over the semiconductor source or drain region 5124, adjacent the first 5120 and second 5122 dielectric spacers.

The trench contact structure 5126 includes an insulating cap 5128 on a conductive structure 5130. The insulating cap 5128 of the trench contact structure 5126 has a top surface 5129 substantially coplanar with the top surfaces 5118 of the insulating caps 5116 of the first 5108 and second 5110 gate electrodes. In an embodiment, the insulating cap 5128 of the trench contact structure 5126 extends laterally into recesses 5132 in the first 5120 and second 5122 dielectric spacers. In such an embodiment, the insulating cap 5128 of the trench contact structure 5126 overhangs the conductive structure 5130 of the trench contact structure 5126. In other embodiments, however, the insulating cap 5128 of the trench contact structure 5126 does not extend laterally into the recesses 5132 in the first 5120 and second 5122 dielectric spacers, and hence does not overhang the conductive structure 5130 of the trench contact structure 5126.

It is to be appreciated that the conductive structure 5130 of the trench contact structure 5126 may not be rectangular, as depicted in Figures 51A-51C. For example, the conductive structure 5130 of the trench contact structure 5126 can have a cross-sectional geometry similar to or the same as the geometry shown for the conductive structure 5130A illustrated in the projection of Figure 51A.

In an embodiment, the insulating cap 5128 of the trench contact structure 5126 has a composition different from that of the insulating caps 5116 of the first 5108 and second 5110 gate electrodes.
In one such embodiment, the insulating cap 5128 of the trench contact structure 5126 includes a carbide material, such as a silicon carbide material, and the insulating caps 5116 of the first 5108 and second 5110 gate electrodes include a nitride material, such as a silicon nitride material.

In an embodiment, the insulating caps 5116 of the first 5108 and second 5110 gate electrodes both have a bottom surface 5117A below a bottom surface 5128A of the insulating cap 5128 of the trench contact structure 5126, as depicted in Figure 51A. In another embodiment, the insulating caps 5116 of the first 5108 and second 5110 gate electrodes both have a bottom surface 5117B substantially coplanar with a bottom surface 5128B of the insulating cap 5128 of the trench contact structure 5126, as depicted in Figure 51B. In another embodiment, the insulating caps 5116 of the first 5108 and second 5110 gate electrodes both have a bottom surface 5117C above a bottom surface 5128C of the insulating cap 5128 of the trench contact structure 5126, as depicted in Figure 51C.

In an embodiment, the conductive structure 5130 of the trench contact structure 5126 includes a U-shaped metal layer 5134, a T-shaped metal layer 5136 on and over the U-shaped metal layer 5134, and a third metal layer 5138 on the T-shaped metal layer 5136. The insulating cap 5128 of the trench contact structure 5126 is on the third metal layer 5138. In one such embodiment, the third metal layer 5138 and the U-shaped metal layer 5134 include titanium, and the T-shaped metal layer 5136 includes cobalt. In a specific such embodiment, the T-shaped metal layer 5136 further includes carbon.

In an embodiment, a metal silicide layer 5140 is directly between the conductive structure 5130 of the trench contact structure 5126 and the semiconductor source or drain region 5124. In one such embodiment, the metal silicide layer 5140 includes titanium and silicon. In a specific such embodiment, the semiconductor source or drain region 5124 is an N-type semiconductor source or drain region. In another embodiment, the metal silicide layer 5140 includes nickel, platinum and silicon. In a specific such embodiment, the semiconductor source or drain region 5124 is a P-type semiconductor source or drain region. In another specific such embodiment, the metal silicide layer further includes germanium.

In an embodiment, referring to Figure 51D, a conductive via 5150 is over and electrically connected to a portion of the first gate electrode 5108 over the top 5102A of the fin 5102. The conductive via 5150 is in an opening 5152 of the insulating cap 5116 of the first gate electrode 5108. In one such embodiment, the conductive via 5150 is on a portion of the insulating cap 5128 of the trench contact structure 5126, but is not electrically connected to the conductive structure 5130 of the trench contact structure 5126. In a specific such embodiment, the conductive via 5150 is in an etched portion 5154 of the insulating cap 5128 of the trench contact structure 5126.

In an embodiment, referring to Figure 51E, a conductive via 5160 is over and electrically connected to a portion of the trench contact structure 5126. The conductive via 5160 is in an opening 5162 of the insulating cap 5128 of the trench contact structure 5126.
In one such embodiment, the conductive via 5160 is on portions of the insulating caps 5116 of the first 5108 and second 5110 gate electrodes, but is not electrically connected to the first 5108 and second 5110 gate electrodes. In a specific such embodiment, the conductive via 5160 is in etched portions 5164 of the insulating caps 5116 of the first 5108 and second 5110 gate electrodes.

Referring again to Figure 51E, in an embodiment, the conductive via 5160 is a second conductive via having the same structure as the conductive via 5150 of Figure 51D. In one such embodiment, such a second conductive via 5160 is isolated from the conductive via 5150. In another such embodiment, such a second conductive via 5160 merges with the conductive via 5150 to form an electrically shorted contact 5170, as depicted in Figure 51F.

The approaches and structures described herein may enable the formation of other structures or devices that were not possible, or were difficult to fabricate, using other methodologies. In a first example, Figure 52A illustrates a plan view of another semiconductor device having a gate contact via disposed over an active portion of a gate, in accordance with another embodiment of the present disclosure. Referring to Figure 52A, a semiconductor structure or device 5200 includes a plurality of gate structures 5208A-5208C interdigitated with a plurality of trench contacts 5210A and 5210B (these features are disposed over an active region of a substrate, not shown). A gate contact via 5280 is formed on an active portion of the gate structure 5208B. The gate contact via 5280 is further disposed on an active portion of the gate structure 5208C, coupling the gate structures 5208B and 5208C. It is to be appreciated that the intervening trench contact 5210B can be isolated from the contact 5280 by using a trench contact isolation cap layer (e.g., TILA). The contact configuration of Figure 52A can provide an easier approach to strapping adjacent gate lines in a layout, without the need to route the strap through upper metallization layers, hence enabling a smaller cell area, or less complicated wiring schemes, or both.

In a second example, Figure 52B illustrates a plan view of another semiconductor device having a trench contact via coupling a pair of trench contacts, in accordance with another embodiment of the present disclosure. Referring to Figure 52B, a semiconductor structure or device 5250 includes a plurality of gate structures 5258A-5258C interdigitated with a plurality of trench contacts 5260A and 5260B (these features are disposed over an active region of a substrate, not shown). A trench contact via 5290 is formed on the trench contact 5260A. The trench contact via 5290 is further disposed on the trench contact 5260B, coupling the trench contacts 5260A and 5260B. It is to be appreciated that the intervening gate structure 5258B can be isolated from the trench contact via 5290 by using a gate isolation cap layer (e.g., by a GILA process). The contact configuration of Figure 52B can provide an easier approach to strapping adjacent trench contacts in a layout, without the need to route the strap through upper metallization layers, hence enabling a smaller cell area, or less complicated wiring schemes, or both.

An insulating cap layer for a gate electrode can be fabricated using several deposition operations and, as a result, the insulating cap layer can include artifacts of a multi-deposition fabrication process.
For example, Figures 53A-53E illustrate cross-sectional views representing various operations in a method of fabricating an integrated circuit structure including a gate stack having an overlying insulating cap layer, in accordance with an embodiment of the present disclosure.

Referring to Figure 53A, a starting structure 5300 includes a gate stack 5304 over a substrate or fin 5302. The gate stack 5304 includes a gate dielectric layer 5306, a conformal conductive layer 5308, and a conductive fill material 5310. In an embodiment, the gate dielectric layer 5306 is a high-k gate dielectric layer formed using an atomic layer deposition (ALD) process, and the conformal conductive layer 5308 is a work function layer formed using an ALD process. In one such embodiment, a thermal or chemical oxide layer 5312, such as a thermal or chemical silicon oxide or silicon dioxide layer, is between the substrate or fin 5302 and the gate dielectric layer 5306. Dielectric spacers 5314, such as silicon nitride spacers, are adjacent the sidewalls of the gate stack 5304. The gate stack 5304 and the dielectric spacers 5314 are housed in an interlayer dielectric (ILD) layer 5316. In an embodiment, the gate stack 5304 is formed using a replacement gate and replacement gate dielectric processing scheme. A mask 5318 is patterned over the gate stack 5304 and the ILD layer 5316 to provide an opening 5320 exposing the gate stack 5304.

Referring to Figure 53B, the gate stack 5304, including the gate dielectric layer 5306, the conformal conductive layer 5308 and the conductive fill material 5310, is recessed relative to the dielectric spacers 5314 and the ILD layer 5316 using one or more selective etch processes. The mask 5318 is then removed. The recessing provides a cavity 5322 over a recessed gate stack 5324.

In another embodiment, not shown, the conformal conductive layer 5308 and the conductive fill material 5310 are recessed relative to the dielectric spacers 5314 and the ILD layer 5316, but the gate dielectric layer 5306 is not recessed, or the gate dielectric layer 5306 is only minimally recessed. It is to be appreciated that, in other embodiments, a maskless approach based on high etch selectivity is used for the recessing.

Referring to Figure 53C, a first deposition process of a multi-deposition process for fabricating a gate insulating cap layer is performed. The first deposition process is used to form a first insulating layer 5326 conformal with the structure of Figure 53B. In an embodiment, the first insulating layer 5326 includes silicon and nitrogen, e.g., the first insulating layer 5326 is a silicon nitride (Si3N4) layer, a silicon-rich silicon nitride layer, a silicon-deficient silicon nitride layer, or a carbon-doped silicon nitride layer. In an embodiment, the first insulating layer 5326 only partially fills the cavity 5322 over the recessed gate stack 5324, as shown.

Referring to Figure 53D, the first insulating layer 5326 is subjected to an etch-back process, such as an anisotropic etch process, to provide a first portion 5328 of the insulating cap layer. The first portion 5328 of the insulating cap layer only partially fills the cavity 5322 over the recessed gate stack 5324.

Referring to Figure 53E, additional alternating deposition and etch-back processes are performed until the cavity 5322 is filled with an insulating gate cap structure 5330 over the recessed gate stack 5324. Under cross-sectional analysis, seams 5332 may be evident and may indicate the number of alternating deposition and etch-back processes used to form the insulating gate cap structure 5330. In the example illustrated in Figure 53E, the presence of three sets of seams 5332A, 5332B and 5332C indicates that four alternating deposition and etch-back processes were used to form the insulating gate cap structure 5330. In an embodiment, the materials 5330A, 5330B, 5330C and 5330D of the insulating gate cap structure 5330, as delineated by the seams 5332, all have exactly or substantially the same composition.
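Because each deposition/etch-back cycle leaves one material portion, and adjacent portions meet at one seam, the cycle count can be read directly from a cross-section. A one-line sketch of this arithmetic (illustrative only):

```python
# In cross-section, N alternating deposition/etch-back cycles leave N material
# portions (e.g., 5330A-5330D) separated by N - 1 seams (e.g., 5332A-5332C).
def cycles_from_seams(seam_count: int) -> int:
    return seam_count + 1

print(cycles_from_seams(3))  # 4, matching the example of Figure 53E
```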
As described throughout the present application, a substrate can be composed of a semiconductor material that can withstand a manufacturing process and in which charge can migrate. In an embodiment, a substrate described herein is a bulk substrate composed of a crystalline silicon, silicon/germanium or germanium layer doped with a charge carrier, such as but not limited to phosphorus, arsenic, boron or a combination thereof, to form an active region. In one embodiment, the concentration of silicon atoms in such a bulk substrate is greater than 97%. In another embodiment, the bulk substrate is composed of an epitaxial layer grown atop a distinct crystalline substrate, e.g., a silicon epitaxial layer grown atop a boron-doped bulk silicon monocrystalline substrate. The bulk substrate may alternatively be composed of a group III-V material. In an embodiment, the bulk substrate is composed of a III-V material such as, but not limited to, gallium nitride, gallium phosphide, gallium arsenide, indium phosphide, indium antimonide, indium gallium arsenide, aluminum gallium arsenide, indium gallium phosphide, or a combination thereof. In one embodiment, the bulk substrate is composed of a III-V material and the charge-carrier dopant impurity atoms are ones such as, but not limited to, carbon, silicon, germanium, oxygen, sulfur, selenium or tellurium.

As described throughout the present application, isolation regions, such as shallow trench isolation regions or sub-fin isolation regions, may be composed of a material suitable to ultimately electrically isolate, or contribute to the isolation of, portions of a permanent gate structure from an underlying bulk substrate, or to isolate active regions formed within an underlying bulk substrate, such as isolating fin active regions. For example, in one embodiment, an isolation region is composed of one or more layers of a dielectric material such as, but not limited to, silicon dioxide, silicon oxy-nitride, silicon nitride, carbon-doped silicon nitride, or a combination thereof.

As described throughout the present application, gate lines or gate structures can be composed of a gate electrode stack that includes a gate dielectric layer and a gate electrode layer. In an embodiment, the gate electrode of the gate electrode stack is composed of a metal gate and the gate dielectric layer is composed of a high-k material. For example, in one embodiment, the gate dielectric layer is composed of a material such as, but not limited to, hafnium oxide, hafnium oxy-nitride, hafnium silicate, lanthanum oxide, zirconium oxide, zirconium silicate, tantalum oxide, barium strontium titanate, barium titanate, strontium titanate, yttrium oxide, aluminum oxide, lead scandium tantalum oxide, lead zinc niobate, or a combination thereof. Furthermore, a portion of the gate dielectric layer can include a layer of native oxide formed from the top few layers of the semiconductor substrate. In an embodiment, the gate dielectric layer is composed of a top high-k portion and a lower portion composed of an oxide of a semiconductor material.
In one embodiment, the gate dielectric layer is composed of a top portion of hafnium oxide and a bottom portion of silicon dioxide or silicon oxy-nitride. In some embodiments, a portion of the gate dielectric is a "U"-shaped structure that includes a bottom portion substantially parallel to the surface of the substrate and two sidewall portions that are substantially perpendicular to the top surface of the substrate.

In one embodiment, the gate electrode is composed of a metal layer such as, but not limited to, metal nitrides, metal carbides, metal silicides, metal aluminides, hafnium, zirconium, titanium, tantalum, aluminum, ruthenium, palladium, platinum, cobalt, nickel or conductive metal oxides. In a specific embodiment, the gate electrode is composed of a non-work-function-setting fill material formed above a metal work function setting layer. The gate electrode layer may be composed of a P-type work function metal or an N-type work function metal, depending on whether the transistor is to be a PMOS or an NMOS transistor. In some embodiments, the gate electrode layer may be composed of a stack of two or more metal layers, where one or more of the metal layers are work function metal layers and at least one of the metal layers is a conductive fill layer. For a PMOS transistor, metals that may be used for the gate electrode include, but are not limited to, ruthenium, palladium, platinum, cobalt, nickel, and conductive metal oxides, e.g., ruthenium oxide. A P-type metal layer will enable the formation of a PMOS gate electrode with a work function between about 4.9 eV and about 5.2 eV. For an NMOS transistor, metals that may be used for the gate electrode include, but are not limited to, hafnium, zirconium, titanium, tantalum, aluminum, alloys of these metals, and carbides of these metals, such as hafnium carbide, zirconium carbide, titanium carbide, tantalum carbide, and aluminum carbide. An N-type metal layer will enable the formation of an NMOS gate electrode with a work function between about 3.9 eV and about 4.2 eV. In some embodiments, the gate electrode may be composed of a "U"-shaped structure that includes a bottom portion substantially parallel to the surface of the substrate and two sidewall portions that are substantially perpendicular to the top surface of the substrate. In another embodiment, at least one of the metal layers that form the gate electrode may simply be a planar layer that is substantially parallel to the top surface of the substrate and does not include sidewall portions substantially perpendicular to the top surface of the substrate. In other embodiments of the present disclosure, the gate electrode may be composed of a combination of U-shaped structures and planar, non-U-shaped structures. For example, the gate electrode may be composed of one or more U-shaped metal layers formed atop one or more planar, non-U-shaped layers.
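The work function windows recited above admit a simple classification helper. In the following sketch, the electron-volt ranges are taken from the description; the function name and sample values are illustrative only.

```python
# Classify a gate metal work function against the windows described above:
# about 4.9-5.2 eV for a PMOS gate electrode, about 3.9-4.2 eV for NMOS.
def work_function_type(phi_ev: float) -> str:
    if 4.9 <= phi_ev <= 5.2:
        return "P-type work function metal (PMOS)"
    if 3.9 <= phi_ev <= 4.2:
        return "N-type work function metal (NMOS)"
    return "outside the recited windows"

print(work_function_type(5.0))   # PMOS window (hypothetical measurement)
print(work_function_type(4.05))  # NMOS window (hypothetical measurement)
```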
As described throughout the present application, spacers associated with gate lines or electrode stacks may be composed of a material suitable to ultimately electrically isolate, or contribute to the isolation of, a permanent gate structure from adjacent conductive contacts, such as self-aligned contacts. For example, in one embodiment, the spacers are composed of a dielectric material such as, but not limited to, silicon dioxide, silicon oxy-nitride, silicon nitride, or carbon-doped silicon nitride.

In an embodiment, approaches described herein may involve forming a contact pattern that is very well aligned to an existing gate pattern, while eliminating the use of lithographic operations having an ultra-tight registration budget. In one such embodiment, this approach enables the use of intrinsically highly selective wet etching (e.g., versus dry or plasma etching) to create the contact openings. In an embodiment, a contact pattern is formed by utilizing an existing gate pattern in combination with a contact plug lithography operation. In one such embodiment, this approach enables the elimination of the need for an otherwise critical lithography operation (as used in other approaches) to create a contact pattern. In an embodiment, a trench contact grid is not patterned separately, but is instead formed between poly (gate) lines. For example, in one such embodiment, a trench contact grid is formed after the gate grid is patterned but before the gate grid is cut.

Furthermore, gate stack structures can be fabricated by a replacement gate process. In such a scheme, the dummy gate material, such as polysilicon or silicon nitride pillar material, can be removed and replaced with a permanent gate electrode material. In one such embodiment, a permanent gate dielectric layer is also formed in the process, as opposed to being carried through from earlier processing. In an embodiment, the dummy gates are removed by a dry etch or wet etch process. In one embodiment, the dummy gates are composed of polycrystalline silicon or amorphous silicon and are removed with a dry etch process including the use of SF6. In another embodiment, the dummy gates are composed of polycrystalline silicon or amorphous silicon and are removed with a wet etch process including the use of water-based NH4OH or tetraethylammonium hydroxide. In one embodiment, the dummy gates are composed of silicon nitride and are removed with a wet etch including water-based phosphoric acid.

In an embodiment, one or more approaches described herein essentially contemplate a dummy and replacement gate process in combination with a dummy and replacement contact process to arrive at the structure. In one such embodiment, the replacement contact process is performed after the replacement gate process to allow high temperature annealing of at least a portion of the permanent gate stack. For example, in a specific such embodiment, an anneal of at least a portion of the permanent gate structure, e.g., after a gate dielectric layer is formed, is performed at a temperature greater than approximately 600 degrees Celsius. The anneal is performed prior to formation of the permanent contacts.

In some embodiments, the arrangement of a semiconductor structure or device places a gate contact over portions of a gate line or gate stack disposed over isolation regions. However, such an arrangement may be viewed as an inefficient use of layout space. In another embodiment, a semiconductor device has contact structures that contact portions of a gate electrode formed over an active region.
Generally, prior to (e.g., in addition to) forming a gate contact structure (such as a via) over an active portion of a gate and in a same layer as a trench contact via, one or more embodiments of the present disclosure first include using a gate-aligned trench contact process. Such a process can be implemented to form trench contact structures for semiconductor structure fabrication, e.g., for integrated circuit fabrication. In an embodiment, a trench contact pattern is formed so as to be aligned to an existing gate pattern. In contrast, other approaches typically involve an additional lithography process with tight registration of a lithographic contact pattern to an existing gate pattern, in combination with a selective contact etch. For example, another process may include patterning a poly (gate) grid with separate patterning of contact features.

It is to be appreciated that not all aspects of the processes described above need be practiced to fall within the spirit and scope of embodiments of the present disclosure. For example, in one embodiment, dummy gates need not ever be formed prior to fabricating gate contacts over active portions of the gate stacks. The gate stacks described above may actually be permanent gate stacks as initially formed. Also, the processes described herein may be used to fabricate one or a plurality of semiconductor devices. The semiconductor devices may be transistors or the like. For example, in an embodiment, the semiconductor devices are metal oxide semiconductor (MOS) transistors for logic or memory, or are bipolar transistors. Also, in an embodiment, the semiconductor devices have a three-dimensional architecture, such as a tri-gate device, an independently accessed double gate device, or a FinFET. One or more embodiments may be particularly useful for fabricating semiconductor devices at a 10 nanometer (10 nm) technology node or a sub-10 nanometer technology node.

Additional or intermediate operations for FEOL layer or structure fabrication may include standard microelectronic fabrication processes such as lithography, etch, thin film deposition, planarization (such as chemical mechanical polishing (CMP)), diffusion, metrology, the use of sacrificial layers, the use of etch stop layers, the use of planarization stop layers, or any other action associated with microelectronic component fabrication. Also, it is to be appreciated that the process operations described for the preceding process flows may be practiced in alternative sequences, that not every operation need be performed, or that additional process operations may be performed, or both.

It is to be appreciated that, in the above exemplary FEOL embodiments, in an embodiment, 10 nanometer or sub-10 nanometer node processing is implemented directly into the fabrication schemes and resulting structures as technology drivers. In other embodiments, FEOL considerations may be driven by BEOL 10 nanometer or sub-10 nanometer processing requirements. For example, material selection and layouts for FEOL layers and devices may need to accommodate BEOL processing. In one such embodiment, material selection and gate stack architectures are chosen to accommodate high density metallization of the BEOL layers, e.g., to reduce fringe capacitance in transistor structures formed in the FEOL layers but coupled together by the high density metallization of the BEOL layers.
A back end of line (BEOL) layer of an integrated circuit commonly includes electrically conductive microelectronic structures, known in the art as vias, that electrically connect metal lines or other interconnects above the vias to metal lines or other interconnects below the vias. Vias may be formed by a lithographic process. Typically, a photoresist layer may be spin coated above a dielectric layer, the photoresist layer may be exposed to patterned actinic radiation through a patterned mask, and the exposed layer may then be developed to form an opening in the photoresist layer. Next, an opening for the via may be etched in the dielectric layer using the opening in the photoresist layer as an etch mask. This opening is referred to as a via opening. Finally, the via opening may be filled with one or more metals or other conductive materials to form the via.

The sizes and the spacing of vias have progressively decreased, and it is expected that, at least for some types of integrated circuits (e.g., advanced microprocessors, chipset components, graphics chips), via sizes and spacing will continue to progressively decrease. When patterning extremely small vias with extremely small pitches by such lithographic processes, several challenges present themselves. One such challenge is that the overlay between the vias and the overlying interconnects, and the overlay between the vias and the underlying landing interconnects, generally needs to be controlled to high tolerances on the order of a quarter of the via pitch. As via pitches scale ever smaller, the overlay tolerances tend to scale with them at an even greater rate than lithographic equipment is able to keep up with.

Another such challenge is that the critical dimensions of the via openings generally tend to scale faster than the resolution capabilities of lithographic scanners. Shrink technologies exist to reduce the critical dimensions of the via openings. However, the shrink amount tends to be limited by the minimum via pitch, as well as by the requirement that the shrink process be sufficiently optical proximity correction (OPC) neutral and not significantly compromise line width roughness (LWR) or critical dimension uniformity (CDU), or both. Yet another such challenge is that the LWR and/or CDU characteristics of photoresists generally need to improve as the critical dimensions of the via openings decrease, in order to maintain the same overall fraction of the critical dimension budget.

The above factors are also relevant for the placement and scaling of non-conductive spaces or interruptions between metal lines (referred to as "plugs," "dielectric plugs," or "metal line ends" among the metal lines of back end of line (BEOL) metal interconnect structures). Thus, improvements are needed in the area of back end metallization manufacturing technologies for fabricating metal lines, metal vias, and dielectric plugs.

In another aspect, a pitch quartering approach is implemented for patterning trenches in a dielectric layer for forming BEOL interconnect structures. In accordance with an embodiment of the present disclosure, pitch division is applied to fabricate metal lines in a BEOL fabrication scheme.
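As a rough numerical illustration of the constraints discussed above, the Python sketch below computes the quarter-pitch overlay rule of thumb and the final line pitch obtained from spacer-based pitch division. This is a minimal sketch: the 160 nm directly printable backbone pitch is an assumed example value, not a figure from this disclosure.

    def overlay_tolerance(via_pitch_nm: float) -> float:
        """Overlay budget on the order of a quarter of the via pitch."""
        return via_pitch_nm / 4.0

    def divided_pitch(litho_pitch_nm: float, division: int) -> float:
        """Line pitch after spacer-based pitch division (2 = halving, 4 = quartering)."""
        return litho_pitch_nm / division

    litho_pitch = 160.0  # assumed directly printable pitch, nm
    print(divided_pitch(litho_pitch, 2))  # 80.0 nm after pitch halving
    print(divided_pitch(litho_pitch, 4))  # 40.0 nm after pitch quartering
    print(overlay_tolerance(40.0))        # 10.0 nm overlay budget at a 40 nm via pitch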
Embodiments may enable continued scaling of the pitch of metal layers beyond the resolution capability of state-of-the-art lithography equipment.

Figure 54 is a schematic of a pitch quartering approach 5400 used to fabricate trenches for interconnect structures, in accordance with an embodiment of the present disclosure.

Referring to Figure 54, at operation (a), backbone features 5402 are formed using direct lithography. For example, a photoresist layer or stack may be patterned and the pattern transferred into a hard mask material to ultimately form the backbone features 5402. The photoresist layer or stack used to form the backbone features 5402 may be patterned using standard lithographic processing techniques, such as 193 nm immersion lithography. First spacer features 5404 are then formed adjacent the sidewalls of the backbone features 5402.

At operation (b), the backbone features 5402 are removed to leave only the first spacer features 5404 remaining. At this stage, the first spacer features 5404 are effectively a half-pitch mask, e.g., representing a pitch halving process. The first spacer features 5404 may be used directly in a pitch quartering process, or the pattern of the first spacer features 5404 may first be transferred into a new hard mask material, with the latter approach illustrated here.

At operation (c), the pattern of the first spacer features 5404 is transferred into a new hard mask material to form first spacer features 5404'. Second spacer features 5406 are then formed adjacent the sidewalls of the first spacer features 5404'.

At operation (d), the first spacer features 5404' are removed to leave only the second spacer features 5406 remaining. At this stage, the second spacer features 5406 are effectively a quarter-pitch mask, e.g., representing a pitch quartering process.

At operation (e), the second spacer features 5406 are used as a mask to pattern a plurality of trenches 5408 in a dielectric or hard mask layer. The trenches may ultimately be filled with conductive material to form conductive interconnect lines in a metallization layer of an integrated circuit. The trenches 5408 labeled "B" correspond to the backbone features 5402. The trenches 5408 labeled "S" correspond to the first spacer features 5404 or 5404'. The trenches 5408 labeled "C" correspond to the complementary regions 5407 between the backbone features 5402.

It is to be appreciated that, because the individual trenches among the trenches 5408 of Figure 54 each have a patterning origin corresponding to one of the backbone features 5402, the first spacer features 5404 or 5404', or the complementary regions 5407, differences in width and/or pitch among these origins may manifest as pitch quartering artifacts in the resulting conductive interconnect lines of a metallization layer of an integrated circuit. By way of example, Figure 55A illustrates a cross-sectional view of a metallization layer fabricated using a pitch quartering scheme, in accordance with an embodiment of the present disclosure.

Referring to Figure 55A, integrated circuit structure 5500 includes an interlayer dielectric (ILD) layer 5504 above a substrate 5502. A plurality of conductive interconnect lines 5506 is in the ILD layer 5504, and individual ones of the plurality of conductive interconnect lines 5506 are spaced apart from one another by portions of the ILD layer 5504.
The individual conductive interconnect lines in the plurality of conductive interconnect lines 5506 include a conductive barrier layer 5508 and a conductive fill material 5510.

Referring to Figures 54 and 55A together, conductive interconnect lines 5506B are formed in trenches having a pattern derived from the backbone features 5402. Conductive interconnect lines 5506S are formed in trenches having a pattern derived from the first spacer features 5404 or 5404'. Conductive interconnect lines 5506C are formed in trenches having a pattern derived from the complementary regions 5407 between the backbone features 5402.

Referring again to Figure 55A, in an embodiment, the plurality of conductive interconnect lines 5506 includes a first interconnect line 5506B having a width (W1). A second interconnect line 5506S is immediately adjacent the first interconnect line 5506B, the second interconnect line 5506S having a width (W2) different than the width (W1) of the first interconnect line 5506B. A third interconnect line 5506C is immediately adjacent the second interconnect line 5506S, the third interconnect line 5506C having a width (W3). A fourth interconnect line (a second 5506S) is immediately adjacent the third interconnect line 5506C, the fourth interconnect line having the same width (W2) as the second interconnect line 5506S. A fifth interconnect line (a second 5506B) is immediately adjacent the fourth interconnect line (the second 5506S), the fifth interconnect line having the same width (W1) as the first interconnect line 5506B.

In an embodiment, the width (W3) of the third interconnect line 5506C is different than the width (W1) of the first interconnect line 5506B. In one such embodiment, the width (W3) of the third interconnect line 5506C is different than the width (W2) of the second interconnect line 5506S. In another such embodiment, the width (W3) of the third interconnect line 5506C is the same as the width (W2) of the second interconnect line 5506S. In another embodiment, the width (W3) of the third interconnect line 5506C is the same as the width (W1) of the first interconnect line 5506B.

In an embodiment, a pitch (P1) between the first interconnect line 5506B and the third interconnect line 5506C is the same as a pitch (P2) between the second interconnect line 5506S and the fourth interconnect line (the second 5506S). In another embodiment, the pitch (P1) between the first interconnect line 5506B and the third interconnect line 5506C is different than the pitch (P2) between the second interconnect line 5506S and the fourth interconnect line (the second 5506S).

Referring again to Figure 55A, in another embodiment, the plurality of conductive interconnect lines 5506 includes a first interconnect line 5506B having a width (W1). A second interconnect line 5506S is immediately adjacent the first interconnect line 5506B, the second interconnect line 5506S having a width (W2). A third interconnect line 5506C is immediately adjacent the second interconnect line 5506S, the third interconnect line 5506C having a width (W3) different than the width (W1) of the first interconnect line 5506B. A fourth interconnect line (a second 5506S) is immediately adjacent the third interconnect line 5506C, the fourth interconnect line having the same width (W2) as the second interconnect line 5506S.
A fifth interconnect line (a second 5506B) is immediately adjacent the fourth interconnect line (the second 5506S), the fifth interconnect line having the same width (W1) as the first interconnect line 5506B.

In an embodiment, the width (W2) of the second interconnect line 5506S is different than the width (W1) of the first interconnect line 5506B. In one such embodiment, the width (W3) of the third interconnect line 5506C is different than the width (W2) of the second interconnect line 5506S. In another such embodiment, the width (W3) of the third interconnect line 5506C is the same as the width (W2) of the second interconnect line 5506S.

In an embodiment, the width (W2) of the second interconnect line 5506S is the same as the width (W1) of the first interconnect line 5506B. In an embodiment, the pitch (P1) between the first interconnect line 5506B and the third interconnect line 5506C is the same as the pitch (P2) between the second interconnect line 5506S and the fourth interconnect line (the second 5506S). In an embodiment, the pitch (P1) between the first interconnect line 5506B and the third interconnect line 5506C is different than the pitch (P2) between the second interconnect line 5506S and the fourth interconnect line (the second 5506S); this width-by-origin pattern is also summarized in the sketch below.

Figure 55B illustrates a cross-sectional view of a metallization layer fabricated using a pitch halving scheme above a metallization layer fabricated using a pitch quartering scheme, in accordance with an embodiment of the present disclosure.

Referring to Figure 55B, integrated circuit structure 5550 includes a first interlayer dielectric (ILD) layer 5554 above a substrate 5552. A first plurality of conductive interconnect lines 5556 is in the first ILD layer 5554, and individual ones of the first plurality of conductive interconnect lines 5556 are spaced apart from one another by portions of the first ILD layer 5554. The individual conductive interconnect lines in the first plurality of conductive interconnect lines 5556 include a conductive barrier layer 5558 and a conductive fill material 5560. Integrated circuit structure 5550 also includes a second interlayer dielectric (ILD) layer 5574 above the substrate 5552. A second plurality of conductive interconnect lines 5576 is in the second ILD layer 5574, and individual ones of the second plurality of conductive interconnect lines 5576 are spaced apart from one another by portions of the second ILD layer 5574. The individual conductive interconnect lines in the second plurality of conductive interconnect lines 5576 include a conductive barrier layer 5578 and a conductive fill material 5580.

In accordance with an embodiment of the present disclosure, referring again to Figure 55B, a method of fabricating an integrated circuit structure includes forming, in a first interlayer dielectric (ILD) layer 5554 above a substrate 5552, a first plurality of conductive interconnect lines 5556 spaced apart by the first ILD layer 5554. The first plurality of conductive interconnect lines 5556 is formed using a spacer-based pitch quartering process (e.g., as described in connection with operations (a)-(e) of Figure 54). A second plurality of conductive interconnect lines 5576 spaced apart by a second ILD layer 5574 is formed in the second ILD layer 5574 above the first ILD layer 5554. The second plurality of conductive interconnect lines 5576 is formed using a spacer-based pitch halving process (e.g., as described in connection with operations (a) and (b) of Figure 54).
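Because each line inherits its width from its patterning origin, the width-by-origin pattern of Figure 55A can be modeled as a repeating B, S, C, S sequence, as in the Python sketch below. Only the ordering is taken from the figure description above; the numeric widths are assumed placeholders standing in for W1, W2, and W3.

    from itertools import cycle, islice

    # Assumed illustrative widths (nm) keyed by patterning origin:
    # "B" = backbone-derived (W1), "S" = spacer-derived (W2),
    # "C" = complementary-region-derived (W3).
    WIDTHS_NM = {"B": 12.0, "S": 10.0, "C": 11.0}

    def line_sequence(n_lines: int) -> list:
        """Repeating B, S, C, S origin pattern of a pitch-quartered layer."""
        pattern = ["B", "S", "C", "S"]
        return [(origin, WIDTHS_NM[origin]) for origin in islice(cycle(pattern), n_lines)]

    for origin, width in line_sequence(5):
        print(origin, width)  # B, S, C, S, B -- the five-line motif of Figure 55A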
In an embodiment, the first plurality of conductive interconnect lines 5556 has a pitch (P1) of 40 nanometers between adjacent lines. The second plurality of conductive interconnect lines 5576 has a pitch (P2) of 44 nanometers or more between adjacent lines. In an embodiment, the spacer-based pitch quartering process and the spacer-based pitch halving process are both based on a 193 nm immersion lithography process.

In an embodiment, the individual conductive interconnect lines in the first plurality of conductive interconnect lines 5556 include a first conductive barrier liner 5558 and a first conductive fill material 5560. The individual conductive interconnect lines in the second plurality of conductive interconnect lines 5576 include a second conductive barrier liner 5578 and a second conductive fill material 5580. In one such embodiment, the first conductive fill material 5560 is different in composition from the second conductive fill material 5580. In another embodiment, the first conductive fill material 5560 and the second conductive fill material 5580 have the same composition.

Although not depicted, in an embodiment, the method further includes forming, in a third ILD layer above the second ILD layer 5574, a third plurality of conductive interconnect lines spaced apart by the third ILD layer. The third plurality of conductive interconnect lines is not formed using pitch division.

Although not depicted, in an embodiment, the method further includes, prior to forming the second plurality of conductive interconnect lines 5576, forming, in a third ILD layer above the first ILD layer 5554, a third plurality of conductive interconnect lines spaced apart by the third ILD layer. The third plurality of conductive interconnect lines is formed using a spacer-based pitch quartering process. In one such embodiment, after forming the second plurality of conductive interconnect lines 5576, a fourth plurality of conductive interconnect lines spaced apart by a fourth ILD layer is formed in the fourth ILD layer above the second ILD layer 5574. The fourth plurality of conductive interconnect lines is formed using a spacer-based pitch halving process. In an embodiment, such a method further includes forming, in a fifth ILD layer above the fourth ILD layer, a fifth plurality of conductive interconnect lines spaced apart by the fifth ILD layer, the fifth plurality of conductive interconnect lines being formed using a spacer-based pitch halving process. A sixth plurality of conductive interconnect lines spaced apart by a sixth ILD layer is formed in the sixth ILD layer above the fifth ILD layer, the sixth plurality of conductive interconnect lines being formed using a spacer-based pitch halving process. A seventh plurality of conductive interconnect lines spaced apart by a seventh ILD layer is then formed in the seventh ILD layer above the sixth ILD layer. The seventh plurality of conductive interconnect lines is not formed using pitch division.

In another aspect, metal line composition varies between metallization layers. Such an arrangement may be referred to as heterogeneous metallization layers. In an embodiment, copper is used as the conductive fill material for larger interconnect lines, while cobalt is used as the conductive fill material for smaller interconnect lines. The smaller lines having cobalt as the fill material may provide reduced electromigration while maintaining low resistivity.
The use of cobalt instead of copper for the smaller interconnect lines may address problems associated with scaling copper lines, where the conductive barrier consumes a greater share of the interconnect volume and the copper is reduced, substantially diminishing the advantages normally associated with copper interconnect lines.

In a first example, Figure 56A illustrates a cross-sectional view of an integrated circuit structure in which a metallization layer having one metal line composition is above a metallization layer having a different metal line composition, in accordance with an embodiment of the present disclosure.

Referring to Figure 56A, integrated circuit structure 5600 includes a first plurality of conductive interconnect lines 5606 that are in a first interlayer dielectric (ILD) layer 5604 above a substrate 5602 and are spaced apart by the first ILD layer 5604. One of the conductive interconnect lines 5606A is shown as having an underlying via 5607. The individual conductive interconnect lines in the first plurality of conductive interconnect lines 5606 include a first conductive barrier material 5608 along sidewalls and a bottom of a first conductive fill material 5610.

A second plurality of conductive interconnect lines 5616 is in a second ILD layer 5614 above the first ILD layer 5604 and is spaced apart by the second ILD layer 5614. One of the conductive interconnect lines 5616A is shown as having an underlying via 5617. The individual conductive interconnect lines in the second plurality of conductive interconnect lines 5616 include a second conductive barrier material 5618 along sidewalls and a bottom of a second conductive fill material 5620. The second conductive fill material 5620 is different in composition from the first conductive fill material 5610.

In an embodiment, the second conductive fill material 5620 consists essentially of copper, and the first conductive fill material 5610 consists essentially of cobalt. In one such embodiment, the first conductive barrier material 5608 is different in composition from the second conductive barrier material 5618. In another such embodiment, the first conductive barrier material 5608 and the second conductive barrier material 5618 have the same composition.

In an embodiment, the first conductive fill material 5610 includes copper having a first concentration of dopant impurity atoms, and the second conductive fill material 5620 includes copper having a second concentration of dopant impurity atoms, the second concentration of dopant impurity atoms being less than the first concentration of dopant impurity atoms. In one such embodiment, the dopant impurity atoms are selected from the group consisting of aluminum (Al) and manganese (Mn). In an embodiment, the first conductive barrier material 5608 and the second conductive barrier material 5618 have the same composition. In an embodiment, the first conductive barrier material 5608 and the second conductive barrier material 5618 have different compositions.

Referring again to Figure 56A, the second ILD layer 5614 is above an etch stop layer 5622. The conductive via 5617 is in the second ILD layer 5614 and in an opening of the etch stop layer 5622. In an embodiment, the first and second ILD layers 5604 and 5614 include silicon, carbon, and oxygen, and the etch stop layer 5622 includes silicon and nitrogen.
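The heterogeneous-metallization rule described above, cobalt fill for the smaller lines and copper fill for the larger lines, can be expressed as a simple selector, as sketched below. The 20 nm threshold is an assumed value chosen only for illustration.

    def fill_material(line_width_nm: float, threshold_nm: float = 20.0) -> str:
        """Pick a conductive fill per the heterogeneous-metallization rule above.

        Narrower lines use cobalt (reduced electromigration, less of the volume
        lost to the barrier); wider lines use copper (lower resistivity). The
        threshold is an assumed illustrative value.
        """
        return "cobalt" if line_width_nm < threshold_nm else "copper"

    print(fill_material(12.0))  # cobalt
    print(fill_material(36.0))  # copper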
In an embodiment, the individual conductive interconnect lines in the first plurality of conductive interconnect lines 5606 have a first width (W1), and the individual conductive interconnect lines in the second plurality of conductive interconnect lines 5616 have a second width (W2) greater than the first width (W1).

In a second example, Figure 56B illustrates a cross-sectional view of an integrated circuit structure in which a metallization layer having one metal line composition is coupled to a metallization layer having a different metal line composition, in accordance with an embodiment of the present disclosure.

Referring to Figure 56B, integrated circuit structure 5650 includes a first plurality of conductive interconnect lines 5656 that are in a first interlayer dielectric (ILD) layer 5654 above a substrate 5652 and are spaced apart by the first ILD layer 5654. One of the conductive interconnect lines 5656A is shown as having an underlying via 5657. The individual conductive interconnect lines in the first plurality of conductive interconnect lines 5656 include a first conductive barrier material 5658 along sidewalls and a bottom of a first conductive fill material 5660.

A second plurality of conductive interconnect lines 5666 is in a second ILD layer 5664 above the first ILD layer 5654 and is spaced apart by the second ILD layer 5664. One of the conductive interconnect lines 5666A is shown as having an underlying via 5667. The individual conductive interconnect lines in the second plurality of conductive interconnect lines 5666 include a second conductive barrier material 5668 along sidewalls and a bottom of a second conductive fill material 5670. The second conductive fill material 5670 is different in composition from the first conductive fill material 5660.

In an embodiment, the conductive via 5667 is on and electrically coupled to an individual conductive interconnect line 5656B of the first plurality of conductive interconnect lines 5656, electrically coupling the individual conductive interconnect line 5666A of the second plurality of conductive interconnect lines 5666 to the individual conductive interconnect line 5656B of the first plurality of conductive interconnect lines 5656. In an embodiment, the individual conductive interconnect lines of the first plurality of conductive interconnect lines 5656 are along a first direction 5698 (e.g., into and out of the page), and the individual conductive interconnect lines of the second plurality of conductive interconnect lines 5666 are along a second direction 5699 orthogonal to the first direction 5698, as depicted. In an embodiment, the conductive via 5667 includes the second conductive barrier material 5668 along sidewalls and a bottom of the second conductive fill material 5670, as depicted.

In an embodiment, the second ILD layer 5664 is on an etch stop layer 5672 that is on the first ILD layer 5654. The conductive via 5667 is in the second ILD layer 5664 and in an opening of the etch stop layer 5672. In an embodiment, the first and second ILD layers 5654 and 5664 include silicon, carbon, and oxygen, and the etch stop layer 5672 includes silicon and nitrogen.
In an embodiment, the individual conductive interconnect lines in the first plurality of conductive interconnect lines 5656 have a first width (W1), and the individual conductive interconnect lines in the second plurality of conductive interconnect lines 5666 have a second width (W2) greater than the first width (W1).

In an embodiment, the second conductive fill material 5670 consists essentially of copper, and the first conductive fill material 5660 consists essentially of cobalt. In one such embodiment, the first conductive barrier material 5658 is different in composition from the second conductive barrier material 5668. In another such embodiment, the first conductive barrier material 5658 and the second conductive barrier material 5668 have the same composition.

In an embodiment, the first conductive fill material 5660 includes copper having a first concentration of dopant impurity atoms, and the second conductive fill material 5670 includes copper having a second concentration of dopant impurity atoms, the second concentration of dopant impurity atoms being less than the first concentration of dopant impurity atoms. In one such embodiment, the dopant impurity atoms are selected from the group consisting of aluminum (Al) and manganese (Mn). In an embodiment, the first conductive barrier material 5658 and the second conductive barrier material 5668 have the same composition. In an embodiment, the first conductive barrier material 5658 and the second conductive barrier material 5668 have different compositions.

Figures 57A-57C illustrate cross-sectional views of individual interconnect lines having various barrier liner and conductive cap structural arrangements suitable for the structures described in connection with Figures 56A and 56B, in accordance with an embodiment of the present disclosure.

Referring to Figure 57A, interconnect line 5700 in dielectric layer 5701 includes a conductive barrier material 5702 and a conductive fill material 5704. The conductive barrier material 5702 includes an outer layer 5706 distal from the conductive fill material 5704 and an inner layer 5708 proximal to the conductive fill material 5704. In an embodiment, the conductive fill material includes cobalt, the outer layer 5706 includes titanium and nitrogen, and the inner layer 5708 includes tungsten, nitrogen, and carbon. In one such embodiment, the outer layer 5706 has a thickness of approximately 2 nanometers, and the inner layer 5708 has a thickness of approximately 0.5 nanometers. In another embodiment, the conductive fill material includes cobalt, the outer layer 5706 includes tantalum, and the inner layer 5708 includes tantalum. In one such embodiment, the outer layer 5706 also includes nitrogen.

Referring to Figure 57B, interconnect line 5720 in dielectric layer 5721 includes a conductive barrier material 5722 and a conductive fill material 5724. A conductive cap layer 5730 is on a top of the conductive fill material 5724. In one such embodiment, the conductive cap layer 5730 is also on a top of the conductive barrier material 5722, as depicted. In another embodiment, the conductive cap layer 5730 is not on the top of the conductive barrier material 5722. In an embodiment, the conductive cap layer 5730 consists essentially of cobalt, and the conductive fill material 5724 consists essentially of copper.
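The liner arrangement of Figure 57A can be captured in a small data model, as in the Python sketch below. This is a sketch only; the class name BarrierLiner is hypothetical, and the compositions and approximate thicknesses are those recited above for one embodiment.

    from dataclasses import dataclass

    @dataclass
    class BarrierLiner:
        position: str        # relative to the conductive fill material
        composition: str
        thickness_nm: float

    # One Figure 57A arrangement: cobalt fill, titanium/nitrogen outer liner,
    # tungsten/nitrogen/carbon inner liner, with the thicknesses given above.
    fig_57a_stack = [
        BarrierLiner("outer (distal from fill)", "titanium and nitrogen", 2.0),
        BarrierLiner("inner (proximal to fill)", "tungsten, nitrogen, and carbon", 0.5),
    ]

    for liner in fig_57a_stack:
        print(f"{liner.position}: {liner.composition}, ~{liner.thickness_nm} nm")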
Referring to Figure 57C, interconnect line 5740 in dielectric layer 5741 includes a conductive barrier material 5742 and a conductive fill material 5744. The conductive barrier material 5742 includes an outer layer 5746 distal from the conductive fill material 5744 and an inner layer 5748 proximal to the conductive fill material 5744. A conductive cap layer 5750 is on a top of the conductive fill material 5744. In one embodiment, the conductive cap layer 5750 is only on the top of the conductive fill material 5744. However, in another embodiment, the conductive cap layer 5750 is also on a top of the inner layer 5748 of the conductive barrier material 5742, i.e., at location 5752. In one such embodiment, the conductive cap layer 5750 is also on a top of the outer layer 5746 of the conductive barrier material 5742, i.e., at location 5754.

In an embodiment, referring to Figures 57B and 57C, a method of fabricating an integrated circuit structure includes forming an interlayer dielectric (ILD) layer 5721 or 5741 above a substrate. A plurality of conductive interconnect lines 5720 or 5740 is formed in trenches in, and spaced apart by, the ILD layer, with individual ones of the plurality of conductive interconnect lines 5720 or 5740 in corresponding ones of the trenches. The plurality of conductive interconnect lines is formed by first forming a conductive barrier material 5722 or 5742 on bottoms and sidewalls of the trenches, and then forming a conductive fill material 5724 or 5744, respectively, on the conductive barrier material 5722 or 5742 and filling the trenches, where the conductive barrier material 5722 or 5742 is along a bottom of and along sidewalls of the conductive fill material 5724 or 5744, respectively. A top of the conductive fill material 5724 or 5744 is then treated with a gas including oxygen and carbon. Subsequently, after treating the top of the conductive fill material 5724 or 5744 with the gas including oxygen and carbon, a conductive cap layer 5730 or 5750 is formed on the top of the conductive fill material 5724 or 5744, respectively.

In one embodiment, treating the top of the conductive fill material 5724 or 5744 with the gas including oxygen and carbon includes treating the top of the conductive fill material 5724 or 5744 with carbon monoxide (CO). In one embodiment, the conductive fill material 5724 or 5744 includes copper, and forming the conductive cap layer 5730 or 5750 on the top of the conductive fill material 5724 or 5744 includes forming a layer including cobalt using chemical vapor deposition (CVD). In one embodiment, the conductive cap layer 5730 or 5750 is formed on the top of the conductive fill material 5724 or 5744 but not on a top of the conductive barrier material 5722 or 5742.

In one embodiment, forming the conductive barrier material 5722 or 5742 includes forming a first conductive layer on the bottoms and sidewalls of the trenches, the first conductive layer including germanium. A first portion of the first conductive layer is first formed using atomic layer deposition (ALD), and a second portion of the first conductive layer is then formed using physical vapor deposition (PVD). In one such embodiment, forming the conductive barrier material further includes forming a second conductive layer on the first conductive layer on the bottoms and sidewalls of the trenches, the second conductive layer including germanium, and the conductive fill material including copper.
In one embodiment, the first conductive layer further includes nitrogen.

Figure 58 illustrates a cross-sectional view of an integrated circuit structure in which four metallization layers having one metal line composition and pitch are above two metallization layers having a different metal line composition and a smaller pitch, in accordance with an embodiment of the present disclosure.

Referring to Figure 58, integrated circuit structure 5800 includes a first plurality of conductive interconnect lines 5804 that are in a first interlayer dielectric (ILD) layer 5802 above a substrate 5801 and are spaced apart by the first ILD layer 5802. The individual conductive interconnect lines in the first plurality of conductive interconnect lines 5804 include a first conductive barrier material 5806 along sidewalls and a bottom of a first conductive fill material 5808. The individual conductive interconnect lines in the first plurality of conductive interconnect lines 5804 are along a first direction 5898 (e.g., into and out of the page).

A second plurality of conductive interconnect lines 5814 is in a second ILD layer 5812 above the first ILD layer 5802 and is spaced apart by the second ILD layer 5812. The individual conductive interconnect lines in the second plurality of conductive interconnect lines 5814 include the first conductive barrier material 5806 along sidewalls and a bottom of the first conductive fill material 5808. The individual conductive interconnect lines in the second plurality of conductive interconnect lines 5814 are along a second direction 5899 orthogonal to the first direction 5898.

A third plurality of conductive interconnect lines 5824 is in a third ILD layer 5822 above the second ILD layer 5812 and is spaced apart by the third ILD layer 5822. The individual conductive interconnect lines in the third plurality of conductive interconnect lines 5824 include a second conductive barrier material 5826 along sidewalls and a bottom of a second conductive fill material 5828. The second conductive fill material 5828 is different in composition from the first conductive fill material 5808. The individual conductive interconnect lines in the third plurality of conductive interconnect lines 5824 are along the first direction 5898.

A fourth plurality of conductive interconnect lines 5834 is in a fourth ILD layer 5832 above the third ILD layer 5822 and is spaced apart by the fourth ILD layer 5832. The individual conductive interconnect lines in the fourth plurality of conductive interconnect lines 5834 include the second conductive barrier material 5826 along sidewalls and a bottom of the second conductive fill material 5828. The individual conductive interconnect lines in the fourth plurality of conductive interconnect lines 5834 are along the second direction 5899.

A fifth plurality of conductive interconnect lines 5844 is in a fifth ILD layer 5842 above the fourth ILD layer 5832 and is spaced apart by the fifth ILD layer 5842. The individual conductive interconnect lines in the fifth plurality of conductive interconnect lines 5844 include the second conductive barrier material 5826 along sidewalls and a bottom of the second conductive fill material 5828. The individual conductive interconnect lines in the fifth plurality of conductive interconnect lines 5844 are along the first direction 5898.

A sixth plurality of conductive interconnect lines 5854 is in a sixth ILD layer 5852 above the fifth ILD layer 5842 and is spaced apart by the sixth ILD layer 5852.
The individual conductive interconnect lines in the sixth plurality of conductive interconnect lines 5854 include the second conductive barrier material 5826 along sidewalls and a bottom of the second conductive fill material 5828. The individual conductive interconnect lines in the sixth plurality of conductive interconnect lines 5854 are along the second direction 5899.

In an embodiment, the second conductive fill material 5828 consists essentially of copper, and the first conductive fill material 5808 consists essentially of cobalt. In an embodiment, the first conductive fill material 5808 includes copper having a first concentration of dopant impurity atoms, and the second conductive fill material 5828 includes copper having a second concentration of dopant impurity atoms, the second concentration of dopant impurity atoms being less than the first concentration of dopant impurity atoms.

In an embodiment, the first conductive barrier material 5806 is different in composition from the second conductive barrier material 5826. In another embodiment, the first conductive barrier material 5806 and the second conductive barrier material 5826 have the same composition.

In an embodiment, a first conductive via 5819 is on and electrically coupled to an individual conductive interconnect line 5804A of the first plurality of conductive interconnect lines 5804. An individual conductive interconnect line 5814A of the second plurality of conductive interconnect lines 5814 is on and electrically coupled to the first conductive via 5819.

A second conductive via 5829 is on and electrically coupled to an individual conductive interconnect line 5814B of the second plurality of conductive interconnect lines 5814. An individual conductive interconnect line 5824A of the third plurality of conductive interconnect lines 5824 is on and electrically coupled to the second conductive via 5829.

A third conductive via 5839 is on and electrically coupled to an individual conductive interconnect line 5824B of the third plurality of conductive interconnect lines 5824. An individual conductive interconnect line 5834A of the fourth plurality of conductive interconnect lines 5834 is on and electrically coupled to the third conductive via 5839.

A fourth conductive via 5849 is on and electrically coupled to an individual conductive interconnect line 5834B of the fourth plurality of conductive interconnect lines 5834. An individual conductive interconnect line 5844A of the fifth plurality of conductive interconnect lines 5844 is on and electrically coupled to the fourth conductive via 5849.

A fifth conductive via 5859 is on and electrically coupled to an individual conductive interconnect line 5844B of the fifth plurality of conductive interconnect lines 5844. An individual conductive interconnect line 5854A of the sixth plurality of conductive interconnect lines 5854 is on and electrically coupled to the fifth conductive via 5859.

In one embodiment, the first conductive via 5819 includes the first conductive barrier material 5806 along sidewalls and a bottom of the first conductive fill material 5808. The second 5829, third 5839, fourth 5849, and fifth 5859 conductive vias include the second conductive barrier material 5826 along sidewalls and a bottom of the second conductive fill material 5828.

In an embodiment, the first 5802, second 5812, third 5822, fourth 5832, fifth 5842, and sixth 5852 ILD layers are separated from one another by corresponding etch stop layers 5890 between adjacent ILD layers.
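The Figure 58 stack can be summarized programmatically, as in the sketch below. The mapping is taken from the description above (two lower layers with barrier/fill pair 5806/5808 at width W1, four upper layers with pair 5826/5828 at the larger width W2, with the routing direction alternating layer by layer); the dictionary layout itself is only an illustrative choice.

    # Illustrative model of the six-layer stack of Figure 58.
    layers = []
    for level in range(1, 7):
        layers.append({
            "level": level,
            "barrier/fill": "5806/5808" if level <= 2 else "5826/5828",
            "width": "W1" if level <= 2 else "W2 (> W1)",
            "direction": "first (5898)" if level % 2 == 1 else "second (5899)",
        })

    for layer in layers:
        print(layer)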
In an embodiment, the first 5802, second 5812, third 5822, fourth 5832, fifth 5842, and sixth 5852 ILD layers include silicon, carbon, and oxygen.

In an embodiment, the individual conductive interconnect lines of the first 5804 and second 5814 pluralities of conductive interconnect lines have a first width (W1). The individual conductive interconnect lines of the third 5824, fourth 5834, fifth 5844, and sixth 5854 pluralities of conductive interconnect lines have a second width (W2) greater than the first width (W1).

Figures 59A-59D illustrate cross-sectional views of various interconnect line and via arrangements having bottom conductive layers, in accordance with an embodiment of the present disclosure.

Referring to Figures 59A and 59B, integrated circuit structure 5900 includes an interlayer dielectric (ILD) layer 5904 above a substrate 5902. A conductive via 5906 is in a first trench 5908 in the ILD layer 5904. A conductive interconnect line 5910 is above and electrically coupled to the conductive via 5906. The conductive interconnect line 5910 is in a second trench 5912 in the ILD layer 5904. The second trench 5912 has an opening 5913 larger than the opening 5909 of the first trench 5908.

In an embodiment, the conductive via 5906 and the conductive interconnect line 5910 include a first conductive barrier layer 5914 on a bottom of the first trench 5908, but not along sidewalls of the first trench 5908 and not along a bottom and sidewalls of the second trench 5912. A second conductive barrier layer 5916 is on the first conductive barrier layer 5914 on the bottom of the first trench 5908. The second conductive barrier layer 5916 is further along the sidewalls of the first trench 5908 and further along the bottom and sidewalls of the second trench 5912. A third conductive barrier layer 5918 is on the second conductive barrier layer 5916 on the bottom of the first trench 5908. The third conductive barrier layer 5918 is further on the second conductive barrier layer 5916 along the sidewalls of the first trench 5908 and along the bottom and sidewalls of the second trench 5912. A conductive fill material 5920 is on the third conductive barrier layer 5918 and fills the first 5908 and second 5912 trenches. The third conductive barrier layer 5918 is along a bottom of and along sidewalls of the conductive fill material 5920.

In one embodiment, the first conductive barrier layer 5914 and the third conductive barrier layer 5918 have the same composition, and the second conductive barrier layer 5916 is different in composition from the first 5914 and third 5918 conductive barrier layers. In one such embodiment, the first conductive barrier layer 5914 and the third conductive barrier layer 5918 include germanium, and the second conductive barrier layer 5916 includes germanium. In certain such embodiments, the second conductive barrier layer 5916 also includes nitrogen. In an embodiment, the conductive fill material 5920 consists essentially of copper.

In an embodiment, a conductive cap layer 5922 is on a top of the conductive fill material 5920. In one such embodiment, the conductive cap layer 5922 is not on a top of the second conductive barrier layer 5916 and is not on a top of the third conductive barrier layer 5918. However, in another embodiment, the conductive cap layer 5922 is further on the top of the third conductive barrier layer 5918, for example, at location 5924.
In one such embodiment, the conductive cap layer 5922 is further on the top of the second conductive barrier layer 5916, for example, at location 5926. In an embodiment, the conductive cap layer 5922 consists essentially of cobalt, and the conductive fill material 5920 consists essentially of copper.

Referring to Figures 59C and 59D, in an embodiment, the conductive via 5906 is on and electrically coupled to a second conductive interconnect line 5950 in a second ILD layer 5952 below the ILD layer 5904. The second conductive interconnect line 5950 includes a conductive fill material 5954 and a conductive cap 5956 thereon. An etch stop layer 5958 may be above the conductive cap 5956, as depicted.

In one embodiment, the first conductive barrier layer 5914 of the conductive via 5906 is in an opening 5960 of the conductive cap 5956 of the second conductive interconnect line 5950, as depicted in Figure 59C. In one such embodiment, the first conductive barrier layer 5914 of the conductive via 5906 includes germanium, and the conductive cap 5956 of the second conductive interconnect line 5950 includes cobalt.

In another embodiment, the first conductive barrier layer 5914 of the conductive via 5906 is on a portion of the conductive cap 5956 of the second conductive interconnect line 5950, as depicted in Figure 59D. In one such embodiment, the first conductive barrier layer 5914 of the conductive via 5906 includes germanium, and the conductive cap 5956 of the second conductive interconnect line 5950 includes cobalt. In a particular embodiment, although not depicted, the first conductive barrier layer 5914 of the via 5906 is on a recess that extends into but not through the conductive cap 5956 of the second conductive interconnect line 5950.

In another aspect, a BEOL metallization layer has a non-planar topography, such as a step height difference between conductive lines and the ILD layer housing the lines. In an embodiment, an overlying etch stop layer is formed conformal with the topography and so exhibits the topography. In an embodiment, the topography assists in guiding a via etch process toward the conductive lines, impeding "unlanded" conductive vias.

In a first example of etch stop layer topography, Figures 60A-60D illustrate cross-sectional views of structural arrangements for a recessed line topography of a BEOL metallization layer, in accordance with an embodiment of the present disclosure.

Referring to Figure 60A, integrated circuit structure 6000 includes a plurality of conductive interconnect lines 6006 in an interlayer dielectric (ILD) layer 6004 above a substrate 6002 and spaced apart by the ILD layer 6004. For illustrative purposes, one of the plurality of conductive interconnect lines 6006 is shown coupled to an underlying via 6007. The individual conductive interconnect lines in the plurality of conductive interconnect lines 6006 have an upper surface 6008 below an upper surface 6010 of the ILD layer 6004. An etch stop layer 6012 is on and conformal with the ILD layer 6004 and the plurality of conductive interconnect lines 6006. The etch stop layer 6012 has a non-planar upper surface, with uppermost portions 6014 of the non-planar upper surface above the ILD layer 6004 and lowermost portions 6016 of the non-planar upper surface above the plurality of conductive interconnect lines 6006.

A conductive via 6018 is on and electrically coupled to an individual conductive interconnect line 6006A of the plurality of conductive interconnect lines 6006. The conductive via 6018 is in an opening 6020 of the etch stop layer 6012.
The opening 6020 is above the individual conductive interconnect line 6006A of the plurality of conductive interconnect lines 6006, but not above the ILD layer 6004. The conductive via 6018 is in a second ILD layer 6022 above the etch stop layer 6012. In one embodiment, the second ILD layer 6022 is on and conformal with the etch stop layer 6012, as depicted in Figure 60A.

In an embodiment, a center 6024 of the conductive via 6018 is aligned with a center 6026 of the individual conductive interconnect line 6006A of the plurality of conductive interconnect lines 6006, as depicted in Figure 60A. However, in another embodiment, the center 6024 of the conductive via 6018 is offset from the center 6026 of the individual conductive interconnect line 6006A of the plurality of conductive interconnect lines 6006, as depicted in Figure 60B.

In an embodiment, the individual conductive interconnect lines in the plurality of conductive interconnect lines 6006 include a barrier layer 6028 along sidewalls and a bottom of a conductive fill material 6030. In one embodiment, the barrier layer 6028 and the conductive fill material 6030 both have uppermost surfaces below the upper surface 6010 of the ILD layer 6004, as depicted in Figures 60A, 60B, and 60C. In a particular such embodiment, the uppermost surface of the barrier layer 6028 is above the uppermost surface of the conductive fill material 6030, as depicted in Figure 60C. In another embodiment, the conductive fill material 6030 has an uppermost surface below the upper surface 6010 of the ILD layer 6004, and the barrier layer 6028 has an uppermost surface co-planar with the upper surface 6010 of the ILD layer 6004, as depicted in Figure 60D.

In an embodiment, the ILD layer 6004 includes silicon, carbon, and oxygen, and the etch stop layer 6012 includes silicon and nitrogen. In an embodiment, the upper surfaces 6008 of the individual conductive interconnect lines in the plurality of conductive interconnect lines 6006 are below the upper surface 6010 of the ILD layer 6004 by an amount in the range of 0.5-1.5 nanometers.

Referring collectively to Figures 60A-60D, in accordance with an embodiment of the present disclosure, a method of fabricating an integrated circuit structure includes forming, in a first interlayer dielectric (ILD) layer 6004 above a substrate 6002, a plurality of conductive interconnect lines spaced apart by the first ILD layer. The plurality of conductive interconnect lines is recessed relative to the first ILD layer 6004 to provide individual conductive interconnect lines 6006 of the plurality of conductive interconnect lines having an upper surface 6008 below an upper surface 6010 of the first ILD layer 6004. After recessing the plurality of conductive interconnect lines, an etch stop layer 6012 is formed on and conformal with the first ILD layer 6004 and the plurality of conductive interconnect lines 6006. The etch stop layer 6012 has a non-planar upper surface, with uppermost portions 6014 of the non-planar upper surface above the first ILD layer 6004 and lowermost portions 6016 of the non-planar upper surface above the plurality of conductive interconnect lines 6006. A second ILD layer 6022 is formed above the etch stop layer 6012. A via trench is etched in the second ILD layer 6022; during the etch, the topography of the etch stop layer 6012 guides the via trench toward a location over the conductive interconnect lines. The etch stop layer 6012 is etched through the via trench to form an opening 6020 in the etch stop layer 6012.
The opening 6020 is above the individual conductive interconnect line 6006A of the plurality of conductive interconnect lines 6006, but not above the first ILD layer 6004. A conductive via 6018 is formed in the via trench and in the opening 6020 of the etch stop layer 6012. The conductive via 6018 is on and electrically coupled to the individual conductive interconnect line 6006A of the plurality of conductive interconnect lines 6006.

In one embodiment, the individual conductive interconnect lines in the plurality of conductive interconnect lines 6006 include a barrier layer 6028 along sidewalls and a bottom of a conductive fill material 6030, and recessing the plurality of conductive interconnect lines includes recessing both the barrier layer 6028 and the conductive fill material 6030, as depicted in Figures 60A-60C. In another embodiment, the individual conductive interconnect lines in the plurality of conductive interconnect lines 6006 include a barrier layer 6028 along sidewalls and a bottom of a conductive fill material 6030, and recessing the plurality of conductive interconnect lines includes recessing the conductive fill material 6030 without substantially recessing the barrier layer 6028, as depicted in Figure 60D. In an embodiment, the etch stop layer 6012 redirects a lithographically mis-aligned via trench pattern. In an embodiment, recessing the plurality of conductive interconnect lines includes recessing by an amount in the range of 0.5-1.5 nanometers relative to the first ILD layer 6004.

In a second example of etch stop layer topography, Figures 61A-61D illustrate cross-sectional views of structural arrangements for a stepped line topography of a BEOL metallization layer, in accordance with an embodiment of the present disclosure.

Referring to Figure 61A, integrated circuit structure 6100 includes a plurality of conductive interconnect lines 6106 in an interlayer dielectric (ILD) layer 6104 above a substrate 6102 and spaced apart by the ILD layer 6104. For illustrative purposes, one of the plurality of conductive interconnect lines 6106 is shown coupled to an underlying via 6107. The individual conductive interconnect lines in the plurality of conductive interconnect lines 6106 have an upper surface 6108 above an upper surface 6110 of the ILD layer 6104. An etch stop layer 6112 is on and conformal with the ILD layer 6104 and the plurality of conductive interconnect lines 6106. The etch stop layer 6112 has a non-planar upper surface, with lowermost portions 6114 of the non-planar upper surface above the ILD layer 6104 and uppermost portions 6116 of the non-planar upper surface above the plurality of conductive interconnect lines 6106.

A conductive via 6118 is on and electrically coupled to an individual conductive interconnect line 6106A of the plurality of conductive interconnect lines 6106. The conductive via 6118 is in an opening 6120 of the etch stop layer 6112. The opening 6120 is above the individual conductive interconnect line 6106A of the plurality of conductive interconnect lines 6106, but not above the ILD layer 6104. The conductive via 6118 is in a second ILD layer 6122 above the etch stop layer 6112. In one embodiment, the second ILD layer 6122 is on and conformal with the etch stop layer 6112, as depicted in Figure 61A.

In an embodiment, a center 6124 of the conductive via 6118 is aligned with a center 6126 of the individual conductive interconnect line 6106A of the plurality of conductive interconnect lines 6106, as depicted in Figure 61A.
However, in another embodiment, the center 6124 of the conductive via 6118 is offset from the center 6126 of the individual conductive interconnect line 6106A of the plurality of conductive interconnect lines 6106, as depicted in Figure 61B.

In an embodiment, the individual conductive interconnect lines in the plurality of conductive interconnect lines 6106 include a barrier layer 6128 along sidewalls and a bottom of a conductive fill material 6130. In one embodiment, the barrier layer 6128 and the conductive fill material 6130 both have uppermost surfaces above the upper surface 6110 of the ILD layer 6104, as depicted in Figures 61A, 61B, and 61C. In a particular such embodiment, the uppermost surface of the barrier layer 6128 is below the uppermost surface of the conductive fill material 6130, as depicted in Figure 61C. In another embodiment, the conductive fill material 6130 has an uppermost surface above the upper surface 6110 of the ILD layer 6104, and the barrier layer 6128 has an uppermost surface co-planar with the upper surface 6110 of the ILD layer 6104, as depicted in Figure 61D.

In an embodiment, the ILD layer 6104 includes silicon, carbon, and oxygen, and the etch stop layer 6112 includes silicon and nitrogen. In an embodiment, the upper surfaces 6108 of the individual conductive interconnect lines in the plurality of conductive interconnect lines 6106 are above the upper surface 6110 of the ILD layer 6104 by an amount in the range of 0.5-1.5 nanometers.

Referring collectively to Figures 61A-61D, in accordance with an embodiment of the present disclosure, a method of fabricating an integrated circuit structure includes forming, in a first interlayer dielectric (ILD) layer 6104 above a substrate 6102, a plurality of conductive interconnect lines 6106 spaced apart by the first ILD layer. The first ILD layer 6104 is recessed relative to the plurality of conductive interconnect lines 6106 to provide the individual conductive interconnect lines of the plurality of conductive interconnect lines 6106 with an upper surface 6108 above an upper surface 6110 of the first ILD layer 6104. After recessing the first ILD layer 6104, an etch stop layer 6112 is formed on and conformal with the first ILD layer 6104 and the plurality of conductive interconnect lines 6106. The etch stop layer 6112 has a non-planar upper surface, with lowermost portions 6114 of the non-planar upper surface above the first ILD layer 6104 and uppermost portions 6116 of the non-planar upper surface above the plurality of conductive interconnect lines 6106. A second ILD layer 6122 is formed above the etch stop layer 6112. A via trench is etched in the second ILD layer 6122; during the etch, the topography of the etch stop layer 6112 guides the via trench toward a location over the conductive interconnect lines. The etch stop layer 6112 is etched through the via trench to form an opening 6120 in the etch stop layer 6112. The opening 6120 is above the individual conductive interconnect line 6106A of the plurality of conductive interconnect lines 6106, but not above the first ILD layer 6104. A conductive via 6118 is formed in the via trench and in the opening 6120 of the etch stop layer 6112.
The conductive via 6118 is on and electrically coupled to the individual conductive interconnect line 6106A of the plurality of conductive interconnect lines 6106.

In one embodiment, the individual conductive interconnect lines in the plurality of conductive interconnect lines 6106 include a barrier layer 6128 along sidewalls and a bottom of a conductive fill material 6130, and recessing the first ILD layer 6104 includes recessing it relative to both the barrier layer 6128 and the conductive fill material 6130, as depicted in Figures 61A-61C. In another embodiment, the individual conductive interconnect lines in the plurality of conductive interconnect lines 6106 include a barrier layer 6128 along sidewalls and a bottom of a conductive fill material 6130, and recessing the first ILD layer 6104 includes recessing it relative to the conductive fill material 6130 but not relative to the barrier layer 6128, as depicted in Figure 61D. In an embodiment, the etch stop layer 6112 redirects a lithographically mis-aligned via trench pattern. In an embodiment, recessing the first ILD layer 6104 includes recessing by an amount in the range of 0.5-1.5 nanometers relative to the plurality of conductive interconnect lines 6106.

In another aspect, techniques for patterning metal line ends are described. To provide context, in advanced nodes of semiconductor manufacturing, lower level interconnects may be created by separate patterning processes for the line grating, the line ends, and the vias. However, the fidelity of the composite pattern tends to degrade as the vias encroach on the line ends (and vice versa). The embodiments described herein provide a line end process, also referred to as a plug process, that eliminates the associated proximity rules. Embodiments may allow vias to be placed at line ends and may allow large vias that strap across an entire line end.

To provide further context, Figure 62A illustrates a plan view of a metallization layer and a corresponding cross-sectional view taken along the a-a' axis of the plan view, in accordance with an embodiment of the present disclosure. Figure 62B illustrates a cross-sectional view of a line end or plug, in accordance with an embodiment of the present disclosure. Figure 62C illustrates another cross-sectional view of a line end or plug, in accordance with an embodiment of the present disclosure.

Referring to Figure 62A, metallization layer 6200 includes metal lines 6202 formed in a dielectric layer 6204. The metal lines 6202 may be coupled to underlying vias 6203. The dielectric layer 6204 may include line end or plug regions 6205. Referring to Figure 62B, a line end or plug region 6205 of the dielectric layer 6204 may be fabricated by patterning a hard mask layer 6210 over the dielectric layer 6204 and then etching exposed portions of the dielectric layer 6204. The exposed portions of the dielectric layer 6204 may be etched to a depth suitable to form a line trench 6206, or further etched to a depth suitable to form a via trench 6208. Referring to Figure 62C, two vias adjacent opposite sidewalls of a line end or plug 6205 may be fabricated in a single large exposure 6216 to ultimately form a line trench 6212 and a via trench 6214.

However, referring again to Figures 62A-62C, fidelity issues and/or hard mask erosion issues can lead to imperfect patterning regimes.
In contrast, one or more embodiments described herein involve a process flow in which a line end dielectric (plug) is constructed after the trench and via patterning processes.

In one aspect, then, one or more embodiments described herein relate to approaches for constructing non-conductive spaces or interruptions between metal lines and, in some embodiments, between associated conductive vias (such interruptions being referred to as "line ends," "plugs," or "cuts"). By definition, the conductive vias are used to land on a previous-layer metal pattern. In this regard, the embodiments described herein enable a more robust interconnect fabrication scheme because of a reduced dependence on the alignment of the lithographic equipment. Such an interconnect fabrication scheme can be used to relax alignment/exposure constraints, can be used to improve electrical contact (e.g., by reducing via resistance), and can be used to reduce the total number of process operations and the processing time otherwise required for patterning such features using conventional approaches.

Figures 63A-63F illustrate plan views and corresponding cross-sectional views representing various operations in a plug-last processing scheme, in accordance with an embodiment of the present disclosure.

Referring to Figure 63A, a method of fabricating an integrated circuit structure includes forming a line trench 6306 in an upper portion 6304 of an interlayer dielectric (ILD) material layer 6302 formed above an underlying metallization layer 6300. A via trench 6308 is formed in a lower portion 6310 of the ILD material layer 6302. The via trench 6308 exposes a metal line 6312 of the underlying metallization layer 6300.

Referring to Figure 63B, a sacrificial material 6314 is formed above the ILD material layer 6302 and in the line trench 6306 and the via trench 6308. A hard mask 6315 may be formed on the sacrificial material 6314, as shown in Figure 63B. In one embodiment, the sacrificial material 6314 includes carbon.

Referring to Figure 63C, the sacrificial material 6314 is patterned to break the continuity of the sacrificial material 6314 in the line trench 6306, e.g., to provide an opening 6316 in the sacrificial material 6314.

Referring to Figure 63D, the opening 6316 in the sacrificial material 6314 is filled with a dielectric material to form a dielectric plug 6318. In an embodiment, after the opening 6316 in the sacrificial material 6314 is filled with the dielectric material, the hard mask 6315 is removed to provide the dielectric plug 6318 having an upper surface 6320 above an upper surface 6322 of the ILD material 6302, as shown in Figure 63D. The sacrificial material 6314 is then removed, retaining the dielectric plug 6318.

In an embodiment, filling the opening 6316 in the sacrificial material 6314 with the dielectric material includes filling with a metal oxide material. In one such embodiment, the metal oxide material is aluminum oxide. In an embodiment, filling the opening 6316 in the sacrificial material 6314 with the dielectric material includes filling using atomic layer deposition (ALD).

Referring to Figure 63E, the line trench 6306 and the via trench 6308 are filled with a conductive material 6324. In an embodiment, the conductive material 6324 is formed over the dielectric plug 6318 and above the ILD layer 6302, as shown.

Referring to Figure 63F, the conductive material 6324 and the dielectric plug 6318 are planarized to provide a planarized dielectric plug 6318', the planarized dielectric plug 6318' breaking the continuity of the conductive material 6324 in the line trench 6306.
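The net effect of the plug-last flow is that one routed line is cut into two electrically separate segments (the first portion 6324A and the second portion 6324B described below). A minimal sketch of that bookkeeping, modeling the line trench and the plug as one-dimensional intervals with illustrative, assumed coordinates:

```python
# Minimal sketch (illustrative coordinates): the plug cuts one line into two
# electrically separate segments, analogous to portions 6324A and 6324B.

def cut_line(line, plug):
    """Split interval `line` = (x0, x1) by interval `plug` = (p0, p1)."""
    (x0, x1), (p0, p1) = line, plug
    segments = []
    if p0 > x0:
        segments.append((x0, p0))   # first portion (cf. 6324A)
    if p1 < x1:
        segments.append((p1, x1))   # second portion (cf. 6324B)
    return segments

print(cut_line((0.0, 200.0), (90.0, 110.0)))  # [(0.0, 90.0), (110.0, 200.0)]
```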
Referring again to Figure 63F, in accordance with an embodiment of the present disclosure, an integrated circuit structure 6350 includes an interlayer dielectric (ILD) layer 6302 above a substrate. A conductive interconnect line 6324 is in a trench 6306 in the ILD layer 6302. The conductive interconnect line 6324 has a first portion 6324A and a second portion 6324B, the first portion 6324A being laterally adjacent to the second portion 6324B. A dielectric plug 6318' is between and laterally adjacent to the first portion 6324A and the second portion 6324B of the conductive interconnect line 6324. Although not shown, in an embodiment, the conductive interconnect line 6324 includes a conductive barrier liner and a conductive fill material, exemplary materials for which are described above. In one such embodiment, the conductive fill material includes cobalt.

In an embodiment, the dielectric plug 6318' includes a metal oxide material. In one such embodiment, the metal oxide material is aluminum oxide. In an embodiment, the dielectric plug 6318' is in direct contact with the first portion 6324A and the second portion 6324B of the conductive interconnect line 6324.

In an embodiment, the dielectric plug 6318' has a bottom 6318A substantially coplanar with a bottom 6324C of the conductive interconnect line 6324. In an embodiment, a first conductive via 6326 is in a trench 6308 in the ILD layer 6302. In one such embodiment, the first conductive via 6326 is below the bottom 6324C of the interconnect line 6324, and the first conductive via 6326 is electrically coupled to the first portion 6324A of the conductive interconnect line 6324.

In an embodiment, a second conductive via 6328 is in a third trench 6330 in the ILD layer 6302. The second conductive via 6328 is below the bottom 6324C of the interconnect line 6324, and the second conductive via 6328 is electrically coupled to the second portion 6324B of the conductive interconnect line 6324.

A dielectric plug may be formed using a fill process such as a chemical vapor deposition process. Artifacts of the fill process may be retained in the fabricated dielectric plug. As an example, Figure 64A illustrates a cross-sectional view of a conductive line plug having a seam therein, in accordance with an embodiment of the present disclosure.

Referring to Figure 64A, a dielectric plug 6418 has an approximately vertical seam 6400 approximately equidistant from the first portion 6324A of the conductive interconnect line 6324 and the second portion 6324B of the conductive interconnect line 6324.

It will be appreciated that dielectric plugs having a composition different from that of the ILD material housing them may be included only on select metallization layers, such as on lower metallization layers. As an example, Figure 64B illustrates a cross-sectional view of a stack of metallization layers including a conductive line plug at a lower metal line location, in accordance with an embodiment of the present disclosure.

Referring to Figure 64B, an integrated circuit structure 6450 includes a first plurality of conductive interconnects 6456 in and separated by a first interlayer dielectric (ILD) layer 6454 above a substrate 6452. Individual conductive interconnects in the first plurality of conductive interconnects 6456 have their continuity broken by one or more dielectric plugs 6458. In an embodiment, the one or more dielectric plugs 6458 include a material different from that of the ILD layer 6454.
A second plurality of conductive interconnects 6466 is in and separated by a second ILD layer 6464 above the first ILD layer 6454. In an embodiment, individual conductive interconnects in the second plurality of conductive interconnects 6466 have their continuity broken by one or more portions 6468 of the second ILD layer 6464. It will be appreciated that other metallization layers may be included in the integrated circuit structure 6450, as shown.

In one embodiment, the one or more dielectric plugs 6458 include a metal oxide material. In one such embodiment, the metal oxide material is aluminum oxide. In one embodiment, the first ILD layer 6454 and the second ILD layer 6464 (and, hence, the one or more portions 6468 of the second ILD layer 6464) include a carbon-doped silicon oxide material.

In one embodiment, the individual conductive interconnects in the first plurality of conductive interconnects 6456 include a first conductive barrier liner 6456A and a first conductive fill material 6456B. The individual conductive interconnects in the second plurality of conductive interconnects 6466 include a second conductive barrier liner 6466A and a second conductive fill material 6466B. In one such embodiment, the composition of the first conductive fill material 6456B is different from that of the second conductive fill material 6466B. In a particular such embodiment, the first conductive fill material 6456B includes cobalt, and the second conductive fill material 6466B includes copper.

In one embodiment, the first plurality of conductive interconnects 6456 has a first pitch (P1, as shown for like layer 6470). The second plurality of conductive interconnects 6466 has a second pitch (P2, as shown for like layer 6480). The second pitch (P2) is greater than the first pitch (P1). In one embodiment, the individual conductive interconnects in the first plurality of conductive interconnects 6456 have a first width (W1, as shown for like layer 6470). The individual conductive interconnects in the second plurality of conductive interconnects 6466 have a second width (W2, as shown for like layer 6480). The second width (W2) is greater than the first width (W1).
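The mixed stack of Figure 64B can be summarized as a small configuration sketch. All numeric values below are illustrative assumptions (the disclosure gives only the relationships P2 > P1 and W2 > W1), and the TaN liner is likewise an assumed choice from the barrier materials named later in this description:

```python
# Minimal sketch (assumed values): a compact description of the mixed
# metallization stack of Figure 64B, in which a lower cobalt-filled layer at
# tight pitch carries aluminum oxide plugs while an upper copper layer at a
# relaxed pitch is interrupted by the ILD itself.

BEOL_STACK = [
    {"layer": "M_lower", "fill": "Co", "liner": "TaN",      # first plurality 6456
     "pitch_nm": 40, "width_nm": 20, "plug": "Al2O3"},
    {"layer": "M_upper", "fill": "Cu", "liner": "TaN",      # second plurality 6466
     "pitch_nm": 80, "width_nm": 40, "plug": "carbon-doped SiO2 (ILD)"},
]

# Sanity checks mirroring the stated relationships P2 > P1 and W2 > W1:
assert BEOL_STACK[1]["pitch_nm"] > BEOL_STACK[0]["pitch_nm"]
assert BEOL_STACK[1]["width_nm"] > BEOL_STACK[0]["width_nm"]
```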
It will be appreciated that the layers and materials described above in association with back end of line (BEOL) structures and processing may be formed on or above an underlying semiconductor substrate or structure, such as an underlying device layer of an integrated circuit. In an embodiment, an underlying semiconductor substrate represents a general workpiece object used to fabricate an integrated circuit. The semiconductor substrate often includes a wafer or other piece of silicon or another semiconductor material. Suitable semiconductor substrates include, but are not limited to, single crystal silicon, polycrystalline silicon, and silicon-on-insulator (SOI), as well as similar substrates formed of other semiconductor materials, such as substrates including germanium, carbon, or group III-V materials. Depending on the stage of fabrication, the semiconductor substrate often includes transistors, integrated circuits, and the like. The substrate may also include semiconductor materials, metals, dielectrics, dopants, and other materials commonly found in semiconductor substrates. Furthermore, the structures depicted may be fabricated on underlying lower-level interconnect layers.

Although the preceding methods of fabricating a metallization layer, or portions of a metallization layer, of a BEOL metallization layer are described in detail with respect to select operations, it will be appreciated that additional or intermediate operations for fabrication may include standard microelectronic fabrication processes such as lithography, etch, thin film deposition, planarization (e.g., chemical mechanical polishing (CMP)), diffusion, metrology, the use of sacrificial layers, the use of etch stop layers, the use of planarization stop layers, or any other action associated with microelectronic component fabrication. Moreover, it will be appreciated that the process operations described for the preceding process flows may be practiced in an alternative order, that not every operation need be performed, or that additional process operations may be performed, or both.

In an embodiment, as used throughout the present description, an interlayer dielectric (ILD) material is composed of or includes a layer of a dielectric or insulating material. Examples of suitable dielectric materials include, but are not limited to, oxides of silicon (e.g., silicon dioxide (SiO2)), doped oxides of silicon, fluorinated oxides of silicon, carbon-doped oxides of silicon, various low-k dielectric materials known in the art, and combinations thereof. The interlayer dielectric material may be formed by techniques such as chemical vapor deposition (CVD) or physical vapor deposition (PVD), or by other deposition methods.

In an embodiment, as also used throughout the present description, metal line or interconnect line material (and via material) is composed of one or more metals or other conductive structures. A common example is the use of copper lines and structures that may or may not include a barrier layer between the copper and the surrounding ILD material. As used herein, the term metal includes alloys, stacks, and other combinations of multiple metals. For example, a metal interconnect line may include a barrier layer (e.g., a layer including one or more of Ta, TaN, Ti, or TiN), a stack of different metals or alloys, and the like. Thus, an interconnect line may be a single material layer, or may be formed from several layers, including a conductive liner layer and a fill layer. Any suitable deposition process, such as electroplating, chemical vapor deposition, or physical vapor deposition, may be used to form an interconnect line. In an embodiment, the interconnect lines are composed of a conductive material such as, but not limited to, Cu, Al, Ti, Zr, Hf, V, Ru, Co, Ni, Pd, Pt, W, Ag, Au, or alloys thereof. Interconnect lines are also sometimes referred to in the art as traces, wires, lines, metal, or simply interconnects.

In an embodiment, as also used throughout the present description, hard mask materials are composed of dielectric materials different from the interlayer dielectric material. In one embodiment, different hard mask materials may be used in different regions so as to provide different growth or etch selectivity to each other and to the underlying dielectric and metal layers. In some embodiments, a hard mask layer includes a layer of a nitride of silicon (e.g., silicon nitride) or a layer of an oxide of silicon, or both, or a combination thereof. Other suitable materials may include carbon-based materials. In another embodiment, a hard mask material includes a metal species.
For example, a hard mask or other overlying material may include a layer of a nitride of titanium or another metal (e.g., titanium nitride). Other materials, such as oxygen, may be included in one or more of these layers. Alternatively, other hard mask layers known in the art may be used, depending upon the particular implementation. The hard mask layers may be formed by CVD, PVD, or by other deposition methods.

In an embodiment, as also used throughout the present description, lithographic operations are performed using 193 nm immersion lithography (i193), extreme ultraviolet (EUV) lithography, or electron beam direct write (EBDW) lithography, or the like. A positive tone or a negative tone resist may be used. In one embodiment, a lithographic mask is a trilayer mask composed of a topographic masking portion, an anti-reflective coating (ARC) layer, and a photoresist layer. In a particular such embodiment, the topographic masking portion is a carbon hard mask (CHM) layer, and the anti-reflective coating layer is a silicon ARC layer.

In another aspect, one or more embodiments described herein relate to memory bit cells having internal node jumpers. Particular embodiments may include techniques for implementing area-efficient layouts of memory bit cells in advanced self-aligned process technologies. Embodiments may pertain to technology nodes of 10 nanometers or smaller. Embodiments may provide the ability to develop memory bit cells with improved performance in the same footprint by utilizing contact over active gate (COAG), aggressive metal 1 (M1) pitch scaling, or both. Embodiments may include or pertain to bit cell layouts that enable higher performance bit cells to be implemented in the same or a smaller footprint relative to prior technology nodes.

In accordance with embodiments of the present disclosure, jumpers on a higher metal layer (e.g., metal 1 or M1) are implemented to connect internal nodes, instead of using a conventional gate to trench contact to gate contact (poly-tcn-polycon) connection. In an embodiment, a contact over active gate (COAG) integration scheme is combined with metal 1 jumpers to connect internal nodes, alleviating or altogether eliminating the need to increase the footprint for higher performance cells. That is, an improved transistor ratio can be achieved. In an embodiment, such an approach enables aggressive scaling to provide improved cost per transistor for, e.g., a 10 nanometer (10 nm) technology node. Internal node M1 jumpers can be implemented in SRAM, RF, and dual port bit cells in a 10 nm technology to produce very compact layouts.

As a comparative example, Figure 65 illustrates a first view of a cell layout for a memory cell.

Referring to Figure 65, an exemplary 14 nanometer (14 nm) layout 6500 includes a bit cell 6502. The bit cell 6502 includes gate or poly lines 6504 and metal 1 (M1) lines 6506. In the illustrated example, the poly lines 6504 have a 1x pitch, and the M1 lines 6506 have a 1x pitch. In a particular example, the poly lines 6504 have a 70 nm pitch, and the M1 lines 6506 have a 70 nm pitch.

In contrast to Figure 65, Figure 66 illustrates a first view of a cell layout for a memory cell having an internal node jumper, in accordance with an embodiment of the present disclosure.

Referring to Figure 66, an exemplary 10 nanometer (10 nm) layout 6600 includes a bit cell 6602. The bit cell 6602 includes gate or poly lines 6604 and metal 1 (M1) lines 6606. In the illustrated example, the poly lines 6604 have a 1x pitch, and the M1 lines 6606 have a 0.67x pitch.
As a result, overlapping lines 6605 are obtained, including M1 lines directly above poly lines. In a particular embodiment, the poly lines 6604 have a 54 nm pitch, and the M1 lines 6606 have a 36 nm pitch.

Compared to the layout 6500, in the layout 6600 the M1 pitch is less than the gate pitch, and an additional line (6605) is freed up every three lines (e.g., for every two poly lines there are three M1 lines). The freed-up M1 line is referred to herein as an internal node jumper. An internal node jumper can be used for gate-to-gate (poly-to-poly) interconnection or for trench contact-to-trench contact interconnection. In an embodiment, contact to the poly lines is made by a contact over active gate (COAG) arrangement, such that the internal node jumpers can be fabricated.

Referring more generally to Figure 66, in an embodiment, an integrated circuit structure includes a memory bit cell 6602 on a substrate. The memory bit cell 6602 includes first and second gate lines 6604 parallel along a second direction (2) of the substrate. The first and second gate lines 6604 have a first pitch along a first direction (1) of the substrate, the first direction (1) being perpendicular to the second direction (2). First, second, and third interconnect lines 6606 are over the first and second gate lines 6604. The first, second, and third interconnect lines 6606 are parallel along the second direction (2) of the substrate. The first, second, and third interconnect lines 6606 have a second pitch along the first direction (1), the second pitch being less than the first pitch. In one embodiment, one of the first, second, and third interconnect lines 6606 is an internal node jumper for the memory bit cell 6602.

As used throughout the present disclosure, the gate lines 6604 may be described as being on a grating so as to form a grating structure. Thus, the grating-like patterns described herein may have gate lines or interconnect lines spaced at a constant pitch and having a constant width. The patterns may be fabricated by pitch halving, pitch quartering, or other pitch division approaches.

As a comparative example, Figure 67 illustrates a second view of a cell layout 6700 for a memory cell.

Referring to Figure 67, the 14 nm bit cell 6502 is shown as having N diffusions 6702 (e.g., P-type doped active regions, such as boron-doped diffusion regions of an underlying substrate) and P diffusions 6704 (e.g., N-type doped active regions, such as phosphorus-doped or arsenic-doped diffusion regions of the underlying substrate), with the M1 lines removed for clarity. The layout 6700 of the bit cell 6502 includes gate lines or poly lines 6504, trench contacts 6706, gate contacts 6708 (specific to the 14 nm node), and contact vias 6710.

In contrast to Figure 67, Figure 68 illustrates a second view of a cell layout 6800 for a memory cell having an internal node jumper, in accordance with an embodiment of the present disclosure.

Referring to Figure 68, the 10 nm bit cell 6602 is shown as having N diffusions 6802 (e.g., P-type doped active regions, such as boron-doped diffusion regions of an underlying substrate) and P diffusions 6804 (e.g., N-type doped active regions, such as phosphorus-doped or arsenic-doped diffusion regions of the underlying substrate), with the M1 lines removed for clarity.
The layout 6800 of the bit cell 6602 includes gate lines or poly lines 6604, trench contacts 6806, gate vias 6808 (specific to the 10 nm node), and trench contact vias 6810.

Contrasting the layouts 6700 and 6800, in accordance with embodiments of the present disclosure, in the 14 nm layout the internal nodes are connected only by gate contacts (GCN). Due to poly-to-GCN spacing constraints, an enhanced performance layout cannot be generated in the same footprint. In the 10 nm layout, the design allows a via landing on the gate (VCG) to eliminate the need for poly contacts. In one embodiment, this arrangement enables the connection of internal nodes using M1, allowing an increase in active area density (e.g., an increase in the number of fins) within the 14 nm footprint. In the 10 nm layout, the spacing between the diffusion regions can be made smaller when the COAG architecture is used, since the spacing is not limited by the trench contact-to-gate contact spacing. In an embodiment, the layout 6700 of Figure 67 is referred to as a 112 (1 fin pull-up, 1 fin pass gate, 2 fin pull-down) arrangement. In contrast, the layout 6800 of Figure 68 is referred to as a 122 (1 fin pull-up, 2 fin pass gate, 2 fin pull-down) arrangement; in a particular embodiment, the 122 arrangement is within the same footprint as the 112 layout of Figure 67. In an embodiment, the 122 arrangement provides improved performance compared to the 112 arrangement.

As a comparative example, Figure 69 illustrates a third view of a cell layout 6900 for a memory cell.

Referring to Figure 69, the 14 nm bit cell 6502 is shown as having metal 0 (M0) lines 6902, with the poly lines removed for clarity. Also shown are the metal 1 (M1) lines 6506, contact vias 6710, and via 0 structures 6904.

In contrast to Figure 69, Figure 70 illustrates a third view of a cell layout 7000 for a memory cell having an internal node jumper, in accordance with an embodiment of the present disclosure.

Referring to Figure 70, the 10 nm bit cell 6602 is shown as having metal 0 (M0) lines 7002, with the poly lines removed for clarity. Also shown are the metal 1 (M1) lines 6606, gate vias 6808, trench contact vias 6810, and via 0 structures 7004. Comparing Figure 69 and Figure 70, in accordance with embodiments of the present disclosure, for the 14 nm layout the internal nodes are connected only by gate contacts (GCN), while for the 10 nm layout an M1 jumper is used to connect one of the internal nodes.

Referring collectively to Figures 66, 68, and 70, in accordance with an embodiment of the present disclosure, an integrated circuit structure includes a memory bit cell 6602 on a substrate. The memory bit cell 6602 includes a first (top 6802) active region, a second (top 6804) active region, a third (bottom 6804) active region, and a fourth (bottom 6802) active region, parallel along a first direction (1) of the substrate. First (left 6604) and second (right 6604) gate lines are over the first, second, third, and fourth active regions 6802/6804. The first and second gate lines 6604 are parallel along a second direction (2) of the substrate, the second direction (2) being perpendicular to the first direction (1). First (far left 6606), second (near left 6606), and third (near right 6606) interconnect lines are over the first and second gate lines 6604.
The first, second, and third interconnect lines 6606 are parallel along the second direction (2) of the substrate.

In an embodiment, the first (far left 6606) and second (near left 6606) interconnect lines are electrically coupled to the first and second gate lines 6604 at locations where the first and second gate lines 6604 are over one or more of the first, second, third, and fourth active regions 6802/6804 (e.g., at so-called "active gate" locations). In one embodiment, the first (far left 6606) and second (near left 6606) interconnect lines are electrically connected to the first and second gate lines 6604 by a plurality of intervening interconnects 7004 located vertically between the first and second interconnect lines 6606 and the first and second gate lines 6604. The plurality of intervening interconnects 7004 is parallel along the first direction (1) of the substrate.

In an embodiment, the third interconnect line (near right 6606) electrically couples together a pair of gate electrodes of the memory bit cell 6602, the pair of gate electrodes being included in the first and second gate lines 6604. In another embodiment, the third interconnect line (near right 6606) electrically couples together a pair of trench contacts of the memory bit cell 6602, the pair of trench contacts being included in a plurality of trench contact lines 6806. In an embodiment, the third interconnect line (near right 6606) is an internal node jumper.

In an embodiment, the first active region (top 6802) is a P-type doped active region (e.g., providing N diffusion for an NMOS device), the second active region (top 6804) is an N-type doped active region (e.g., providing P diffusion for a PMOS device), the third active region (bottom 6804) is an N-type doped active region (e.g., providing P diffusion for a PMOS device), and the fourth active region (bottom 6802) is a P-type doped active region (e.g., providing N diffusion for an NMOS device). In an embodiment, the first, second, third, and fourth active regions 6802/6804 are in silicon fins. In an embodiment, the memory bit cell 6602 includes pull-up transistors based on a single silicon fin, pass gate transistors based on two silicon fins, and pull-down transistors based on two silicon fins.

In an embodiment, the first and second gate lines 6604 alternate with individual trench contact lines of the plurality of trench contact lines 6806, parallel along the second direction (2) of the substrate. The plurality of trench contact lines 6806 includes trench contacts of the memory bit cell 6602. The first and second gate lines 6604 include gate electrodes of the memory bit cell 6602.

In an embodiment, the first and second gate lines 6604 have a first pitch along the first direction (1). The first, second, and third interconnect lines 6606 have a second pitch along the first direction (1). In one such embodiment, the second pitch is less than the first pitch. In a particular such embodiment, the first pitch is in the range of 50 nanometers to 60 nanometers, and the second pitch is in the range of 30 nanometers to 40 nanometers. In a particular such embodiment, the first pitch is 54 nanometers, and the second pitch is 36 nanometers.

Embodiments described herein may be implemented to provide an increased number of fins within essentially the same bit cell footprint as prior technology nodes, enhancing the performance of memory bit cells of smaller technology nodes relative to a previous generation.
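The three-M1-tracks-per-two-gate-tracks relationship described above can be verified with simple arithmetic. The following is a minimal sketch using the 54 nm and 36 nm pitches given above; the track indexing itself is an illustrative assumption:

```python
# Minimal sketch (illustrative): with a 54 nm gate pitch and a 36 nm M1 pitch,
# every third M1 track coincides with a gate track and can be freed up as an
# internal node jumper (three M1 lines per two gate lines).

GATE_PITCH_NM = 54.0
M1_PITCH_NM = 36.0

def jumper_tracks(n_m1_tracks):
    """Return indices of M1 tracks that land directly over a gate track."""
    hits = []
    for i in range(n_m1_tracks):
        x = i * M1_PITCH_NM
        if x % GATE_PITCH_NM == 0.0:   # directly over a gate line
            hits.append(i)
    return hits

print(jumper_tracks(9))  # [0, 3, 6]: one coincident (jumper) track per three M1 tracks
```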
As an example, Figures 71A and 71B illustrate a bit cell layout and a schematic, respectively, for a six transistor (6T) static random access memory (SRAM), in accordance with an embodiment of the present disclosure.

Referring to Figures 71A and 71B, a bit cell layout 7102 includes gate lines 7104 (which may also be referred to as poly lines) parallel along a direction (2). Trench contact lines 7106 alternate with the gate lines 7104. The gate lines 7104 and the trench contact lines 7106 are over NMOS diffusion regions 7108 (e.g., P-type doped active regions, such as boron-doped diffusion regions of an underlying substrate) and PMOS diffusion regions 7110 (e.g., N-type doped active regions, such as phosphorus-doped or arsenic-doped diffusion regions of the underlying substrate), the diffusion regions being parallel along a direction (1). In an embodiment, both NMOS diffusion regions 7108 include two silicon fins. Both PMOS diffusion regions 7110 include one silicon fin.

Referring again to Figures 71A and 71B, NMOS pass gate transistors 7112, NMOS pull-down transistors 7114, and PMOS pull-up transistors 7116 are formed by the gate lines 7104 together with the NMOS diffusion regions 7108 and the PMOS diffusion regions 7110. Also shown are a word line (WL) 7118, internal nodes 7120 and 7126, a bit line (BL) 7122, a complementary bit line (BLB) 7124, SRAM VCC 7128, and VSS 7130.

In an embodiment, contact to the first and second gate lines 7104 of the bit cell layout 7102 is made at active gate locations of the first and second gate lines 7104. In an embodiment, the 6T SRAM bit cell of layout 7102 includes internal node jumpers, e.g., as described above.

In an embodiment, the layouts described herein are compatible with uniform plug and mask patterns, including uniform fin trim masks. The layouts are compatible with non-EUV processes. In addition, the layouts may require the use of only an intermediate fin trim mask. The embodiments described herein enable increased density over an area compared to other layouts. Embodiments may be implemented to provide layout-efficient memory implementations in advanced self-aligned process technologies. Advantages may be realized in terms of die area or memory performance, or both. Circuit techniques can be uniquely enabled by such layouts.
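The schematic of Figure 71B is the standard 6T SRAM topology, which can be captured as a small netlist sketch. The net names N1 and N2 below are assumed labels for the internal nodes 7120 and 7126:

```python
# Minimal sketch (standard 6T SRAM topology, as in Figure 71B; net names assumed):
# two cross-coupled inverters (PU/PD pairs) plus two pass gates gated by WL.

SRAM_6T = {
    "PG1": {"type": "NMOS", "gate": "WL", "src": "BL",  "drn": "N1"},  # 7112
    "PG2": {"type": "NMOS", "gate": "WL", "src": "BLB", "drn": "N2"},  # 7112
    "PD1": {"type": "NMOS", "gate": "N2", "src": "VSS", "drn": "N1"},  # 7114
    "PD2": {"type": "NMOS", "gate": "N1", "src": "VSS", "drn": "N2"},  # 7114
    "PU1": {"type": "PMOS", "gate": "N2", "src": "VCC", "drn": "N1"},  # 7116
    "PU2": {"type": "PMOS", "gate": "N1", "src": "VCC", "drn": "N2"},  # 7116
}

# The internal nodes N1/N2 (cf. 7120, 7126) are exactly the nets an M1 jumper
# can strap when gates and trench contacts cannot be joined directly.
internal = {d["drn"] for d in SRAM_6T.values()} - {"BL", "BLB"}
print(sorted(internal))  # ['N1', 'N2']
```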
One or more embodiments described herein relate to multi-version library cell handling when parallel interconnect lines (e.g., metal 1 lines) and gate lines are misaligned. Embodiments may pertain to technology nodes of 10 nanometers or smaller. Embodiments may include or pertain to cell layouts that enable higher performance cells in the same or a smaller footprint compared to prior technology nodes. In an embodiment, the interconnect lines overlying the gate lines are fabricated to have an increased density relative to the underlying gate lines. Such an embodiment can achieve increased pin hits, increased routing possibilities, or increased access to cell pins. Embodiments may be practiced to provide a block-level density improvement of greater than 6%.

To provide context, gate lines and the next parallel level of interconnects (typically referred to as metal 1, with a metal 0 layer extending orthogonally between metal 1 and the gate lines) require alignment at the block level. However, in an embodiment, the pitch of the metal 1 lines is made different from (e.g., smaller than) the pitch of the gate lines. Two standard cell versions exist for each cell (e.g., two different cell patterns) to accommodate the difference in pitch. The particular version selected follows rules adhered to at the block level. If the version is not properly selected, a dirty registration (DR) may occur. In accordance with an embodiment of the present disclosure, a higher metal layer (e.g., metal 1 or M1) having an increased pitch density relative to the underlying gate lines is implemented. In an embodiment, such an approach enables aggressive scaling to provide improved cost per transistor for, e.g., a 10 nanometer (10 nm) technology node.

Figure 72 illustrates cross-sectional views of two different layouts for a same standard cell, in accordance with an embodiment of the present disclosure.

Referring to part (a) of Figure 72, a group of gate lines 7204A is above a substrate 7202A. A group of metal 1 (M1) interconnect lines 7206A is above the group of gate lines 7204A. The group of metal 1 (M1) interconnect lines 7206A has a tighter pitch than the group of gate lines 7204A. However, the outermost metal 1 (M1) interconnect lines 7206A are aligned with the outermost gate lines 7204A. For design purposes, as used throughout the present disclosure, the aligned arrangement of part (a) of Figure 72 is referred to as having an even (E) alignment.

In contrast to part (a), referring to part (b) of Figure 72, a group of gate lines 7204B is above a substrate 7202B. A group of metal 1 (M1) interconnect lines 7206B is above the group of gate lines 7204B. The group of metal 1 (M1) interconnect lines 7206B has a tighter pitch than the group of gate lines 7204B. The outermost metal 1 (M1) interconnect lines 7206B are not aligned with the outermost gate lines 7204B. For design purposes, as used throughout the present disclosure, the non-aligned arrangement of part (b) of Figure 72 is referred to as having an odd (O) alignment.

Figure 73 illustrates plan views of four different cell arrangements showing even (E) or odd (O) designations, in accordance with an embodiment of the present disclosure.

Referring to part (a) of Figure 73, a cell 7300A has gate (or poly) lines 7302A and metal 1 (M1) lines 7304A. The cell 7300A is designated an EE cell because both the left side and the right side of the cell 7300A have aligned gate 7302A and M1 7304A lines. In contrast, referring to part (b) of Figure 73, a cell 7300B has gate (or poly) lines 7302B and metal 1 (M1) lines 7304B. The cell 7300B is designated an OO cell because both the left side and the right side of the cell 7300B have non-aligned gate 7302B and M1 7304B lines.

Referring to part (c) of Figure 73, a cell 7300C has gate (or poly) lines 7302C and metal 1 (M1) lines 7304C. The cell 7300C is designated an EO cell because the left side of the cell 7300C has aligned gate 7302C and M1 7304C lines, while the right side of the cell 7300C has non-aligned gate 7302C and M1 7304C lines. In contrast, referring to part (d) of Figure 73, a cell 7300D has gate (or poly) lines 7302D and metal 1 (M1) lines 7304D. The cell 7300D is designated an OE cell because the left side of the cell 7300D has non-aligned gate 7302D and M1 7304D lines, while the right side of the cell 7300D has aligned gate 7302D and M1 7304D lines.

As a foundation for placing a selected first or second version of a standard cell type, Figure 74 illustrates a plan view of a block-level poly grid, in accordance with an embodiment of the present disclosure. Referring to Figure 74, a block-level poly grid 7400 includes gate lines 7402 extending in parallel along a direction 7404. Designated cell layout boundaries 7406 and 7408 are shown extending in a second, orthogonal direction.
The gate lines 7402 alternate between even (E) and odd (O) designations.

Figure 75 illustrates an exemplary acceptable (passing) layout based on standard cells having different versions, in accordance with an embodiment of the present disclosure. Referring to Figure 75, a layout 7500 includes three cells of type 7300C/7300D placed in left-to-right order between the boundaries 7406 and 7408: a 7300D, an abutting first 7300C, and a spaced-apart second 7300C. The choice between 7300C and 7300D is based on the E or O alignment designation of the corresponding gate lines 7402. The layout 7500 also includes cells of type 7300A/7300B placed in left-to-right order below the boundary 7408: a first 7300A spaced apart from a second 7300A. The choice between 7300A and 7300B is based on the E or O alignment designation of the corresponding gate lines 7402. The layout 7500 is a passing layout in the sense that no dirty registration (DR) occurs in the layout 7500. It will be appreciated that p designates power, and a, b, c, and o designate exemplary pins. In the arrangement 7500, the power lines p are aligned with one another across the boundary 7408.

Referring more generally to Figure 75, in accordance with an embodiment of the present disclosure, an integrated circuit structure includes a plurality of gate lines 7402 parallel along a first direction of a substrate and spaced apart, with a first pitch, along a second direction orthogonal to the first direction. A first version 7300C of a cell type is over a first portion of the plurality of gate lines 7402. The first version 7300C of the cell type includes a first plurality of interconnect lines having a second pitch along the second direction, the second pitch being less than the first pitch. A second version 7300D of the cell type is over a second portion of the plurality of gate lines 7402, laterally adjacent to the first version 7300C of the cell type along the second direction. The second version 7300D of the cell type includes a second plurality of interconnect lines having the second pitch along the second direction. The second version 7300D of the cell type is structurally different from the first version 7300C of the cell type.

In an embodiment, at a first edge (e.g., a left edge) of the first version 7300C of the cell type along the second direction, but not at a second edge (e.g., a right edge), individual interconnect lines of the first plurality of interconnect lines of the first version 7300C of the cell type are aligned along the first direction with individual gate lines of the plurality of gate lines 7402. In one such embodiment, the first version 7300C of the cell type is a first version of a NAND cell. At a first edge (e.g., a left edge) of the second version 7300D of the cell type along the second direction, individual interconnect lines of the second plurality of interconnect lines of the second version 7300D of the cell type are misaligned along the first direction with individual gate lines of the plurality of gate lines 7402, while at a second edge (e.g., a right edge) of the second version 7300D of the cell type along the second direction, individual interconnect lines of the second plurality of interconnect lines of the second version 7300D of the cell type are aligned along the first direction with individual gate lines of the plurality of gate lines 7402. In one such embodiment, the second version 7300D of the cell type is a second version of the NAND cell.

In another embodiment, the first and second versions are selected from the cell types 7300A and 7300B.
At both edges of the first version 7300A of the cell type along the second direction, individual interconnect lines of the first plurality of interconnect lines of the first version 7300A of the cell type are aligned along the first direction with individual gate lines of the plurality of gate lines 7402. In one embodiment, the first version 7300A of the cell type is a first version of an inverter cell. It will be appreciated that, at both edges of the second version 7300B of the cell type along the second direction, individual interconnect lines of the second plurality of interconnect lines of the second version 7300B of the cell type are instead misaligned along the first direction with individual gate lines of the plurality of gate lines 7402. In one embodiment, the second version 7300B of the cell type is a second version of the inverter cell.

Figure 76 illustrates an exemplary unacceptable (failing) layout based on standard cells having different versions, in accordance with an embodiment of the present disclosure. Referring to Figure 76, a layout 7600 includes three cells of type 7300C/7300D placed in left-to-right order between the boundaries 7406 and 7408: a 7300D, an abutting first 7300C, and a spaced-apart second 7300C. Appropriate selection between 7300C and 7300D is based on the E or O alignment designation of the corresponding gate lines 7402, as shown. However, the layout 7600 also includes cells of type 7300A/7300B placed in left-to-right order below the boundary 7408: a first 7300A spaced apart from a second 7300A. The layout 7600 differs from the layout 7500 in that the second 7300A is moved one line to the left. Although the choice between 7300A and 7300B should be based on the E or O alignment designation of the corresponding gate lines 7402, this is not the case here, and the second cell 7300A is misaligned, one consequence of which is misaligned power (p) lines. The layout 7600 is a failing layout because a dirty registration (DR) occurs in the layout 7600.

Figure 77 illustrates another exemplary acceptable (passing) layout based on standard cells having different versions, in accordance with an embodiment of the present disclosure. Referring to Figure 77, a layout 7700 includes three cells of type 7300C/7300D placed in left-to-right order between the boundaries 7406 and 7408: a 7300D, an abutting first 7300C, and a spaced-apart second 7300C. The choice between 7300C and 7300D is based on the E or O alignment designation of the corresponding gate lines 7402. The layout 7700 also includes cells of type 7300A/7300B placed in left-to-right order below the boundary 7408: a 7300A and a 7300B spaced apart. The 7300B occupies the same position as the second 7300A of the layout 7600, but the cell 7300B is selected based on the proper alignment designated by O on the corresponding gate lines 7402. The layout 7700 is a passing layout in the sense that no dirty registration (DR) occurs in the layout 7700. It will be appreciated that p designates power, and a, b, c, and o designate exemplary pins. In the arrangement 7700, the power lines p are aligned with one another across the boundary 7408.

Referring collectively to Figures 76 and 77, a method of fabricating a layout for an integrated circuit structure includes designating alternating ones of a plurality of gate lines 7402, the gate lines parallel along a first direction, as even (E) or odd (O) along a second direction. A location is then selected for a cell type above the plurality of gate lines 7402. The method also includes selecting between a first version of the cell type and a second version of the cell type depending on the location, the second version being structurally different from the first version, wherein the selected version of the cell type has an even (E) or odd (O) designation for the interconnect line at each edge of the cell type along the second direction, and wherein the designation at each cell type edge matches the designation of the individual gate line of the plurality of gate lines beneath that interconnect line.
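The version selection rule lends itself to a simple lookup. The following is a minimal sketch under assumed conventions: the mapping of edge designations to the version names 7300A-7300D follows the cells of Figure 73, while the encoding of gate track parity as E/O by index is an illustrative assumption rather than anything specified by this disclosure:

```python
# Minimal sketch (assumed encoding): gate tracks alternate E/O designations at
# the block level; a cell version is valid only if the designation at each of
# its edges matches the designation of the gate track beneath that edge.

VERSIONS = {"INV": {"EE": "7300A", "OO": "7300B"},
            "NAND": {"EO": "7300C", "OE": "7300D"}}

def pick_version(cell, left_track, right_track):
    """Select the cell version whose edge designations match the block grid.
    Gate tracks with an even index are 'E', odd are 'O' (assumed)."""
    parity = lambda i: "E" if i % 2 == 0 else "O"
    key = parity(left_track) + parity(right_track)
    try:
        return VERSIONS[cell][key]
    except KeyError:
        raise ValueError(f"no {cell} version fits at {key}: dirty registration (DR)")

print(pick_version("INV", 2, 4))   # 7300A (both edges even)
print(pick_version("NAND", 2, 5))  # 7300C (E left edge, O right edge)
```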
In another aspect, one or more embodiments relate to fabricating a metal resistor on a fin-based structure included in a fin field effect transistor (finFET) architecture. In an embodiment, such precision resistors are implemented as a fundamental component of system-on-chip (SoC) technologies, since faster data transfer rates require high-speed I/O. Such resistors can enable high-speed analog circuits (such as CSI/SERDES) and scaled I/O architectures owing to their low variation and near-zero temperature coefficients. In one embodiment, a resistor described herein is a tunable resistor.

To provide context, conventional resistors used in present process technologies typically fall into one of two categories: general-purpose resistors or precision resistors. General-purpose resistors, such as trench contact resistors, are moderate in cost, but may suffer from high variation due to variation inherent to the fabrication methods used, or from relatively large associated temperature coefficients, or both. Precision resistors can alleviate the variation and temperature coefficient issues, but often incur higher process cost and added fabrication operations. The integration of polysilicon precision resistors has presented increasing difficulty in high-k/metal gate process technologies.

In accordance with an embodiment, a fin-based thin film resistor (TFR) is described. In one embodiment, such a resistor has a near-zero temperature coefficient. In one embodiment, such a resistor exhibits reduced variation due to dimensional control. In accordance with one or more embodiments of the present disclosure, an integrated precision resistor is fabricated within a finFET transistor architecture. It will be appreciated that conventional resistors used in high-k/metal gate process technologies are typically tungsten trench contact (TCN) resistors, well resistors, or polysilicon precision resistors. Such resistors either add process cost or complexity, or suffer high variation and poor temperature coefficients due to variation in the fabrication processes used. In contrast, in an embodiment, the fabrication of an integrated fin-based thin film resistor enables a solution that is cost-effective, has a good (near-zero) temperature coefficient, and has low variation, in place of known approaches.

To provide further context, state-of-the-art precision resistors have been fabricated using two-dimensional (2D) metal films or highly doped poly lines. Such resistors are often discretized to fixed template values, and finer-grained resistance values are therefore difficult to achieve.

In order to address one or more of the above issues, in accordance with one or more embodiments of the present disclosure, the design of a high density precision resistor using a fin backbone (e.g., a silicon fin backbone) is described herein. In one embodiment, advantages of such a high density precision resistor include the ability to achieve high density by exploiting fin packing density.
Moreover, in one embodiment, such a resistor is integrated at the same level as the active transistors, enabling the fabrication of compact circuits. The use of a silicon fin backbone allows for high packing density and provides multiple degrees of freedom for controlling the resistance of the resistor. Thus, in a particular embodiment, the flexibility of the fin patterning process provides a wide range of resistance values, enabling the fabrication of tunable precision resistors.

As an exemplary geometry for a fin-based precision resistor, Figure 78 illustrates a partially cut-away plan view and a corresponding cross-sectional view of a fin-based thin film resistor structure, in accordance with an embodiment of the present disclosure, where the cross-sectional view is taken along the a-a' axis of the partially cut-away plan view.

Referring to Figure 78, an integrated circuit structure 7800 includes a semiconductor fin 7802 above a substrate 7804 and protruding through a trench isolation region 7814. In one embodiment, the semiconductor fin 7802 protrudes from, and is continuous with, the substrate 7804, as shown. The semiconductor fin has a top surface 7805, a first end 7806 (shown as a dashed line in the partially cut-away plan view, since the fin is covered in this view), a second end 7808 (shown as a dashed line in the partially cut-away plan view, since the fin is covered in this view), and a pair of sidewalls 7807 between the first end 7806 and the second end 7808. It will be appreciated that, in the partially cut-away plan view, the sidewalls 7807 are actually covered by a layer 7812.

An isolation layer 7812 is conformal with the top surface 7805, the first end 7806, the second end 7808, and the sidewalls 7807 of the semiconductor fin 7802. A metal resistor layer 7810 is conformal with the isolation layer 7812 where the isolation layer 7812 is conformal with the top surface 7805 of the semiconductor fin 7802 (metal resistor layer portion 7810A), the first end 7806 (metal resistor layer portion 7810B), the second end 7808 (metal resistor layer portion 7810C), and the sidewalls 7807 (metal resistor layer portions 7810D). In a particular embodiment, the metal resistor layer 7810 includes footed features 7810E adjacent to the sidewalls 7807. The isolation layer 7812 electrically isolates the metal resistor layer 7810 from the semiconductor fin 7802 and, hence, from the substrate 7804.

In an embodiment, the metal resistor layer 7810 is composed of a material suitable for providing a near-zero temperature coefficient, in that the resistance of the metal resistor layer 7810 does not change significantly within the operating temperature range of a thin film resistor (TFR) fabricated therefrom. In an embodiment, the metal resistor layer 7810 is a titanium nitride (TiN) layer. In another embodiment, the metal resistor layer 7810 is a tungsten (W) layer. It will be appreciated that other metals may be used for the metal resistor layer 7810 in place of, or in combination with, titanium nitride (TiN) or tungsten (W). In an embodiment, the metal resistor layer 7810 has a thickness approximately in the range of 2-5 nanometers. In an embodiment, the metal resistor layer 7810 has a sheet resistance approximately in the range of 100-100,000 ohms/square.

In an embodiment, anode and cathode electrodes are electrically coupled to the metal resistor layer 7810, exemplary embodiments of which are described in greater detail below in association with Figure 84. In one such embodiment, the metal resistor layer 7810, the anode electrode, and the cathode electrode form a precision thin film resistor (TFR) passive device. In an embodiment, a TFR based on the structure 7800 of Figure 78 allows precise control of the resistance based on the height of the fin 7802, the width of the fin 7802, the thickness of the metal resistor layer 7810, and the total length of the fin 7802. These degrees of freedom can allow a circuit designer to achieve a selected resistance value. In addition, since the resistor patterning is fin-based, high density is possible, on the scale of the transistor density.
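The stated degrees of freedom can be collected into a first-order resistance estimate. The following is a minimal sketch of an assumed model (not a formula given in this disclosure): the conformal film is treated as a sheet of resistance Rs whose effective width is the wrapped perimeter of the fin cross-section:

```python
# Minimal sketch (assumed model): treating the conformal metal resistor layer
# as a thin film of sheet resistance Rs, the film width wraps the fin cross
# section (two sidewalls of height H plus the top of width W), so the
# resistance between taps separated by a path length L along the fin is
# roughly R = Rs * L / (2*H + W).

def fin_tfr_resistance(rs_ohm_sq, fin_height_nm, fin_width_nm, path_len_nm):
    """Approximate end-to-end resistance of a fin-based thin film resistor."""
    effective_width_nm = 2.0 * fin_height_nm + fin_width_nm  # wrapped film width
    return rs_ohm_sq * path_len_nm / effective_width_nm

# E.g., a 1000 ohm/sq film on a 50 nm tall, 10 nm wide fin, tapped 1.1 um apart:
print(f"{fin_tfr_resistance(1000.0, 50.0, 10.0, 1100.0):.0f} ohms")  # 10000 ohms
```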
In an embodiment, fins suitable for the fabrication of fin-based resistors are provided using state-of-the-art finFET processing operations. Advantages of this approach may include its high density and its proximity to the active transistors, enabling easy integration into a circuit. Moreover, the flexibility of the geometry of the underlying fin allows a wide range of resistance values. In an exemplary processing scheme, fins are first patterned using backbone lithography and spacerization. The fins are then covered with an isolation oxide, which is recessed to set the height of the resistor. An insulating oxide is then conformally deposited over the fins to separate the conductive film from the underlying substrate (e.g., an underlying silicon substrate). A metal or highly doped polysilicon film is then deposited over the fins. The film is then spacerized to create the precision resistor.

As an exemplary processing scheme, Figures 79-83 illustrate plan views and corresponding cross-sectional views representing various operations in a method of fabricating a fin-based thin film resistor structure, in accordance with an embodiment of the present disclosure.

Referring to Figure 79, a plan view and a corresponding cross-sectional view taken along the b-b' axis of the plan view illustrate a stage of the process flow following formation of a backbone template structure 7902 on a semiconductor substrate 7801. A sidewall spacer layer 7904 conformal with the sidewall surfaces of the backbone template structure 7902 is then formed. In an embodiment, following patterning of the backbone template structure 7902, a conformal oxide material is deposited and then anisotropically etched (spacerized) to provide the sidewall spacer layer 7904.

Referring to Figure 80, a plan view illustrates a stage of the process flow following exposure of regions 7906 of the sidewall spacer layer 7904, e.g., by a lithographic masking and exposure process. The portions of the sidewall spacer layer 7904 included in the regions 7906 are then removed, e.g., by an etch process, the removal defining which portions of the sidewall spacer layer 7904 are used for the ultimate fin definition.

Referring to Figure 81, a plan view and a corresponding cross-sectional view taken along the c-c' axis of the plan view illustrate a stage of the process flow following removal of the portions of the sidewall spacer layer 7904 included in the regions 7906 of Figure 80 to form a fin patterning mask (e.g., an oxide fin patterning mask). The backbone template structure 7902 is then removed, and the remaining patterned mask is used as an etch mask to pattern the substrate 7801. Upon patterning of the substrate 7801 and subsequent removal of the fin patterning mask, a semiconductor fin 7802 remains, protruding from and continuous with the now-patterned semiconductor substrate 7804.
The semiconductor fin 7802 has a top surface 7805, a first end 7806, a second end 7808, and a pair of sidewalls 7807 between the first end and the second end, as described above in association with Figure 78.

Referring to Figure 82, a plan view and a corresponding cross-sectional view taken along the d-d' axis of the plan view illustrate a stage of the process flow following formation of the trench isolation layer 7814. In an embodiment, the trench isolation layer 7814 is formed by depositing an insulating material and then recessing the insulating material to define the fin height (Hsi).

Referring to Figure 83, a plan view and a corresponding cross-sectional view taken along the e-e' axis of the plan view illustrate a stage of the process flow following formation of the isolation layer 7812. In an embodiment, the isolation layer 7812 is formed by a chemical vapor deposition (CVD) process. The isolation layer 7812 is formed conformal with the top surface 7805, the first end 7806, the second end 7808, and the sidewalls 7807 of the semiconductor fin 7802. A metal resistor layer 7810 is then formed conformal with the isolation layer 7812, the isolation layer 7812 being conformal with the top surface, the first end, the second end, and the pair of sidewalls of the semiconductor fin 7802.

In an embodiment, the metal resistor layer 7810 is formed using a blanket deposition and a subsequent anisotropic etch process. In an embodiment, the metal resistor layer 7810 is formed using atomic layer deposition (ALD). In an embodiment, the metal resistor layer 7810 is formed to a thickness in the range of 2-5 nanometers. In an embodiment, the metal resistor layer 7810 is or includes a titanium nitride (TiN) layer or a tungsten (W) layer. In an embodiment, the metal resistor layer 7810 is formed to have a sheet resistance in the range of 100-100,000 ohms/square.

In subsequent processing operations, a pair of anode or cathode electrodes may be formed and electrically connected to the metal resistor layer 7810 of the structure of Figure 83. As an example, Figure 84 illustrates a plan view of a fin-based thin film resistor structure having various exemplary locations for anode or cathode electrode contacts, in accordance with an embodiment of the present disclosure.

Referring to Figure 84, a first anode or cathode electrode (e.g., one of 8400, 8402, 8404, 8406, 8408, 8410) is electrically connected to the metal resistor layer 7810. A second anode or cathode electrode (e.g., another of 8400, 8402, 8404, 8406, 8408, 8410) is electrically connected to the metal resistor layer 7810. In an embodiment, the metal resistor layer 7810, the anode electrode, and the cathode electrode form a precision thin film resistor (TFR) passive device. The precision TFR passive device can be tunable, in that the resistance can be selected based on the distance between the first anode or cathode electrode and the second anode or cathode electrode. The selection may be provided by forming a variety of actual electrodes (e.g., 8400, 8402, 8404, 8406, 8408, 8410, and other possible electrodes) and then selecting an actual pairing based on the interconnect circuitry. Alternatively, a single anode and cathode pair may be formed, with the location of each selected during fabrication of the TFR device. In either case, in an embodiment, the location of one of the anode or cathode electrodes is at an end of the fin 7802 (e.g., at location 8400 or 8402), at a corner of the fin 7802 (e.g., at location 8404, 8406, or 8408), or at the center of a run between corners (e.g., at location 8410).
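Electrode-pair selection can be framed as choosing the tap pair whose separation along the fin path best matches a target value. The following is a minimal sketch in which the tap positions, sheet resistance, and effective width are all hypothetical, illustrative values (the location labels echo Figure 84, but the distances are assumptions):

```python
# Minimal sketch (hypothetical tap positions): each candidate electrode site
# sits at some distance along the unrolled fin path; picking the pair whose
# separation best matches a target path length tunes the resistor value.

from itertools import combinations

# Assumed distances (nm) along the fin path for the sites labeled in Figure 84.
TAPS = {"8400": 0.0, "8404": 300.0, "8410": 550.0, "8408": 800.0,
        "8406": 1000.0, "8402": 1300.0}

def pick_taps(target_ohms, rs_ohm_sq=1000.0, eff_width_nm=110.0):
    """Choose the electrode pair whose resistance is closest to the target."""
    return min(combinations(TAPS, 2),
               key=lambda p: abs(rs_ohm_sq * abs(TAPS[p[0]] - TAPS[p[1]])
                                 / eff_width_nm - target_ohms))

print(pick_taps(5000.0))  # ('8400', '8410'): 550 nm apart -> 5000 ohms exactly
```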
In an exemplary embodiment, the first anode or cathode electrode is electrically connected to the metal resistor layer 7810 proximate to the first end 7806 of the semiconductor fin 7802 (e.g., at location 8400). The second anode or cathode electrode is electrically connected to the metal resistor layer 7810 proximate to the second end 7808 of the semiconductor fin 7802 (e.g., at location 8402).

In another exemplary embodiment, the first anode or cathode electrode is electrically connected to the metal resistor layer 7810 proximate to the first end 7806 of the semiconductor fin 7802 (e.g., at location 8400). The second anode or cathode electrode is electrically connected to the metal resistor layer 7810 remote from the second end 7808 of the semiconductor fin 7802 (e.g., at location 8410, 8408, 8406, or 8404).

In another exemplary embodiment, the first anode or cathode electrode is electrically connected to the metal resistor layer 7810 remote from the first end 7806 of the semiconductor fin 7802 (e.g., at location 8404 or 8406). The second anode or cathode electrode is electrically connected to the metal resistor layer 7810 remote from the second end 7808 of the semiconductor fin 7802 (e.g., at location 8410 or 8408).

More specifically, in accordance with one or more embodiments of the present disclosure, the topographic features of a fin-based transistor architecture are used as a foundation for fabricating embedded resistors. In one embodiment, a precision resistor is fabricated on a fin structure. In a particular embodiment, such an approach enables very high density integration of passive components such as precision resistors.

It will be appreciated that a variety of fin geometries are suitable for fabricating fin-based precision resistors. Figures 85A-85D illustrate plan views of various fin geometries for fabricating fin-based precision resistors, in accordance with an embodiment of the present disclosure.

In an embodiment, referring to Figures 85A-85C, a semiconductor fin 7802 is a non-linear semiconductor fin. In one embodiment, the semiconductor fin 7802 protrudes through a trench isolation region above a substrate. A metal resistor layer 7810 is conformal with an isolation layer (not shown) that is conformal with the non-linear semiconductor fin 7802. In one embodiment, two or more anode or cathode electrodes 8400 are electrically connected to the metal resistor layer 7810, with the dashed circles in Figures 85A-85C illustrating exemplary optional locations thereof.

Non-linear fin geometries include one or more corners, such as, but not limited to, a single corner (e.g., an L-shape), two corners (e.g., a U-shape), four corners (e.g., an S-shape), or six corners (e.g., the structure of Figure 78). In an embodiment, a non-linear fin geometry is an open-structure geometry. In another embodiment, a non-linear fin geometry is a closed-structure geometry.

As exemplary embodiments of open-structure geometries for non-linear fin geometries, Figure 85A illustrates a non-linear fin having one corner to provide an open-structure L-shaped geometry, and Figure 85B illustrates a non-linear fin having two corners to provide an open-structure U-shaped geometry.
In the case of an open structure, the non-linear semiconductor fin 7802 has a top surface, a first end, a second end, and a pair of sidewalls between the first end and the second end. The metal resistor layer 7810 is conformal with an isolation layer (not shown) that conforms to the top surface, the first end, the second end, and the pair of sidewalls between the first end and the second end.

In a specific embodiment, referring again to Figures 85A and 85B, the first anode or cathode electrode is electrically coupled to the metal resistor layer 7810 proximate the first end of the open structure non-linear semiconductor fin, and the second anode or cathode electrode is electrically coupled to the metal resistor layer 7810 proximate the second end of the open structure non-linear semiconductor fin. In another specific embodiment, the first anode or cathode electrode is electrically coupled to the metal resistor layer 7810 proximate the first end of the open structure non-linear semiconductor fin, and the second anode or cathode electrode is electrically coupled to the metal resistor layer 7810 remote from the second end of the open structure non-linear semiconductor fin. In another specific embodiment, the first anode or cathode electrode is electrically coupled to the metal resistor layer 7810 remote from the first end of the open structure non-linear semiconductor fin, and the second anode or cathode electrode is electrically coupled to the metal resistor layer 7810 remote from the second end of the open structure non-linear semiconductor fin.

As an exemplary embodiment of a closed structure geometry for a non-linear fin geometry, Figure 85C shows a non-linear fin having four corners to provide a closed structure square or rectangular geometry. In the case of a closed structure, the non-linear semiconductor fin 7802 has a top surface and pairs of sidewalls, in particular an inner sidewall and an outer sidewall. However, the closed structure does not include exposed first and second ends. The metal resistor layer 7810 is conformal to an isolation layer (not shown) that conforms to the top surface, the inner sidewall, and the outer sidewall of the fin 7802.

In another embodiment, referring to Figure 85D, the semiconductor fin 7802 is a linear semiconductor fin. In one embodiment, the semiconductor fin 7802 protrudes through a trench isolation region over a substrate. The metal resistor layer 7810 is conformal to an isolation layer (not shown) that conforms to the linear semiconductor fin 7802. In one embodiment, two or more anode or cathode electrodes 8400 are electrically coupled to the metal resistor layer 7810, and the dashed circles in Figure 85D show exemplary optional locations thereof.

In another aspect, a new structure for high resolution phase shift mask (PSM) fabrication for lithography is described in accordance with an embodiment of the present disclosure. Such a PSM mask can be used for general (direct) lithography or complementary lithography.

Photolithography is commonly used in fabrication processes for forming patterns in a photoresist layer. In a photolithography process, a layer of photoresist is deposited over an underlying layer to be etched. Typically, the underlying layer is a semiconductor layer, but it can be any type of hard mask or dielectric material. The photoresist layer is then selectively exposed to radiation through a photomask or reticle.
The photoresist is then developed and, in the case of a "positive" photoresist, those portions of the photoresist that were exposed to radiation are removed.

A photomask or reticle for patterning the wafer is placed in a lithographic exposure tool, commonly referred to as a "lithography machine" or "scanner." In a lithography machine or scanner, the photomask or reticle is placed between the radiation source and the wafer. The photomask or reticle is typically formed from patterned chromium (an absorber layer) placed on a quartz substrate. The radiation passes substantially without attenuation through the quartz portions of the photomask or reticle where no chromium is present. In contrast, the radiation does not pass through the chromium portions of the mask. This type of mask is referred to as a binary mask, because the radiation incident on the mask either passes completely through the quartz segments or is completely blocked by the chromium segments. After the radiation selectively passes through the mask, the pattern on the mask is transferred into the photoresist by projecting an image of the mask into the photoresist via a series of lenses.

As the features on the photomask or reticle become smaller and closer together, the size of the features on the mask becomes comparable to the wavelength of the radiation source, and diffraction effects begin to matter. Diffraction blurs the image projected onto the photoresist, resulting in poor resolution.

One way to prevent the diffraction pattern from interfering with the desired patterning of the photoresist is to cover selected openings in the photomask or reticle with a transparent layer called a phase shifter. The phase shifter moves one of the plurality of sets of exposure rays to a phase different from that of the other, adjacent set, thus counteracting the diffracted interference pattern. This method is called the phase shift mask (PSM) method.
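For background, the amount of phase shift imparted by such a transparent shifter layer follows from its optical path difference relative to an unshifted ray. The following is a general optics relation rather than a limitation taken from this disclosure: for a shifter of thickness t and refractive index n at exposure wavelength lambda,

```latex
\phi = \frac{2\pi\,(n-1)\,t}{\lambda},
\qquad
t_{180^\circ} = \frac{\lambda}{2\,(n-1)},
```

so the shifter is conventionally sized for a phase of pi (a 180 degree shift), which makes the shifted and adjacent unshifted exposure rays interfere destructively and cancel the diffracted interference pattern described above.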
Nonetheless, alternative mask fabrication schemes that reduce defects and increase the throughput of mask production are a key area of focus for lithography process development.

One or more embodiments of the present disclosure relate to methods for fabricating photolithographic masks and to the resulting lithographic masks. To provide context, meeting the aggressive device scaling goals set forth by the semiconductor industry relies on the ability of photolithographic masks to pattern smaller features with high fidelity. However, the manner in which increasingly smaller features are patterned presents a serious challenge to mask fabrication. In this regard, currently widely used lithography masks rely on the concept of phase shift mask (PSM) technology to pattern features. However, reducing defects while generating smaller and smaller patterns remains one of the biggest obstacles in mask fabrication. The use of a phase shift mask can have several disadvantages. First, the design of a phase shift mask is a relatively complex process that requires substantial resources. Second, because of the nature of the phase shift mask, it is difficult to inspect a phase shift mask for defects. Such defects in the phase shift mask arise from the current integration schemes used to produce the mask itself. Some phase shift masks use a cumbersome and somewhat defect-prone process to pattern a thick light-absorbing material and then transfer the pattern to an auxiliary layer that assists in phase shifting. Complicating matters further, the absorber layer is subjected to two plasma etches; as a result, the undesirable effects of plasma etching, such as loading effects, reactive ion etch lag, charging, and reproducibility effects, lead to defects in mask production.

Innovations in materials and novel integration techniques for making defect-free lithography masks remain a high priority for device scaling. Therefore, in order to fully exploit the phase shift mask technique, a novel integration scheme may be required that (i) patterns the phase shifter layer with high fidelity and (ii) patterns the absorber only once, during the final stage of fabrication. Moreover, such a fabrication scheme can provide other advantages, such as flexibility in material selection, reduced substrate damage during manufacturing, and increased throughput in mask manufacturing.

FIG. 86 shows a cross-sectional view of a lithographic mask structure 8601, in accordance with an embodiment of the present disclosure. The photolithographic mask 8601 includes an inner die region 8610, a frame region 8620, and a die-frame interface region 8630. The die-frame interface region 8630 includes adjacent portions of the inner die region 8610 and the frame region 8620. The inner die region 8610 includes a patterned phase shifter layer 8606 disposed directly on the substrate 8600, wherein the patterned phase shifter layer has features with sidewalls. The frame region 8620 surrounds the inner die region 8610 and includes a patterned absorber layer 8602 disposed directly on the substrate 8600.

The die-frame interface region 8630 disposed on the substrate 8600 includes a two-layer stack 8640. The two-layer stack 8640 includes an upper layer 8604 disposed on the lower patterned phase shifter layer 8606. The upper layer 8604 of the two-layer stack 8640 is constructed of the same material as the patterned absorber layer 8602 of the frame region 8620.

In an embodiment, the uppermost surface 8608 of the features of the patterned phase shifter layer 8606 has a height different from that of the uppermost surface 8612 of the features of the die-frame interface region and different from that of the uppermost surface 8614 of the features in the frame region. Moreover, in an embodiment, the height of the uppermost surface 8612 of the features of the die-frame interface region is different from the height of the uppermost surface 8614 of the features of the frame region. The typical thickness of the phase shifter layer 8606 is in the range of 40-100 nm, while the typical thickness of the absorber layer is in the range of 30-100 nm. In an embodiment, the absorber layer 8602 in the frame region 8620 has a thickness of 50 nm, while the combined thickness of the two-layer stack in the die-frame interface region 8630 is 120 nm, the absorber layer 8604 therein being 70 nm thick on a 50 nm phase shifter layer 8606. In an embodiment, the substrate 8600 is quartz, the patterned phase shifter layer comprises a material such as, but not limited to, molybdenum silicide, molybdenum silicon oxynitride, molybdenum silicon nitride, silicon oxynitride, or silicon nitride, and the absorber material is chromium.

The embodiments described herein can be used to fabricate a wide range of different types of integrated circuits and microelectronic devices. Examples of such integrated circuits include, but are not limited to, processors, chipset components, graphics processors, digital signal processors, microcontrollers, and the like.
In other embodiments, semiconductor memories can be fabricated. In addition, integrated circuits or other microelectronic devices can be used in a wide variety of electronic devices known in the art, for example, in computer systems (e.g., desktop computers, laptop computers, servers), cellular telephones, and personal electronic devices. The integrated circuits can be coupled to a bus and to other components in the system. For example, a processor can be coupled to a memory, a chipset, etc. by one or more buses. Each of the processor, memory, and chipset can potentially be fabricated using the approaches disclosed herein.

FIG. 87 shows a computing device 8700 in accordance with an embodiment of the present disclosure. The computing device 8700 houses a board 8702. The board 8702 can include a number of components, including but not limited to a processor 8704 and at least one communication chip 8706. The processor 8704 is physically and electrically coupled to the board 8702. In some embodiments, the at least one communication chip 8706 is also physically and electrically coupled to the board 8702. In other implementations, the communication chip 8706 is part of the processor 8704.

Depending on its application, the computing device 8700 can include other components that may or may not be physically coupled to the board 8702. These other components include, but are not limited to, volatile memory (e.g., DRAM), non-volatile memory (e.g., ROM), flash memory, a graphics processor, a digital signal processor, a crypto processor, a chipset, an antenna, a display, a touchscreen display, a touchscreen controller, a battery, an audio codec, a video codec, a power amplifier, a global positioning system (GPS) device, a compass, an accelerometer, a gyroscope, a speaker, a camera, and a mass storage device (e.g., hard disk drive, compact disk (CD), digital versatile disk (DVD), and so forth).

The communication chip 8706 enables wireless communication for the transfer of data to and from the computing device 8700. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communication channels, etc., that can communicate data through the use of modulated electromagnetic radiation through a non-solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not. The communication chip 8706 can implement any of a number of wireless standards or protocols, including but not limited to Wi-Fi (IEEE 802.11 family), WiMAX (IEEE 802.16 family), IEEE 802.20, Long Term Evolution (LTE), Ev-DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, Bluetooth, derivatives thereof, and any other wireless protocols designated as 3G, 4G, 5G, and beyond. The computing device 8700 can include a plurality of communication chips 8706. For example, a first communication chip 8706 can be dedicated to shorter range wireless communications such as Wi-Fi and Bluetooth, and a second communication chip 8706 can be dedicated to longer range wireless communications such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, and others.

The processor 8704 of the computing device 8700 includes an integrated circuit die packaged within the processor 8704. In some embodiments, the integrated circuit die of the processor includes one or more structures, such as integrated circuit structures constructed in accordance with embodiments of the present disclosure.
The term "processor" may refer to any portion of any device or device that processes electronic data from a register or memory or both to convert the electronic data into other electronic data that may be stored in a register or memory or both.Communication chip 8706 also includes an integrated circuit die packaged within semiconductor chip 8706. According to another embodiment of the present disclosure, an integrated circuit die of a communication chip is constructed in accordance with an embodiment of the present disclosure.In other embodiments, another component housed within computing device 8700 is an integrated circuit die constructed in accordance with an embodiment of an embodiment of the present disclosure.In various embodiments, computing device 8700 can be a laptop, netbook, notebook, ultrabook, smart phone, tablet, personal digital assistant (PDA), ultra mobile PC, mobile phone, desktop computer, server, printer, Scanners, monitors, set-top boxes, entertainment control units, digital cameras, portable music players or digital video recorders. In other embodiments, computing device 8700 can be any other electronic device that processes data.FIG. 88 illustrates an interpolator 8800 that includes one or more embodiments of the present disclosure. The interposer 8800 is an intervening substrate for bridging the first substrate 8802 to the second substrate 8804. The first substrate 8802 can be, for example, an integrated circuit die. The second substrate 8804 can be, for example, a memory module, a computer motherboard, or another integrated circuit die. Typically, the purpose of the interpolator 8800 is to extend the connection to a wider spacing or to reroute the connection to a different connection. For example, interposer 8800 can couple the integrated circuit die to a ball grid array (BGA) 8806, which can then be coupled to a second substrate 8804. In some embodiments, the first and second substrates 8802/8804 are attached to opposite sides of the interposer 8800. In other embodiments, the first and second substrates 8802/8804 are attached to the same side of the interposer 8800. And in other embodiments, three or more substrates are interconnected using interposer 8800.The interposer 8800 can be formed of epoxy, fiberglass reinforced epoxy, ceramic material, or a polymeric material such as polyimide. In other embodiments, the interposer may be formed of alternating rigid or flexible materials, which may include the same materials described above for use in a semiconductor substrate, such as silicon, germanium, and other III-V and IV. Family material.The interposer can include a metal interconnect 8808 and a via 8810 including, but not limited to, through silicon via (TSV) 8812. Interposer 8800 can also include embedded device 8814, including both passive and active devices. Such devices include, but are not limited to, capacitors, decoupling capacitors, resistors, inductors, fuses, diodes, transformers, sensors, and electrostatic discharge (ESD) devices. More complex devices such as radio frequency (RF) devices, power amplifiers, power management devices, antennas, arrays, sensors, and MEMS devices can also be formed on interposer 8000. 
In accordance with embodiments of the present disclosure, the apparatuses or processes disclosed herein may be used in the fabrication of the interposer 8800 or in the fabrication of components included in the interposer 8800.

Figure 89 is an isometric view of a mobile computing platform 8900 employing an integrated circuit (IC) fabricated according to one or more of the processes described herein or including one or more of the features described herein, in accordance with an embodiment of the present disclosure.

The mobile computing platform 8900 can be any portable device configured for each of electronic data display, electronic data processing, and wireless electronic data transmission. For example, the mobile computing platform 8900 can be any of a tablet, a smartphone, a laptop, etc., and includes a display screen 8905, a chip-level (SoC) or package-level integrated system 8910, and a battery 8913. In an exemplary implementation, the display screen 8905 is a touch screen (capacitive, inductive, resistive, etc.). As illustrated, the greater the level of integration in the system 8910 enabled by higher transistor packing density, the greater the portion of the mobile computing platform 8900 that can be occupied by the battery 8913 or by a non-volatile storage device such as a solid state drive, or the greater the number of transistor gates available for increased platform functionality. Similarly, the greater the carrier mobility of each transistor in the system 8910, the greater the functionality. As such, the techniques described herein can enable improved performance and form factor in the mobile computing platform 8900.

The integrated system 8910 is further illustrated in the expanded view 8920. In an exemplary embodiment, the packaged device 8977 includes at least one memory chip (e.g., RAM) or at least one processor chip (e.g., a multi-core microprocessor and/or graphics processor) fabricated according to one or more of the processes described herein or including one or more of the features described herein. The packaged device 8977 is further coupled to the circuit board 8960 along with one or more of a power management integrated circuit (PMIC) 8915, an RF (wireless) integrated circuit (RFIC) 8925 including a wideband RF (wireless) transmitter and/or receiver (e.g., including a digital baseband and an analog front end module further comprising a power amplifier on a transmit path and a low noise amplifier on a receive path), and a controller 8911 thereof. Functionally, the PMIC 8915 performs battery power regulation, DC-to-DC conversion, etc., and thus has an input coupled to the battery 8913 and an output providing a current supply to all the other functional modules. As further illustrated, in the exemplary embodiment, the RFIC 8925 has an output coupled to an antenna to provide for the implementation of any of a number of wireless standards or protocols, including but not limited to Wi-Fi (IEEE 802.11 family), WiMAX (IEEE 802.16 family), IEEE 802.20, Long Term Evolution (LTE), Ev-DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, Bluetooth, derivatives thereof, and any other wireless protocols designated as 3G, 4G, 5G, and beyond.
In alternative embodiments, each of these board-level modules can be integrated onto separate ICs coupled to the package substrate of the packaged device 8977, or integrated within a single IC (SoC) coupled to the package substrate of the packaged device 8977.

In another aspect, semiconductor packages are used to protect an integrated circuit (IC) chip or die, and also to provide the die with an electrical interface to external circuitry. With the increasing demand for smaller electronic devices, semiconductor packages are designed to be even more compact and must support larger circuit density. Furthermore, the demand for higher performance devices results in a need for improved semiconductor packages that enable a thin packaging profile and low overall warpage compatible with subsequent assembly processing.

In an embodiment, wire bonding to a ceramic or organic package substrate is used. In another embodiment, a C4 process is used to mount a die to a ceramic or organic package substrate. In particular, C4 solder ball connections can be implemented to provide flip chip interconnections between a semiconductor device and a substrate. A flip chip or controlled collapse chip connection (C4) is a type of mounting used for semiconductor devices, such as integrated circuit (IC) chips, MEMS, or components, which utilizes solder bumps instead of wire bonds. The solder bumps are deposited on the C4 pads on the top side of the substrate package. In order to mount the semiconductor device to the substrate, it is flipped over with the active side facing down on the mounting area. The solder bumps are used to connect the semiconductor device directly to the substrate.

Figure 90 shows a cross-sectional view of a flip-chip mounted die in accordance with an embodiment of the present disclosure.

Referring to Figure 90, an apparatus 9000 includes a die 9002, such as an integrated circuit (IC) fabricated according to one or more of the processes described herein or including one or more of the features described herein. Metallization pads 9004 are included on the die 9002. A package substrate 9006, such as a ceramic or organic substrate, includes connections 9008 thereon. The die 9002 and the package substrate 9006 are electrically coupled by solder balls 9010 coupled to the metallization pads 9004 and the connections 9008. An underfill material 9012 surrounds the solder balls 9010.

Flip chips can be fabricated similarly to conventional ICs, with a few additional operations. Near the end of the manufacturing process, the attachment pads are metallized to make them more receptive to solder. This typically consists of several processes. A small dot of solder is then deposited on each metallized pad. The chips are then cut out of the wafer as normal. To attach the flip chip into a circuit, the chip is inverted to bring the solder dots down onto connectors on the underlying electronics or circuit board. The solder is then re-melted, using an ultrasonic or alternatively a reflow soldering process, to produce an electrical connection. This also leaves a small space between the chip's circuitry and the underlying mounting.
In most cases, an electrically insulating adhesive is then "underfilled" to provide a stronger mechanical connection, provide a thermal bridge, and ensure that the solder joints are not stressed by heating of the chip and the rest of the system.

In other embodiments, newer packaging and die-to-die interconnect approaches, such as through-silicon vias (TSVs) and silicon interposers, are implemented in accordance with embodiments of the present disclosure to fabricate high performance multi-chip modules (MCMs) and systems-in-package (SiPs) incorporating an integrated circuit (IC) fabricated according to one or more of the processes described herein or including one or more of the features described herein.

Thus, embodiments of the present disclosure include advanced integrated circuit structure fabrication.

Although specific embodiments have been described above, these embodiments are not intended to limit the scope of the present disclosure. Examples of the features provided in the present disclosure are intended to be illustrative and not limiting, unless otherwise indicated. The above description is intended to cover such alternatives, modifications, and equivalents as would be obvious to those skilled in the art.

The scope of the present disclosure includes any feature or combination of features (express or implied) disclosed herein, or any generalization thereof, whether or not it mitigates any or all of the problems addressed herein. Accordingly, new claims may be formulated to any such combination of features during prosecution of this application (or of an application claiming priority hereto). In particular, with reference to the appended claims, features of the dependent claims may be combined with features of the independent claims, and features from the respective independent claims may be combined in any appropriate manner and not merely in the specific combinations enumerated in the appended claims.

The following examples pertain to further embodiments. The various features of the different embodiments may be variously combined, with some features included and others excluded, to suit a variety of different applications.

Exemplary Embodiment 1: An integrated circuit structure includes a fin including silicon, the fin having a lower fin portion and an upper fin portion. An insulating structure is directly adjacent to sidewalls of the lower fin portion of the fin. A first gate electrode is over the upper fin portion and over a first portion of the insulating structure. A second gate electrode is over the upper fin portion and over a second portion of the insulating structure. A first dielectric spacer is along a sidewall of the first gate electrode.
A second dielectric spacer is along a sidewall of the second gate electrode, the second dielectric spacer being continuous with the first dielectric spacer over a third portion of the insulating structure between the first gate electrode and the second gate electrode.

Exemplary Embodiment 2: The integrated circuit structure of Exemplary Embodiment 1, wherein the first and second dielectric spacers comprise silicon and nitrogen.

Exemplary Embodiment 3: The integrated circuit structure of Exemplary Embodiment 1 or 2, further comprising embedded source or drain structures on opposite sides of the first gate electrode and on opposite sides of the second gate electrode.

Exemplary Embodiment 4: The integrated circuit structure of Exemplary Embodiment 1, 2 or 3, wherein the insulating structure comprises a first insulating layer, a second insulating layer directly on the first insulating layer, and a dielectric fill material laterally directly on the second insulating layer.

Exemplary Embodiment 5: The integrated circuit structure of Exemplary Embodiment 4, wherein the first insulating layer is an undoped insulating layer including nitrogen and oxygen.

Exemplary Embodiment 6: The integrated circuit structure of Exemplary Embodiment 4 or 5, wherein the second insulating layer comprises silicon and nitrogen.

Exemplary Embodiment 7: The integrated circuit structure of Exemplary Embodiment 4, 5 or 6, wherein the dielectric fill material comprises silicon and oxygen.

Exemplary Embodiment 8: An integrated circuit structure includes a first fin including silicon, the first fin having a lower fin portion and an upper fin portion. The integrated circuit structure includes a second fin including silicon, the second fin having a lower fin portion and an upper fin portion. An insulating structure is directly adjacent to sidewalls of the lower fin portion of the first fin and directly adjacent to sidewalls of the lower fin portion of the second fin. A gate electrode is over the upper fin portion of the first fin, over the upper fin portion of the second fin, and over a first portion of the insulating structure. A first dielectric spacer is along a sidewall of the upper fin portion of the first fin.
A second dielectric spacer is along a sidewall of the upper fin portion of the second fin, the second dielectric spacer being continuous with the first dielectric spacer over a second portion of the insulating structure between the first fin and the second fin.

Exemplary Embodiment 9: The integrated circuit structure of Exemplary Embodiment 8, wherein the first and second dielectric spacers comprise silicon and nitrogen.

Exemplary Embodiment 10: The integrated circuit structure of Exemplary Embodiment 8 or 9, further comprising an embedded source or drain structure on opposite sides of the gate electrode, the embedded source or drain structure having a bottom surface along sidewalls of the upper fin portions of the first and second fins that is lower than top surfaces of the first and second dielectric spacers, and the embedded source or drain structure having a top surface along the sidewalls of the upper fin portions of the first and second fins that is higher than the top surfaces of the first and second dielectric spacers.

Exemplary Embodiment 11: The integrated circuit structure of Exemplary Embodiment 8, 9 or 10, wherein the insulating structure comprises a first insulating layer, a second insulating layer directly on the first insulating layer, and a dielectric fill material laterally directly on the second insulating layer.

Exemplary Embodiment 12: The integrated circuit structure of Exemplary Embodiment 11, wherein the first insulating layer is an undoped insulating layer comprising nitrogen and oxygen.

Exemplary Embodiment 13: The integrated circuit structure of Exemplary Embodiment 11 or 12, wherein the second insulating layer comprises silicon and nitrogen.

Exemplary Embodiment 14: The integrated circuit structure of Exemplary Embodiment 11, 12 or 13, wherein the dielectric fill material comprises silicon and oxygen.

Exemplary Embodiment 15: A method of fabricating an integrated circuit structure includes forming a fin comprising silicon, the fin having a lower fin portion and an upper fin portion. The method also includes forming an insulating structure directly adjacent to sidewalls of the lower fin portion of the fin. The method also includes forming first and second gate structures over the upper fin portion and over first and second portions of the insulating structure, respectively. The method also includes forming a dielectric material conformal with the upper fin portion of the fin, conformal with the first and second gate structures, and conformal with a third portion of the insulating structure between the first gate structure and the second gate structure. The method also includes forming a hard mask material over the dielectric material. The method also includes recessing the hard mask material to expose portions of the dielectric material conformal with the upper fin portion of the fin and conformal with the first and second gate structures, a portion of the recessed hard mask material covering the portion of the dielectric material conformal with the third portion of the insulating structure between the first gate structure and the second gate structure.
The method also includes anisotropically etching the dielectric material and subsequently removing the recessed hard mask material to form a first dielectric spacer along a sidewall of the first gate structure and a second dielectric spacer along a sidewall of the second gate structure, the second dielectric spacer being continuous with the first dielectric spacer over the third portion of the insulating structure between the first gate structure and the second gate structure.

Exemplary Embodiment 16: The method of Exemplary Embodiment 15, wherein recessing the hard mask material comprises wet etching the hard mask material.

Exemplary Embodiment 17: The method of Exemplary Embodiment 15, wherein recessing the hard mask material comprises using an ashing, dry etching, or plasma etching process.

Exemplary Embodiment 18: The method of Exemplary Embodiment 15, 16 or 17, wherein forming the hard mask material comprises forming a carbon-based hard mask material.

Exemplary Embodiment 19: The method of Exemplary Embodiment 15, 16, 17 or 18, wherein the first and second gate structures are dummy gate structures, the method further comprising replacing the first and second gate structures with a permanent gate dielectric and gate electrode stack.

Exemplary Embodiment 20: The method of Exemplary Embodiment 15, 16, 17, 18 or 19, further comprising forming embedded source or drain structures on opposite sides of the first gate structure and on opposite sides of the second gate structure. |
In at least some embodiments, a system (100) comprises a processor (102) and a direct memory access (DMA) subsystem (122) coupled to the processor (102). The system (100) further comprises a component (102, 162, 164, 166, 168) coupled to the DMA subsystem (122) via an interconnect (116) employing security rules, wherein, if the component (102, 162, 164, 166, 168) requests a DMA channel (134), the DMA subsystem (122) restricts usage of the DMA channel (134) based on the security rules. |
1. A system, comprising:
a processor;
a direct memory access (DMA) subsystem coupled to the processor; and
a component coupled to the DMA subsystem via an interconnect employing security rules, wherein, if the component requests a DMA channel, the DMA subsystem restricts usage of the DMA channel based on the security rules.

2. The system of claim 1 wherein the security rules are associated with interconnect qualifiers selected from the group of qualifiers consisting of:
a secure qualifier that indicates if a DMA channel request is made in a secure mode;
a debug qualifier that indicates if a DMA channel request is made in a debug mode;
a privilege qualifier that indicates if a DMA channel request is made in a privilege mode; and
an instruction qualifier that indicates if a channel is to be used for transferring data to an executable memory space.

3. The system of claim 1 wherein the DMA channel is selectively used to transfer data to one of a non-executable memory space and an executable memory space based on an instruction qualifier.

4. The system of claim 1 wherein the DMA channel can be used as an instruction channel selected from the group of instruction channels consisting of:
a public user instruction channel if the DMA channel request is one of a public privilege mode access, a secure user mode access, and a secure privilege mode access;
a public privilege instruction channel if the DMA channel request is one of a secure user mode access and a secure privilege mode access;
a secure user instruction channel if the DMA channel request is a secure privilege mode access; and
a secure privilege instruction channel if the DMA channel request is a secure privilege mode access.

5. The system of claim 1 wherein the DMA subsystem asserts a security violation signal if at least one security violation occurs, the security violations selected from the group consisting of:
a public user mode access attempting to configure a DMA channel as a public privilege channel;
one of a public user mode access and a public privilege mode access attempting to configure a DMA channel as a secure user channel;
one of a public user mode access, a public privilege mode access, and a secure user mode access attempting to configure a DMA channel as a secure privilege channel;
a debug mode access attempting to configure a DMA channel as a functional channel;
a public user mode access attempting to configure a DMA channel as a public user instruction channel;
one of a public user mode access and a public privilege mode access attempting to configure a DMA channel as a public privilege instruction channel;
one of a public user mode access, a public privilege mode access, and a secure user mode access attempting to configure a DMA channel as a secure user instruction channel; and
one of a public user mode access, a public privilege mode access, and a secure user mode access attempting to configure a DMA channel as a secure privilege instruction channel.

6. A method, comprising:
accessing a direct memory access (DMA) subsystem;
determining if the access is a secure mode access; and
if the access is determined to be a secure mode access, allowing a DMA channel to be configured as either one of a secure channel and a public channel.

7. The method of claim 6 further comprising allowing a DMA channel to be used as an instruction channel selected from the group of instruction channels consisting of:
a public user instruction channel if the access is one of a privilege mode access, a secure mode access, and a secure privilege mode access;
a public privilege instruction
channel if the access is one of a secure mode access and a secure privilege mode access;
a secure user instruction channel if the access is a secure privilege mode access; and
a secure privilege instruction channel if the access is a secure privilege mode access.

8. The method of claim 6 further comprising asserting a security violation signal if a security violation occurs, the security violations selected from the group consisting of:
attempting to configure a DMA channel as a public privilege channel using a public user mode access;
attempting to configure a DMA channel as a secure channel using one of a public user mode access and a public privilege mode access;
attempting to configure a DMA channel as a secure privilege channel using one of a public user mode access, a public privilege mode access, and a secure user mode access;
attempting to configure a DMA channel as a functional channel using a debug mode access;
attempting to configure a DMA channel as a public user instruction channel using a public user mode access;
attempting to configure a DMA channel as a public privilege instruction channel using one of a public user mode access and a public privilege mode access;
attempting to configure a DMA channel as a secure user instruction channel using one of a public user mode access, a public privilege mode access, and a secure user mode access; and
attempting to configure a DMA channel as a secure privilege instruction channel using one of a public user mode access, a public privilege mode access, and a secure user mode access.

9. The method of claim 6 further comprising locking a DMA channel's use of qualifiers during a DMA channel operation and unlocking the DMA channel's use of qualifiers after the DMA channel operation is completed.

10. The method of claim 6 further comprising, when a DMA channel is started, generating at least one qualifier that corresponds to access rights of the DMA channel. |
BACKGROUND OF THE INVENTION

Mobile electronic devices such as personal digital assistants (PDAs) and digital cellular telephones are increasingly used for electronic commerce (e-commerce) and mobile commerce (m-commerce). Programs that execute on the mobile devices to implement e-commerce and/or m-commerce functionality may need to operate in a secure mode to reduce the likelihood of attacks by malicious programs (e.g., virus programs) and to protect sensitive data.

For security reasons, at least some processors provide two levels of operating privilege: a first level of privilege for user programs; and a higher level of privilege for use by the operating system. However, the higher level of privilege may or may not provide adequate security for m-commerce and e-commerce, given that this higher level relies on proper operation of operating systems with highly publicized vulnerabilities. In order to address security concerns, some mobile equipment manufacturers implement yet another third level of privilege, or secure mode, that places less reliance on corruptible operating system programs, and more reliance on hardware-based monitoring and control of the secure mode. An example of one such system may be found in U.S. Patent Publication No. 2003/0140245, entitled "Secure Mode for Processors Supporting MMU and Interrupts."

In addition to this secure mode, various hardware-implemented security firewalls and other security monitoring components have been added to the processing systems used in mobile electronic devices to further reduce the vulnerability to attacks. Examples of these security improvements may be found in U.S. Patent Applications 10/961,756, entitled "System and Method for Secure Mode for Processors and Memories on Multiple Semiconductor Dies Within a Single Semiconductor Package," 10/961,755, entitled "Method and System of Ensuring Integrity of a Secure Mode Entry Sequence," 10/961,344, entitled "System and Method of Identifying and Preventing Security Violations Within a Computing System," 10/961,748, entitled "Method and System of Verifying Proper Execution of a Secure Mode Entry Sequence," and European Patent Application EP 04292405.0, entitled "Method and System for Detecting a Security Violation Using an Error Correction Code."

In some systems, Direct Memory Access (DMA) components are implemented to enable subsystems (e.g., a display subsystem, a camera subsystem, a modem subsystem or a processing subsystem) to communicate with each other via DMA "channels". Unfortunately, DMA channels are subject to security attacks that enable malicious hackers to break the secure mode described above without being detected by the hardware firewalls. Breaking the secure mode enables a malicious user to change a mobile electronic device's International Mobile Equipment Identity (IMEI) or defeat a Subscriber Identity Module Lock (SIMLOCK) mechanism.

SUMMARY OF THE INVENTION

Accordingly, there are disclosed herein systems and methods for restricting DMA channel configurations. In at least some embodiments, a system comprises a processor and a direct memory access (DMA) subsystem coupled to the processor. The system further comprises a component coupled to the DMA subsystem via an interconnect employing security rules, wherein, if the component requests a DMA channel, the DMA subsystem restricts usage of the DMA channel based on the security rules.

In at least some embodiments, a DMA subsystem comprises a configuration firewall configured to receive DMA channel configuration requests.
The DMA subsystem further comprises a violation handler coupled to the configuration firewall, wherein, if a received DMA channel configuration request violates security rules of the configuration firewall, a security violation signal is asserted by the configuration firewall to the violation handler.

In at least some embodiments, a method comprises accessing a direct memory access (DMA) subsystem. The method further comprises determining if the access is a secure mode access and, if the access is determined to be a secure mode access, allowing a DMA channel to be configured as either one of a secure channel and a public channel.

BRIEF DESCRIPTION OF THE DRAWINGS

For a detailed description of exemplary embodiments of the invention, reference will now be made to the accompanying drawings in which:
Figure 1 shows a system in accordance with one or more embodiments;
Figure 2 shows the system of Figure 1 with qualifiers in accordance with one or more embodiments;
Figures 3A-3B show a DMA subsystem in accordance with one or more embodiments; and
Figures 4, 5, 6, 7, 8 and 9 show methods of restricting DMA channel configurations in accordance with one or more embodiments.

NOTATION AND NOMENCLATURE

Certain terms are used throughout the following description and claims to refer to particular system components. As one skilled in the art will appreciate, companies may refer to a component by different names. This document does not intend to distinguish between components that differ in name but not function. In the following discussion and in the claims, the terms "including" and "comprising" are used in an open-ended fashion, and thus should be interpreted to mean "including, but not limited to...." Also, the term "couple" or "couples" is intended to mean either an indirect, direct, optical or wireless electrical connection. Thus, if a first device couples to a second device, that connection may be through a direct electrical connection, through an indirect electrical connection via other devices and connections, through an optical electrical connection, or through a wireless electrical connection.

DESCRIPTION OF SPECIFIC EMBODIMENTS OF THE INVENTION

The following discussion is directed to various embodiments of the invention. Although one or more of these embodiments may be preferred, the embodiments disclosed should not be interpreted, or otherwise used, as limiting the scope of the disclosure, including the claims. In addition, one skilled in the art will understand that the following description has broad application, and the discussion of any embodiment is meant only to be exemplary of that embodiment, and not intended to intimate that the scope of the disclosure, including the claims, is limited to that embodiment.

Embodiments of the invention implement a hardware security architecture that interconnects a plurality of components compatible with Direct Memory Access (DMA) techniques. As used herein, the term "hardware security architecture" is intended to mean the mechanisms and/or methodologies that connect several initiators (e.g., Advanced RISC Machine (ARM) components, Digital Signal Processor (DSP) components, Direct Memory Access (DMA) components, or Universal Serial Bus (USB) components) to several targets (e.g., memory components or peripherals) while complying with security rules that guarantee or at least increase the security robustness of a system.
In at least some embodiments, DMA-compatible components are integrated with the hardware security architecture such that DMA channel configurations support the security rules and/or hardware constraints of the hardware security architecture.Inasmuch as the systems and methods described herein were developed in the context of a mobile computing system, at least some of the description herein is based on a mobile computing environment. However, the discussion of the various systems and methods in relation to a mobile computing environment should not be construed as a limitation as to the applicability of the systems and methods described herein to only mobile computing environments. One of ordinary skill in the art will appreciate that these systems and methods may also be implemented in other computing environments such as desktop computers, laptop computers, network servers, and mainframe computers.Figure 1 shows a system 100 in accordance with one or more embodiments of the invention. In accordance with at least some embodiments, the system 100 shows components of a mobile device such as a cellular telephone, personal digital assistant (PDA), text messaging system, or a device that combines the functionality of a messaging system, personal digital assistant and a cellular telephone.As shown in Figure 1, the system 100 includes a multiprocessing unit (MPU) subsystem 102 having a MPU 104 coupled to an interrupt handler 106. The MPU 104 includes a processor core 110 that executes programs and a core security controller (CSC) 112, which aids the MPU 104 in entering a secure mode for execution of secure programs on the core 110. The core 110 may be any processor suitable for integration into a system on a chip (SoC), such as the ARM 1136 series of processors. In other embodiments, the core 110 may be a processor that includes some or all of the functionality of the core security controller 112 as described herein, such as the ARM 1176 series of processors. The ARM 1136 and 1176 technology may be obtained from ARM Holdings plc of Cambridge, United Kingdom, and/or ARM, Inc. of Austin, Texas, USA.As shown, the MPU subsystem 102 couples to a DMA subsystem 122 that enables memory accesses between DMA-compatible components ("targets") of the system 100. The DMA subsystem 122 has a DMA engine 124 with programmable DMA channels 134. The DMA subsystem 122 also has internal registers 126 such as DMA channel configuration registers 128 and DMA channel rights registers 130. The DMA channel configuration registers 128 are implemented to configure the DMA channels 134 as read channels or as read/write channels during DMA requests. The DMA channel rights registers 130 control the access rights of each DMA channel 134. As previously mentioned, these access rights are based on the security rules and/or hardware constraints of the system's hardware security architecture (e.g., as determined by interconnect qualifiers). As used herein, the term "interconnect qualifier" or "qualifier" is intended to mean a signal embedded in an access (e.g., an Open Core Protocol (OCP) access). The qualifier reflects the state of the component that initiated the access at the time the access was initiated.The DMA subsystem 122 also may comprise DMA status registers, source address registers, destination address registers, DMA length registers, DMA control registers, or other registers (not shown for convenience). 
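As a rough illustration of how such a per-channel register group might be viewed from software, consider the following C sketch. The struct layout, field names, and register count are assumptions made for illustration only; they are not an actual register map of the DMA subsystem 122.

```c
#include <stdint.h>

/* Hypothetical per-channel register view of DMA subsystem 122.
 * Each channel has configuration registers (128), a rights register
 * (130), plus the status/source/destination/length/control registers
 * mentioned above. Names and counts are assumed. */
#define DMA_NUM_CONF_REGS 4 /* assumed; the text only names CONF0..CONFn */

typedef struct {
    volatile uint32_t conf[DMA_NUM_CONF_REGS]; /* DMA_CHANNELx_CONF0..CONFn   */
    volatile uint32_t rights;   /* DMA_CHANNELx rights (qualifier + lock bits) */
    volatile uint32_t src_addr; /* source address register                     */
    volatile uint32_t dst_addr; /* destination address register                */
    volatile uint32_t length;   /* DMA length register                         */
    volatile uint32_t control;  /* DMA control register                        */
    volatile uint32_t status;   /* DMA status register                         */
} dma_channel_regs_t;
```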
In some embodiments, the DMA subsystem 122 is interconnected to DMA-compatible components (i.e., the source locations or destination locations) via a hardware security architecture such as an L3 interconnect 116 having firewalls 150, 152, 154, and 156 and an L4 interconnect 140 having a firewall 158. The DMA subsystem 122 also comprises a configuration firewall 132 that allows and restricts the usage of DMA channel qualifiers, as will later be described. Although the L3 interconnect 116 and the L4 interconnect 140 described herein are implemented in some embodiments, alternative embodiments may implement other existing or future interconnect architectures.

In at least some embodiments, the DMA-compatible components mentioned previously comprise an SDRAM Memory Scheduler (SMS) component 160 having a firewall 170, a General Purpose Memory Controller (GPMC) component 162, an on-chip read-only memory (ROM) 164, an on-chip random access memory (RAM) 166, and an Image Video Accelerator (IVA2) component 168. In alternative embodiments, additional components, fewer components or different DMA-compatible components may be included.

The system 100 further comprises an L4 interconnect core component 142 having logic that supports functions such as the Advanced Encryption Standard (AES), the Data Encryption Standard (DES), the Secure Hash Algorithm 1 (SHA1), Public Key Authentication (PKA), Random Number Generators (RNG), Universal Asynchronous Receiver/Transmitters (UARTs), and General Purpose Input/Outputs (GPIOs). In alternative embodiments, the L4 interconnect core component 142 may support additional functions, fewer functions or different functions. The system 100 further comprises a control module 144 that interfaces the L4 interconnect 140 to the DMA subsystem 122. As shown, the firewall 132 of the DMA subsystem 122 is configured to assert a security violation signal 136 to the control module 144 if a security violation occurs.

To comply with the system's hardware security architecture, the DMA channels 134 support usage of interconnect "qualifiers" that determine access rights to different protected memory spaces of the DMA-compatible components. Enforcement of the access rights associated with the interconnect qualifiers is based on firewalls such as the firewalls 150, 152, 154, 156, 158 and 132. In at least some embodiments, interconnect qualifiers such as "MReqType", "MReqPrivilege", "MReqDebug" and "MReqSecure" are used. Table 1 shows a definition and description of these qualifiers.

Table 1
Qualifier     | Value | Mode                     | Description
MReqType      | 0     | Data mode                | Indicates if an associated access request is a data access or an instruction (Opcode) fetch
              | 1     | Instruction fetch mode   |
MReqPrivilege | 0     | Public mode access       | Indicates if an associated access request is made in a public access mode or a privilege mode
              | 1     | Privilege mode access    |
MReqDebug     | 0     | Functional mode          | Indicates if an associated access request is made in a functional mode or a debug mode
              | 1     | Debug mode               |
MReqSecure    | 0     | Normal transaction mode  | Indicates if an associated access request is part of a normal transaction or a secure transaction
              | 1     | Secure transaction mode  |

If present, the MReqType qualifier shown in Table 1 comprises a logic "0" or "1". If the MReqType qualifier = 0, an access request (channel configuration request) associated with the MReqType qualifier is part of a data access mode that transfers data to a non-executable memory space.
If the MReqType qualifier = 1, an access request associated with the MReqType qualifier is part of an instruction (Opcode) access mode that transfers data to an executable memory space.

If present, the MReqPrivilege qualifier comprises a logic "0" or "1". If the MReqPrivilege qualifier = 0, an access request (channel configuration request) associated with the MReqPrivilege qualifier is a user mode access. If the MReqPrivilege qualifier = 1, an access request associated with the MReqPrivilege qualifier is a privilege mode access. For example, in embodiments that implement ARM components, a plurality of privilege mode accesses are possible, such as a "supervisor" mode access, a "system" access, an "interrupt request" (IRQ) access, a "fast interrupt request" (FIQ) access, an "abort" access, an "undefined" access or a "monitor" access. A privilege mode access enables operations that are not available to user mode accesses.

If present, the MReqDebug qualifier comprises a logic "0" or "1". If the MReqDebug qualifier = 0, the access request (channel configuration request) associated with the MReqDebug qualifier is a functional mode access. If the MReqDebug qualifier = 1, the access request associated with the MReqDebug qualifier is a debug mode access. In at least some embodiments, the functional mode involves executing instructions using a processor and the debug mode involves executing instructions using an emulator.

If present, the MReqSecure qualifier comprises a logic "0" or "1". If the MReqSecure qualifier = 0, an access request (channel configuration request) associated with the MReqSecure qualifier is a normal transaction mode access. If the MReqSecure qualifier = 1, an access request associated with the MReqSecure qualifier is a secure transaction mode access. Qualifiers may be used together or separately to enable a variety of access rights. For more information regarding the use and enforcement of interconnect qualifiers, reference may be made to European Pat. App. No. EU 05 291 479.3, filed on 07/07/2005 and entitled "Method and System For a Multi-Sharing Security Firewall".

While one or more of the previously described qualifiers are implemented in some embodiments, other embodiments may implement different qualifiers. The qualifiers MReqType, MReqPrivilege, MReqDebug and MReqSecure are simply used for convenience in describing embodiments that implement ARM components. However, these qualifiers should also be understood as being applicable to any system with different modes and different security levels. To support the different security levels, the DMA channels 134 are configured based on the different interconnect qualifiers. The DMA channel configuration process may occur once, periodically, or randomly (as needed).

In the embodiment of Figure 1, the configuration firewall 132 is implemented to allow or restrict certain qualifiers on the DMA channels 134. The firewall 132 is accessible and the DMA channels 134 are configurable (or re-configurable) via the L4 interconnect 140. In some embodiments, the MPU 104 accesses the firewall 132 via the L4 interconnect 140 to configure the DMA channels 134. If the MPU 104 attempts to perform a DMA channel configuration that is not allowed (e.g., some channel configurations may be "locked"), in-band errors are sent back to the initiator that accessed the firewall 132 (e.g., the MPU 104) and out-band errors (e.g., the security violation signal 136) are generated to the control module 144 and later converted into an MPU interrupt 138.
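A simplified sketch of the kind of rule a configuration firewall of this type might enforce is shown below in C. The bit positions follow the DMA_CHANNEL0 rights register layout given later in Table 4, but the function name and the exact rule set are assumptions: they condense the behavior described above (and in the claims) rather than reproduce the actual firewall logic.

```c
#include <stdbool.h>
#include <stdint.h>

/* Interconnect qualifier bits (positions assumed, matching Table 4). */
#define MREQ_SECURE    (1u << 0) /* secure transaction mode    */
#define MREQ_PRIVILEGE (1u << 1) /* privilege mode access      */
#define MREQ_TYPE      (1u << 2) /* instruction (Opcode) fetch */
#define MREQ_DEBUG     (1u << 3) /* debug mode access          */

/* Hypothetical firewall check: a request may only program a DMA
 * channel with rights it itself possesses. Returns true if the
 * configuration is allowed; otherwise the caller would assert the
 * security violation signal 136 to the control module 144. */
static bool firewall_allows_config(uint32_t initiator_qualifiers,
                                   uint32_t requested_channel_rights)
{
    /* Only a secure access may configure a secure channel. */
    if ((requested_channel_rights & MREQ_SECURE) &&
        !(initiator_qualifiers & MREQ_SECURE))
        return false;
    /* Only a privilege access may configure a privilege channel. */
    if ((requested_channel_rights & MREQ_PRIVILEGE) &&
        !(initiator_qualifiers & MREQ_PRIVILEGE))
        return false;
    /* A debug mode access may not configure a functional channel. */
    if ((initiator_qualifiers & MREQ_DEBUG) &&
        !(requested_channel_rights & MREQ_DEBUG))
        return false;
    return true;
}
```

When this check fails, the in-band error goes back to the initiator and the out-band error path raises the MPU interrupt, as described above.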
As used herein, "in-band errors" refer to errors that are embedded in a response transaction to the initiator. For example, a response transaction may include status information such as an "OK" indicator or a "FAIL" indicator that is returned to the initiator. As used herein, "out-band errors" refer to errors that are out of the initiator execution flow. For example, the firewalls may generate error signals that are outside the initiator execution flow. The out-band errors can be used to update status registers and/or to cause an interrupt such as the MPU interrupt 138 previously mentioned. The MPU interrupt 138 can be used to notify a user of the system 100, to disable one or more functions of the system 100, or to perform some other action in response to a security violation. For more information regarding detecting and responding to security violations, reference may be made to U.S. Pat. App. Ser. No. 10/961,344, filed on 10/08/2004 and entitled "System and Method of Identifying and Preventing Security Violations Within a Computing System".Figure 2 shows the system 100 of Figure 1 with qualifiers in accordance with one or more embodiments. As shown in Figure 2, the qualifiers (e.g., MReqType, MReqPrivilege, MReqDebug and MReqSecure) are represented by block 182. In at least some embodiments, the MPU 104 issues access requests or transactions with the qualifiers 182 to the L3 interconnect 116. Additionally or alternatively, other components 108 (e.g., Digital Signal Processors (DSPs), modems, or videos accelerators (IVA2)) are able to issue access requests or transactions with the qualifiers 182 to the L3 interconnect 116 (i.e.,the MPU 104 does not necessarily issue all access requests or transactions). These access requests or transactions along with the qualifiers 182 are propagated through the L4 interconnect 140 to the firewall 132 of the DMA subsystem 122. The function of the firewall 132 is shown in Figures 3A-3B in greater detail.Figures 3A-3B show a DMA subsystem 122 in accordance with one or more embodiments. In Figures 3A-3B, the DMA subsystem 122 comprises the firewall 132 which, in some embodiments, couples to the L4 interconnect 140 via an on-chip protocol (OCP) bus 148. As shown, the firewall 132 receives an address signal "MADDR", a data signal "MDATA", qualifier bits signals "MREQINFO", and a command signal "MCMD" from the L4 interconnect 140. In some embodiments, the MREQINFO signals include the qualifiers MReqSecure, MReqPrivilege, MReqType, and MReqDebug as will later be described.The MADDR, MDATA, MREQINFO, and MCMD signals are representative of access requests or transactions that are issued by the MPU 104 or other components 108 that support the system's hardware security architecture. The firewall 132 uses these signals to configure DMA channel 0 for use with a given access request or transaction or to assert the security violation signal 136. The function of the firewall 132 with respect to DMA channel 0 is illustrated using pseudo-code. The pseudo-code represents software, firmware, or hardware functions of the firewall 132 in response to the MADDR, MDATA, MREQINFO, and MCMD signals and the contents of the DMA registers 126. The function of the firewall 132, in accordance with at least some embodiments, is additionally or alternatively described hereafter.The security violation signal 136 is used to generate in-band errors and/or out-band errors. 
For example, in some embodiments, the violation handler 192 receives the security violation signal 136 from the firewall 132 and asserts an in-band error "SRESP" 137 to the initiator that accessed the firewall 132 (via the L4 interconnect 140). Any out-band error generated due to the security violation signal 136 may be converted into an MPU interrupt 138 that enables the system 100 to appropriately handle the security violation.

As shown in Figure 3A, the DMA subsystem 122 also comprises DMA registers 126 that support the configuration and access rights of DMA channels 0 to "n". Specifically, Figures 3A-3B illustrate DMA registers associated with DMA channel 0. In other words, the DMA channel configuration registers 128A-128N and the DMA channel rights register 130 are associated with DMA channel 0. Additional DMA registers associated with DMA channels 1 to n are not shown for convenience. Accordingly, parts of the description herein focus on one channel (DMA channel 0), though the discussion should be understood as applying to other channels as well. Also, logic other than registers could be implemented in some embodiments; using registers is simply one way to configure DMA channels to support the different security levels of the system 100. Table 2 shows a summary of the DMA registers 126 in accordance with some embodiments.

Table 2 — for each DMA channel x (x = 0 to n):

| Offset | Register Name | Description | Size (bits) | Access (condition true) | Access (condition false) |
|---|---|---|---|---|---|
| N/A | DMA_CHANNELx_CONF0 through DMA_CHANNELx_CONFn | Configure CHANNELx | 32 | R/W | R |
| N/A | DMA_CHANNELx | Configure CHANNELx rights | 5 | R/W | R |

For the configuration registers DMA_CHANNELx_CONF0 through DMA_CHANNELx_CONFn, the condition is DMA_CHANNELx_ACCESS = {MReqDebug, MReqPrivilege, MReqType, MReqSecure}; for the rights register DMA_CHANNELx, the condition is DMA_CHANNELx.LOCK = 0.

As shown in Table 2, a plurality of DMA channels (e.g., channels 0 to n) are illustrated. For each channel, parameters such as offset, register name, description, size and access are described. For example, the DMA channel 0 is associated with a plurality of configuration registers 128A-128N having the names "DMA_CHANNEL0_CONF0" to "DMA_CHANNEL0_CONFn". These configuration registers enable read or read/write operations on DMA channel 0. For example, if an access request or transaction on the DMA channel 0 includes an MReqDebug, MReqPrivilege, MReqType or MReqSecure qualifier, the DMA channel (e.g., DMA channel 0) is configured for read/write operations. Otherwise, the DMA channel 0 is configured for read operations only. In some embodiments, the MReqType qualifier is not used to configure the channel configuration registers 128A-128N and the DMA channel rights register 130, but is provided during DMA channel operations by the DMA subsystem 122 to distinguish between data accesses intended to store data in a non-executable memory space and instruction accesses intended to store data in an executable memory space.
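Table 2 maps naturally onto a register-overlay structure in C. The sketch below is illustrative only — the patent leaves the offsets unspecified (the N/A column), and the number of CONF registers per channel is an assumption:

```c
#include <stdint.h>

#define NUM_CONF_REGS  8   /* "CONF0..CONFn" -- the count is assumed */

/* One DMA channel's register block, per Table 2 (layout assumed). */
struct dma_channel_regs {
    volatile uint32_t conf[NUM_CONF_REGS]; /* DMA_CHANNELx_CONF0..CONFn: 32-bit,
                                              R/W only when DMA_CHANNELx_ACCESS
                                              matches the access qualifiers      */
    volatile uint32_t rights;              /* DMA_CHANNELx: 5 bits used, R/W only
                                              while DMA_CHANNELx.LOCK = 0        */
};
```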
Table 3 shows a summary of bits in a DMA_CHANNEL0_CONFn register in accordance with some embodiments.

Table 3 — DMA_CHANNEL0_CONFn:

| Bits | Field Name | Function | Reset | Access if DMA_CHANNEL0_ACCESS = {MReqDebug, MReqPrivilege, MReqType, MReqSecure} | Access otherwise (!=) |
|---|---|---|---|---|---|
| 31:0 | DMA_CHANNEL0_CONFn | Configure DMA CHANNEL0 | 0x0 | R/W | R |

In Table 3, DMA_CHANNEL0_CONFn register parameters such as bits, field name, function, reset value and access are illustrated. In at least some embodiments, the DMA_CHANNEL0_CONFn register is a 32-bit register. The DMA_CHANNEL0_CONFn register is configured such that if an access request or transaction on the DMA channel 0 includes an MReqDebug, MReqPrivilege, MReqType or MReqSecure qualifier, the DMA channel 0 allows read/write operations. Otherwise, the DMA channel 0 allows read operations only. As shown, the DMA_CHANNEL0_CONFn register may have a reset value of 0x0.

The access rights of DMA channel 0 (i.e., the memory spaces that can be accessed by DMA channel 0) are set by a DMA channel rights register 130 (named "DMA_CHANNEL0"). As shown, the DMA_CHANNEL0 register controls the access rights based on data bits which correspond to qualifiers such as MReqType, MReqPrivilege, MReqDebug and MReqSecure. The DMA_CHANNEL0 register also contains a lock bit which enables or disables modification of the qualifier bits associated with DMA_CHANNEL0. In other words, if the lock bit "DMA_CHANNEL0.LOCK" = 0, the qualifier bits associated with DMA_CHANNEL0 are modifiable via a read/write operation. If DMA_CHANNEL0.LOCK = 1, the qualifier bits associated with DMA_CHANNEL0 are locked or non-modifiable. In some embodiments, locked qualifier bits can still be read. Table 4 shows a summary of bits in the DMA_CHANNEL0 register in accordance with some embodiments.

Table 4 — DMA_CHANNEL0:

| Bit | Field Name | Function | Reset | Access (LOCK = 0) | Access (LOCK = 1) |
|---|---|---|---|---|---|
| 31:5 | RESERVED | Reserved | 0x0 | R | R |
| 4 | LOCK | Lock this register | 0 | R/W | R |
| 3 | MReqDebug | Issue MReqDebug on DMA transaction | 0 | R/W | R |
| 2 | MReqType | Issue MReqType on DMA transaction | 0 | R/W | R |
| 1 | MReqPrivilege | Issue MReqPrivilege on DMA transaction | 0 | R/W | R |
| 0 | MReqSecure | Issue MReqSecure on DMA transaction | 0 | R/W | R |

In Table 4, DMA_CHANNEL0 register bits such as bit number, field name, function, reset value and access are illustrated. Bit 0 corresponds to an MReqSecure qualifier bit. If the MReqSecure bit is set to 0, the DMA channel 0 is a public channel. In some embodiments, the MReqSecure bit can be set to 0 if the access to program the DMA channel 0 is made by a public user access, a public privilege access, a secure user access or a secure privilege access. If the MReqSecure bit is set to 1, the DMA channel 0 is a secure channel. In some embodiments, the MReqSecure bit can only be set to 1 if the access to program the DMA channel 0 is made by a secure user access or a secure privilege access. If a public user access or public privilege access attempts to program the DMA channel 0 as a secure channel, the action is discarded or is otherwise nullified.

Bit 1 corresponds to an MReqPrivilege bit (or "privilege" bit). If the MReqPrivilege bit is set to 0, the DMA channel 0 is a public user channel. In some embodiments, the MReqPrivilege bit can be set to 0 if the access to program the DMA channel 0 is made by a public user access, a public privilege access, a secure user access or a secure privilege access. If the MReqPrivilege bit is set to 1, the DMA channel 0 is a privilege channel.
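For reference, Table 4's bit layout translates directly into C masks. The macro names below are chosen to mirror the table rather than taken from any published header:

```c
/* DMA_CHANNEL0 rights register bits, per Table 4 (names assumed). */
#define DMA_RIGHTS_MREQSECURE     (1u << 0) /* channel is secure            */
#define DMA_RIGHTS_MREQPRIVILEGE  (1u << 1) /* channel is privilege         */
#define DMA_RIGHTS_MREQTYPE       (1u << 2) /* channel carries instructions */
#define DMA_RIGHTS_MREQDEBUG      (1u << 3) /* channel is a debug channel   */
#define DMA_RIGHTS_LOCK           (1u << 4) /* bits 0-3 locked (read-only)  */
/* Bits 31:5 are reserved and reset to 0.                                   */
```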
In embodiments that implement ARM components, some examples of privilege channels include "supervisor" channels, "system" channels, "interrupt request" (IRQ) channels, "fast interrupt request" (FIQ) channels, "abort" channels, "undefined" channels or "monitor" channels. Alternative embodiments could, of course, implement other privilege channels and qualifiers. A privilege channel enables operations that are not available to public user channels.

In some embodiments, the MReqPrivilege bit can be set to 1 if the access to program the DMA channel 0 is made by a public privilege access and if the DMA channel 0 is a public channel (i.e., MReqSecure = 0). If the DMA channel 0 is a secure (non-public) channel (i.e., MReqSecure = 1), the MReqPrivilege bit can be set to 1 if the access to program the DMA channel 0 is made by a secure user access or a secure privilege access. If a public user access attempts to program the DMA channel 0 as a public privilege channel, a secure user channel or a secure privilege channel, the action is discarded or is otherwise nullified. Additionally or alternatively, if a secure user access attempts to program the DMA channel 0 as a secure privilege channel, the action is discarded or is otherwise nullified.

In some embodiments, both the MReqPrivilege and MReqSecure bits are used. In such embodiments, these bits allow the DMA channel 0 to be configurable as a public user channel, a public privilege channel, a secure user channel or a secure privilege channel. For example, if both the MReqSecure and MReqPrivilege bits are set to 0, the DMA channel 0 is a public user channel. If the MReqSecure bit is set to 0 and the MReqPrivilege bit is set to 1, the DMA channel 0 is a public privilege channel. If the MReqSecure bit is set to 1 and the MReqPrivilege bit is set to 0, the DMA channel 0 is a secure user channel. The rules previously described regarding which accesses can configure the DMA channel 0 as a public user channel, a public privilege channel, or a secure user channel apply. If both the MReqSecure and MReqPrivilege bits are set to 1, the DMA channel 0 is a secure privilege channel. In some embodiments, both the MReqSecure and MReqPrivilege bits can only be set to 1 (simultaneously) if the access to program the DMA channel 0 is made by a secure privilege access.

Bit 2 corresponds to an MReqType qualifier bit. This qualifier bit can be used together with the MReqSecure and MReqPrivilege bits. If the MReqType bit is set to 0, the DMA channel 0 is a data access channel, intended for transferring data to a non-executable memory space (e.g., a peripheral or a memory data buffer) by an initiator. In some embodiments, the MReqType bit can be set to 0 for use with a public user access, a public privilege access, a secure user access or a secure privilege access. If the MReqType bit is set to 1, the DMA channel 0 is an instruction channel, intended for transferring data to an executable memory space by an initiator. For example, the DMA channel 0 may be a public user instruction channel, a public privilege instruction channel, a secure user instruction channel or a secure privilege instruction channel. The type of instruction channel depends on the other qualifier bits (e.g., if MReqType = 1, MReqSecure = 1 and MReqPrivilege = 1, channel 0 is a secure privilege instruction channel).
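The programming rules for bits 0 and 1 amount to an ordering of initiator trust levels. The following C sketch is one interpretation of those rules, not code from the patent: an initiator may program a channel attribute only if its own level meets the minimum required, and a violating attempt is discarded.

```c
#include <stdbool.h>

enum access_level {            /* ordered: higher value = more trusted */
    PUBLIC_USER = 0,
    PUBLIC_PRIVILEGE,
    SECURE_USER,
    SECURE_PRIVILEGE,
};

/* Minimum level needed to program a channel as secure and/or privilege,
 * per the rules above (an interpretation of the text). */
static enum access_level min_level_to_program(bool want_secure,
                                              bool want_privilege)
{
    if (want_secure && want_privilege) return SECURE_PRIVILEGE;
    if (want_secure)                   return SECURE_USER;
    if (want_privilege)                return PUBLIC_PRIVILEGE;
    return PUBLIC_USER;
}

static bool may_program(enum access_level initiator,
                        bool want_secure, bool want_privilege)
{
    /* Violating accesses are discarded or otherwise nullified. */
    return initiator >= min_level_to_program(want_secure, want_privilege);
}
```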
As previously mentioned, in some embodiments, the MReqType qualifier is not used to configure DMA channels, but is provided during DMA transactions to distinguish between data that is intended to be stored in non-executable memory space and data that is intended to be stored in executable memory space.

In some embodiments, DMA channel 0 can be set as a public user instruction channel only if the access to program the DMA channel 0 is made by a public privilege access, a secure user access or a secure privilege access. The DMA channel 0 can be set as a public privilege instruction channel only if the access to program the DMA channel 0 is made by a secure user access or a secure privilege access. The DMA channel 0 can be set as a secure user instruction channel only if the access to program the DMA channel 0 is made by a secure privilege access. The DMA channel 0 can be set as a secure privilege instruction channel only if the access to program the DMA channel 0 is made by a secure privilege access. If an access attempts to use a DMA channel improperly (e.g., if a secure user access attempts to use the DMA channel 0 as a secure user instruction channel), the action is discarded or is otherwise nullified. In some embodiments, using the DMA channel 0 as an instruction channel requires a higher security level than using the DMA channel 0 as a data access channel. For example, a secure user access is able to use the DMA channel 0 as a secure user channel (for data access), but is not able to use the DMA channel 0 as a secure instruction channel (i.e., only a secure privilege access is able to use the DMA channel as a secure user instruction channel).

Bit 3 corresponds to an MReqDebug qualifier bit. If MReqDebug = 0, the DMA channel 0 is a functional (application) channel. If MReqDebug = 1, the DMA channel 0 is a debug channel. As previously mentioned, the functional mode may involve executing instructions using a processor and the debug mode may involve executing instructions using an emulator. In some embodiments, the DMA channel 0 is able to be configured as a functional channel only if the access to program the DMA channel 0 is a functional access (e.g., in a secure mode, accesses issued by authorized software such as the secure kernel are considered functional accesses). The DMA channel 0 is able to be configured as a debug channel by a debug access or a functional access. However, if a debug access attempts to configure channel 0 as a functional channel, the action is discarded or is otherwise nullified.

Bit 4 corresponds to a lock bit. If the lock bit = 0, the qualifier bits (bits 0-3) are modifiable using a read/write operation. If the lock bit = 1, the qualifier bits are "locked" or are non-modifiable (but may still be readable). In some embodiments, the lock bit may be set to 0 before a DMA channel is configured or after a DMA channel operation is completed. The lock bit may be set to 1 after the DMA channel rights of a given DMA channel are successfully configured in preparation for a DMA channel operation. Once a DMA channel is successfully configured, the qualifiers of the accesses that made the configuration are recorded and used to allow full programming of the DMA channel by setting DMA parameters such as the source address, destination address or other parameters. After the DMA channel operation has started, the DMA subsystem 122 uses (e.g., generates as necessary) the qualifiers set in the channel rights register 130 to issue transactions on the hardware security architecture.
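One possible rendering of this lock lifecycle, reusing the register overlay and mask names assumed in the earlier sketches, is shown below; the sequencing is an inference from the description, not the patent's own code.

```c
/* Hypothetical driver-side lifecycle for one DMA channel operation:
 * configure rights -> lock -> program and run -> clear on completion. */
static void run_dma_operation(struct dma_channel_regs *ch,
                              uint32_t rights, const uint32_t *conf, int nconf)
{
    ch->rights = rights;               /* qualifier bits, with LOCK = 0     */
    ch->rights |= DMA_RIGHTS_LOCK;     /* lock rights before the operation  */

    for (int i = 0; i < nconf; i++)    /* full programming: source address, */
        ch->conf[i] = conf[i];         /* destination address, etc.         */

    /* ... the DMA subsystem runs the transfer, issuing transactions with
     * the qualifiers recorded in the rights register ... */

    for (int i = 0; i < nconf; i++)    /* on completion, configuration and  */
        ch->conf[i] = 0;               /* rights registers are cleared      */
    ch->rights = 0;
}
```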
The lock bit should be set to 1 throughout the DMA channel operation to ensure that changes are not made to the DMA channel's configuration or access rights during the DMA channel operation. When a DMA channel operation is completed, the DMA channel configuration registers 128 and the DMA channel rights register 130 related to the completed DMA channel operation are cleared (set to 0). As shown, bits 5-31 of the DMA_CHANNEL0 register are reserved for future use.

Figures 4, 5, 6, 7, 8 and 9 show methods of restricting DMA channel configurations in accordance with one or more embodiments. As shown in Figure 4, a method 400 comprises accessing a system DMA (block 402). If the access is determined to be a public access (determination block 404), the method 400 allows a DMA channel to be configured as a public channel, but not a secure channel (block 408). If the access is determined to be a secure access (determination block 404), the method 400 allows a DMA channel to be configured as either a secure channel or a public channel (block 406).

As shown in Figure 5, a method 500 comprises accessing a system DMA (block 502). If the access is determined to be a user access (determination block 504), the method 500 allows a DMA channel to be configured as a user channel, but not a privilege channel (block 508). For example, the user access may be a public user access or a secure user access. If the access is determined to be a privilege access (determination block 504), the method 500 allows a DMA channel to be configured as either a privilege channel or a user channel (block 506). For example, if the access is a public privilege access, the method 500 allows a DMA channel to be configured as either a public privilege channel or a public user channel. If the access is a secure privilege access, the method 500 allows a DMA channel to be configured as a secure privilege channel, a public privilege channel, a secure user channel or a public user channel.

As shown in Figure 6, a method 600 comprises accessing a system DMA (block 602). If the access is determined to be a debug access (determination block 604), the method 600 allows a DMA channel to be configured as a debug channel, but not a functional channel (block 608). If the access is determined to be a functional access (determination block 604), the method 600 allows a DMA channel to be configured as either a functional channel or a debug channel (block 606).

As shown in Figure 7, a method 700 comprises accessing a system DMA (block 702). If the access includes a qualifier for data accesses (determination block 704), the method 700 allows a DMA channel to be used as a data channel, but not an instruction channel (block 708). If the access includes a qualifier for instruction accesses (determination block 704), the method 700 allows a DMA channel to be used as either a data channel or an instruction channel (block 706).

As shown in Figure 8, a method 800 comprises accessing a system DMA (block 802). If an access is determined to be a public privilege access (determination block 804), the method 800 allows a DMA channel to be configured as a public user channel or a public privilege channel (block 806). If an access is determined to be a secure user access (determination block 808), the method 800 allows a DMA channel to be configured as a public user channel, a public privilege channel or a secure user channel (block 810).
If an access is determined to be a secure privilege access (determination block 812), the method 800 allows a DMA channel to be configured as a public user channel, a public privilege channel, a secure user channel or a secure privilege channel (block 814). If an access is determined to be neither a public privilege access, a secure user access, nor a secure privilege access (determination blocks 804, 808, 812), the method 800 allows a DMA channel to be configured as a public user channel (block 816).

As shown in Figure 9, a method 900 comprises accessing a system DMA (block 902). If an access is determined to be a public privilege access (determination block 904), the method 900 allows a DMA channel to be used as a public user instruction channel (block 906). If an access is determined to be a secure user access (determination block 908), the method 900 allows a DMA channel to be used as a public user instruction channel or a public privilege instruction channel (block 910). If an access is determined to be a secure privilege access (determination block 912), the method 900 allows a DMA channel to be used as a public user instruction channel, a public privilege instruction channel, a secure user instruction channel or a secure privilege instruction channel (block 914). If an access is determined to be neither a public privilege access, a secure user access, nor a secure privilege access (determination blocks 904, 908, 912), the method 900 does not allow a DMA channel to be used as an instruction channel (block 916).

The above discussion is meant to be illustrative of the principles and various embodiments of the present invention. Numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. For example, the methods described previously may implement a locking mechanism before or after a DMA channel is configured. The locking mechanism may restrict configuration of a DMA channel or restrict changes to a DMA channel after a valid configuration occurs. It is intended that the following claims be interpreted to embrace all such variations and modifications. |
An apparatus including a contact point on a substrate; a first dielectric layer comprising a material having a dielectric constant less than five formed on the contact point; and a different second dielectric layer formed on the substrate and separated from the contact point by the first dielectric layer. Collectively, the first and second dielectric layers comprise a composite dielectric layer having a composite dielectric constant value. The contribution of the first dielectric layer to the composite dielectric constant value is up to 20 percent. Also, a method including depositing a composite dielectric layer over a contact point on a substrate, the composite dielectric layer comprising a first material having a dielectric constant less than 5 and a different second material, and forming a conductive interconnection through the composite dielectric layer to the contact point. |
What is claimed is:

1. An apparatus comprising: a contact point formed on a substrate or interconnect layer; a first dielectric layer comprising cubic boron nitride on the contact point; and a different second dielectric layer formed on the substrate and separated from the contact point by the first dielectric layer.

2. The apparatus of claim 1, wherein collectively the first dielectric layer and the second dielectric layer comprise a composite dielectric layer having a composite dielectric constant value, the contribution of the first dielectric layer to the composite dielectric constant value is up to 20 percent, and the composite dielectric constant value is less than 3.0.

3. The apparatus of claim 1, wherein, collectively, the first dielectric layer and the second dielectric layer comprise a composite dielectric layer, the apparatus further comprising: an interconnection line formed on the second dielectric layer and coupled to the contact point by a contact through the composite dielectric layer.

4. The apparatus of claim 3, wherein the contact has a body with a length dimension extending through the composite dielectric layer and a third dielectric material comprising a dielectric constant similar to the material of the first dielectric layer is formed on the length dimension of the body between the contact and the second dielectric layer.

5. The apparatus of claim 4, wherein the third dielectric material comprises cubic boron nitride.

6. The apparatus of claim 5, further comprising a fourth dielectric layer formed on the substrate and separated from the first dielectric layer by the second dielectric layer.

7. The apparatus of claim 6, wherein the fourth dielectric layer comprises a material similar to the material of the first dielectric layer.

8. The apparatus of claim 7, wherein the material of the fourth dielectric layer comprises cubic boron nitride.

9. The apparatus of claim 1, wherein the second dielectric layer comprises an aerogel.

10. An apparatus comprising: a contact point formed on a substrate; a dielectric layer formed on the substrate; and an interconnection formed through the dielectric layer to the contact point, wherein the dielectric layer comprises a first dielectric material comprising cubic boron nitride and a second different dielectric material, the first dielectric material encapsulating the second dielectric material.

11. The apparatus of claim 10, wherein the first dielectric material comprises a dielectric constant less than 5.

12. The apparatus of claim 10, wherein the interconnection is in one of multiple levels of interconnections on the substrate other than an initial level adjacent the substrate.

13. The apparatus of claim 10, wherein the second dielectric material comprises an aerogel.

14. A method comprising: depositing a composite dielectric layer over a contact point on a substrate, the composite dielectric layer comprising a first material comprising cubic boron nitride and a second different material; and forming a conductive interconnection through the composite dielectric layer to the contact point.

15. The method of claim 14, wherein depositing a composite dielectric layer comprises depositing a material comprising a dielectric constant similar to the first material on the interconnection.

16. The method of claim 14, wherein depositing a composite dielectric layer comprises encapsulating the second material with the first material. |
BACKGROUND

1. Field

This disclosure relates to integrated circuit processing and, more particularly, to the patterning of interconnections on an integrated circuit.

2. Background

Modern integrated circuits use conductive interconnections to connect the individual devices on a chip or to send or receive signals external to the chip. Popular types of interconnection include aluminum alloy interconnections and copper interconnections. One process used to form interconnections, particularly copper interconnections, is a damascene process. In a damascene process, a trench is cut in a dielectric and filled with copper to form the interconnection. A via may be in the dielectric beneath the trench with a conductive material in the via to couple the interconnection to underlying integrated circuit devices or underlying interconnections. In one damascene process (a "dual damascene process"), the trench and via are each filled with copper material by, for example, a single deposition.

A photoresist is typically used over the dielectric to pattern a via or a trench or both in the dielectric for the interconnection. After patterning, the photoresist is removed. The photoresist is typically removed by an oxygen plasma (oxygen ashing). The oxygen used in the oxygen ashing can react with an underlying copper interconnection and oxidize the interconnection. Accordingly, damascene processes typically employ a barrier layer of silicon nitride (Si3N4) directly over the copper interconnection to protect the copper from oxidation during oxygen ashing in the formation of a subsequent level interconnection. In interlayer interconnection levels (e.g., beyond a first level over a device substrate), the barrier layer also protects against misguided or unlanded vias extending to an underlying dielectric layer or level.

In general, the Si3N4 barrier layer is very thin, for example, roughly 10 percent of the thickness of the pre-metal dielectric (PMD) layer or interlayer dielectric (ILD) layer. A thin barrier layer is preferred primarily because Si3N4 has a relatively high dielectric constant (k) on the order of 6-7. The dielectric constant of a dielectric material, such as an interlayer dielectric, generally describes the parasitic capacitance of the material. As the parasitic capacitance is reduced, the cross-talk (e.g., a characterization of the electric field between adjacent interconnections) is reduced, as is the resistance-capacitance (RC) time delay and power consumption. Thus, the effective dielectric constant (keff) of a PMD layer or ILD layer is defined by the thin barrier layer and another dielectric material having a lower dielectric constant so that the effect of the high dielectric constant material typically used for the barrier layer (e.g., Si3N4) is minimized. Representative dielectric materials for use in combination with a barrier layer to form PMD or ILD layers include silicon dioxide (SiO2), fluorinated silicate glass (FSG), and carbon-doped oxide (CDO).

As technologies advance, the distance (e.g., pitch) between interconnections decreases as more devices and more interconnections (e.g., interconnect lines) are formed on a structure. Thus, the effective dielectric constant (keff) of a PMD or ILD layer is significant.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a schematic side view of a portion of a circuit substrate or interconnect layer on a substrate including a contact point and a barrier layer formed over the contact point.

FIG. 2 shows the structure of FIG. 1 following the formation of a dielectric layer on the barrier layer.
FIG. 3 shows the structure of FIG. 2 following the formation of an interconnection to the contact point.

FIG. 4 is a schematic side view of a portion of a circuit substrate showing a contact point and a barrier layer overlying the contact point.

FIG. 5 shows the structure of FIG. 4 following the introduction of a sacrificial layer and the formation of an interconnection to the contact point.

FIG. 6 shows the structure of FIG. 5 following the removal of the sacrificial layer.

FIG. 7 shows the structure of FIG. 6 following the introduction of a barrier layer around the interconnection.

FIG. 8 shows the structure of FIG. 7 following the introduction of a dielectric layer on the substrate.

FIG. 9 shows the structure of FIG. 8 following the introduction of a barrier layer on the substrate.

DETAILED DESCRIPTION

FIGS. 1-3 illustrate a dual damascene process for forming an interconnection over a contact point. A contact point is, for example, a device on a substrate (e.g., gate, junction, etc.). Alternatively, in a multi-level interconnection device configuration, the contact point also includes an underlying interconnection (e.g., an interconnection line). A typical integrated circuit of a microprocessor may have, for example, five or more interconnection layers or lines stacked on one another, each insulated from one another by dielectric material.

FIG. 1 illustrates a cross-sectional, schematic side view of a portion of a circuit substrate structure. Structure 100 includes substrate 110 of, for example, a semiconductor material such as silicon or a semiconductor layer on an insulator such as glass. Substrate 110 includes contact point 120 on a surface thereof. In one embodiment, contact point 120 is a portion of an underlying interconnect line (e.g., a metal trench). A representative interconnect line is shown in dashed lines. Overlying contact point 120 and substrate 110, in one embodiment, is barrier layer 130. Barrier layer 130 is selected, in one embodiment, to be a material having a dielectric constant (k) less than about five. In the context of a contact point that is a copper interconnection (e.g., an interconnection line), barrier layer 130 is selected to have relatively good copper diffusion characteristics (i.e., to inhibit copper diffusion). Barrier layer 130 is also selected to have an etch characteristic such that it may be selectively etched or retained during an etch operation involving barrier layer 130 or a subsequently introduced dielectric material, such as a dielectric material that, together with barrier material 130, will serve as a pre-metal dielectric (PMD) or interlayer dielectric (ILD) layer. One material for barrier layer 130 is cubic boron nitride (CBN). Cubic boron nitride has a dielectric constant on the order of 4-4.5. Cubic boron nitride may be introduced by chemical vapor deposition (CVD) and tends to serve as an inhibitor of copper diffusion when used as the barrier material in the context of copper. Further, cubic boron nitride is selectively etchable in, for example, a fluorine plasma. Still further, cubic boron nitride is a relatively high compressive stress material allowing, in one example, its use in conjunction with high tensile stress materials to minimize the effect of the tensile stress.

In one embodiment, barrier layer 130 of cubic boron nitride is introduced, according to current technologies, to a thickness on the order of 40 nanometers (nm) to 100 nm.
The thickness is selected, in one example, to be sufficient to protect an underlying contact point 120 (e.g., a copper interconnection line), but not to unacceptably increase the capacitance between contact point 120 and, for example, an overlying or adjacent interconnection (e.g., the thickness is selected to minimize the contribution of barrier layer 130 to keff).

Overlying barrier layer 130 in the illustration shown in FIG. 2 is dielectric layer 140. Dielectric layer 140 is, for example, a tetraethyl orthosilicate (TEOS) or plasma-enhanced CVD (PECVD) SiO2, a fluorinated silicate glass (FSG), or a carbon-doped oxide (CDO) deposited to a thickness on the order of approximately 700 nanometers according to current technologies. As described in more detail with reference to FIGS. 4-9 and the accompanying text, dielectric layer 140 may also be an aerogel. The thickness of dielectric layer 140 will depend, in part, on size characteristics and scaling considerations for the device. Collectively, barrier layer 130 and dielectric layer 140 define a composite dielectric layer (e.g., a PMD or ILD layer) having a composite or effective dielectric constant (keff). In one embodiment, the contribution of the material selected for barrier layer 130 is less than 20 percent and, in another embodiment, less than 10 percent of the keff. Once dielectric layer 140 is deposited and formed, the material may be planarized, for example, with a polish (e.g., a chemical-mechanical polish).

Referring to FIG. 3, following the introduction of dielectric layer 140, an opening is made to contact point 120. In one embodiment, the opening includes via 160 and trench 170 formed, for example, by sequential photolithographic patterning and etching operations. Representatively, what is shown is a dual damascene process where via 160 and trench 170 are formed as the opening and are filled with conductive material 150 such as a copper material, and the conductive material in trench 170 serves as an interconnection line. Thus, although not shown in the cross-sectional view of FIG. 3, trench 170 may extend into the page as viewed to act as a trench for a conductive material interconnection line to reside therein. In addition to the conductive material of, for example, a copper material in via 160 and trench 170, one or more layers may be deposited along the sidewalls of via 160 and trench 170 to, for example, inhibit diffusion of the conductive material and/or improve adhesion of the conductive material.

The via 160 opening is made through dielectric layer 140 and barrier material 130. To form an opening through dielectric layer 140, a suitable etchant is selected that does not substantially react with or disrupt underlying barrier material 130. In the case of a dielectric layer 140 of FSG and a barrier layer 130 of cubic boron nitride, a suitable etchant to etch FSG is, for example, a SiCl4 etch chemistry. With such an etchant, an etch of dielectric layer 140 will proceed through the material and substantially stop when barrier material 130 is exposed. A subsequent etch chemistry, such as a fluorine-based etch chemistry (e.g., HF, CF4), can then be used to form an opening through barrier material 130 and expose contact point 120.

After exposing contact point 120, conductive material 150 is deposited in trench 170 and via 160. A suitable conductive material is, for example, a copper material deposited by a damascene process. Once conductive material 150 is deposited in trench 170 and via 160, the substrate may be planarized.
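The keff budgeting above can be made concrete with the usual series-capacitance model for stacked dielectrics. The thicknesses and constants below are illustrative values drawn from the representative ranges already given, not figures from the patent:

```latex
% Two dielectrics of thickness t1, t2 stacked in series:
\[
\frac{t_1 + t_2}{k_{\mathrm{eff}}} = \frac{t_1}{k_1} + \frac{t_2}{k_2}
\]
% E.g., t1 = 70 nm of cubic boron nitride (k1 ~ 4.25) under
% t2 = 700 nm of bulk dielectric (k2 ~ 3.0):
\[
k_{\mathrm{eff}} = \frac{70 + 700}{\frac{70}{4.25} + \frac{700}{3.0}}
\approx \frac{770}{16.5 + 233.3} \approx 3.1
\]
```

On this estimate the barrier accounts for roughly 16.5/249.8, or about 7 percent, of the stack's total t/k, comfortably within the less-than-20-percent (and even the 10 percent) contribution described above.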
The process described above may then be repeated for a subsequent interconnection layer or layers.

FIGS. 4-9 describe a second embodiment. Referring to FIG. 4, structure 200 in this embodiment includes substrate 210 having a contact point 220 on a surface thereof. The contact point may be, for example, a device, or an interconnection formed over a substrate to one or more devices formed on or near the substrate.

Overlying contact point 220 (as viewed) on a surface of substrate 210 in the structure of FIG. 4 is barrier layer 230. Barrier layer 230 may be deposited, representatively, as a blanket over a portion, including the entire portion, of the surface of substrate 210. Barrier layer 230 is selected to be a material having a dielectric constant (k) less than 5. In another embodiment, barrier layer 230 is a material selected to have good copper diffusion characteristics and etch selectivity. Cubic boron nitride is one material that has such characteristics. Barrier layer 230 of cubic boron nitride, in one example, has a thickness on the order of 40 nm to 100 nm.

FIG. 5 shows the structure of FIG. 4 following the introduction of sacrificial layer 240 and the formation of trench 270, via 260 and conductive material 250 within the trench and via. Sacrificial layer 240 may be, for example, a dielectric material such as SiO2 or another material that may be patterned to the exclusion of barrier material 230. Sacrificial layer 240 is deposited to a thickness sufficient (perhaps after planarization) to accommodate a properly sized interconnection line (in trench 270) and contact (in via 260). One suitable thickness for sacrificial layer 240 according to current techniques is on the order of about 700 nanometers. As shown in FIG. 5, conductive material 250 of, for example, copper material is formed in trench 270 and via 260 and contacts contact point 220.

FIG. 6 shows the structure of FIG. 5 following the removal of sacrificial material 240. In the embodiment where sacrificial material 240 is an oxide (e.g., SiO2) and barrier material 230 is cubic boron nitride, sacrificial material 240 may be removed by dipping structure 200 in hydrofluoric acid. Alternatively, an etchant may be introduced without a photolithographic mask overlying a portion of sacrificial material 240 to protect that material from the etch chemistry. As shown in FIG. 6, following the removal of sacrificial material 240, conductive material 250, such as a copper material, remains exposed on substrate 210 of structure 200. In an embodiment where conductive material 250 is copper, the exposure of conductive material 250 by removal of sacrificial material 240 may be done in an inert or oxygen-free environment to prevent oxidation of the copper material.

FIG. 7 shows the structure of FIG. 6 following the introduction of barrier layer 280. In one embodiment, barrier layer 280 is selected of a material having a dielectric constant less than 5. One suitable material is cubic boron nitride. In this case, both barrier layer 230 and barrier layer 280 are cubic boron nitride. As shown in FIG. 7, barrier layer 280 completely surrounds conductive material 250.

In one embodiment, there may be many conductive structures such as conductive material 250 formed to various contact points on substrate 210. A representative pitch between such structures (reference number 255 in FIG. 6) may be on the order of about 70 nanometers according to current technologies for interconnection lines.
Accordingly, the thickness of barrier layer 280 is selected, in one embodiment, to be sufficient to surround conductive material 250 but thin enough to leave an area between conductive material structures exposed so as, for example, not to create voids between the structures. Where the pitch between conductive structures is on the order of about 70 nanometers, a thickness of barrier layer 280 may representatively be on the order of 30 to 40 nanometers.

FIG. 8 shows the structure of FIG. 7 following the introduction of dielectric layer 290. In one embodiment, dielectric layer 290 is introduced as a blanket layer over a portion, including the entire portion, of the structure. Dielectric layer 290, in one embodiment, is selected to have a low dielectric constant, preferably a dielectric constant less than two (2). In one embodiment, dielectric layer 290 is an aerogel (XLK). Aerogel is described as a porous glass and can have a dielectric constant on the order of 1.1. Although aerogel has a low dielectric constant, it is known to have inferior mechanical properties, being weak and brittle.

In one embodiment, dielectric layer 290 of aerogel may be introduced as a liquid, possibly through the use of a solvent. The material may then be dried (supercritical drying) to evaporate the solvent and form a solid dielectric material layer. Planarization may also be necessary to expose barrier layer 280 over conductive material 250 or to expose conductive material 250. In one embodiment, dielectric layer 290 of, for example, aerogel, and barrier layer 280 act as the substrate surface for additional layers. Collectively, barrier layer 230, barrier layer 280, and dielectric layer 290 define a composite dielectric layer (e.g., a PMD or ILD layer) having a composite or effective dielectric constant (keff). In one embodiment, the contribution of the material selected for barrier layer 230 and the material selected for barrier layer 280 is less than 20 percent, in another embodiment less than 10 percent, of the keff.

FIG. 9 shows the structure of FIG. 8 following the introduction of barrier layer 295 as, for example, a blanket over a portion, including the entire portion, of the substrate surface. In one embodiment, barrier layer 295 is similar to barrier layer 230 in that it has a dielectric constant less than about 5 and acts as a suitable diffusion barrier for a conductive structure with which it may be in contact. It also may have relatively good etch selectivity relative to a dielectric material that, together with barrier layer 295, forms an ILD. In one embodiment, barrier layer 295 is cubic boron nitride, as are barrier layer 280 and barrier layer 230. In this manner, dielectric layer 290 of, for example, aerogel is encapsulated by cubic boron nitride.

In the preceding detailed description, specific embodiments are described. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. |
The present invention provides a method, system and apparatus for interfacing a CardBay device (520) using existing Card and Socket Services (CSS) software (530). A CardBay controller (515) responds to card queries by indicating a pseudo card configuration which the CSS software (530) will recognize and support without the need for modification. The CardBay controller (515) further overrides the power control request generated in response to the pseudo card configuration information and signals a power control request based on the voltage combination associated with the actual CardBay device (520). The controller also intercepts CIS read commands and responds with a CIS that is recognizable and specific to the inserted card. The controller (515) can further intercept legacy driver accesses and convert them into CardBay-recognized accesses. |
What is claimed is:

1. A method of interfacing an advanced card device with a PC card system, said method comprising: inserting said advanced card device into a PC card slot associated with said PC card system; and signaling a pseudo card configuration subsequent to a host query to determine a type of card inserted, wherein said pseudo card configuration is of a configuration recognizable to said PC card system.

2. The method of claim 1, further comprising: signaling a power control request, to said PC card system, based on a voltage combination associated with said advanced card device; and signaling a recognizable card information structure for operation support from card and socket services software associated with said PC card system.

3. The method of claim 1, wherein said advanced card device comprises a CardBay card.

4. The method of claim 1, wherein said advanced card device comprises a USB, SmartMedia, MMC/SD, Memory Stick or Smart Card.

5. The method of claim 1, wherein said pseudo card configuration comprises a 16-bit card type configuration by setting a 16BITCARD bit in a socket present state register associated with said PC card system.

6. The method of claim 1, wherein said pseudo card configuration comprises a CardBus card type configuration by setting a CBCARD bit in a socket present state register associated with said PC card system.

7. The method of claim 5, further comprising: intercepting 16-bit accesses from said PC card system to said advanced card device; and converting said 16-bit accesses into accesses recognizable to said advanced card device.

8. The method of claim 6, further comprising: intercepting CardBus accesses from said PC card system to said advanced card device; and converting said CardBus accesses into accesses recognizable to said advanced card device.

9. The method of claim 1, wherein said PC card system comprises a notebook computer.

10. The method of claim 1, further comprising overriding a power control request from card and socket services software associated with said PC card system which is based on said pseudo card configuration.

11. The method of claim 1, further comprising intercepting a card information structure read command from said PC card system.

12. A system for interfacing an advanced card device with a legacy PC device, said system comprising: a PC card slot associated with said legacy PC device for receiving said advanced card device which is configured for insertion into said PC card slot; and a controller associated with said advanced card device configured to signal a pseudo card configuration subsequent to a host query to determine a type of card inserted into said PC card slot, wherein said pseudo card configuration is of a configuration recognizable to said legacy PC device.

13. The system of claim 12, wherein said controller is further configured to signal a power control request, to said legacy PC device, based on a voltage combination associated with said advanced card device and to signal a recognizable card information structure for operation support from card and socket services software associated with said legacy PC device.

14. The system of claim 12, wherein said advanced card device comprises a CardBay card.

15. The system of claim 12, wherein said advanced card device comprises a USB, SmartMedia, MMC/SD, Memory Stick or Smart Card.

16. The system of claim 12, wherein said pseudo card configuration comprises a 16-bit card type configuration by setting a 16BITCARD bit in a socket present state register associated with said legacy PC device.

17. The system of claim 12, wherein said pseudo card configuration comprises a CardBus card type configuration by setting a CBCARD bit in a socket present state register associated with said legacy PC device.

18. The system of claim 16, wherein said controller is further configured to intercept 16-bit accesses from said legacy PC device to said advanced card device and convert said 16-bit accesses into accesses recognizable to said advanced card device.

19. The system of claim 17, wherein said controller is further configured to intercept CardBus accesses from said legacy PC device to said advanced card device and convert said CardBus accesses into accesses recognizable to said advanced card device.

20. The system of claim 12, wherein said legacy PC device comprises a notebook computer.

21. The system of claim 12, wherein said controller is further configured to override a power control request from card and socket services software associated with said legacy PC device which is based on said pseudo card configuration.

22. The system of claim 12, wherein said controller is further configured to intercept a card information structure read command from said legacy PC device.

23. An apparatus for providing additional functionality to a legacy PC device using existing card and socket services software associated with said legacy PC device, said apparatus comprising: an advanced card device configured to be insertable into a PC card slot associated with said legacy PC device, wherein said advanced card device comprises instructions for enabling said additional functionality; and a controller associated with said advanced card device configured to signal a pseudo card configuration subsequent to a host query to determine a type of card inserted into said PC card slot, wherein said pseudo card configuration is of a configuration recognizable to said legacy PC device.

24. The apparatus of claim 23, wherein said controller is further configured to signal a power control request, to said legacy PC device, based on a voltage combination associated with said advanced card device and to signal a recognizable card information structure for operation support from card and socket services software associated with said legacy PC device. |
BACKGROUND OF THE INVENTION

1. Technical Field of the Invention

The present invention relates generally to the field of computer peripherals and, more particularly, to a method, system and apparatus to interface a CardBay card through the standard PC card form factor.

2. Description of Related Art

Today, PC card technology is used in mobile computing platforms ranging from high performance full size notebook computers to ultra-portable specialized function devices, such as personal organizers and cameras. As portable platforms continue to diversify in form factor and decrease in power consumption, add-in card technologies must mirror the compactness and energy efficiency of the portable host system. Future mobile systems will need user installable, modular add-in capabilities, in both card and storage form factors, in order to support standardized system configurations.

Future mobile systems are also evolving to a new I/O technology based on more modern, popular serial buses. The industry's evolution is a drive to "layer" the future add-in capabilities atop these newer buses. CardBay has emerged as the standard embraced by the mobile industry as the best means of meeting these evolving market needs. This emerging standard outlines how add-ins will be direct evolutionary cousins of today's PC cards, offering the expected set of PC card functions along with enhancements that meet the changing needs of mobile technology.

Personal Computer Memory Card International Association ("PCMCIA") compatible devices such as modems, memory modules, and disc controllers, for example, are well known. The PCMCIA standard PC card interface defines a physical size and electrical interconnection for each class of computer peripherals. The size of each PCMCIA device is approximately that of a credit card, and each device connects mechanically and electrically through a standard connector to a host computer such as a notebook PC.

The PC card standard further defines a software architecture to provide "plug and play" capabilities across a range of products. For example, host software known as Card and Socket Services ("CSS") is an interface that masks the hardware implementation from card vendor drivers (i.e., avoids requiring that the card driver communicate directly with any particular chip) and manages system resources such as interrupt assignment and memory windows for PC cards.

Although CardBay can offer new functionality to the PC Card slot, existing CSS software, for example, will not recognize or support the new functionalities offered by CardBay or CardBay-type technology. New PCMCIA compatible peripherals generally require specialized software and/or hardware to interface peripheral accesses to the PC card interface, leading to large development costs, long development schedules and/or expensive adapters.

SUMMARY OF THE INVENTION

The present invention achieves technical advantages as a method, system and apparatus for interfacing a CardBay device using existing CardBus and 16-bit card CSS software. A CardBay controller responds to card queries to indicate a pseudo card configuration which the host will recognize and support without the need for modified CSS software. The CardBay controller further overrides the power control request generated in response to the pseudo card configuration information and signals a power control request based on the voltage combination associated with the actual CardBay device.
The controller further intercepts CIS read commands and responds with a CIS which is specific to the inserted card. The controller further intercepts driver accesses and converts them into accesses that the CardBay device recognizes.

BRIEF DESCRIPTION OF THE DRAWINGS

For a more complete understanding of the present invention, reference is made to the following detailed description taken in conjunction with the accompanying drawings wherein:

FIG. 1 illustrates a block diagram of a current media card adapter interface system;

FIG. 2 illustrates a block diagram of a current media and PC Card interface system;

FIG. 3 shows a method of employing a CardBay device in accordance with an embodiment of the present invention;

FIG. 4 illustrates a block diagram of a CardBay controller and card interface in accordance with the present invention; and

FIG. 5 illustrates a block diagram of a system employing a CardBay device in accordance with an embodiment of the present invention.

DETAILED DESCRIPTION OF THE INVENTION

The numerous innovative teachings of the present application will be described with particular reference to the presently preferred exemplary embodiments. However, it should be understood that this class of embodiments provides only a few examples of the many advantageous uses and innovative teachings herein. In general, statements made in the specification of the present application do not necessarily delimit any of the various claimed inventions. Moreover, some statements may apply to some inventive features, but not to others.

In a current system, there are two basic ways for a user to access a media card, depending on the system implementation. The most common way is through the use of an adapter card that contains logic to translate from the media card interface to the PC Card interface, as shown in FIG. 1. This method is good in that it uses the existing PC Card slot on the notebook; however, these adapter cards are very expensive because of the translation logic on the adapter card.

FIG. 2 illustrates the other basic method, in which the system has a dedicated slot specifically for the media card. In such a system, the media card controller is either a dedicated chip or a separate function of an existing chip. The problem with this method is that the system must have the additional space for the separate chip and media card slot. Since there are several different types of media cards available, this solution is undesirable because it restricts the user to a certain type of media card.

In accordance with an aspect of the present invention, logic similar to that currently used on media card adapter cards is integrated into a PC card controller which is integral to the present CardBay system and method. Among other advantages, the present approach allows for inexpensive passive CardBay adapters for media cards. Media cards include SmartMedia cards, MultiMedia cards, Secure Digital cards, Memory Stick cards, Smart Cards and other media-type cards. The present approach also allows for CardBay cards with a USB interface.

Currently, when a 16-bit or CardBus card is inserted into a PC card slot, the host controller determines the type of card inserted. If the inserted card is a 16-bit or CardBus card, Card and Socket Services (CSS) applies power to the inserted card by writing the appropriate voltage combination to a Socket Control Register. However, if a CardBay card is inserted, the legacy CSS will not recognize or support the operation of the new card.
Referring now to FIG. 3, there is shown a flow diagram of a method of employing a CardBay card using legacy software in accordance with an aspect of the present invention. The CardBay card is first coupled 110 to the host system. Subsequent to insertion, the inserted card is queried 120 to determine the card characteristics for further operation. If the inserted card is a CardBay type card, a pseudo card configuration 130 is enabled. The pseudo card configuration is a configuration which the legacy software is known to recognize and support. In an exemplary embodiment, the pseudo card configuration is a configuration indicating that a 16-bit card or a CardBus card is inserted in the PC card slot.

The method further includes: overriding 140 a power control request sent in response to and based on the pseudo card configuration; and enabling a power control request based on the voltage combination indicated by the true card configuration 150 of the inserted CardBay card. Subsequently, the appropriate power is applied to the inserted CardBay card. Additionally, the legacy CSS software enables CIS read commands to the inserted card to further determine the inserted card characteristics. However, CardBay cards do not include a CIS which the legacy software can read and/or recognize. Therefore, in accordance with the present invention, CIS read commands sent to the inserted card are intercepted, and a CIS that is specific to the inserted CardBay card, and that appears to the host system to come from the CardBay card, is sent in response to the intercepted CIS read command. Further, driver accesses to the inserted card are intercepted by the card controller and converted into accesses which the CardBay card can recognize.

Referring now to FIG. 4, there is illustrated a block diagram of a CardBay controller 410 in accordance with an embodiment of the present invention. As shown, the CardBay controller 410 is coupled to a system bus (such as a PCI bus) for communication with other system components. A CardBay card or specific media card 420 is coupled to the CardBay controller 410 through a standard 68-pin PC card interface 430, for example, typically used for CardBus and 16-bit cards. For insertion into the PC card interface 430, a simple adapter 440 may be required and may be specific to the type of media card used. A type of media card interface logic used in the more expensive and complex media card adapters (see FIG. 1) is integrated on the present CardBay controller 410. The CardBay controller 410 also includes card detect logic which enables the CardBay controller 410 to determine if an inserted card is a 16-bit, CardBus, or CardBay card. The CardBay controller 410 is further configured to route the appropriate signals (determined by the type of card detected) to the 68-pin PC card interface and the other system components.

Referring now to FIG. 5, there is illustrated a block diagram of a CardBay system in accordance with an embodiment of the present invention. As shown, the CardBay system includes a host computer 505 having a PCMCIA compatible socket or PC card slot 510 into which a CardBay card 520 can be inserted. The host computer 505 can comprise, for example, a notebook computer or an ultra-portable specialized function device, such as a personal organizer. When a card is inserted into the PC card slot 510, a CardBay controller 515 queries the card to determine the type of card and interfacing details, and ideally the card is of a type recognizable and supportable by the host computer 505.
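As a rough illustration of the FIG. 3 flow (blocks 110-150) and of the masquerading behavior just described, consider the C sketch below. Every name and helper here is hypothetical — the patent specifies the behavior, not an implementation:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical hooks into the controller hardware. */
extern bool     card_is_cardbay(void);
extern void     set_present_state_16bitcard(void);  /* pseudo "16-bit card" */
extern void     raise_card_insert_interrupt(void);
extern uint32_t cardbay_required_voltage(void);
extern void     program_power_switch(uint32_t voltage);

/* Blocks 120-130: on insertion, query the card; if it is a CardBay card,
 * report a pseudo configuration the legacy CSS software recognizes. */
static void on_card_insert(void)
{
    if (card_is_cardbay()) {
        set_present_state_16bitcard();   /* or CBCARD for a CardBus pseudo type */
        raise_card_insert_interrupt();   /* CSS services this as "16-bit card"  */
    }
}

/* Blocks 140-150: override the power control request CSS derives from the
 * pseudo configuration and apply the CardBay card's true voltage instead. */
static void on_socket_control_write(uint32_t requested_voltage)
{
    (void)requested_voltage;             /* request based on the pseudo config */
    program_power_switch(cardbay_required_voltage());
}
```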
Referring now to FIG. 5, there is illustrated a block diagram of a CardBay system in accordance with an embodiment of the present invention. As shown, the CardBay system includes a host computer 505 having a PCMCIA compatible socket or PC card slot 510 into which a CardBay card 520 can be inserted. The host computer 505 can comprise, for example, a notebook computer, or an ultra-portable specialized function device, such as a personal organizer. When a card is inserted into the PC card slot 510, a CardBay controller 515 queries the card to determine the type of card and interfacing details; ideally, the card is of a type recognizable and supportable by the host computer 505.

The CardBay controller 515 is an integrated device and in this exemplary embodiment includes a type of media card interface logic 525 used in the more expensive and complex media card adapters (see FIG. 1) and a Socket Control Register 540, but can include other support devices. The CardBay controller 515 can be hardware, software, firmware, or a combination thereof. For example, when a CardBay card is inserted and signals its actual characteristics, the CardBus and 16-bit CSS software 530 is not able to recognize or support the CardBay card without additional special software or software modification (such as the media card interface logic integrated in prior media card adapters). However, with the present CardBay system, the CardBay controller 515 detects a CardBay card 520 and sets a register(s) which will indicate to the old CardBus and 16-bit CSS software 530 that a 16-bit card or CardBus card is inserted instead of the actual CardBay card 520. More specifically, the CardBay controller 515 indicates to the CSS software 530 that the inserted CardBay card 520 is a 16-bit card by setting the 16BITCARD bit in a Socket Present State Register and further signals an interrupt. Alternatively, the CardBay controller 515 indicates to the CSS software 530 that the inserted CardBay card 520 is a CardBus card by setting the CBCARD bit in a Socket Present State Register and further signals an interrupt. The CSS software 530 subsequently services the interrupt and interprets that a "16-bit card" or a "CardBus card" has been inserted.

Having serviced the interrupt and interpreted the inserted card as a "16-bit card," the CSS software 530 enables power to the "16-bit card" by writing the appropriate voltage combination to a Socket Control Register 540 associated with the PC Card slot 510 that the CardBay card 520 is inserted into. Alternatively, having serviced the interrupt and interpreted the inserted card as a "CardBus card," the CSS software 530 enables power to the "CardBus card" by writing the appropriate voltage combination to the Socket Control Register 540. Normally, writes to this register correspond directly to power control requests to a PC Card power switch 535. However, since the voltage combination of a 16-bit card or CardBus card does not correspond to the appropriate voltage combination of a CardBay card, the CardBay controller 515 is further configured to override the power control request from the old CSS software 530 and instead signals a power control request to the PC Card power switch 535 based upon the voltage combination indicated by the actual inserted CardBay card 520.

Further, to determine the characteristics of the "16-bit card" or "CardBus card," the CSS software will attempt to read the card's Card Information Structure (CIS). Generally, PC Cards include a CIS that the CSS software 530 reads to determine the characteristics of the card, such as type of card, operating voltage, and configuration modes. CardBay cards do not include a CIS; therefore, the CardBay controller 515 includes the CIS, or a type of CIS, necessary for the CSS software 530 to further determine the characteristics of the card. In accordance with the present invention, the CardBay controller 515 is further configured to intercept the CIS read command to the CardBay card 520 and respond with a CIS that is specific for the inserted CardBay card 520, including, for example, each of the five types of CardBay cards (USB, SmartMedia adapters, MMC/SD adapters, Memory Stick adapters, or Smart Card adapters).
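A minimal sketch of this CIS interception, again in illustrative Python: the five CardBay card types come from the description, but the table layout, the placeholder byte strings, and the function name are assumptions made for the example only.

```python
# Hypothetical CIS table held by the controller; the byte strings are
# placeholders, not real CIS tuples.
CIS_TABLE = {
    "usb":          b"<CIS for a USB-function card>",
    "smartmedia":   b"<CIS for a SmartMedia adapter>",
    "mmc_sd":       b"<CIS for an MMC/SD adapter>",
    "memory_stick": b"<CIS for a Memory Stick adapter>",
    "smart_card":   b"<CIS for a Smart Card adapter>",
}

def handle_cis_read(controller, subtype, offset, length):
    """Intercept a CIS read aimed at a CardBay card and answer from the
    controller's own table; to the host, the reply appears to come from
    the card itself."""
    if controller.true_card and controller.true_card["kind"] == "cardbay":
        cis = CIS_TABLE[subtype]
        return cis[offset:offset + length]
    return None  # 16-bit/CardBus cards: forward the read to the card as usual
```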
Once the CSS software 530 reads the CIS from the CardBay controller 515, it then loads the appropriate driver based upon the CIS contents. Normally, the driver accesses the card directly; however, in accordance with the present invention, the CardBay controller 515 is further configured to intercept the 16-bit accesses or CardBus accesses to the CardBay card 520 and convert them into card accesses that the CardBay card 520 recognizes. The converted card accesses are specific to the type of CardBay card inserted.

Although a preferred embodiment of the method and system of the present invention has been illustrated in the accompanying drawings and described in the foregoing Detailed Description, it is understood that the invention is not limited to the embodiments disclosed, but is capable of numerous rearrangements, modifications, and substitutions without departing from the spirit of the invention as set forth and defined by the following claims. |
Methods, systems, computer-readable media, and apparatuses for selecting an Augmented Reality (AR) object on a head mounted device (HMD) are presented. In some embodiments, an HMD may define a Region-of-Interest (ROI) based on a gesture formed by at least one hand of a user. Subsequently the HMD may display to the user a shape on the HMD. In at least one arrangement, the shape outlines the ROI. Additionally, the HMD may display to the user a plurality of AR objects, each of the plurality of AR objects being associated with a target inside the ROI. Furthermore, the HMD may reduce the size of the ROI based on a first movement of the at least one hand of the user to form a reduced-sized ROI. In at least one arrangement, the reduced-sized ROI is used to select a specific AR object from the plurality of AR objects. |
1. A method for selecting an Augmented Reality (AR) object on a head mounted display device, comprising:
obtaining, by the device, a first input associated with a first gesture performed by a user of the device;
defining, based on the first input, a Volume-of-Interest (VOI), wherein the VOI comprises a Region-of-Interest (ROI) within one or more images of a scene, wherein the ROI includes an object within the scene, the object being associated with a plurality of displayable AR objects;
displaying, by the device, a first AR object, the first AR object being selected for display from the plurality of displayable AR objects associated with the object;
obtaining, by the device, a second input associated with a second gesture performed by the user of the device, the second gesture being performed in a third dimension relative to a two-dimensional plane of the VOI;
selecting, by the device based on the second input, a second AR object for display from the plurality of displayable AR objects associated with the object; and
displaying, by the device, the second AR object.

2. The method of claim 1, wherein the second AR object is selected for display from the plurality of displayable AR objects when the second gesture is made at a first depth relative to the two-dimensional plane.

3. The method of claim 2, further comprising:
obtaining, by the device, a third input associated with a third gesture performed by the user of the device, the third gesture being performed in the third dimension relative to the two-dimensional plane;
selecting, by the device based on the third input, a third AR object for display from the plurality of displayable AR objects associated with the object, wherein the third AR object is selected when the third gesture is made at a second depth relative to the two-dimensional plane, the first depth being different from the second depth; and
displaying, by the device, the third AR object.

4. The method of claim 1, wherein the second gesture includes movement of at least one hand of the user in the third dimension.

5. The method of claim 4, wherein the at least one hand of the user is observed in a field of view of a camera of the device during the second gesture.

6. The method of claim 1, wherein the first AR object and the second AR object are displayed in the VOI at a same time, and wherein a position of the first AR object is displayed relative to a position of the second AR object based on a depth of the second gesture.

7. The method of claim 6, wherein the position of the first AR object is displayed relative to the position of the second AR object based on a change in depth between the first gesture and the second gesture.

8. The method of claim 1, wherein the first gesture defining the VOI is performed by at least one hand of the user.

9. The method of claim 8, wherein the second gesture for selecting the second AR object for display includes moving the at least one hand in the third dimension.

10. A head mounted display device for selecting an Augmented Reality (AR) object, the device comprising:
means for obtaining a first input associated with a first gesture performed by a user of the device;
means for defining, based on the first input, a Volume-of-Interest (VOI), wherein the VOI comprises a Region-of-Interest (ROI) within one or more images of a scene, wherein the ROI includes an object within the scene, the object being associated with a plurality of displayable AR objects;
means for displaying a first AR object, the first AR object being selected for display from the plurality of displayable AR objects associated with the object;
means for obtaining a second input associated with a second gesture performed by the user of the device, the second gesture being performed in a third dimension relative to a two-dimensional plane of the VOI;
means for selecting, based on the second input, a second AR object for display from the plurality of displayable AR objects associated with the object; and
means for displaying the second AR object.

11. The device of claim 10, wherein the second AR object is selected for display from the plurality of displayable AR objects when the second gesture is made at a first depth relative to the two-dimensional plane.

12. The device of claim 11, further comprising:
means for obtaining a third input associated with a third gesture performed by the user, the third gesture being performed in the third dimension relative to the two-dimensional plane; and
means for selecting, based on the third input, a third AR object for display from the plurality of displayable AR objects associated with the object, wherein the third AR object is selected when the third gesture is made at a second depth relative to the two-dimensional plane, the first depth being different from the second depth.

13. The device of claim 10, wherein the second gesture includes movement of at least one hand of the user in the third dimension.

14. The device of claim 13, wherein the at least one hand of the user is observed in a field of view of a camera during the second gesture.

15. A computer program comprising instructions for implementing a method in accordance with any of the claims 1 to 9. |
CROSS-REFERENCES TO RELATED APPLICATIONS

The present application is a continuation of U.S. Patent Application No. 13/767,820, filed February 14, 2013, which is hereby incorporated by reference in its entirety.

BACKGROUND

Aspects of the disclosure relate to selecting an augmented reality (AR) object on a head mounted display (HMD) using human body gestures.

An HMD can now be developed as a light, affordable device with some degree of computing power either built in or provided by a host device that is connected to the HMD, such as a smartphone.

An HMD can include one or more egocentric cameras mounted on the glass frame. An HMD can also include optical sensors, accelerometers, GPS, gyroscopes, solid state compasses, RFID and wireless sensors. An HMD can have a transparent display area within a user's field of view in which a user can view both physical objects and virtual objects on the display.

Using the built-in cameras, an HMD can capture images and videos according to a user's input. Conventional methods include a user touching a button on the HMD to capture an image in the user's field of view.

BRIEF SUMMARY

Certain embodiments are described that allow a user to select one or more augmented reality (AR) objects on a head mounted display (HMD) without the user physically touching the HMD.

In some embodiments, an HMD may define a region-of-interest (ROI) based on a gesture formed by at least one hand of a user. Subsequently, the HMD may display to the user a shape on the HMD. In at least one arrangement, the shape outlines the ROI. Additionally, the HMD may display to the user a plurality of AR objects, each of the plurality of AR objects being associated with a target inside the ROI. Furthermore, the HMD may reduce the size of the ROI based on a first movement of the at least one hand of the user to form a reduced-sized ROI. In at least one arrangement, the reduced-sized ROI is used to select a specific AR object from the plurality of AR objects. In one or more arrangements, the method for reducing the size of the ROI comprises moving the user's hands closer to each other. Additionally, the HMD may disengage the reduced-sized ROI based on a disengagement event. For example, the disengagement event may occur when the at least one hand of the user moves away from the ROI, when at least one finger and thumb of the user are closed together, or when a voice command is given by the user.

In another arrangement, wherein multiple augmentations are associated with the specific AR object, the HMD may further display to the user a corresponding augmentation from the multiple augmentations associated with the specific AR object based on a second movement of the at least one hand of the user in the direction of the specific AR object.

In another arrangement, the HMD may further capture text inside the reduced-sized ROI and initiate translation based on the captured text.
The HMD can also perform automatic visual recognition and visual search of the ROI or reduced-sized ROI.

In another arrangement, the HMD may use the reduced-sized ROI to narrow the field of view during video sharing with one or more other users.

In some embodiments, a head mounted device (HMD) for selecting an augmented reality (AR) object may comprise: one or more processors; and memory storing computer-readable instructions that, when executed by the one or more processors, cause the HMD to: define a region-of-interest (ROI) based on a gesture formed by at least one hand of a user; display to the user a shape on the HMD, wherein the shape outlines the ROI; display to the user a plurality of AR objects, each of the plurality of AR objects being associated with a target inside the ROI; and reduce the size of the ROI based on a first movement of the at least one hand of the user to form a reduced-sized ROI, wherein the reduced-sized ROI is used to select a specific AR object from the plurality of AR objects.

In some embodiments, one or more computer-readable media storing computer-executable instructions for selecting an augmented reality (AR) object on a head mounted device (HMD) may, when executed, cause one or more computing devices included in the HMD to: define a region-of-interest (ROI) based on a gesture formed by at least one hand of a user; display to the user a shape on the HMD, wherein the shape outlines the ROI; display to the user a plurality of AR objects, each of the plurality of AR objects being associated with a target inside the ROI; and reduce the size of the ROI based on a first movement of the at least one hand of the user to form a reduced-sized ROI, wherein the reduced-sized ROI is used to select a specific AR object from the plurality of AR objects.

In some embodiments, a head mounted device (HMD) for selecting an Augmented Reality (AR) object may comprise: means for defining a region-of-interest (ROI) based on a gesture formed by at least one hand of a user; means for displaying to the user a shape on the HMD, wherein the shape outlines the ROI; means for displaying to the user a plurality of AR objects, each of the plurality of AR objects being associated with a target inside the ROI; and means for reducing the size of the ROI based on a first movement of the at least one hand of the user to form a reduced-sized ROI, wherein the reduced-sized ROI is used to select a specific AR object from the plurality of AR objects.

BRIEF DESCRIPTION OF THE DRAWINGS

Aspects of the disclosure are illustrated by way of example. In the accompanying figures, like reference numbers indicate similar elements, and:

FIGS. 1A and 1B illustrate simplified diagrams of an HMD that may incorporate one or more embodiments;
FIG. 2 illustrates a flowchart describing a touch-less method of interacting with HMDs to select AR targets, according to an embodiment;
FIGS. 3A and 3B illustrate methods for selecting a region-of-interest (ROI), according to an embodiment;
FIG. 4 illustrates a ROI with five targets displayed on the HMD, according to an embodiment;
FIG. 5 illustrates a reduced-size ROI with three targets displayed on the HMD, according to an embodiment;
FIG. 6 illustrates a flowchart for selecting a specific layer of augmentation for AR targets;
FIG. 7 illustrates a user browsing through multiple augmentations by scrolling in the direction of the target using VOI;
FIG. 8 illustrates a flowchart for initiating smart applications (e.g., translation, visual search) based on the ROI; and
FIG. 9 illustrates an example of a computing system in which one or more embodiments may be implemented.
DETAILED DESCRIPTION

Several illustrative embodiments will now be described with respect to the accompanying drawings, which form a part hereof. While particular embodiments, in which one or more aspects of the disclosure may be implemented, are described below, other embodiments may be used and various modifications may be made without departing from the scope of the disclosure or the spirit of the appended claims.

Embodiments of the present invention are directed toward selecting an augmented reality (AR) object on a head mounted display (HMD) using human body gestures. Some embodiments disclose methods for selecting one or more augmented reality (AR) objects on a head mounted display (HMD) without a user physically touching the HMD.

An HMD can provide augmented reality (AR) functionality by overlaying physical objects viewed by a user with digital content (e.g., text, pictures, video) associated with the physical objects, or associated with the user's location and/or context, for example. For example, an HMD with augmented reality (AR) capabilities can place images of both the physical world and virtual objects over the user's field of view. As a result, an HMD can provide users with a mobile and collaborative AR experience.

As used herein, the term HMD refers to a device that captures distance sensor data and has a display capability linked to a mobile processor, which may be a separate device relative to the head mounted device. In an embodiment, the HMD 120 may be an accessory for a mobile device CPU (e.g., the processor of a cell phone, tablet computer, smartphone, etc.), with the main processing of the HMD control system being performed on the processor of the mobile device. In another embodiment, the HMD 120 may comprise a processor, a memory, a display and a camera.

In another embodiment, the HMD may include a wireless interface for connecting with the Internet, a local wireless network, or another computing device. In another embodiment, a projector may be included in the HMD to enable projection of images onto surfaces. The HMD is preferably lightweight and constructed to avoid use of heavy components, which could cause the device to be uncomfortable to wear. The HMD may also be operable to receive audio/gestural inputs from a user. Such gestural or audio inputs may be spoken voice commands or a recognized user gesture, which, when recognized by a computing device, may cause that device to execute a corresponding command.

Augmented reality (AR) can be a live, direct or indirect view of a physical, real-world environment whose elements are augmented by computer-generated sensory input such as, but not limited to, sound, text, graphics, video, and GPS data.

By using AR technology such as object recognition, the information about the surrounding real world of the user becomes interactive and digitally manipulable. Artificial information about the user's environment and its objects can be overlaid on the real world.

Further, although embodiments are described herein with respect to a HMD, those of skill in the art will appreciate that other forms of head-mounted displays may be utilized.
For example, embodiments described herein may be implemented with respect to one or more contact lenses that a user may wear and/or may be implemented in another form of display through which a user may perceive a field of view.

Some embodiments allow for interacting with an HMD to select geo-located points-of-interest (POIs) and AR targets. By detecting a natural human body gesture, the system can trigger an HMD to select a subset of the POIs or AR targets within the ROI seen through the glasses.

The various embodiments include methods of selecting AR targets in an augmented reality system, including defining a ROI based on a user's gesture by capturing spatial data with one or more head mounted sensors, displaying a shape outlining the ROI on the display of the HMD, calculating parameters including distance with respect to the HMD that correspond to the AR targets, displaying a plurality of AR objects within the ROI, reducing the size of the ROI based on the user's hand movement, and using a reduced-sized ROI to select a specific AR target. In an embodiment, the method may include continuously updating the display of the generated virtual object so the virtual object appears anchored to the display as the user turns his/her head.

FIGS. 1A-B illustrate simplified diagrams of an HMD 120 that may incorporate one or more embodiments.

The flowchart illustrated by FIG. 2 describes a touch-less method of interacting with HMDs to select geo-located POIs (points of interest) and AR targets, according to an embodiment. By detecting a natural human body gesture, the HMD can select a subset of the POIs or AR targets seen through the glasses. FIG. 3A illustrates an exemplary method of interacting with HMDs to select an AR object using the method described in FIG. 2. The gesture can involve pointing both hands' index and thumb fingers in the orthogonal direction, as shown in FIG. 3A and FIG. 4. After a ROI has been recognized by the HMD, a user can select the geo-located POIs or AR targets in a two- or three-dimensional space by further specifying the ROI or volume-of-interest (VOI), as shown in FIG. 3B, FIG. 5 and FIG. 7.

According to another embodiment, a depth-enabled camera (e.g., stereo camera) on the HMD can be used for using the VOI to select an augmentation when the AR object has multiple augmentations associated with it. The depth-enabled camera can recognize the movement of the hands in front of the user or in the camera view. With these cameras, the HMD can recognize the position of the user's hands in relation to the target, and therefore display different augmentations based on the position.

For example, a user can select a specific layer of augmentation for AR targets, as further described in the flowchart illustrated by FIG. 6. First, a user can select a specific AR target by selecting the ROI. Moreover, if multiple augmentations are associated with a given AR target, the user can browse through them by scrolling in the direction of the target using the VOI. For example, in FIG. 7, for the movie poster 720, the user can browse through three different augmentations 705, 710, 715 showing: the name of the movie with reviews; the trailer; and the show times associated with the movie. Depending on the hands' position, the corresponding AR augmentation is shown to the user.

According to another embodiment, the system can initiate smart applications (e.g., translation, visual search) based on the ROI, as further described in the flowchart illustrated by FIG. 8.
For example, a ROI can be fed to a visual search system or an optical character recognition (OCR) based translator and the results displayed on the HMD.

According to another embodiment, the ROI can be utilized to narrow the field of view for video sharing. For example, the ROI is treated as the shared view for a video based communication, so only part of his field of view (e.g., presentation, document) is shared with remote users.

Defining a Region-of-Interest using Hand Gestures

FIG. 1A is a simplified illustration 100 of an HMD 120 configured to define a region-of-interest (ROI) based on a user's gestures, according to one embodiment. In this embodiment, an HMD 120 worn by a user 110 has a camera and/or other sensor(s) configured to track the user's hand 130 in order to define the ROI. In so doing, the HMD 120 is able to display a shape outlining the ROI on the display 140 of the HMD 120. This can allow the user to give real-time feedback to adjust the ROI as desired, removing the need for a separate interface (e.g., touch pad, buttons) to receive user input. Other interfaces can be incorporated into the HMD 120, depending on desired functionality. The HMD 120 can utilize the one or more pre-installed camera(s) 150 to track the user's hand 130 in order to determine the ROI.

FIG. 1B is an illustration of an embodiment of an HMD 120 that can utilize the techniques provided herein. The embodiment shown includes displays 140 and one or more cameras 150. This embodiment includes a frame similar to glasses that can rest on the nose and ears of a user 110, positioning the displays 140 in front of a user's eyes.

The various embodiments enable the HMD 120 to capture the user's hand 130 gestures using the sensors on the HMD 120. In an embodiment, the camera 150 may be a head mounted camera, which can generate image data that a processor can analyze to estimate distances to objects in the image through trigonometric analysis of the images. Alternatively or in addition, the HMD 120 may include one or more distance measuring sensors (e.g., a laser or sonic range finder) that can measure distances to various surfaces within the image. In the various embodiments, a variety of different types of distance measuring sensors and algorithms may be used for measuring distances to objects within a scene viewed by the user 110. Also, more than one sensor and type of sensor may be used in the HMD 120.

Further, the HMD 120 may include orientation sensors, such as accelerometers, gyroscopes, magnetic sensors, optical sensors, mechanical or electronic level sensors, and inertial sensors, which alone or in combination can provide data to the device's processor regarding the up/down orientation of the device (e.g., by sensing the gravity force orientation) and thus the user's head position/orientation (and from that, the viewing perspective). Further, the HMD 120 may include rotational orientation sensors, such as an electronic compass and accelerometers, that can provide data to the device's processor regarding left/right orientation and movement. Collectively, sensors (including accelerometers, gyroscopes, magnetic sensors, optical sensors, mechanical or electronic level sensors, inertial sensors, and electronic compasses) are configured to provide data regarding the up/down and rotational orientation of the HMD 120 (and thus the user's viewing perspective).

The HMD 120 may be configured to recognize user inputs, which may be made through gestures that may be imaged by the camera. A distance to the recognized object within the image may be determined from data gathered from the captured image and distance sensors. The HMD 120 may provide image and distance sensor data to, and receive display information from, a mobile processor which may be separate from the HMD 120, such as in a smartphone or other mobile device.
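As an aside, for a calibrated stereo pair the trigonometric analysis mentioned above commonly reduces to the standard relation depth = focal length × baseline / disparity. The following Python sketch illustrates that relation only; the function name and the example numbers are made up for the example and do not come from the description.

```python
def stereo_depth(focal_px, baseline_m, x_left_px, x_right_px):
    """Classic stereo triangulation: depth = f * B / disparity.
    focal_px: focal length in pixels (from camera calibration);
    baseline_m: distance between the two cameras, in meters;
    x_left_px, x_right_px: column of the same feature in each image."""
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        return None  # feature at infinity or a bad match
    return focal_px * baseline_m / disparity

# Example: 700 px focal length, 6 cm baseline, 35 px disparity -> 1.2 m
print(stereo_depth(700.0, 0.06, 400.0, 365.0))
```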
At least a portion of the displays 140 is transparent, providing a transparent display area that enables a user to view not only images shown on the displays 140, but also physical objects in the user's surroundings. The level of transparency of the images shown on the display 140 may vary, depending on the desired functionality of the displays 140, settings of a graphical user interface (GUI) shown on the display 140, and/or a software application executed by the HMD 120 (e.g., a video, a map, an internet browser). Although the embodiments shown in FIGS. 1A-B illustrate a display 140 positioned in a glasses-like frame, other technologies capable of providing a transparent display area (e.g., a retinal projector or other optical system) can be utilized in other embodiments.

Furthermore, the camera(s) 150 (e.g., outward-facing cameras) can capture images of the user's surroundings, including the user's hand 130 and/or other objects that can be controlled by the user 110 to provide input to the HMD 120. The cameras can include other types of sensors that provide images and/or other information to a processing unit that enables the HMD 120 to track a physical object (e.g., POI) in front of the user. In one embodiment, the HMD 120 can employ a camera 150 capable of determining depth (e.g., stereo camera) to track an object's depth as well. According to one embodiment, the depth camera can be utilized when using the VOI to display multiple augmentations, as shown in FIG. 7.

The camera(s) 150 can have a field of view that enables the HMD 120 to track an object (e.g., hand gestures) that appears within the transparent display area from the perspective of the user. Embodiments may switch to a low-power mode when the physical object is not within the transparent display area from the perspective of the user 110. In some embodiments, the camera(s) 150 can have a field of view that is broader than the transparent display area, to allow the HMD 120 to begin executing and/or scale up object-tracking algorithms when the HMD 120 determines the physical object is approaching the transparent display area (from the perspective of the user 110).

Selecting a Specific AR Target from a Plurality of AR Targets

FIG. 2 is a flow diagram illustrating an embodiment of a method 200 of selecting an augmented reality (AR) object on a head-mounted display (HMD). At block 205, the HMD defines a region-of-interest (ROI) based on a user's gesture.

A detection algorithm can be used to detect a user's gesture. The HMD can detect a predetermined gesture of a user and define the ROI based on the user's gesture. For example, a user's gesture can include pointing the index and thumb fingers of both hands in the orthogonal direction to create a rectangular shape, as illustrated in FIGS. 3A and 3B. In other instances, the user's gesture can include a fist, an open hand, pointing with finger(s), a hand rotation, a wave, a movement of one or more fingers, and any combination thereof.

Additionally, according to another embodiment, a subsequent user gesture can occur after the ROI is defined. For example, the gesture may include drawing the hands of the user apart/closer for resizing the ROI.
In other instances, the subsequent user gesture may include a tap gesture for taking a picture, a push gesture for guiding an AR object across the screen, a flick gesture for moving the AR object, a turn gesture for rotating the AR object, a grab/pull gesture for zoom operations, or a swipe gesture for scrolling through media.

As indicated above, the transparent display area can be at least a portion of a display and/or display means of an HMD 120 configured to allow a user to view images shown on the display as well as physical objects in the user's surroundings. The user can select a ROI based on the user's hand gesture. FIG. 3A further illustrates how the HMD 120 identifies the ROI.

At block 210, a shape outlining the ROI within the transparent display area is defined and displayed to the user. The shape can be, for example, a rectangular overlay outlining the ROI on the HMD 120. The shape can give the user visual feedback of the ROI's location. The shape itself may or may not be highlighted and/or otherwise indicated in the transparent display area. Defining the shape may be performed, for example, by a processing unit, memory, and/or other computing means, which can be part of a computing system incorporated into and/or communicatively coupled with the HMD. The shape may be displayed as extending from a display element, for example from the edge of the display screen. Moreover, as discussed in more detail below, the shape may be associated with interactive elements, digital content, and/or physical objects in an AR or other software application. Such a shape may change in size, shape, and location within the display area as, for example, a user moves in relation to a physical object. As a result, the highlights showing the shape associated with the selected object(s) may or may not be displayed at all times.

The various embodiments enable a HMD 120 to render a virtual shape outlining the ROI on the display 140. This enables the HMD 120 to provide an augmented reality experience that can facilitate interactions with a computing device and real-time collaboration and feedback from the user.

In some embodiments, the HMD 120 can process recognized gestures as input commands to execute tasks on the HMD 120. Methods may be implemented in a computing device having a processor configured with processor-executable instructions to perform the operations. The processor may commence operation by receiving sensor data regarding an orientation of the HMD 120. Additionally, the processor may receive image data from the cameras 150, as well as data from other sensors included in the HMD 120 described above. Thus, the processor may obtain all information gathered by the HMD 120 regarding images and distances to objects within the field of view of the camera(s) 150. The processor may calculate distance and orientation data of objects in the ROI. These calculations may use well-known trigonometric methods when images are provided, direct measurements when distance sensors are used to provide distance data, and combinations of distance information obtained from sensors and calculated from images. Furthermore, the processor may process the image using distance sensor data, camera data, and the distance and orientation data.

Furthermore, the processor may track the user's hand 130 over a time interval to determine if a predetermined gesture is recognized. In this determination, the processor may determine whether any gestures are recognized in the field of view. If the gesture is recognized, the processor may substantially match the recognized gesture with an input command stored in memory of the HMD 120. The processor may execute the input command that corresponds to the recognized gesture.
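A sketch of that match-and-execute step, with an assumed table of gesture labels (the labels, commands, and dispatch function are illustrative; the gesture recognizer that produces the labels is outside this sketch):

```python
# Illustrative mapping from recognized gesture labels to stored input
# commands, as in the flow described above.
GESTURE_COMMANDS = {
    "two_hand_rectangle": "define_roi",
    "hands_closer":       "shrink_roi",
    "hands_apart":        "enlarge_roi",
    "tap":                "take_picture",
}

def dispatch_gesture(label, execute):
    """Match a recognized gesture against the command table and run it."""
    command = GESTURE_COMMANDS.get(label)
    if command is not None:
        execute(command)

dispatch_gesture("two_hand_rectangle", print)  # -> define_roi
```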
According to one embodiment, once the ROI is identified, a picture and/or video can be captured of the image inside the ROI when the user 110 disengages. For example, if the user continues to maintain the hand gesture after receiving the shape outlining the ROI, such as a camera-taking pose, the HMD 120 can be set up to capture the image outlined by the ROI after the user disengages. An example of a disengagement event can be when the user's hands quickly move away from the ROI. Therefore, unlike a conventional HMD, a user does not actually need to touch a button on the conventional HMD to take a picture.

According to another embodiment, the HMD 120 can capture a video of fixed buffer size. In this instance, the gesture mechanism can be used as a trigger to indicate to the HMD to save all the frames from the last predetermined duration (e.g., 30 seconds). For example, a user wearing the HMD 120 may have just seen a whale jump out of the ocean, and verbally request the HMD 120 to record the past 30 seconds. Therefore, the HMD 120 can have a functionality wherein it buffers the frames and, based on a recognized gesture or voice command, stores those frames into memory.

At block 215, after the ROI has been recognized, the HMD 120 displays to the user 110 a plurality of AR objects (e.g., Rita's Fashion 405 in FIG. 4) associated with targets inside the ROI. Additionally, the targets initially inside the ROI can be tracked as the user moves the ROI or the user's field of view. Means for tracking can include a camera, sensor, and/or other components configured to capture image and/or position measurements, communicatively connected with a processing unit, memory, and/or other computing means configured to determine a position based on the image and/or position measurements. Components for tracking the target can be calibrated with components for displaying images in the transparent display area, enabling the HMD to determine what the user sees.

Tracking means may engage any of a variety of tracking algorithms. Certain tracking algorithms may simply track a single point (e.g., a coordinate on the transparent display area) and/or region associated with the object and/or a location on the object (e.g., hand(s) of the user). More sophisticated tracking algorithms may track other features, such as the object's distance. Embodiments may be able to track multiple and/or different targets for AR interaction.

Once the plurality of AR objects are displayed to the user using block 215, the user may then want to select a specific AR object, as described in block 220, for further manipulation of the selected object. For example, when a user is wearing the HMD, advertisements and banners associated with a plurality of AR objects can pop up. It can be distracting and annoying if all the functionalities come alive at the same time. By reducing the size of the ROI, the user can reduce the number of AR objects for interaction.

At block 220, a user can reduce or enlarge the size of the ROI based on the movement of the user's hand, as further illustrated in FIGS. 3A-3B. By manipulating the ROI using real-time user feedback, the user 110 can select a specific AR object in real-time. Here again, means for reducing or enlarging the size of the ROI can include a processing unit, memory, and/or other computing means coupled to a display showing the ROI.
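One way blocks 220 and 225 can be realized is a simple geometric filter: as the ROI rectangle shrinks, only the AR targets whose anchor points remain inside it stay displayed. The Python below is a sketch under that assumption; coordinates are display pixels, and the target names and numbers merely echo the FIG. 4/5 example.

```python
def inside(roi, point):
    """roi: (x0, y0, x1, y1) rectangle; point: (x, y) anchor."""
    x0, y0, x1, y1 = roi
    px, py = point
    return x0 <= px <= x1 and y0 <= py <= y1

def visible_targets(roi, targets):
    """targets: list of (name, (x, y)) anchor points; returns names in ROI."""
    return [name for name, anchor in targets if inside(roi, anchor)]

targets = [("Rita's Fashion", (40, 60)), ("Starbucks", (200, 150)),
           ("Henry's Diner", (260, 180)), ("Cinema", (500, 90))]
print(visible_targets((0, 0, 640, 480), targets))      # full view: all four
print(visible_targets((150, 100, 350, 300), targets))  # reduced ROI: two left
```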
At block 225, the HMD 120, using the reduced-size ROI from block 220, selects a specific AR object from the plurality of AR objects in block 215. For example, using Qualcomm's Vuforia, AR targets can be displayed on an HMD display 140 by the HMD 120.

Augmented reality functionality can be used in countless scenarios to enable the user to learn more about the user's surroundings. For example, as later described in FIGS. 6 and 7, an HMD 120 can recognize a movie poster with AR associated with it; the HMD 120 can then display a virtual "play trailer" button, which can be activated with another hand gesture. Countless other scenarios are contemplated. Thus, techniques provided herein can be expanded to augmented reality scenarios, enabling interactions with elements of digital content that are tied, from the perspective of the user, to the selected object. Because these interactive elements are bound to selected objects in the user's surroundings, the corresponding ROI on the HMD's display 140 can move and scale relative to the selected object's position. Additionally, according to some embodiments, the selected AR object can then be further manipulated using the methods described in FIGS. 6 and 8.

An example of an AR platform is the Qualcomm Vuforia™ augmented reality platform, which can provide more information to the display 140 of the HMD and/or mobile device once a user has selected a target through the use of augmented reality. For example, Vuforia's computer vision functionality can recognize a variety of 2D and 3D visual targets, and display AR associated with the recognized targets. Therefore, a user can select a real-world target, and AR associated with the selected target may be displayed on the display 140 of the HMD 120. In some instances, advertising can jump off the printed page, or product packaging can come alive on retail shelves. Additionally, products themselves can provide enhanced interactivity to provide instructions.

Therefore, the AR can enhance the value of print media and advertising, consumer products and packaging, and educational materials. The AR associated with the selected target in block 225 can enable real-world interactivity. For example, AR can be associated with printed advertising, products and packaging, and in retail environments.

It should be appreciated that the specific steps illustrated in FIG. 2 provide an example of a method 200 of enabling user interaction with an HMD 120. Alternative embodiments may include alterations to the embodiments shown. For example, alternative embodiments may include defining the shape within the transparent display area, displaying the shape outlining the ROI, and displaying the AR(s) at different points during the method 200. Yet other embodiments may include performing actions to calibrate the ROI-defining, shape-displaying, and tracking components with the display components of the HMD 120. Furthermore, additional features may be added, removed, or combined depending on the particular applications. One of ordinary skill in the art would recognize many variations, modifications, and alternatives.

Defining a ROI Based on a User's Gesture

FIGS. 3A and 3B illustrate a touch-less method of interacting with HMDs to select a specific AR object 320.
By detecting a natural human body gesture, the HMD 120 can be triggered to capture the image or video seen through the glasses. As illustrated in FIG. 3A, the hands are placed in such a way that a rectangle is formed with the fingers and thumbs being the edges. The area inside this rectangle can be treated as the Region-of-Interest (ROI) for the user 110.

FIG. 3A illustrates a method for defining the ROI 305 based on a user's gesture, as previously described in block 205. For example, the gesture can involve pointing both hands' index 350 and thumb 355 fingers in the orthogonal direction.

As illustrated in FIG. 3A, the gesture can include pointing the index and thumb fingers of both hands in the orthogonal direction to create a rectangle. In one instance, the gesture can include using the thumb 355 and fingers 360 to form a C-shape. In other instances, the gesture can include a fist, an open hand, pointing with finger(s), a hand rotation, a wave, a movement of one or more fingers, and any combination thereof.

FIG. 3B illustrates a method for reducing or enlarging the size of the ROI 305 to select a specific AR object 320. For example, the gesture can involve bringing the hands together to reduce the size of the ROI. Alternatively, the gesture can involve moving the hands apart to enlarge the size of the ROI.

According to other embodiments, once the ROI has been defined, the user can manipulate the AR objects inside the ROI using a subsequent user gesture. For example, the gesture may include drawing the hands of the user apart/closer for resizing the ROI. In other instances, the subsequent user gesture may include a tap gesture for taking a picture, a push gesture for guiding an AR object across the screen, a flick gesture for moving the AR object, a turn gesture for rotating the AR object, a grab/pull gesture for zoom operations, a swipe gesture for scrolling through media, and any combination thereof.

The HMD 120 can track the user's hand 130, enabling the user 110 to interact and define a ROI 305. For example, the user can make a gesture in various ways, such as by moving the user's hand 130 to form a rectangle, engaging in a camera-taking pose, performing a predetermined movement, or keeping a hand shape in the element's proximity for a threshold amount of time.

An example of a camera-taking pose is for the user 110 to create two C-shapes using his hands. However, with the camera-taking pose, the hands can only be moved apart horizontally, not diagonally; therefore, resizing cannot redefine the ROI as a larger rectangular area, only a wider one.

In another embodiment, the ROI can be based on one hand in an L-shape, with the ROI defined in relation to the distance between the center of the camera view and the user's hand 130. In this example, the user's field of view in the HMD 120 and the position of the corner of the "L" defined by the hand can be used to define the ROI. Feedback from the HMD 120 can be a small red dot that marks the center of the display 140 and a rectangle that marks the border of the ROI. The ROI can get larger or smaller depending on the position of the user's hand 130 relative to the center of the field of view.

The HMD 120, after recognizing the gesture, can display to the user a shape 315 on the HMD. The shape 315 (e.g., rectangular overlay) can outline the ROI 305. The user can adjust the ROI 305 in real-time by moving the hand(s) inward or outward.
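A sketch of how the ROI rectangle might be derived from the tracked fingertip positions of the two-hand frame gesture (the hand tracker returning 2D fingertip coordinates in display space is assumed and not shown; the function name and example coordinates are illustrative):

```python
def roi_from_fingertips(points):
    """points: iterable of (x, y) fingertip/thumb positions from both hands,
    in display coordinates. Returns the axis-aligned rectangle they outline
    as (x0, y0, x1, y1)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs), max(ys))

# Left-hand thumb/index and right-hand thumb/index roughly framing a box:
print(roi_from_fingertips([(120, 90), (118, 300), (430, 95), (428, 305)]))
# -> (118, 90, 430, 305)
```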
Depending on desired functionality, if only one hand is used, the shape displayed on the HMD 120 can be anchored to a dot on the display (e.g., a dot in the middle of the display) so that the user can adjust the shape accordingly without needing both hands to outline the border of the shape.

For example, the HMD 120 identifies the ROI 305 (e.g., rectangle) by tracing the fingertips or the outline formed by the shape of the hands. Once the ROI 305 is detected, the HMD 120 displays the shape 315 to the user as an overlaid projection on the HMD glasses, as previously mentioned in block 210. Then the user can adjust the size of the ROI, as previously mentioned in block 220, by moving the hands before disengaging, as shown in FIG. 3B.

FIG. 3B illustrates how the selected ROI can be adjusted based on real-time user input. For example, the ROI 305 is initially selected by the user in FIG. 3A. In FIG. 3B, the AR object 320 can be selected by moving the hands closer together or farther apart. The HMD, using object recognition techniques, can determine that the ROI 305 is shrinking or enlarging based on the user's hand movement.

According to another embodiment, the system can use hand gestures and/or voice commands to activate the camera on the HMD. Once the reduced-sized ROI is identified, a picture and/or video can be captured of the image inside the ROI 305. Alternatively, the gesture mechanism and voice command can be used as a trigger to indicate to the HMD to save all the frames from the last predetermined duration (e.g., 30 seconds).

Furthermore, a disengagement event can occur when the hands are sufficiently away from the field of view of the target. The disengagement event can also occur if the fingers and thumbs are closed together, or upon a voice-driven command. The disengagement event can signal to the HMD 120 that detection and selection of the ROI 305 is complete. Therefore, according to one embodiment, the HMD 120 can turn off some of the sensors and go into a low-power mode.

A disengaging event can also include the hands not being close enough. The HMD 120 can track the hands and the shape formed by the hands' edges continuously; therefore, the HMD 120 can determine when the hands are apart by a predetermined distance. If the HMD 120 determines that the hands have been moved apart by more than a threshold, it can assume that disengagement has occurred. For example, a user can move his hands apart so that they are not close to the ROI.

However, the HMD 120 can implement a feature that distinguishes whether the user is trying to make the region of interest larger or trying to disengage when the hands move apart. For example, the HMD 120 can distinguish between resizing the ROI and a disengaging event based on the hands' location in relation to the ROI and the speed of the hand movement.

Alternatively, closing just one hand can be a disengaging event. For example, when the HMD 120 determines that it cannot detect an octagonal frame from the outline of the hands, the HMD 120 can assume that the user disengaged.
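The resize-versus-disengage decision just described can be sketched as follows; the two cues (the hands' distance from the ROI and their speed) come from the text, while the thresholds and names are illustrative assumptions.

```python
import math

DISENGAGE_DISTANCE = 150.0  # px beyond the ROI border (illustrative)
DISENGAGE_SPEED = 800.0     # px/s (illustrative)

def classify_hand_motion(hand_pos, hand_speed, roi):
    """Return 'disengage' if the hand is flung well away from the ROI at
    high speed; otherwise treat the motion as a ROI resize."""
    x0, y0, x1, y1 = roi
    dx = max(x0 - hand_pos[0], 0, hand_pos[0] - x1)
    dy = max(y0 - hand_pos[1], 0, hand_pos[1] - y1)
    outside = math.hypot(dx, dy)  # distance from the hand to the rectangle
    if outside > DISENGAGE_DISTANCE and hand_speed > DISENGAGE_SPEED:
        return "disengage"
    return "resize"

print(classify_hand_motion((700, 400), 1000.0, (150, 100, 350, 300)))
# -> disengage
```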
Displaying a Plurality of AR Objects

FIG. 4 illustrates an example of a user's view through the display 140 of an HMD 120. As illustrated in FIG. 4, a plurality of AR objects associated with targets inside the ROI 305 is shown on the display 140. As previously mentioned in block 215, after a user selects a ROI 305, the HMD 120 displays to the user a plurality of AR objects associated with targets (e.g., points-of-interest (POIs)) inside the ROI. Additionally, the targets initially inside the ROI can be tracked as the user moves the ROI 305 or the user's field of view.

Once the ROI 305 has been defined, the HMD 120 can display to the user the targets which have associated AR applications. Once the HMD has detected the ROI 305 and/or displayed to the user a shape 315 outlining the ROI, it can initiate the associated AR applications within the ROI 305. For example, in FIG. 4, the user is displayed five objects (e.g., POIs) with associated AR applications. In this example, the following five objects are inside the ROI: Rita's Fashion 405, which is 50 meters away; Starbucks 410; Henry's Diner 415; Henry's Diner 420; and the cinema 425. According to one embodiment, the display 140 can also show other relevant information about the objects (e.g., reviews, distance from the user).

In some instances, advertisers can register their product banners/logos/wrappers as AR targets. For example, using Qualcomm's Vuforia, targets can have additional information that can be displayed on an HMD display 140 using AR. Therefore, when a user reduces the ROI 305 to select an AR target 320, an AR animation associated with the selected target can give extra information.

FIG. 5 illustrates an example of the ROI 305 being reduced by the user, which results in fewer AR targets or POIs being shown on the display 140. As described in block 220, a user can reduce the size of the ROI 305 using hand gestures (e.g., by moving the hands closer). Using the reduced-sized ROI, the display 140 in FIG. 5 now shows only three POIs (Starbucks 505, Henry's Diner 510, Henry's Diner 515) inside the ROI. By reducing the size of the ROI, the number of targets inside the ROI has been reduced from five POIs in FIG. 4 to three POIs in FIG. 5. According to another embodiment, the user can further reduce the size of the ROI so that only one AR target 320 is inside the ROI 305.

According to some embodiments, a selected AR target 320 may have multiple augmentations associated with it. During this scenario, a user can use volume-of-interest (VOI) techniques, as further described in FIGS. 6 and 7, to select a specific layer of augmentation for the selected AR target 320.

Using Volume of Interest when a Selected Target has Multiple Augmentations

FIG. 6 is a flow diagram illustrating an embodiment of a method 600 of defining a volume-of-interest (VOI) using hand gestures along the direction of the target (e.g., the z-axis). For example, the user can browse through the geo-located points or different augmentations for the same target by scrolling one or more hands along the z-axis. As shown in FIG. 2, a user has reduced the size of the ROI to select a target. After selecting a target, in some instances, the selected target may have multiple augmentations associated with it.

At block 605, the HMD has already defined a ROI based on a user's gesture, as previously described in FIG. 2. By manipulating the ROI using real-time user feedback, the user can select a specific AR target in real-time, similar to block 225. Moreover, if multiple augmentations are associated with a given AR target, a user can specify a specific layer using the method described in block 610.

At block 610, a user can browse through the multiple augmentations by scrolling in the direction (e.g., z-axis) of the selected target.
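A sketch of this depth-to-layer mapping (block 615 below and FIG. 7): the hand's distance along the z-axis is quantized into bands, one per augmentation layer. The layer names echo the movie-poster example; the depth bands and defaults are illustrative assumptions, not values from the description.

```python
LAYERS = ["title_and_reviews", "trailer", "showtimes_and_tickets", "images"]

def augmentation_for_depth(hand_z_m, near_z_m=0.2, far_z_m=0.6):
    """hand_z_m: hand distance from the HMD toward the target, in meters.
    Moving the hand closer to the target (larger z) selects a deeper layer."""
    z = min(max(hand_z_m, near_z_m), far_z_m)     # clamp to the VOI
    band = (z - near_z_m) / (far_z_m - near_z_m)  # normalize to 0..1
    index = min(int(band * len(LAYERS)), len(LAYERS) - 1)
    return LAYERS[index]

print(augmentation_for_depth(0.25))  # -> title_and_reviews
print(augmentation_for_depth(0.55))  # -> images
```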
For example, a user can select a specific layer of augmentation for an AR target 320 and browse through the layers by scrolling in the direction of the target using the VOI, as illustrated in FIG. 7.

At block 615, the system displays to the user different augmentations associated with the selected target based on the user's hand position (e.g., along the z-axis, as illustrated in FIG. 7).

FIG. 7 illustrates the real-time interaction and display of different augmentations based on the user's hand position. For example, a user can select a movie poster using the method described in FIG. 2. In some instances, there may be multiple augmentations associated with the selected movie poster. Using the method described in FIG. 6, the user can browse through the different augmentations associated with the movie poster.

As illustrated in the different positions in FIG. 7, different augmentations can be displayed to the user based on the user's hand position. For example, when the user's hand position is at 705, the display 140 of the HMD 120 shows the name of the movie and the reviews. Position 710 can occur when the user's hand moves closer in the direction of the target 720 (e.g., along the z-axis) in relation to position 705. At position 710, in this example, the augmentation for playing the trailer of the movie is shown on the display 140. As the user's hand moves closer to the target, at position 715, the show times of the movie and the option to purchase tickets online can be displayed. Finally, in this example, at the position closest to the target 720, the HMD 120 can display images associated with the movie.

As illustrated by the example in FIGS. 6 and 7, the HMD 120 can use hand positions and/or hand gestures to define a VOI. Based on the VOI, the HMD 120 can display different augmentations associated with a selected target. Alternatively, once a target is selected, the HMD 120 can implement other modes based on the user's preferences or predetermined gesture-recognized functions, as illustrated in FIG. 8.

Example of Other Implementations Once the Target Is Selected

FIG. 8 is a flow diagram illustrating an embodiment of a method 800 of initiating a smart application based on the selected ROI. One example of a smart application can include a visual translator. For example, the image captured inside the ROI can be fed to a visual search system or an optical character recognition (OCR) system. In this example, the OCR can be used to determine and recognize the text from the image. Based on the recognized characters, a translator can automatically translate the text, and the results can be displayed on the display 140.

To illustrate this embodiment, at block 805, the user can select a specific target or text for translation. Similar to the method described in FIG. 2, the user can use hand gestures to outline a ROI 305 for selecting text for translation.

At block 810, a user can request a translation of the specific text in the ROI 305. For example, the user can use voice commands or predetermined hand gestures to initiate the translation. Alternatively, the HMD 120 can automatically recognize a foreign language and initiate a translation without a request from the user.

At block 815, the HMD 120 can use a visual search system or an OCR to recognize the text in the ROI 305. Conventional methods for text recognition can be utilized by the HMD 120.

At block 820, the HMD 120 can translate the recognized text to a language specified by the user and show it on the display 140. For example, the language can be a default language, predetermined based on prior usage, or specified in real-time by the user. Additionally, the HMD 120 can read the translated text out loud to the user 110. Conventional methods for text translation can be utilized by the HMD 120.
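The whole FIG. 8 pipeline (blocks 805-820) can be sketched as a crop-recognize-translate-display chain. The ocr(), translate(), and display() callables below stand in for whichever OCR engine, translation service, and rendering path the HMD integrates; they are assumptions for the example, not real APIs.

```python
def translate_roi(frame, roi, target_lang, ocr, translate, display):
    """frame: image as a list of pixel rows; roi: (x0, y0, x1, y1).
    Crops the ROI, recognizes its text, translates it, and displays it."""
    x0, y0, x1, y1 = roi
    crop = [row[x0:x1] for row in frame[y0:y1]]  # ROI pixels only (block 805)
    text = ocr(crop)                             # recognize text (block 815)
    if not text:
        return None                              # nothing to translate
    result = translate(text, target_lang)        # translate (block 820)
    display(result)                              # show on the HMD display 140
    return result
```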
FIG. 8 illustrates an example of one implementation once the target is selected using hand gestures. Alternative embodiments may include alterations to the embodiments shown. For example, alternative embodiments may include the HMD automatically recognizing a foreign language and translating the text without a request from the user. Furthermore, additional features may be added, removed, or combined depending on the particular applications. One of ordinary skill in the art would recognize many variations, modifications, and alternatives.

According to another embodiment, the ROI can be utilized to narrow the field of view for video sharing. The HMD 120 can use the ROI as the shared view during a collaboration mode. When a user wants to share a part of his field of view, the user can select a ROI 305 using the method described in FIG. 2. During a collaboration mode, the HMD 120 can treat the ROI 305 as the shared view for a video based communication, so only part of his field of view (e.g., presentation, document) is shared with remote users. According to one embodiment, the shared view can be concluded once a disengagement event occurs. Alternative embodiments may include alterations to the embodiments shown. Furthermore, additional features may be added, removed, or combined depending on the particular applications.

FIG. 9 illustrates an example of a computing system in which one or more embodiments may be implemented.

The computer system 900 may further include (and/or be in communication with) one or more non-transitory storage devices 925, which can comprise, without limitation, local and/or network accessible storage, and/or can include, without limitation, a disk drive, a drive array, an optical storage device, a solid-state storage device, such as a random access memory ("RAM"), and/or a read-only memory ("ROM"), which can be programmable, flash-updateable, and/or the like. Such storage devices may be configured to implement any appropriate data stores, including without limitation, various file systems, database structures, and/or the like. For example, the storage devices 925 can be used to buffer video captured from the camera 150 of the HMD 120, in the event the user wants to capture all the frames from the last predetermined duration.
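The frame-buffering use of the storage devices 925 amounts to a rolling window of recent frames. A minimal Python sketch using only the standard library (the frame rate, window length, and names are illustrative):

```python
from collections import deque

class FrameBuffer:
    """Keep only the most recent fps * seconds frames; older frames are
    dropped automatically, so a gesture or voice trigger can persist
    'the last 30 seconds' at any moment."""
    def __init__(self, fps=30, seconds=30):
        self.frames = deque(maxlen=fps * seconds)

    def push(self, frame):
        self.frames.append(frame)   # called once per captured frame

    def save_last(self, sink):
        for frame in list(self.frames):
            sink(frame)             # persist to storage / smartphone

buf = FrameBuffer(fps=2, seconds=3)  # tiny window for the example
for i in range(10):
    buf.push(f"frame{i}")
buf.save_last(print)                 # prints frame4 .. frame9
```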
The communications subsystem 930 can be used to link the HMD 120 to the user's smartphone.The computer system 900 also can comprise software elements, shown as being currently located within the working memory 935, including an operating system 940, device drivers, executable libraries, and/or other code, such as one or more application(s) 945, which may comprise computer programs provided by various embodiments, and/or may be designed to implement methods, and/or configure systems, provided by other embodiments, as described herein. Merely by way of example, a portion of one or more procedures described with respect to the method(s) discussed above, such as the method 200 described in relation to FIG. 2, might be implemented as code and/or instructions executable by a computer (and/or a processing unit within a computer); in an aspect, then, such code and/or instructions can be used to configure and/or adapt a general purpose computer (or other device) to perform one or more operations in accordance with the described methods.A set of these instructions and/or code might be stored on a non-transitory computer-readable storage medium, such as the storage device(s) 925 described above. In some cases, the storage medium might be incorporated within a computer system, such as computer system 900. In other embodiments, the storage medium might be separate from a computer system (e.g., a removable medium, such as an optical disc), and/or provided in an installation package, such that the storage medium can be used to program, configure, and/or adapt a general purpose computer with the instructions/code stored thereon. These instructions might take the form of executable code, which is executable by the computer system 900 and/or might take the form of source and/or installable code, which, upon compilation and/or installation on the computer system 900 (e.g., using any of a variety of generally available compilers, installation programs, compression/decompression utilities, etc.), then takes the form of executable code.It will be apparent to those skilled in the art that substantial variations may be made in accordance with specific requirements. For example, customized hardware might also be used, and/or particular elements might be implemented in hardware, software (including portable software, such as applets, etc.), or both. Further, connection to other computing devices such as network input/output devices may be employed.As mentioned above, in one aspect, some embodiments may employ a computer system (such as the computer system 900) to perform methods in accordance with various embodiments of the invention. According to a set of embodiments, some or all of the procedures of such methods are performed by the computer system 900 in response to processor 910 executing one or more sequences of one or more instructions (which might be incorporated into the operating system 940 and/or other code, such as an application program 945) contained in the working memory 935. Such instructions may be read into the working memory 935 from another computer-readable medium, such as one or more of the storage device(s) 925. Merely by way of example, execution of the sequences of instructions contained in the working memory 935 might cause the processor(s) 910 to perform one or more procedures of the methods described herein. Additionally or alternatively, portions of the methods described herein may be executed through specialized hardware. 
Merely by way of example, a portion of one or more procedures described with respect to the method(s) discussed above, such as the method 200 described in relation to FIG. 2, might be implemented by the processor 910 in the HMD 120. Alternatively, the HMD 120 can be linked to a smartphone via the communications subsystem 930, and the method of FIG. 2 can be implemented by the processor 910 in the smartphone.The terms "machine-readable medium" and "computer-readable medium," as used herein, refer to any medium that participates in providing data that causes a machine to operate in a specific fashion. In an embodiment implemented using the computer system 900, various computer-readable media might be involved in providing instructions/code to processor(s) 910 for execution and/or might be used to store and/or carry such instructions/code. In many implementations, a computer-readable medium is a physical and/or tangible storage medium. Such a medium may take the form of non-volatile media or volatile media. Non-volatile media include, for example, optical and/or magnetic disks, such as the storage device(s) 925. Volatile media include, without limitation, dynamic memory, such as the working memory 935.Common forms of physical and/or tangible computer-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, or any other magnetic medium, a CD-ROM, any other optical medium, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, or any other medium from which a computer can read instructions and/or code.Various forms of computer-readable media may be involved in carrying one or more sequences of one or more instructions to the processor(s) 910 for execution. Merely by way of example, the instructions may initially be carried on a magnetic disk and/or optical disc of a remote computer. A remote computer might load the instructions into its dynamic memory and send the instructions as signals over a transmission medium to be received and/or executed by the computer system 900.The communications subsystem 930 (and/or components thereof) generally will receive signals, and the bus 905 then might carry the signals (and/or the data, instructions, etc. carried by the signals) to the working memory 935, from which the processor(s) 910 retrieves and executes the instructions. The instructions received by the working memory 935 may optionally be stored on a non-transitory storage device 925 either before or after execution by the processor(s) 910.The methods, systems, and devices discussed above are examples. Various configurations may omit, substitute, or add various procedures or components as appropriate. For instance, in alternative configurations, the methods may be performed in an order different from that described, and/or various stages may be added, omitted, and/or combined. Also, features described with respect to certain configurations may be combined in various other configurations. Different aspects and elements of the configurations may be combined in a similar manner. Also, technology evolves and, thus, many of the elements are examples and do not limit the scope of the disclosure or claims.Specific details are given in the description to provide a thorough understanding of example configurations (including implementations). However, configurations may be practiced without these specific details. 
For example, well-known circuits, processes, algorithms, structures, and techniques have been shown without unnecessary detail in order to avoid obscuring the configurations. This description provides example configurations only, and does not limit the scope, applicability, or configurations of the claims. Rather, the preceding description of the configurations will provide those skilled in the art with an enabling description for implementing described techniques. Various changes may be made in the function and arrangement of elements without departing from the spirit or scope of the disclosure.Also, configurations may be described as a process which is depicted as a flow diagram or block diagram. Although each may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be rearranged. A process may have additional steps not included in the figure. Furthermore, examples of the methods may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware, or microcode, the program code or code segments to perform the necessary tasks may be stored in a non-transitory computer-readable medium such as a storage medium. Processors may perform the described tasks.Having described several example configurations, various modifications, alternative constructions, and equivalents may be used without departing from the spirit of the disclosure. For example, the above elements may be components of a larger system, wherein other rules may take precedence over or otherwise modify the application of the invention. Also, a number of steps may be undertaken before, during, or after the above elements are considered. Accordingly, the above description does not bind the scope of the claims.In the following, further embodiments are described to facilitate the understanding of the invention:1. A method for selecting an Augmented Reality (AR) object on a device, comprising:obtaining, by the device, a first input associated with a first gesture performed by a user of the device;defining, based on the first input, a Region-of-Interest (ROI) within one or more images of a scene, wherein the ROI includes an object within the scene, the object being associated with a plurality of displayable AR objects;displaying, by the device, a first AR object, the first AR object being selected for display from the plurality of displayable AR objects associated with the object;obtaining, by the device, a second input associated with a second gesture performed by the user of the device, the second gesture being performed in a third dimension relative to a two-dimensional plane of the ROI;selecting, by the device based on the second input, a second AR object for display from the plurality of displayable AR objects associated with the object; and displaying, by the device, the second AR object.2. The method of embodiment 1, wherein the second AR object is selected for display from the plurality of displayable AR objects when the second gesture is made at a first depth relative to the two-dimensional plane.3. 
The method of embodiment 2, further comprising:obtaining, by the device, a third input associated with a third gesture performed by the user of the device, the third gesture being performed in the third dimension relative to the two-dimensional plane;selecting, by the device based on the third input, a third AR object for display from the plurality of displayable AR objects associated with the object, wherein the third AR object is selected when the third gesture is made at a second depth relative to the two-dimensional plane, the first depth being different from the second depth; and displaying, by the device, the third AR object.4. The method of embodiment 1, wherein the second gesture includes movement of at least one hand of the user in the third dimension.5. The method of embodiment 4, wherein the at least one hand of the user is observed in a field of view of a camera of the device during the second gesture.6. The method of embodiment 1, wherein the first AR object and the second AR object are displayed in the ROI at a same time, and wherein a position of the first AR object is displayed relative to a position of the second AR object based on a depth of the second gesture.7. The method of embodiment 6, wherein the position of the first AR object is displayed relative to the position of the second AR object based on a change in depth between the first gesture and the second gesture.8. The method of embodiment 1, wherein the first gesture defining the ROI is performed by at least one hand of the user.9. The method of embodiment 8, wherein the second gesture for selecting the second AR object for display includes moving the at least one hand in the third dimension.10. An apparatus for selecting an Augmented Reality (AR) object, the apparatus comprising:a memory;a processor coupled to the memory and configured to:obtain a first input associated with a first gesture performed by a user;define, based on the first input, a Region-of-Interest (ROI) within one or more images of a scene, wherein the ROI includes an object within the scene, the object being associated with a plurality of displayable AR objects;select a first AR object for display from the plurality of displayable AR objects associated with the object;obtain a second input associated with a second gesture performed by a user, the second gesture being performed in a third dimension relative to a two-dimensional plane of the ROI; and select, based on the second input, a second AR object for display from the plurality of displayable AR objects associated with the object.11. The apparatus of embodiment 10, wherein the second AR object is selected for display from the plurality of displayable AR objects when the second gesture is made at a first depth relative to the two-dimensional plane.12. The apparatus of embodiment 11, wherein the processor is further configured to:obtain a third input associated with a third gesture performed by the user, the third gesture being performed in the third dimension relative to the two-dimensional plane; and select, based on the third input, a third AR object for display from the plurality of displayable AR objects associated with the object, wherein the third AR object is selected when the third gesture is made at a second depth relative to the two-dimensional plane, the first depth being different from the second depth.13. The apparatus of embodiment 10, wherein the second gesture includes movement of at least one hand of the user in the third dimension.14. 
The apparatus of embodiment 13, wherein the at least one hand of the user is observed in a field of view of a camera during the second gesture.15. The apparatus of embodiment 10, wherein the first AR object and the second AR object are displayed in the ROI at a same time, and wherein a position of the first AR object is displayed relative to a position of the second AR object based on a depth of the second gesture.16. The apparatus of embodiment 15, wherein the position of the first AR object is displayed relative to the position of the second AR object based on a change in depth between the first gesture and the second gesture.17. The apparatus of embodiment 10, wherein the first gesture defining the ROI is performed by at least one hand of the user.18. The apparatus of embodiment 17, wherein the second gesture for selecting the second AR object for display includes moving the at least one hand in the third dimension.19. The apparatus of embodiment 11, wherein the apparatus comprises a mobile device.20. The apparatus of embodiment 11, further comprising a display configured to display the first AR object and the second AR object.21. The apparatus of embodiment 11, further comprising a camera for capturing the one or more images of the scene.22. A non-transitory computer-readable medium having stored thereon instructions that, when executed by one or more processors, cause the one or more processors to:obtain a first input associated with a first gesture performed by a user;define, based on the first input, a Region-of-Interest (ROI) within one or more images of a scene, wherein the ROI includes an object within the scene, the object being associated with a plurality of displayable AR objects;select a first AR object for display from the plurality of displayable AR objects associated with the object;obtain a second input associated with a second gesture performed by the user, the second gesture being performed in a third dimension relative to a two-dimensional plane of the ROI; and select, based on the second input, a second AR object for display from the plurality of displayable AR objects associated with the object.23. The non-transitory computer-readable medium of embodiment 22, wherein the second AR object is selected for display from the plurality of displayable AR objects when the second gesture is made at a first depth relative to the two-dimensional plane.24. The non-transitory computer-readable medium of embodiment 23, further comprising instructions that, when executed by the one or more processors, cause the one or more processors to:obtain a third input associated with a third gesture performed by the user, the third gesture being performed in the third dimension relative to the two-dimensional plane; and select, based on the third input, a third AR object for display from the plurality of displayable AR objects associated with the object, wherein the third AR object is selected when the third gesture is made at a second depth relative to the two-dimensional plane, the first depth being different from the second depth.25. The non-transitory computer-readable medium of embodiment 22, wherein the second gesture includes movement of at least one hand of the user in the third dimension.26. The non-transitory computer-readable medium of embodiment 25, wherein the at least one hand of the user is observed in a field of view of a camera during the second gesture.27. 
The non-transitory computer-readable medium of embodiment 22, wherein the first AR object and the second AR object are displayed in the ROI at a same time, and wherein a position of the first AR object is displayed relative to a position of the second AR object based on a depth of the second gesture.28. The non-transitory computer-readable medium of embodiment 27, wherein the position of the first AR object is displayed relative to the position of the second AR object based on a change in depth between the first gesture and the second gesture.29. The non-transitory computer-readable medium of embodiment 22, wherein the first gesture defining the ROI is performed by at least one hand of the user.30. The non-transitory computer-readable medium of embodiment 29, wherein the second gesture for selecting the second AR object for display includes moving the at least one hand in the third dimension. |
Techniques and mechanisms for identifying a power state to be provided with an integrated circuit (IC). In an embodiment, evaluator circuitry of a system-on-chip is programmable based on multiple criteria which are each for a different respective power mode. Programming of the evaluator circuitry enables concurrent evaluations each to determine, for a different respective power mode, whether a detected state of the IC is able to accommodate said power mode. Results of the evaluations are communicated, in parallel with each other, to circuitry which selects one such power mode based on relative priorities of the power modes with respect to each other. In another embodiment, the evaluator circuitry comprises an array of circuit cells which are configurable each to perform a different respective evaluation based on a corresponding combination of a test condition and a detected condition of the IC. |
CLAIMSWhat is claimed is:1. A system-on-chip (SOC) comprising:first circuitry to:receive first signals while the first circuitry is programmed based on multiple criteria which each correspond to a different respective power mode of multiple power modes of the SOC, wherein the first signals indicate a state of the SOC; and concurrently perform multiple evaluations each based on the state of the SOC and a different respective criteria of the multiple criteria;second circuitry coupled to receive respective results of the multiple evaluations in parallel from the first circuitry, the second circuitry to select a first power mode based on the results; and third circuitry, responsive to the second circuitry, to generate second signals to transition the SOC to the first power mode.2. The SOC of claim 1, wherein the state comprises multiple conditions, wherein the first signals each correspond to a different respective condition of the multiple conditions, wherein the first circuitry comprises an array of circuit cells, and wherein the first circuitry is to receive first signals while the first circuitry is programmed to correspond circuit cells of the array each to a different respective combination of a test condition and one of the multiple conditions.3. The SOC of claim 2, wherein rows of the array each correspond to a different respective criteria of the multiple criteria, and wherein columns of the array each correspond to a different respective condition of the multiple conditions.4. The SOC of claim 2, wherein a first cell of the array is to be programmed, comprising the first cell to provide a selection of one of:a first mode to provide a multi-bit evaluation functionality of the first cell;a second mode to provide a single-bit evaluation functionality of the first cell; or a third mode to disable both the multi-bit evaluation functionality and the single-bit evaluation functionality.5. The SOC of any of claims 1 or 2, wherein the first circuitry is to be reprogrammed with updated criteria information.6. The SOC of any of claims 1 or 2, further comprising fourth circuitry to:receive sensor signals while the fourth circuitry is programmed with a definition of a format of the first signals; and generate the first signals based on the sensor signals and the definition of the format.7. The SOC of any of claims 1 or 2, wherein the second circuitry is to receive the respective results while the second circuitry is programmed with rank information which indicates relative priorities of the multiple power modes with respect to each other, wherein the second circuitry is to select the first power mode further based on the relative priorities.8. The SOC of claim 7, wherein the results are to identify a plurality of power modes as candidate power modes, wherein the second circuitry is to select the first power mode based on a determination that, of the candidate power modes, the first power mode is a highest priority power mode.9. The SOC of any of claims 1 or 2, wherein the third circuitry is to be programmed with mode transition information which indicates action sequences each to transition to a respective power mode, wherein the second signals are generated based on the mode transition information.10. The SOC of any of claims 1 or 2, wherein the first circuitry is coupled to be programmed with a Basic Input/Output System process of the SOC.11. 
A system comprising:an integrated circuit (IC) comprising:first circuitry to:receive first signals while the first circuitry is programmed based on multiple criteria which each correspond to a different respective power mode of multiple power modes of the IC, wherein the first signals indicate a state of the IC; and
concurrently perform multiple evaluations each based on the state of the IC and a different respective criteria of the multiple criteria; second circuitry coupled to receive respective results of the multiple evaluations in parallel from the first circuitry, the second circuitry to select a first power mode based on the results; and third circuitry, responsive to the second circuitry, to generate second signals to transition the IC to the first power mode; and a display device coupled to the IC, the display device to display an image based on data communicated with the IC.12. The system of claim 11, wherein the state comprises multiple conditions, wherein the first signals each correspond to a different respective condition of the multiple conditions, wherein the first circuitry comprises an array of circuit cells, and wherein the first circuitry is to receive first signals while the first circuitry is programmed to correspond circuit cells of the array each to a different respective combination of a test condition and one of the multiple conditions.13. The system of any of claims 11 or 12, wherein the IC further comprises fourth circuitry to:receive sensor signals while the fourth circuitry is programmed with a definition of a format of the first signals; and generate the first signals based on the sensor signals and the definition of the format.14. The system of any of claims 11 or 12, wherein the second circuitry is to receive the respective results while the second circuitry is programmed with rank information which indicates relative priorities of the multiple power modes with respect to each other, wherein the second circuitry is to select the first power mode further based on the relative priorities.15. The system of any of claims 11 or 12, wherein the third circuitry is to be programmed with mode transition information which indicates action sequences each to transition to a respective power mode, wherein the second signals are generated based on the mode transition information.16. A method comprising:
programming first circuitry of a system-on-chip (SOC) based on multiple criteria each corresponding to a different respective power mode of multiple power modes of the SOC; receiving first signals at the first circuitry after the programming of the first circuitry, the first signals indicating a state of the SOC;concurrently performing at the first circuitry multiple evaluations each based on the state of the SOC and a different respective criteria of the multiple criteria;providing respective results of the multiple evaluations in parallel from the first circuitry to second circuitry of the SOC;at the second circuitry, selecting a first power mode based on the results; and with third circuitry of the SOC, generating second signals, based on the selecting, to transition the SOC to the first power mode.17. The method of claim 16, wherein the state comprises multiple conditions, wherein the first signals each correspond to a different respective condition of the multiple conditions, wherein the first circuitry comprises an array of circuit cells, and wherein, based on the programming of the first circuitry, circuit cells of the array each correspond to a different respective combination of a test condition and one of the multiple conditions.18. The method of any of claims 16 or 17, further comprising:programming fourth circuitry of the SOC with a definition of a format of the first signals;receiving sensor signals at the fourth circuitry; and generating the first signals based on the sensor signals and the definition of the format.19. The method of any of claims 16 or 17, further comprising:programming the second circuitry with rank information which indicates relative priorities of the multiple power modes with respect to each other, wherein selecting the first power mode is further based on the relative priorities.20. The method of any of claims 16 or 17, further comprising programming the third circuitry with mode transition information which indicates action sequences each to transition to a respective power mode, wherein the second signals are generated based on the mode transition information.21. The method of any of claims 16 or 17, wherein the first circuitry is programmed with a Basic Input/Output System process of the SOC.22. A system-on-chip (SOC) comprising:an array of circuit cells, the array comprising:columns which are coupled each to receive a different respective one of first signals which indicate a state of the SOC;rows configured to receive a programming which corresponds the rows each to a different respective criterion of multiple criteria which each correspond to a different respective power mode of the SOC, wherein, for each of the rows:cells of the row are configurable each based on a different respective test condition of the corresponding criterion; and the row is to perform a respective evaluation based on the state of the SOC and the corresponding criterion;first circuitry coupled to receive evaluation results each from a different respective row of the array, the first circuitry to select a first power mode based on the evaluation results; and second circuitry to configure the first power mode responsive to the first circuitry.23. 
The SOC of claim 22, wherein a first cell of the array is configurable to select any of: a first mode wherein a multi-bit evaluation functionality of the first cell is enabled; a second mode wherein a single-bit evaluation functionality of the first cell is enabled; or a third mode wherein both the multi-bit evaluation functionality and the single-bit evaluation functionality are disabled.24. The SOC of any of claims 22 or 23, further comprising third circuitry which is configured to:receive a programming based on a definition of a signal format; and based on the programming, to generate the first signals according to the signal format.25. The SOC of any of claims 22 or 23, wherein the first circuitry to select the first power mode based on the evaluation results comprises the first circuitry to identify a plurality of
power modes as candidate power modes, and to select the first power mode based on a determination that, of the candidate power modes, the first power mode is a highest priority power mode. |
DEVICE, SYSTEM AND METHOD TO DETERMINE A POWER MODE OF A SYSTEM-ON-CHIPCLAIM FOR PRIORITY[0001] This application claims the benefit of priority of U.S. Patent Application No. 16/448,797, filed on June 21, 2019, titled “Device, System, and Method to Determine a Power Mode of a System-on-Chip”, which is incorporated by reference in its entirety.BACKGROUND[0002] This disclosure generally relates to power management for an integrated circuit and more particularly, but not exclusively, to the identification of a power state to be provided with a system-on-chip.[0003] In a system-on-chip (SOC), circuit components of the SOC are integrated on a single chip. SOC integrated circuits are becoming ever more popular in various applications including embedded applications such as with set-top-boxes, mobile phones, portable media devices, and so on. While the high integration of components in a SOC provides advantages such as chip area savings and better signal quality, power consumption and performance latency are becoming increasingly important constraints for devices that include such SOCs. Especially with portable SOC applications, efficient power management functionality is a valuable aspect of many SOC implementations.[0004] With successive generations of integrated circuit technologies, the number, variety, and capabilities of SOCs continue to grow. As a result, there is expected to be an increasing premium placed on incremental improvements to how power efficiencies are provided by next-generation SOCs, and by SOCs which are already in use.BRIEF DESCRIPTION OF THE DRAWINGS[0005] The various embodiments of the present invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:[0006] FIG. 1 illustrates a functional block diagram showing elements of a system-on-chip to provide power management at a system-on-chip according to an embodiment.[0007] FIG. 2 illustrates a flow diagram showing elements of a method for determining a power mode to be configured according to an embodiment.[0008] FIG. 3 illustrates a functional block diagram showing elements of a system-on-chip to implement a power mode which is determined according to an embodiment.
[0009] FIG. 4 illustrates a functional block diagram showing elements of power management circuitry to signal a power mode transition according to an embodiment.[0010] FIG. 5 illustrates a functional block diagram showing elements of power management circuitry to determine a transition between power modes according to an embodiment.[0011] FIG. 6A illustrates a hybrid logic and functional block diagram showing elements of evaluation circuitry for identifying candidate power modes to be made available for selection according to an embodiment.[0012] FIG. 6B illustrates a hybrid logic and functional block diagram showing elements of a programmable circuit cell to facilitate identification of a candidate power mode according to an embodiment.[0013] FIG. 7 illustrates a functional block diagram showing a computing device in accordance with one embodiment.[0014] FIG. 8 illustrates a functional block diagram showing an exemplary computer system, in accordance with one embodiment.DETAILED DESCRIPTION[0015] Embodiments described herein variously provide techniques and mechanisms for identifying a power state to be provided with integrated circuitry. In an embodiment, the identifying is based on evaluations which each determine, for a different respective power mode, whether a detected state of the integrated circuitry is able to accommodate said power mode.[0016] For example, the evaluations are performed with power management circuitry which is programmable (or otherwise configurable) based on multiple criteria each for a different respective power mode. In some embodiments, such power management circuitry is reconfigurable - e.g., to accommodate use as a component in any of various types of SOCs and/or to accommodate updates to the available power modes for a given SOC.[0017] By way of illustration and not limitation, a programming of such power management circuitry enables a concurrent performance of multiple evaluations that are based each on a different respective criterion of the multiple criteria. In various embodiments, results of such multiple evaluations are communicated, in parallel with each other, to circuitry which selects a power mode from among one or more candidate power modes. Such selection is based, for example, on predefined relative priorities of the candidate power modes with respect to each other.
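The following Python sketch models, under stated assumptions, the behavior just described: every criterion is evaluated against the same detected state, the per-mode results are produced together as one vector, and the highest-priority accommodated mode is selected. The function names, the example criteria, and the priority encoding are illustrative only and are not taken from the embodiments.

from typing import Callable, Dict, List, Optional

State = Dict[str, float]
Criterion = Callable[[State], bool]

def evaluate_all(criteria: Dict[str, Criterion], state: State) -> Dict[str, bool]:
    # In hardware these evaluations run concurrently; here we simply compute
    # every result before any selection is made.
    return {mode: crit(state) for mode, crit in criteria.items()}

def select_mode(results: Dict[str, bool], priority: List[str]) -> Optional[str]:
    # priority is the rank information: earlier entries are higher priority.
    for mode in priority:
        if results.get(mode):
            return mode
    return None  # no candidate mode; remain in the current mode

criteria = {
    "deep_sleep": lambda s: s["cpu_load"] < 0.05 and s["display_on"] == 0,
    "standby":    lambda s: s["cpu_load"] < 0.20,
    "active":     lambda s: True,
}
state = {"cpu_load": 0.10, "display_on": 1}
results = evaluate_all(criteria, state)
print(results)                                                    # standby and active qualify
print(select_mode(results, ["deep_sleep", "standby", "active"]))  # "standby"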
[0018] Various traditional power management techniques rely on high-level software selection of a power state. Power state durations and transitions are often time constrained due to latencies associated with such software solutions. Also, software-based power management often requires a static set of input conditions. This invention mitigates such disadvantages by providing power management functionality with configurable hardware that accommodates changes to reference information for use in determining a power state.[0019] Certain features of various embodiments are described herein with reference to the identification of a power state which is to be configured at a SOC. However, such description may be extended to additionally or alternatively apply to a power state which is to be configured at any of various other types of integrated circuits. Unless otherwise indicated, the term “detected condition” refers herein to a condition of a SOC (or other integrated circuitry) - e.g., wherein a detected state of the SOC comprises multiple detected conditions. By contrast, a “test condition” refers herein to a basis for evaluating a corresponding detected condition. For example, in some embodiments, evaluating whether a state of an SOC (or “SOC state”) satisfies a given criterion includes determining whether a detected condition of the SOC state satisfies a corresponding test condition of said criterion.[0020] The technologies described herein may be implemented in one or more electronic devices. Non-limiting examples of electronic devices that may utilize the technologies described herein include any kind of mobile device and/or stationary device, such as cameras, cell phones, computer terminals, desktop computers, electronic readers, facsimile machines, kiosks, laptop computers, netbook computers, notebook computers, internet devices, payment terminals, personal digital assistants, media players and/or recorders, servers (e.g., blade server, rack mount server, combinations thereof, etc.), set-top boxes, smart phones, tablet personal computers, ultra-mobile personal computers, wired telephones, combinations thereof, and the like. More generally, the technologies described herein may be employed in any of a variety of electronic devices which include a system-on-chip.[0021] In the following description, numerous details are discussed to provide a more thorough explanation of the embodiments of the present disclosure. It will be apparent to one skilled in the art, however, that embodiments of the present disclosure may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring embodiments of the present disclosure.[0022] Note that in the corresponding drawings of the embodiments, signals are represented with lines. Some lines may be thicker to indicate a greater number of constituent
signal paths, and/or have arrows at one or more ends to indicate a direction of information flow. Such indications are not intended to be limiting. Rather, the lines are used in connection with one or more exemplary embodiments to facilitate easier understanding of a circuit or a logical unit. Any represented signal, as dictated by design needs or preferences, may actually comprise one or more signals that may travel in either direction and may be implemented with any suitable type of signal scheme.[0023] Throughout the specification, and in the claims, the term “connected” means a direct connection, such as electrical, mechanical, or magnetic connection between the things that are connected, without any intermediary devices. The term “coupled” means a direct or indirect connection, such as a direct electrical, mechanical, or magnetic connection between the things that are connected or an indirect connection, through one or more passive or active intermediary devices. The term “circuit” or “module” may refer to one or more passive and/or active components that are arranged to cooperate with one another to provide a desired function. The term “signal” may refer to at least one current signal, voltage signal, magnetic signal, or data/clock signal. The meaning of “a,” “an,” and “the” includes plural references. The meaning of “in” includes “in” and “on.”[0024] The term “device” may generally refer to an apparatus according to the context of the usage of that term. For example, a device may refer to a stack of layers or structures, a single structure or layer, a connection of various structures having active and/or passive elements, etc. Generally, a device is a three-dimensional structure with a plane along the x-y direction and a height along the z direction of an x-y-z Cartesian coordinate system. The plane of the device may also be the plane of an apparatus which comprises the device.[0025] The term “scaling” generally refers to converting a design (schematic and layout) from one process technology to another process technology and subsequently being reduced in layout area. The term “scaling” generally also refers to downsizing layout and devices within the same technology node. The term “scaling” may also refer to adjusting (e.g., slowing down or speeding up - i.e., scaling down, or scaling up respectively) of a signal frequency relative to another parameter, for example, power supply level.[0026] The terms “substantially,” “close,” “approximately,” “near,” and “about,” generally refer to being within +/- 10% of a target value. For example, unless otherwise specified in the explicit context of their use, the terms “substantially equal,” “about equal” and “approximately equal” mean that there is no more than incidental variation among things so described. In the art, such variation is typically no more than +/-10% of a predetermined target value.
[0027] It is to be understood that the terms so used are interchangeable under appropriate circumstances such that the embodiments of the invention described herein are, for example, capable of operation in other orientations than those illustrated or otherwise described herein.[0028] Unless otherwise specified, the use of the ordinal adjectives “first,” “second,” and “third,” etc., to describe a common object merely indicates that different instances of like objects are being referred to, and is not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner.[0029] For the purposes of the present disclosure, the phrases “A and/or B” and “A or B” mean (A), (B), or (A and B). For the purposes of the present disclosure, the phrase “A, B, and/or C” means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B and C).[0030] The terms “left,” “right,” “front,” “back,” “top,” “bottom,” “over,” “under,” and the like in the description and in the claims, if any, are used for descriptive purposes and not necessarily for describing permanent relative positions. For example, the terms “over,” “under,” “front side,” “back side,” “top,” “bottom,” “over,” “under,” and “on” as used herein refer to a relative position of one component, structure, or material with respect to other referenced components, structures or materials within a device, where such physical relationships are noteworthy. These terms are employed herein for descriptive purposes only and predominantly within the context of a device z-axis and therefore may be relative to an orientation of a device. Hence, a first material “over” a second material in the context of a figure provided herein may also be “under” the second material if the device is oriented upside-down relative to the context of the figure provided. In the context of materials, one material disposed over or under another may be directly in contact or may have one or more intervening materials. Moreover, one material disposed between two materials may be directly in contact with the two layers or may have one or more intervening layers. In contrast, a first material “on” a second material is in direct contact with that second material. Similar distinctions are to be made in the context of component assemblies.[0031] The term “between” may be employed in the context of the z-axis, x-axis or y-axis of a device. A material that is between two other materials may be in contact with one or both of those materials, or it may be separated from both of the other two materials by one or more intervening materials. A material “between” two other materials may therefore be in contact with either of the other two materials, or it may be coupled to the other two materials through an intervening material. A device that is between two other devices may be directly connected to one or both of those devices, or it may be separated from both of the other two devices by one or more intervening devices.
[0032] As used throughout this description, and in the claims, a list of items joined by the term “at least one of” or “one or more of” can mean any combination of the listed terms. For example, the phrase “at least one of A, B or C” can mean A; B; C; A and B; A and C; B and C; or A, B and C. It is pointed out that those elements of a figure having the same reference numbers (or names) as the elements of any other figure can operate or function in any manner similar to that described, but are not limited to such.[0033] In addition, the various elements of combinatorial logic and sequential logic discussed in the present disclosure may pertain both to physical structures (such as AND gates, OR gates, or XOR gates), or to synthesized or otherwise optimized collections of devices implementing the logical structures that are Boolean equivalents of the logic under discussion.[0034] For purposes of the embodiments, the transistors in various circuits, modules, and logic blocks are Tunneling FETs (TFETs). Some transistors of various embodiments may comprise metal oxide semiconductor (MOS) transistors, which include drain, source, gate, and bulk terminals. The transistors may also include Tri-Gate and FinFET transistors, Gate All Around Cylindrical Transistors, Square Wire, or Rectangular Ribbon Transistors or other devices implementing transistor functionality like carbon nanotubes or spintronic devices. MOSFET source and drain terminals are symmetrical, i.e., they are identical terminals and are used interchangeably here. A TFET device, on the other hand, has asymmetric Source and Drain terminals. Those skilled in the art will appreciate that other transistors, for example, bipolar junction transistors (BJT PNP/NPN), BiCMOS, CMOS, etc., may be used for some transistors without departing from the scope of the disclosure.[0035] It is pointed out that those elements of the figures having the same reference numbers (or names) as the elements of any other figure can operate or function in any manner similar to that described, but are not limited to such.[0036] FIG. 1 illustrates elements of a system-on-chip (SOC) 100 which provides a programmable power management functionality according to certain embodiments. SOC 100 is merely one example of an integrated circuit (IC) chip that determines a power mode for multiple components (also referred to herein as “functional blocks”) which have various respective power utilization characteristics that change over time. Such an IC chip further comprises circuitry to determine whether, at a given time, these power utilization characteristics necessitate (or otherwise allow for) a transition of the SOC from a currently-configured power mode to a different power mode.
[0037] In an embodiment, SOC 100 supports operation as a component of a desktop computer, laptop computer, handheld device (e.g., a smart phone, palmtop device, tablet, etc.), gaming console, wireless communication device, or other such computing-capable device. To facilitate such operation, SOC 100 comprises multiple functional blocks which, at different times, exhibit various respective power consumption requirements, power reduction margins and/or other such power utilization characteristics. In the example embodiment shown, such functional blocks comprise a processor unit 150 (including one or more processor cores 152), a memory controller 140, a memory 142, interconnect circuitry 120, a display module 160, and a hub module 162. However, some embodiments are not limited with respect to the particular number or configuration of functional blocks for which power management is provided at a SOC. For example, in other embodiments, power management is provided for more, fewer and/or differently configured functional blocks of SOC 100.[0038] Interconnect circuitry 120 couples various functional blocks of SOC 100 to circuit logic of SOC 100 (such as the illustrative power management circuitry 110 shown) which is to determine a power state of SOC 100. Interconnect circuitry 120 includes any of a variety of one or more busses, crossbars, fabrics and/or other connection mechanisms to variously couple one or more functional blocks of SOC 100 to power management circuitry 110. For example, interconnect circuitry 120 facilitates communication of control signals from power management circuitry 110 each to a respective functional block, where said control signals variously configure respective operational parameters of a given power mode. Alternatively or in addition, interconnect circuitry 120 facilitates communication of one or more sensor signals each from a respective functional block to power management circuitry 110. Such one or more sensor signals facilitate the identification of a next power state of SOC 100, for example. In some embodiments, interconnect circuitry 120 further facilitates communication between various functional blocks via one or more paths which are independent of power management circuitry 110.[0039] In an example embodiment, processor unit 150 is operable to execute a Basic Input/Output System (BIOS), an operating system (OS) and/or any of various other software processes - e.g., by accessing instructions stored in memory 142 or in a separate storage device. For example, the one or more cores 152 provide functionality to execute an OS which is to variously send to memory controller 140 requests to read data from, and/or write data to, memory 142.[0040] During operation of SOC 100, memory controller 140 provides processor unit 150 with access to memory 142, such as a dynamic random access memory (DRAM). Operation
of memory 142 and memory controller 140 conforms, for example, to some or all requirements of a dual data rate (DDR) specification such as the DDR Four (DDR4) Synchronous DRAM (SDRAM) specification JESD79-4B, published June 2017 by the JEDEC Solid State Technology Association of Arlington, Virginia, a high bandwidth memory (HBM) specification such as the HBM DRAM Standard JESD235, October 2013, or other such specification.[0041] Display module 160 is operable to perform image data processing and hub module 162 to serve as a hub for one or more other components (not shown) of SOC 100. Hub module 162 comprises a platform hub, an input/output (I/O) hub or other such hub circuitry, for example. In one such embodiment, display module 160 and/or hub module 162 access memory 142 at various times via memory controller 140 - e.g., where such access is supported by a given power state of SOC 100.[0042] For example, SOC 100 operates at different times in any of two or more power states, and provides power management circuitry 110 to support, initiate, or otherwise implement transitions between such power states. According to one exemplary embodiment, power management circuitry 110 comprises circuit logic (represented by the illustrative signal generator 112 and evaluator logic 113 shown) to identify a given power state which is to be configured for SOC 100. Such identifying is based in part on an operational state of SOC 100 - e.g., including a state of current and/or expected future operation of one or more functional blocks of SOC 100.[0043] In an embodiment, power management circuitry 110 is programmable to be provided with a configuration state that determines at least in part how communications and/or other operations are to be performed to identify a next power state for SOC 100. Providing such a configuration state includes power management circuitry 110 receiving, or otherwise being programmed based on, configuration information (e.g., including the illustrative mode information 111 shown) which determines at least in part how information is to be communicated, evaluated and/or otherwise used to identify a power mode based on a detected state of one or more functional blocks.[0044] In an embodiment, configuration state such as that illustrated by mode information 111 specifies or otherwise indicates various criteria which each correspond to a different respective power mode of multiple possible power modes for SOC 100. For example, in some embodiments, a given criteria for a corresponding power mode includes one or more test conditions. At a given time, an ability of SOC 100 to accommodate the corresponding power mode is indicated where, for example, power management circuitry 110 determines
that each of said one or more test conditions is satisfied by an indicated state of the one or more functional blocks.[0045] By way of illustration and not limitation, power management circuitry 110 comprises evaluator logic 113 which is coupled to receive, or otherwise be programmed based on, multiple criteria which each correspond to a different respective power mode. Such configuration of power management circuitry 110 is performed, for example, with a BIOS process or any of various other suitable mechanisms. In an embodiment, the BIOS performs one or more operations that, for example, are adapted from conventional techniques for providing configuration information during a boot-up or other suitable process. The configuration of power management circuitry 110 with the multiple criteria enables evaluator logic 113 to perform comparisons and/or other operations which facilitate a determination as to which power modes (if any), other than a current power mode, are accommodated by an indicated state of SOC 100.[0046] For example, while evaluator logic 113 is so configured, power management circuitry 110 operates to monitor a state of one or more functional blocks of SOC 100. In the example embodiment shown, a signal generator 112 of power management circuitry 110 is coupled to receive one or more signals - e.g., via interconnect 130 and/or other such interconnect structures - each from a respective sensor (not shown) of SOC 100. Based on information indicated by such sensor signals, circuitry of signal generator 112 generates corresponding signals to be communicated to evaluator logic 113 - e.g., where a format of such signals is to accommodate the programmed configuration of evaluator logic 113. In some embodiments, configuration of power management circuitry 110 based on mode information 111 additionally or alternatively comprises signal generator 112 receiving or otherwise being programmed based on information (referred to herein as “signal generation information”) which specifies or otherwise indicates a format of signals to be communicated from signal generator 112 to evaluator logic 113.[0047] Based on signals from signal generator 112 (the signals indicating a current or expected future state of SOC 100), evaluator logic 113 performs multiple evaluations which each correspond to a different respective power mode. For example, each such evaluation is to determine whether the detected state of SOC 100 (as indicated by the signals from signal generator 112) accommodates the corresponding power mode. In some embodiments, the multiple evaluations are performed concurrently with evaluator logic 113. For example, in an embodiment, various circuits of evaluator logic 113 are coupled to receive the same signals from signal generator 112, to concurrently perform respective evaluations of said signals, and
to output respective results of said evaluations - e.g., where the results are communicated from evaluator logic 113 in parallel with each other.[0048] In an embodiment, a selector 114 of power management circuitry 110 is coupled to receive evaluation results from evaluator logic 113 and to identify (based on said results) a next power state to be configured for SOC 100. For example, selector 114 determines which of the multiple possible power modes (if any) have been determined by evaluator logic 113 to be what is referred to herein as “candidate power modes” - i.e., power modes which could be accommodated by a recently detected state of SOC 100. Selector 114 performs a selection from among the one or more candidate power modes - e.g., where such selection is based on predefined relative priorities of said power modes. In some embodiments, configuration of power management circuitry 110 additionally or alternatively comprises selector 114 receiving or otherwise being programmed based on information (referred to herein as “rank information”) which indicates such relative priorities of the power modes with respect to each other.[0049] In an embodiment, a controller 115 of power management circuitry 110 is coupled to receive from selector 114 a signal which identifies the power mode which has been selected from among the one or more candidate power modes. Based on such a signal, controller 115 communicates one or more control signals to transition SOC 100 from a current power mode to the selected power mode. Such control signaling is communicated, for example, via interconnect 130, interconnect circuitry 120 and/or other such interconnect structures to various functional blocks of SOC 100.[0050] For example, controller 115 includes or otherwise has access to reference information (referred to herein as “mode parameter information”) which specifies or otherwise indicates respective settings (for a given power mode) of one or more operational parameters of SOC 100. In some embodiments, such mode parameter information further specifies or otherwise indicates a particular sequence according to which such operational parameters are to be modified to transition to the power mode. In one such embodiment, configuration of power management circuitry 110 additionally or alternatively comprises controller 115 receiving or otherwise being programmed based on such mode parameter information.[0051] In one illustrative embodiment, control signals generated by controller 115 cause one or more functional blocks of SOC 100 to perform clock gating, power gating, selectively enabling/disabling voltage supply circuitry and/or any of various other power management operations - e.g., including one or more operations that (for example) are adapted from
conventional power management mechanisms and techniques. The particular number, type and/or order of power mode transition actions performed in response to such control signals are not limiting on some embodiments.[0052] FIG. 2 shows features of a method 200 for determining a power mode to be configured according to an embodiment. Method 200 is one example of an embodiment wherein circuit logic of a SOC concurrently performs multiple evaluations - based on a programmed configuration of said circuit logic - which each determine, according to a respective criterion, whether a detected state of the SOC accommodates a corresponding power mode. Method 200 is performed with circuitry of SOC 100 (e.g., with power management circuitry 110), for example.[0053] As shown in FIG. 2, method 200 includes (at 210) programming evaluator circuitry of a SOC based on multiple criteria which each correspond to a different respective power mode of multiple possible power modes of the SOC. In an embodiment, the programming at 210 is based on information which specifies or otherwise indicates criteria which each include a respective one or more test conditions. For example, for a given one of such one or more test conditions, the evaluator circuitry is programmed at 210 to include or otherwise have access to one or more parameters of the test condition. In various embodiments, such one or more parameters include a minimum threshold value, a maximum threshold value, and/or a single-bit value which is to be a basis for a Boolean evaluation (if any) of a corresponding single-bit value representing a detected condition. In one such embodiment, the one or more parameters further facilitate selection from among multiple possible modes of evaluation - e.g., wherein a first mode provides a multi-bit evaluation based on a given condition, a second mode provides a single-bit evaluation based on the given condition, and a third mode disables the providing of either or both of a multi-bit evaluation or a single-bit evaluation based on the given condition.[0054] In various embodiments, the programming at 210 is at least a part of operations to provide power management circuitry of an SOC with a configuration state which determines, at least in part, how communications and/or other operations are to be performed by such circuitry to identify a next power state for the SOC. For example, such configuration state is provided by writing data to one or more mode registers and/or other suitable resources which are included in, or otherwise accessible to, the power management circuitry. Alternatively or in addition, such configuration state is provided by configuring (e.g., including reconfiguring) one or more switches, multiplexers, demultiplexers, and/or other suitable components of the power management circuitry.
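The following Python sketch models one programmable cell with the three evaluation modes described at 210. The CellMode encoding and the Cell class are illustrative assumptions for exposition only; the actual circuit cells of FIG. 6B are hardware, not software.

from dataclasses import dataclass
from enum import Enum
from typing import Optional

class CellMode(Enum):
    MULTI_BIT = 1   # compare a multi-bit condition against min/max thresholds
    SINGLE_BIT = 2  # compare a single-bit condition against an expected value
    DISABLED = 3    # do not participate in the evaluation

@dataclass
class Cell:
    mode: CellMode
    min_val: Optional[int] = None    # used in MULTI_BIT mode (None = no lower bound)
    max_val: Optional[int] = None    # used in MULTI_BIT mode (None = no upper bound)
    expected: Optional[bool] = None  # used in SINGLE_BIT mode

    def evaluate(self, detected) -> bool:
        if self.mode is CellMode.DISABLED:
            return True  # a disabled cell never vetoes its criterion
        if self.mode is CellMode.SINGLE_BIT:
            return bool(detected) == self.expected
        ok = True
        if self.min_val is not None:
            ok = ok and detected >= self.min_val
        if self.max_val is not None:
            ok = ok and detected <= self.max_val
        return ok

# Example: a multi-bit cell passing a value between 0 and 85, and a single-bit
# cell requiring that a detected condition be 0.
print(Cell(CellMode.MULTI_BIT, min_val=0, max_val=85).evaluate(70))  # True
print(Cell(CellMode.SINGLE_BIT, expected=False).evaluate(1))         # False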
[0055] In one such embodiment, the power management circuitry is provided with additional or alternative configuration state that is based, for example, on signal generation information, mode transition information and/or rank information. Such configuration state is provided, for example, by a BIOS process of the SOC. Alternatively or in addition, providing such configuration state includes reconfiguring (e.g., reprogramming) the power management circuitry with updated criteria information, signal generation information, mode transition information and/or rank information.

[0056] Method 200 further comprises (at 212) receiving first signals at the evaluator circuitry after the programming of the evaluator circuitry, wherein the first signals indicate a state of the SOC. The state comprises multiple detected conditions of the SOC - e.g., including one or more actual conditions and/or one or more expected future conditions. In some embodiments, the criteria with which the evaluator circuitry is programmed at 210 each include a respective one or more test conditions, wherein the multiple detected conditions of the SOC state correspond to a superset of all such test conditions of the criteria.

[0057] In some embodiments, method 200 further comprises operations (not shown) which, for example, include programming signal generator circuitry based on a definition of a format of the first signals which are received at 212. In one such embodiment, said operations further comprise receiving sensor signals at the signal generator circuitry after the signal generator circuitry has been so programmed, where the sensor signals indicate the detected SOC state. The signal generator circuitry then generates the first signals, for example, based on the sensor signals and the definition of the format.

[0058] Method 200 further comprises (at 214) concurrently performing at the evaluator circuitry multiple evaluations which are each based on the state of the SOC and a different respective criterion of the multiple criteria. For example, in some embodiments, the first signals each correspond to a different respective condition of the detected SOC state. In one such embodiment, the evaluator circuitry comprises an array of circuit cells which (based on the programming at 210) each correspond to a different respective combination of a test condition and a detected condition of the SOC state. For example, the array comprises circuit cells which are variously configured each to evaluate whether a corresponding detected condition satisfies a corresponding test condition. In one example embodiment, rows of the array each correspond to a different respective criterion of the multiple criteria, wherein columns of the array each correspond to a different respective detected condition of the SOC state.
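By way of illustration and not limitation, the array-based evaluation at 214 can be modeled in software. The sketch below (Python; all names are hypothetical, and actual embodiments evaluate such cells concurrently in hardware rather than in a loop) treats each programmed cell as a predicate over its column's detected condition value:

def evaluate(conditions, rows):
    """conditions: detected condition values, one per column.
    rows: one list of per-column predicates per criterion (None marks a
    cell with no programmed test). Returns one Boolean per criterion."""
    return [all(p(c) for p, c in zip(row, conditions) if p is not None)
            for row in rows]

# Example: two criteria over three detected conditions.
results = evaluate(
    [42, 1, 7],
    [[lambda c: 10 <= c <= 90, lambda c: c == 1, None],           # criterion 1
     [lambda c: c >= 50, None, lambda c: c == 0]])                # criterion 2
# results == [True, False]

In this model, the per-criterion Booleans correspond to the results which are provided in parallel at 216.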
[0059] Method 200 further comprises (at 216) providing respective results of the multiple evaluations in parallel from the evaluator circuitry to selector circuitry. In various embodiments, the results each indicate whether a corresponding criterion is satisfied by the detected state of the SOC. Accordingly, said results variously indicate, for each such criterion, whether a power mode corresponding to said criterion can be accommodated by the detected SOC state.

[0060] Method 200 further comprises (at 218), at the selector circuitry, selecting a first power mode based on the results. In some embodiments, the selector circuitry is programmed with rank information which indicates relative priorities of the multiple power modes with respect to each other - e.g., where the selecting at 218 is further based on the relative priorities. In an illustrative scenario according to one such embodiment, the results identify a plurality of power modes as candidate power modes, wherein the selecting of the first power mode at 218 is based on a determination that, of the candidate power modes, the first power mode is a highest priority power mode.

[0061] Method 200 further comprises (at 220) generating second signals to transition the SOC to the first power mode, wherein the second signals are generated with controller circuitry based on the selecting at 218. In various embodiments, the controller circuitry is programmed with mode transition information which indicates sequences of actions, the sequences each to transition to a respective power mode. In one such embodiment, the second signals are generated at 220 based on said mode transition information.

[0062] FIG. 3 shows features of a system-on-chip (SOC) 300 to implement a power mode which is determined according to an embodiment. FIG. 3 illustrates one example of an embodiment wherein a SOC is programmable or otherwise configurable (e.g., reconfigurable) to determine how communications and/or other operations are to be performed at the SOC to identify a next power state. SOC 300 includes some or all of the features of SOC 100, for example.

[0063] As shown in FIG. 3, SOC 300 comprises functional blocks (FBs) 310a,..., 310m which, for example, variously provide respective functionality such as that of memory controller 140, memory 142, controller 115, display module 160, hub module 162, and/or the like. SOC 300 further comprises power management circuitry 320, which is operable to detect some state of FBs 310a,..., 310m and, based on such a state, to provide one or more signals to transition SOC 300 from a current power mode to a next power mode.

[0064] In an embodiment, power management circuitry 320 provides functionality of power management circuitry 110 - e.g., wherein a signal generator 330, an evaluator circuit
350, a selector 360, and a controller 370 of power management circuitry 320 correspond functionally to signal generator 112, evaluator logic 113, selector 114, and controller 115 (respectively). For example, power management circuitry 320 is coupled to receive sensor signals 314 that indicate detected conditions of FBs 310a,..., 310m - e.g., wherein FBs 310a,..., 310m variously include (or are coupled to) sensors 312a,..., 312n which generate sensor signals 314.

[0065] In an embodiment, a state of SOC 300 at a given time (for example, including a state of FBs 310a,..., 310m) comprises multiple detected conditions which are indicated by sensor signals 314. For example, a given one of such detected conditions comprises or otherwise indicates the presence, or absence, of a characteristic (e.g., a temperature, pressure, acceleration, sound, light, vibration, orientation, or the like) of an environment in or near SOC 300 and/or a characteristic of one or more operations performed by SOC 300. By way of illustration and not limitation, such a condition is detected at a particular time, or over some period of time. Alternatively or in addition, a detected condition is one which (according to some predefined criteria) has been determined to be expected in the future. In some embodiments, a detected condition is represented with a metric value which, for example, measures, specifies or otherwise indicates a level, a rate of change (first order, second order, or the like) and/or other such feature of a supply voltage, a reference voltage, a signal current, or any of various other electrical characteristics. Alternatively or in addition, a condition specifies or otherwise indicates an occurrence or non-occurrence of one or more events of a particular event type - e.g., including a count of such events, a rate of change of such events, or the like. Examples of such an event include, but are not limited to, a memory access event, a data error event, an error handling event, a user interaction with an I/O interface mechanism (such as a touchscreen, keyboard, microphone or the like), etc. In some embodiments, a detected condition comprises an availability, or unavailability, of a power supply, a wired network, a wireless network, a communication path (e.g., a data link or channel) or other such resource of SOC 300.

[0066] In some embodiments, generation and communication of sensor signals 314 comprise one or more operations which are adapted, for example, from conventional techniques for monitoring the state of an integrated circuit. Various embodiments are not limited with respect to the particular number and/or types of detected conditions which are indicated by sensor signals 314, or with respect to particular mechanisms by which sensor signals 314 are generated and communicated to signal generator 330.
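For illustration only, the translation which signal generator 330 performs on such sensor signals (detailed in the paragraphs which follow) can be modeled in software; all names and the example format below are hypothetical assumptions, not features of any particular embodiment:

def generate_state(readings, fmt):
    """readings: raw sensor values, ordered per the configured input order.
    fmt: one (input_indices, combine) entry per output condition value,
    ordered per the configured output order. Reprogramming the signal
    generator amounts to swapping this table."""
    return [combine([readings[n] for n in idxs]) for idxs, combine in fmt]

# Example format: the first condition averages two temperature sensors, the
# second reduces a supply voltage reading (in millivolts) to a single-bit
# value, and the third passes an event count through unchanged.
fmt = [((0, 1), lambda v: sum(v) // len(v)),
       ((2,), lambda v: 1 if v[0] >= 750 else 0),
       ((3,), lambda v: v[0])]
state = generate_state([40, 44, 801, 7], fmt)      # -> [42, 1, 7]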
[0067] In some embodiments, signal generator 330 provides functionality to determine, based on sensor signals 314, state information 332 which comprises multiple conditions of the detected state. In one such embodiment, signal generator 330 receives sensor signals 314 while signal generator 330 is configured based on a definition of a format for signals 336 which are to communicate a detected state of FBs 310a,..., 310m. By way of illustration and not limitation, signal generator 330 is programmed to include or otherwise has access to signal generation information 334 which defines a format for state information 332 and/or a format according to which signals 336 are to communicate state information 332 to evaluator circuit 350. In some embodiments, such a format accommodates a configuration (e.g., a programming) of evaluator circuit 350, where such configuration is based on multiple criteria which each correspond to a different respective power mode.

[0068] Circuitry of signal generator 330 performs one or more translation, conversion, mapping and/or other such operations to determine values of state information 332, where such determining is based on signal generation information 334 and information which is communicated by sensor signals 314. In some embodiments, signal generator 330 is operable to be reprogrammed or otherwise reconfigured based on such signal generation information - e.g., wherein signal generator 330 is programmed with signal generation information 334 during a first boot-up process of SOC 300, and where signal generator 330 is instead programmed with different signal format information during a previous (or subsequent) boot-up process of SOC 300. Such reconfiguration enables signal generator 330 to provide a different formatting of state information 332 and/or signals 336 at various times. Alternatively or in addition, such reconfiguration enables signal generator 330 to provide formatting of state information 332 and/or signals 336 based on a different order or other arrangement by which signal generator 330 receives sensor signals 314.

[0069] In an embodiment, evaluator circuit 350 is coupled to receive signals 336 while evaluator circuit 350 is programmed or otherwise configured based on multiple criteria which each correspond to a different respective power mode of SOC 300. By way of illustration and not limitation, evaluator circuit 350 is configured - e.g., programmed - to include or otherwise have access to criteria information 342 which indicates J criteria T1,..., TJ (where J is an integer greater than one) which each correspond to a different respective power mode of J power modes M1,..., MJ of SOC 300. Criteria T1,..., TJ each include a respective one or more test conditions which each correspond to a respective one of the detected conditions which are indicated by state information 332 and communicated with signals 336. In an example embodiment, the detected conditions which are communicated via signals 336 correspond to a superset of the respective test conditions of criteria T1,..., TJ.

[0070] In an embodiment, evaluator circuit 350 concurrently performs multiple evaluations which are each based on the detected state of SOC 300 (as indicated by signals 314). Such multiple evaluations are further based each on a different respective criterion of the multiple criteria T1,..., TJ. For example, for a given criterion Tj of the multiple criteria T1,..., TJ - where j ∈ {1,..., J} - the criterion Tj comprises one or more test conditions which each correspond to a different respective one of the detected conditions indicated by signals 336. In such an embodiment, evaluator circuit 350 evaluates whether the one or more test conditions of criterion Tj are each satisfied by the corresponding detected condition.

[0071] In some embodiments, evaluator circuit 350 is operable to be reprogrammed or otherwise reconfigured based on such criteria information - e.g., wherein evaluator circuit 350 is programmed with criteria information 342 during a first boot-up process of SOC 300, and where evaluator circuit 350 is instead programmed with different criteria information during a previous (or subsequent) boot-up process of SOC 300. Such reconfiguration enables evaluator circuit 350 to perform respective evaluations for more, fewer and/or different power modes at various times.

[0072] In an embodiment, selector 360 is coupled to receive respective results of the multiple evaluations which are performed by evaluator circuit 350 based on signals 336. For example, selector 360 is coupled to receive signals 352 which each indicate a respective evaluation result. In some embodiments, signals 352 are communicated to selector 360 in parallel with each other. Signals 352 indicate, for example, which (if any) of power modes M1,..., MJ is a candidate power mode which could be accommodated by the detected state of SOC 300. Based on signals 352, selector 360 selects one such candidate power mode as a power mode to be implemented with SOC 300.

[0073] In an embodiment, selection of a power mode from among one or more candidate power modes is based on rank information which indicates relative priorities of power modes M1,..., MJ with respect to each other. By way of illustration and not limitation, selector 360 receives evaluation results via signals 352 while selector 360 is programmed or otherwise configured to include (or otherwise have access to) rank information 344 which identifies respective rank values R1,..., RJ for power modes M1,..., MJ. In one such embodiment, the results indicated by signals 352 identify a plurality of power modes as candidate power modes, wherein selector 360 selects a given one of said power modes based on a determination that, of the candidate power modes, the given power mode is a highest priority power mode.

[0074] In some embodiments, selector 360 is operable to be reprogrammed or otherwise reconfigured based on such rank information - e.g., wherein selector 360 is programmed with rank information 344 during a first boot-up process of SOC 300, and where selector 360 is instead programmed with different rank information during a previous (or subsequent) boot-up process of SOC 300. Such reconfiguration enables selector 360 to provide a different prioritization of the same power modes, to provide a prioritization of more, fewer and/or different power modes, and/or the like at various times.

[0075] In the example embodiment shown, controller 370 is coupled to receive from selector 360 a signal 362 which identifies a power mode that has been selected from among the one or more candidate power modes. Responsive to signal 362, controller 370 generates one or more signals (such as the illustrative control signals 372 shown) to transition SOC 300 to the selected power mode. By way of illustration and not limitation, controller 370 is programmed to include, or otherwise to have access to, mode parameter information 346 which specifies or otherwise indicates, for each of power modes M1,..., MJ, a respective one or more operational parameter settings to implement said power mode. In some embodiments, mode parameter information 346 further indicates a particular order of actions to transition between a given two power modes.

[0076] Based on the selected power mode which is indicated by signal 362, and further based on mode parameter information 346, controller 370 communicates - to FBs 310a,..., 310m and/or any of various other resources of SOC 300 - control signals 372 which transition SOC 300 to the selected power state. For example, control signals 372 are communicated to change the operation of one or more of a processor pipeline, a memory controller, a memory device, a buffer, a cache, a mode register, clock gating logic, power gating logic, and/or the like. In some embodiments, changing such operation of SOC 300 in response to control signals 372 comprises one or more actions which, for example, are adapted from conventional techniques for configuring a power mode of integrated circuitry. Various embodiments are not limited with respect to a particular power mode which is to be implemented based on control signals 372, or with respect to particular actions to implement such a power mode in response to signals 372.

[0077] In some embodiments, controller 370 is operable to be reprogrammed or otherwise reconfigured based on such mode parameter information - e.g., wherein controller 370 is programmed with mode parameter information 346 during a first boot-up process of SOC 300, and where controller 370 is instead programmed with different mode parameter information during a previous (or subsequent) boot-up process of SOC 300. Such reconfiguration enables controller 370 to variously transition between more, fewer and/or different power modes at various times.

[0078] FIG. 4 shows features of a power management circuitry 400 for signaling a power mode transition according to an embodiment. Power management circuitry 400 is one example of an embodiment wherein circuit logic of a SOC is configurable to perform one or more evaluations which each correspond to a different respective test condition of a criterion for a given power mode. In some embodiments, power management circuitry 400 includes some or all of the features of power management circuitry 110, for example, and/or is operable to perform some or all of method 200.

[0079] As shown in FIG. 4, power management circuitry 400 comprises a signal generator 430 which, for example, corresponds functionally to one of signal generator 112 or signal generator 330 - e.g., where evaluator circuitry 450 of power management circuitry 400 corresponds functionally to one of evaluator circuit 350 or evaluator logic 113. Signal generator 430 is coupled to receive, as inputs 414, some N sensor signals x1, x2,..., xN (where N is an integer greater than one) which, in a given period of time, specify or otherwise indicate a detected state of a SOC which includes power management circuitry 400. Based on sensor signals x1, x2,..., xN, signal generator 430 determines I values c1, c2, c3,..., cI (where I is an integer greater than one) which each indicate a respective condition of the detected state - e.g., wherein an output 436 from signal generator 430 communicates values c1, c2, c3,..., cI to evaluator circuitry 450.

[0080] In an example embodiment, determining values c1, c2, c3,..., cI comprises signal generator 430 translating, converting, mapping or otherwise processing sensor signals x1, x2,..., xN based on reference information which, for example, includes one or more of the illustrative input order information 433, format information 434, or output order information 435 shown. In one such embodiment, input order information 433 identifies a particular arrangement X (e.g., an order or other configuration) of respective inputs 414 by which signals x1, x2,..., xN are received. For example, signals x1, x2,..., xN are each of a respective sensor signal type (e.g., each corresponding to one of a temperature signal type, a voltage condition type, a current condition type, a particular event type, or the like), where input order information 433 identifies a correspondence of inputs 414 each to a respective one of such sensor signal types. Alternatively or in addition, output order information 435 identifies a particular arrangement C according to which values c1, c2, c3,..., cI are to be communicated each via a respective signal of output 436.

[0081] Format information 434 specifies or otherwise indicates, for a given value ci of the values c1, c2, c3,..., cI (where i ∈ {1,..., I}), a respective format by which the value ci is to represent a corresponding condition of the detected SOC state. By way of illustration and not limitation, format information 434 identifies a range of possible values for ci, a unit of measurement (e.g., millivolts, milliamps, degrees Celsius, etc.) by which ci represents a corresponding detected condition, and/or one or more sensor signals which are the basis for determining ci - e.g., where format information 434 identifies a function for calculating ci based on one or more of sensor signals x1, x2,..., xN.

[0082] In the example embodiment shown, format information 434 identifies (for example) that value c1 is to be determined based on a first one or more signals {xn1} of signals x1, x2,..., xN, that value c2 is to be determined based on a second one or more signals {xn2} of signals x1, x2,..., xN, and that value cI is to be determined based on an Ith one or more signals {xnI} of signals x1, x2,..., xN. Alternatively or in addition, format information 434 includes respective minimum possible values cmin_1, cmin_2,..., cmin_I for c1, c2,..., cI, respective maximum possible values cmax_1, cmax_2,..., cmax_I for c1, c2,..., cI, and/or respective units of measurement cunits_1, cunits_2,..., cunits_I represented by c1, c2,..., cI. In some embodiments, format information 434 additionally or alternatively includes values cbin_1, cbin_2,..., cbin_I which identify, for each of c1, c2,..., cI (respectively), whether the value is a single-bit Boolean value or, alternatively, a multi-bit value. In various embodiments, signal generator 430 supports being reprogrammed or otherwise reconfigured with updated versions of input order information 433, format information 434, and/or output order information 435 - e.g., wherein such updating is provided by a BIOS process of the SOC.

[0083] For a given criterion of the multiple criteria (which each correspond to a different respective power mode), evaluator circuitry 450 provides multiple evaluator circuits which are variously programmable or otherwise configurable each to evaluate whether a respective test condition of the criterion is satisfied by a corresponding one of values c1, c2,..., cI. In an illustrative scenario according to one embodiment, evaluator circuitry 450 comprises evaluator circuits E1j, E2j, E3j,..., EIj which are variously configured each to evaluate whether a corresponding one of values c1, c2, c3,..., cI satisfies a different respective test condition of criterion Tj.

[0084] In an illustrative scenario according to one embodiment, evaluator circuit E1j is programmed to perform an evaluation as to whether value c1 is between a minimum
threshold value t1jmin and a maximum threshold value t1jmax. Additionally or alternatively, evaluator circuit E2j is programmed to perform an evaluation as to whether value c2 is less than another maximum threshold value t2jmax. In one such embodiment, evaluator circuit E3j is programmed to forego any evaluation of value c3 - e.g., based on a programmed value t3jNA which indicates that, with respect to criterion Tj, the detected condition represented by value c3 is a "don't care" condition. Additionally or alternatively, evaluator circuit EIj is programmed to perform an evaluation (such as a Boolean XNOR operation) as to whether a single-bit value cI is equal to a single-bit test value tIjtrue. In one such embodiment, the programming of evaluator circuits E1j, E2j, E3j,..., EIj includes, for example, writing some or all of t1jmin, t1jmax, t2jmax, t3jNA, or tIjtrue to respective registers, configuring one or more switches, multiplexers or other such circuit components, and/or the like.

[0085] In an embodiment, results 451 of respective evaluations by evaluator circuits E1j, E2j, E3j,..., EIj are logically AND'ed (or otherwise combined) to generate a signal 452 which indicates whether criterion Tj is satisfied by the detected state which is indicated by values c1, c2, c3,..., cI. Signal 452 is then communicated from evaluator circuitry 450 in parallel with one or more other such evaluation result signals (not shown) - e.g., wherein signal 452 is one of signals 352. It is to be appreciated that the particular number and type of evaluations performed by evaluator circuits E1j, E2j, E3j,..., EIj (or other such circuits of evaluator circuitry 450) is merely illustrative, and not limiting on some embodiments.

[0086] FIG. 5 shows features of a power management circuitry 500 to determine a transition between power modes according to an embodiment. Power management circuitry 500 is one example of an embodiment wherein evaluator circuitry comprises an array of circuit cells (or "cells," for brevity) which are programmable each to perform a different respective evaluation based on a corresponding combination of a test condition and a detected condition of an SOC. Power management circuitry 500 includes some or all of the features of power management circuitry 110, power management circuitry 320, or power management circuitry 400 - e.g., where power management circuitry 500 provides functionality to perform operations of method 200.

[0087] As shown in FIG. 5, power management circuitry 500 includes evaluator circuitry 550, multiplexer circuitry 560, and a controller 570 which - for example - correspond functionally to evaluator logic 113, selector 114, and controller 115 (respectively). Power management circuitry 500 supports communication of signals 536, signals 552, and control signals 572 that, in some embodiments, correspond functionally to signals 336, signals 352, and control signals 372 (respectively).
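By way of illustration and not limitation, the evaluator circuit behavior described above for FIG. 4 - a range test, a below-maximum test, a "don't care" cell, a single-bit XNOR test, and the AND'ing of results 451 into signal 452 - can be modeled as follows. The sketch (Python) uses hypothetical names, and hardware performs the per-cell evaluations in parallel rather than sequentially:

def criterion_row(conditions, tests):
    """conditions: detected condition values for one evaluation pass.
    tests: per-condition test parameters for one criterion; each entry is
    ("range", lo, hi), ("max", hi), ("bool", ref) or None for don't-care.
    Returns the AND of all cell results (the signal-452 analogue)."""
    result = True
    for value, test in zip(conditions, tests):
        if test is None:                      # E3j-style don't-care cell
            continue
        if test[0] == "range":                # E1j: t_min <= c <= t_max
            result &= test[1] <= value <= test[2]
        elif test[0] == "max":                # E2j: c below t_max
            result &= value < test[1]
        elif test[0] == "bool":               # EIj: single-bit XNOR
            result &= (value == test[1])
    return result

# One criterion over four detected conditions; compare with [0084] above.
signal_452 = criterion_row(
    [42, 120, 7, 1],
    [("range", 10, 90), ("max", 500), None, ("bool", 1)])
# signal_452 == True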
[0088] In the illustrative embodiment shown, evaluator circuitry 550 comprises circuits that, for example, variously support functionality such as that provided by evaluator circuits E1j, E2j, E3j,..., EIj of evaluator circuitry 450. Such circuits of evaluator circuitry 550 are variously identified in FIG. 5 using a labeling scheme CKij, where the notation "i" denotes a detected condition to be received by the circuit in question, and the notation "j" denotes a criterion which is supported in part by the circuit in question. Accordingly, the combination "ij" denotes a particular test condition of a particular criterion - e.g., where an ith test condition of the jth criterion is to be evaluated based on a corresponding ith detected condition indicated by signals 536.

[0089] By way of illustration and not limitation, a circuit cell array of evaluator circuitry 550 comprises a row of cells CK11, CK21,..., CKI1 which are to perform respective evaluations for a first criterion - e.g., where a row of cells CK12, CK22,..., CKI2 are to perform respective evaluations for a second criterion, and a row of cells CK1J, CK2J,..., CKIJ are to perform respective evaluations for a Jth criterion. In one such embodiment, a column of cells CK11, CK12,..., CK1J are each coupled to receive a signal identifying condition c1 - e.g., where a column of cells CK21, CK22,..., CK2J are each coupled to receive a signal identifying condition c2, and a column of cells CKI1, CKI2,..., CKIJ are each coupled to receive a signal identifying condition cI. Such cells are variously operable each to perform a respective determination as to whether a given detected condition of the SOC state satisfies a test condition for which the cell has been programmed or otherwise configured (e.g., reconfigured). For a given cell, such a configuration is based, for example, on a minimum threshold value, a maximum threshold value, a reference Boolean (single-bit) value, an identifier of whether a condition is a don't care condition, and/or the like.

[0090] Based on the detected conditions c1, c2, c3,..., cI which are communicated column-wise into the cell array of evaluator circuitry 550, evaluation results are variously generated by the cell array and communicated, in parallel with each other, as signals 552 which are received by multiplexer circuitry 560. Signals 552 identify which power modes (if any) are candidate modes that can be accommodated by the detected SOC state. Based on priority information 544 (e.g., including rank information 344), multiplexer circuitry 560 provides to controller 570 a signal which identifies a selected power mode which, of the one or more candidate modes indicated by signals 552, is a relatively highest priority mode.

[0091] In one such embodiment, controller 570 is programmed (e.g., reprogrammed) with mode transition information 546 which identifies action sequences each to transition between a respective two power modes. By way of illustration and not limitation, mode
transition information 546 specifies or otherwise indicates a sequence A12 of actions to transition from a first power mode to a second power mode, a sequence A13 of actions to transition from the first power mode to a third power mode, a sequence A1J of actions to transition from the first power mode to a Jth power mode, etc. Based on the selected power mode (as indicated by multiplexer circuitry 560) and mode transition information 546, controller 570 identifies a vector 573 of actions a1,..., aX to transition the SOC from a currently-implemented power mode to the selected power mode. In response to the identification of vector 573, controller 570 communicates control signals 572 to various functional blocks of the SOC - e.g., wherein control signals 572 are communicated according to a sequence indicated by mode transition information 546.

[0092] FIG. 6A shows features of evaluator circuitry 600 to identify one or more power modes as being available to be configured with a SOC according to an embodiment. Evaluator circuitry 600 is one example of an embodiment wherein an array of circuit cells is configurable to perform evaluations each based on a different respective combination of a test condition and a corresponding detected condition of an SOC. In some embodiments, evaluator circuitry 600 includes features of evaluator logic 113, evaluator circuit 350, evaluator circuitry 450 or evaluator circuitry 550 - e.g., wherein evaluator circuitry 600 provides functionality to perform operations of method 200. Evaluator circuitry 600 supports communication of signals 652a, 652b, 652c,..., 652j that, for example, correspond functionally to signals 352 or signals 552.
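For illustration only, the rank-based selection performed by multiplexer circuitry 560 and the transition sequence lookup performed by controller 570 can be modeled as follows; all names and action strings in this sketch are hypothetical:

def select_mode(candidates, rank):
    """candidates: per-mode flags carried by signals 552; rank: per-mode
    priority values (larger = higher priority). Returns the index of the
    highest-priority candidate mode, or None if no mode is accommodated."""
    best = None
    for mode, ok in enumerate(candidates):
        if ok and (best is None or rank[mode] > rank[best]):
            best = mode
    return best

# Mode transition information: ordered action sequences keyed by
# (current mode, next mode), standing in for sequences such as A12, A13, A1J.
transitions = {
    (0, 1): ("gate_clocks", "lower_supply_voltage"),
    (0, 2): ("flush_cache", "gate_clocks", "power_gate_blocks"),
}

current = 0
selected = select_mode([False, True, True], rank=[3, 1, 2])    # -> mode 2
for action in transitions[(current, selected)]:
    print("control signal:", action)   # emitted in the programmed order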
each coupled to receive a signal identifying condition C3, and a column of cells Ell, EI2,..EIJ are each coupled to receive a signal identifying condition ci.[0095] Such cells are variously operable each to perform a respective determination as to whether a given detected condition satisfies a test condition for which the cell has been programmed or otherwise configured (e.g., reconfigured). For example, each such cell is programmable or otherwise configurable to include or otherwise have access to one or more parameters of a respective test condition. Such one or more parameters for a given cell are variously identified in FIG. 6 A using a labeling scheme ty, where the notation“i” denotes a detected condition to be received by the cell in question, and the notation“j” denotes a criterion which is supported in part by the cell in question. For example, cell En is programmed to evaluate condition ci based on one or more parameters tn of a first test condition of the first criterion - e.g., where cell E22 is programmed to evaluate condition C2 based on one or more parameters t22 of a second test condition of the second criterion. In various embodiments, some or all such test condition parameters include features of criteria information 342.[0096] Row 650a outputs a signal 652a which, for example, represents a logical ANDing (or other suitable combination) of respective evaluation results which are generated by cells El l, E21, E31,..., Ell - e.g., where signal 652a indicates whether the detected state could accommodate a power state which corresponds to the first criterion. In one suchembodiment, signal 652a is output in parallel with other signals 652b, 652c,..., 652j which are similarly generated by rows 650b, 650c,..., 650j (respectively).[0097] FIG. 6B shows features of evaluator circuitry 600 which is programmable to identify, according to an embodiment, whether a detected condition of a circuit state satisfies a test condition which corresponds to a given power mode. Evaluator circuitry 600 is one example of an embodiment wherein an array of circuit cells comprises a circuit cell which is programmable or otherwise configurable to perform any of various evaluations based on a detected condition. For example, evaluator circuitry 660 includes features of evaluator circuitry 600 - e.g., wherein evaluator circuitry 600 provides functionality to perform operations of method 200.[0098] As shown in FIG. 6B, evaluator circuitry 600 includes a programmable circuit cell 662 that, for example, provides functionality of a cell of evaluator circuitry 550, or a cell of evaluator circuitry 600. In an illustrative scenario according to one embodiment, a SOC includes evaluator circuitry 660, where (at a given time) a detected state of the SOC comprises multiple conditions including a condition Ci. In one such embodiment, cell 662 is
coupled to receive a signal (e.g., one of signals 336) which identifies condition Ci. Cell 662 provides functionality to evaluate whether condition Ci satisfies a test condition of a criterion for a given power mode - e.g., where cell 662 is programmable based on any of various test conditions.[0099] In an embodiment, such programming of cell 662 is based on one or more parameters of a particular test condition - e.g., wherein cell 662 is configured to include or otherwise have access to said one or more parameters. By way of illustration and not limitation, such one or more parameters include a minimum threshold value tminjj for a value of Ci, and a maximum threshold value tmaxjjfor a value of Ci. Alternatively or in addition, such one or more parameters include (for example) a value gteen_ijto selectivelyenable/disable a“greater than or equal to” (gte) comparator functionality of comparator circuit 664, and/or a value lteen_ijto selectively enable/disable a“less than or equal to” (lte) comparator functionality of comparator circuit 664. In one such embodiment, the one or more parameters further comprise a value ssely with which a multiplexer 666 (or other suitable circuitry) is controlled to selectively output a signal 668 which is based on a particular combination of an evaluation based on the minimum threshold value tminjjand/or an evaluation based on the maximum threshold value tmax j.[00100] In various embodiments, the one or more parameters of the test condition additionally or alternatively include (for example) a single-bit value refbjjwhich is to be a reference for a Boolean evaluation (if any) of a single-bit value of Ci. By way of illustration and not limitation, cell 662 comprises a comparator 670 (e.g., a XNOR gate) which is to indicate with a signal 672 whether refbjjis equal to a corresponding single-bit value of Ci.[00101] In some embodiments, circuit cell 662 further comprises a multiplexer 680 which is coupled to receive signals 668, 672, and, in some embodiments, another signal 674 representing a value ksatwhich indicates that, with respect to the criterion in question, condition Ci is a“don’t care” condition. In one such embodiment, another configured parameter (represented as a mode select parameter msely) controls a selection by multiplexer 680 between signals 668, 672, 674. As a result, a signal 682 which is output by multiplexer 680 represents a selected one of a result of a multi-bit evaluation by comparator circuit 664, a result of a single-bit evaluation by comparator 670, or a“don’t care” output which is independent of any such multi-bit evaluation or single-bit evaluation. Accordingly, multiplexer 680 and the parameter msehjfacilitate a configurability of cell 662 to select between a first mode which provides a multi-bit evaluation functionality, a second mode
which provides a single-bit evaluation functionality, and a third (“don’t care”) mode which disables both the multi-bit evaluation functionality and the single-bit evaluation functionality.[00102] FIG. 7 illustrates a computing device 700 in accordance with one embodiment.The computing device 700 houses a board 702. The board 702 may include a number of components, including, but not limited to, a processor 704 and at least one communication chip 706. The processor 704 is physically and electrically coupled to the board 702. In some implementations the at least one communication chip 706 is also physically and electrically coupled to the board 702. In further implementations, the communication chip 706 is part of the processor 704.[00103] Depending on its applications, computing device 700 may include other components that may or may not be physically and electrically coupled to the board 702. These other components include, but are not limited to, volatile memory (e.g., DRAM), non volatile memory (e.g., ROM), flash memory, a graphics processor, a digital signal processor, a crypto processor, a chipset, an antenna, a display, a touchscreen display, a touchscreen controller, a battery, an audio codec, a video codec, a power amplifier, a global positioning system (GPS) device, a compass, an accelerometer, a gyroscope, a speaker, a camera, and a mass storage device (such as hard disk drive, compact disk (CD), digital versatile disk (DVD), and so forth).[00104] The communication chip 706 enables wireless communications for the transfer of data to and from the computing device 700. The term“wireless” and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non-solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not. The communication chip 706 may implement any of a number of wireless standards or protocols, including but not limited to Wi-Fi (IEEE 802.11 family), WiMAX (IEEE 802.16 family), IEEE 802.20, long term evolution (LTE), Ev-DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, Bluetooth, derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. The computing device 700 may include a plurality of communication chips 706. For instance, a first communication chip 706 may be dedicated to shorter range wireless communications such as Wi-Fi and Bluetooth and a second communication chip 706 may be dedicated to longer range wireless communications such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, and others.
[00105] The processor 704 of the computing device 700 includes an integrated circuit die packaged within the processor 704. The term“processor” may refer to any device or portion of a device that processes electronic data from registers and/or memory to transform that electronic data into other electronic data that may be stored in registers and/or memory. The communication chip 706 also includes an integrated circuit die packaged within the communication chip 706.[00106] In various implementations, the computing device 700 may be a laptop, a netbook, a notebook, an ultrabook, a smartphone, a tablet, a personal digital assistant (PDA), an ultra mobile PC, a mobile phone, a desktop computer, a server, a printer, a scanner, a monitor, a set-top box, an entertainment control unit, a digital camera, a portable music player, or a digital video recorder. In further implementations, the computing device 700 may be any other electronic device that processes data.[00107] Some embodiments may be provided as a computer program product, or software, that may include a machine-readable medium having stored thereon instructions, which may be used to program a computer system (or other electronic devices) to perform a process according to an embodiment. A machine-readable medium includes any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer). For example, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium (e.g., read only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices, etc.), a machine (e.g., computer) readable transmission medium (electrical, optical, acoustical or other form of propagated signals (e.g., infrared signals, digital signals, etc.)), etc.[00108] FIG. 8 illustrates a diagrammatic representation of a machine in the exemplary form of a computer system 800 within which a set of instructions, for causing the machine to perform any one or more of the methodologies described herein, may be executed. In alternative embodiments, the machine may be connected (e.g., networked) to other machines in a Local Area Network (LAN), an intranet, an extranet, or the Internet. The machine may operate in the capacity of a server or a client machine in a client-server network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is
illustrated, the term“machine” shall also be taken to include any collection of machines (e.g., computers) that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies described herein.[00109] The exemplary computer system 800 includes a processor 802, a main memory 804 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory 806 (e.g., flash memory, static random access memory (SRAM), etc.), and a secondary memory 818 (e.g., a data storage device), which communicate with each other via a bus 830.[00110] Processor 802 represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processor 802 may be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processor 802 may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. Processor 802 is configured to execute the processing logic 826 for performing the operations described herein.[00111] The computer system 800 may further include a network interface device 808.The computer system 800 also may include a video display unit 810 (e.g., a liquid crystal display (LCD), a light emitting diode display (LED), or a cathode ray tube (CRT)), an alphanumeric input device 812 (e.g., a keyboard), a cursor control device 814 (e.g., a mouse), and a signal generation device 816 (e.g., a speaker).[00112] The secondary memory 818 may include a machine-accessible storage medium (or more specifically a computer-readable storage medium) 832 on which is stored one or more sets of instructions (e.g., software 822) embodying any one or more of themethodologies or functions described herein. The software 822 may also reside, completely or at least partially, within the main memory 804 and/or within the processor 802 during execution thereof by the computer system 800, the main memory 804 and the processor 802 also constituting machine-readable storage media. The software 822 may further be transmitted or received over a network 820 via the network interface device 808.[00113] While the machine-accessible storage medium 832 is shown in an exemplary embodiment to be a single medium, the term“machine-readable storage medium” should be
taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term“machine-readable storage medium” shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that cause the machine to perform any of one or more embodiments. The term“machine-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media.[00114] Techniques and architectures for managing power utilization by circuitry are described herein. In the above description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of certain embodiments. It will be apparent, however, to one skilled in the art that certain embodiments can be practiced without these specific details. In other instances, structures and devices are shown in block diagram form in order to avoid obscuring the description.[00115] Reference in the specification to“one embodiment” or“an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. The appearances of the phrase“in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment.[00116] Some portions of the detailed description herein are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the computing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.[00117] It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the discussion herein, it is appreciated that throughout the description, discussions utilizing terms such as "processing" or "computing" or "calculating" or "determining" or "displaying" or the like,
refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.[00118] Certain embodiments also relate to apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but is not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs) such as dynamic RAM (DRAM), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, and coupled to a computer system bus.[00119] The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will appear from the description herein. In addition, certain embodiments are not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of such embodiments as described herein.[00120] Besides what is described herein, various modifications may be made to the disclosed embodiments and implementations thereof without departing from their scope. Therefore, the illustrations and examples herein should be construed in an illustrative, and not a restrictive sense. The scope of the invention should be measured solely by reference to the claims that follow. |
The invention includes a method of filling gaps in a semiconductor substrate. A substrate and a gas mixture containing at least one heavy-hydrogen compound are provided within a reaction chamber. The gas mixture is reacted to form a layer of material over the substrate by simultaneous deposition and etch of the layer. The layer of material fills the gap such that the material within the gap is essentially void-free. The invention includes a method of providing improved deposition rate uniformity. A material is deposited over a surface in the presence of at least one gas selected from the group consisting of D2, HD, DT, T2 and TH. The net deposition rate during the deposition has a degree of variance across the surface which is measurably improved relative to a corresponding degree of variance that occurs during deposition utilizing H2 under otherwise substantially identical conditions. |
1.A method for depositing a layer on a substrate, comprising:Providing a substrate in a high density plasma reaction chamber;Supplying at least one compound having a heavy hydrogen isotope component to the reaction chamber;Producing a high density plasma in the reaction chamber; andA chemical vapor deposited layer on the substrate, the layer incorporating at least a portion of at least one of the compounds.2.The method of claim 1 wherein said heavy hydrogen isotope is ruthenium.3.The method of claim 1 wherein the at least one compound is selected from the group consisting of SiDxH4-x, Si2DyH6-y, PDzH3-z, SiCl2DH, SiCl2D2, SiO4C8DqH20-q, DH, D2, wherein x = 1 to 4, y = 1 to 6, z = 1 to 3 and q = 1 to 20.4.The method of claim 1 wherein said layer comprises an oxide material.5.The method of claim 1 wherein said layer is deposited and etched simultaneously during deposition.6.The method of claim 1 wherein said depositing produces a substantially flat surface.7.The method of claim 1 wherein said at least one compound consists of a mixture, said mixture further comprising at least one of O2 and O3.8.A method of filling a gap, comprising:Providing a substrate including a slit structure, andA material is deposited on the substrate using at least one precursor having at least one heavy hydrogen isotope component; the material has less gap after deposition than the gap obtained by replacing 1 H with heavy hydrogen.9.The method of claim 8 wherein said gap structure comprises a trench within the substrate.10.The method of claim 8 wherein said slit structure comprises a gap between adjacent elements.11.The method of claim 8 wherein said slit structure is a first slit structure, and wherein the substrate further comprises a second slit structure, the first slit structure being a groove and the second slit structure being a gap between the elements, wherein said The deposition of material essentially fills the trenches and the gaps between the components.12.The method of claim 8 wherein the at least one precursor is selected from the group consisting of SiRxH4-x, Si2RyH6-y, PRzH3-z, SiCl2RH, SiCl2R2, SiO4C8RqH20-q, wherein R is ruthenium, osmium or a combination thereof, and wherein x = 1 4, y = 1 to 6, z = 1 to 3, and q = 1 to 20.13.A method of producing a filled region of a substrate comprising simultaneously depositing and etching a material on a substrate in the presence of a gas comprising a heavy hydrogen compound.14.The method of claim 13 wherein said gas comprises a precursor component and a sputter component, said sputter component comprising a heavy hydrogen compound.15.The method of claim 14 wherein said dihydrogen compound is a first heavy hydrogen compound, and wherein said precursor compound comprises a second heavy hydrogen compound.16.The method of claim 13 wherein said gas comprises a precursor component and a sputter component, said precursor component comprising a heavy hydrogen compound.17.The method of claim 13 wherein said gas comprises at least one of D2, HD, DT, T2 and TH.18.The method of claim 17 wherein said gas further comprises H2.19.The method of claim 13 wherein the fill region is shown to include at least one feature selected from the group consisting of a groove and a void between the elements.20.The method of claim 19 wherein one or more of the at least one feature has a aspect ratio greater than about 1:1.21.The method of claim 19 wherein one or more of the at least one feature has a aspect ratio greater than about 2:1.22.The method of claim 19 wherein one or more of the 
at least one feature has a aspect ratio greater than about 3:1.23.The method of claim 19 wherein one or more of the at least one feature has a aspect ratio greater than about 4:1.24.The method of claim 19 wherein one or more of the at least one feature has a aspect ratio greater than about 5:1.25.The method of claim 19 wherein one or more of said at least one feature has a width of less than about 10 nm.26.The method of claim 13 wherein said depositing occurs on the rugged topography and produces a flatter surface that is flatter relative to the rugged topography.27.The method of claim 13 wherein said material is selected from the group consisting of boron/phosphorus doped silicon oxide, fluorine doped silicon oxide, phosphorus doped silicon oxide, boron doped silicon oxide, and undoped silicon oxide.28.A method of controlling a total deposition rate during high density plasma chemical vapor deposition, comprising providing at least one compound comprising a heavy hydrogen isotope during deposition, the total deposition rate being etched by simultaneous material deposition rate through a material The ratio of the speed is defined.29.The method of claim 28 wherein at least one compound is provided to the sputtering gas.30.The method of claim 29 wherein said at least one compound is selected from the group consisting of diatomic hydrogens containing at least one atom selected from the group consisting of D and T.31.The method of claim 29 wherein said depositing occurs across the surface of the wafer, the total deposition rate at the center point of the wafer surface being substantially equal to the total deposition rate at a point on the edge of the wafer surface.32.The method of claim 31 wherein the total deposition rate at the center point is substantially equal to the total deposition rate occurring at all points on the line between the point along the center point and the edge of the wafer surface.33.The method of claim 29 wherein deposition occurs across the surface of the wafer, and wherein the total deposition rate across any point on the surface of the wafer is substantially equal to the total deposition rate across each other point of the surface of the wafer.34.The method of claim 17 wherein said depositing comprises depositing an insulating material on a substrate having one or more slits, said deposit filling one or more slits with said insulating material to form a filled gap substantially free of gaps .35.The method of claim 17 wherein the total rate of deposition is reduced relative to the total deposition rate occurring using the 1H form of said at least one compound under otherwise identical deposition conditions.36.A method of filling a high aspect ratio gap in a semiconductor substrate, comprising:Providing a substrate in the reaction chamber;Supplying a gas mixture containing at least one compound containing heavy hydrogen in the reaction chamber;The gas mixture is reacted to form a layer of material on the substrate by simultaneous deposition and etching of the layer, the layer of material filling a gap of high aspect ratio, the material within the gap being substantially free of gaps.37.The method of claim 36 wherein said reaction chamber is a high density plasma chemical vapor deposition chamber.38.The method of claim 36, wherein said at least one compound containing heavy hydrogen is selected from the group consisting of SiRxH4-x, Si2RyH6-y, PRzH3-z, SiCl2RH, SiCl2R2, SiO4C8RqH20-q, HR and R2, wherein R is ruthenium, osmium or A combination thereof, wherein x=1 to 4, 
y=1 to 6, z=1 to 3, and q=1 to 20.

39. A method of providing improved uniformity of deposition rate, comprising depositing a material on a surface in the presence of at least one gas selected from the group consisting of D2, HD, DT, T2, and TH, the deposition being characterized by a total deposition rate defined by the ratio of the material deposition rate to the simultaneous material etch rate, the total deposition rate having a measurably improved degree of deviation across the surface relative to the corresponding degree of deviation produced under otherwise substantially identical conditions using H2.

40. The method of claim 39 wherein said depositing comprises high density plasma deposition.

41. The method of claim 39, wherein the degree of deviation using the at least one gas is improved by at least about 18% compared to the corresponding degree of deviation obtained using H2 alone.

42. The method of claim 39, wherein said depositing comprises high density plasma deposition on the substrate with a high frequency bias power of less than about 5 kW.

43. The method of claim 39 wherein said surface comprises a surface of a 200 mm diameter wafer.

44. The method of claim 39 wherein said surface comprises a surface of a 300 mm diameter wafer. |
Method for filling gaps using high-density plasma chemical vapor deposition and method for depositing materials

Technical field

The present invention relates to a method of forming a layer on a substrate, and in particular to embodiments of a method of filling a gap.

Background of the invention

Insulating materials and layers of insulating materials are widely used in a variety of semiconductor applications to insulate or electrically isolate structural components from circuit components. The insulating properties of these materials and layers are often affected by the ability to minimize or eliminate void regions formed during deposition of the insulating material. The uniformity of the deposition rate across the deposited layer during deposition can affect film quality and the ability to effectively minimize void entrapment. As device feature sizes are reduced, the widths of the gaps to be filled also decrease, and the aspect ratios of such gaps can become very high. Minimization of void regions during gap filling therefore becomes more difficult, yet more important, for effective insulation. Deviations in deposition rate across the surface may make it difficult, if not impossible, to optimize deposition conditions and parameters for the elimination of voids.

One method that has been used to meet the need for precise gap filling is high density plasma chemical vapor deposition (HDP-CVD). Further improvement over other conventional methods is obtained by using an alternative sputtering gas such as H2 in place of a sputtering gas such as argon, which is typically employed in HDP-CVD systems. The use of H2 as a replacement can reduce void formation in gap filling applications under certain deposition conditions.

It is desirable to develop improved gap fill techniques and methods to provide improved deposition rate uniformity.

Summary of invention

In one aspect, the invention includes a method of depositing a layer on a substrate. The substrate is provided in a high density plasma chamber. At least one compound having a heavy hydrogen isotope composition is supplied to the reaction chamber, and a high density plasma is generated in the chamber. A layer is deposited on the substrate by chemical vapor deposition utilizing at least a portion of the heavy hydrogen compound.

A method of filling a gap in a semiconductor substrate is included in one aspect of the invention. A substrate is provided in the reaction chamber, and a gas mixture is fed into the reaction chamber. The gas mixture contains at least one heavy hydrogen containing compound. The gas mixture is reacted to form a layer of material on the substrate by simultaneous deposition and etching of the layer. The layer of material fills the gap such that the material in the gap is substantially free of voids.

In one aspect, the invention includes a method of providing improved deposition rate uniformity. The material is deposited on a surface in the presence of at least one gas selected from the group consisting of D2, HD, DT, T2 and TH. The total deposition rate during deposition is defined by the ratio of the material deposition rate to the simultaneous material etch rate.
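In rough editorial notation (a sketch, not language from the claims): the claims express the total deposition rate through the ratio of the material deposition rate to the simultaneous etch rate, and one way to reconcile that phrasing with the etch-to-deposition (E/D) plots of FIGS. 4 and 5 below is

$$R_{\mathrm{total}} = R_{\mathrm{dep}} - R_{\mathrm{etch}}, \qquad E/D = \frac{R_{\mathrm{etch}}}{R_{\mathrm{dep}}}, \qquad \text{so} \qquad R_{\mathrm{total}} = R_{\mathrm{dep}}\,(1 - E/D),$$

where R_dep is the rate at which material is deposited in the absence of etching and R_etch is the simultaneous sputter etch rate. Uniformity, in this reading, is the (low) deviation of R_total measured at different points across the surface.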
The total deposition rate exhibits a measurably improved degree of deviation across the surface relative to the degree of deviation that occurs during deposition using H2 under otherwise substantially identical conditions.

BRIEF DESCRIPTION OF THE DRAWINGS

Preferred embodiments of the invention are described below with reference to the following figures:

Figure 1 is a cross-sectional view of a reactor that can be used to practice the process of the present invention.

Figure 2 is a cross-sectional view of a semiconductor substrate wafer at a preparation step of the method of the present invention.

Figure 3 is a view of the substrate wafer of FIG. 2 shown at a processing step subsequent to that of FIG. 2.

Figure 4 is a graphical representation of etch/deposition uniformity showing the measured etch/deposition ratio as a function of bias power for a process employing H2.

Figure 5 is a graphical representation of etch/deposition uniformity showing the measured etch/deposition ratio as a function of bias power for a process employing D2 in accordance with one aspect of the present invention.

Figure 6 is a cross-sectional view of a substrate wafer segment at a preparation step of the method of the present invention.

Figure 7 is a cross-sectional view of the wafer segment of Figure 6 at a processing step subsequent to that of Figure 6.

Figure 8 is a cross-sectional view of a substrate wafer segment at a preparation step of a method in accordance with an aspect of the present invention.

Figure 9 is a cross-sectional view of the wafer segment of Figure 8 at a processing step subsequent to that of Figure 8.

Detailed description of a preferred embodiment

The invention includes an improved method of depositing a layer on a substrate. This layer can be deposited by chemical vapor deposition using one or more heavy hydrogen compounds. For the purposes of this specification, the term heavy hydrogen may refer to deuterium (D) or tritium (T), and the term heavy hydrogen compound may refer to a compound in which one, more or all of the hydrogen atoms are replaced by D and/or T to an extent greater than the natural abundance levels of the corresponding heavy isotopes. More specifically, a silicon-containing layer may be formed by high-density plasma chemical vapor deposition (HDP-CVD) under conditions in which a heavy hydrogen sputtering gas, a heavy hydrogen containing precursor compound, or both a heavy hydrogen precursor and a heavy hydrogen sputtering gas are present.

HDP-CVD typically involves simultaneous chemical deposition and sputter etching of the deposited material. Simultaneous deposition and etching produces a net deposition rate, which may also be referred to as the total deposition rate. The methodology of the present invention can be used to provide improved uniformity (low deviation) of the net deposition rate across a surface. Aspects of the methodology of the present invention may be particularly useful for gap filling processes and for minimizing or eliminating voids in the resulting gap fill.

The methodology of the present invention is described with reference to Figures 1-9. Turning first to Figure 1, there is shown a reactor 10 that can be used to carry out the process of the present invention. Reactor 10 can be, for example, a chemical vapor deposition reactor such as an HDP-CVD reactor. As depicted in FIG. 1, reactor 10 has a reaction chamber 12 and may include a dome 13 having one or more radio frequency (RF) coils 14.
Reactor 10 may include one or more inlets 18 and may include one or more outlets 20. The inlet 18 can be, for example, a gas injector.

Reactor 10 can include a substrate platform 22 to secure or clamp a substrate 90. If the reactor 10 is an HDP reactor, the substrate platform 22 can be an electrostatic chuck (ESC). To practice the methodology of the present invention, reactor 10 can generally be an inductively coupled reactor, although the present invention contemplates the use of capacitively coupled reactors or electron cyclotron resonance (ECR) reactors.

An exemplary inductively coupled HDP-CVD reactor that can be used to practice the methodology of the present invention is the standard Novellus HDP chamber (SPEED(R)) from Novellus Systems, Inc. of San Jose, California. When such a reactor is used, low frequency (LF) power can be applied to the dome coil 14 to induce an RF electric field within the chamber 12. This RF electric field can be used to generate a high density plasma 16. For the purposes of the present invention, the term "high density plasma" means a plasma having a density greater than or equal to about 10^11 ions/cm3. Sputtering can be performed by applying high frequency (HF) power to the electrostatic chuck 22 to apply an RF bias to the wafer 90, causing ionized plasma species to be attracted from the plasma 16 to the wafer 90.

To simplify the description below, the reported conditions and parameters are based on the use of the SPEED(R) HDP system. However, it should be understood that the present invention includes the use of alternative reactors, for which conditions and parameters different from those specifically recited can be used.

A specific aspect of the invention is described with reference to Figures 1-4. Referring to Figure 2, a wafer segment 90 of a semiconductor wafer is shown at a preparation step of the method of the present invention. Wafer segment 90 can include a semiconductor substrate 100. To assist in understanding the appended claims, the terms "semiconductive substrate" and "semiconductor substrate" are defined to mean any structure comprising a semiconductor material, including but not limited to: bulk semiconductor materials such as semiconductor wafers (alone or in assemblies comprising other materials thereon) and layers of semiconductive material (alone or in assemblies comprising other materials). The term "substrate" refers to any supporting structure including, but not limited to, the semiconductor substrates described above. Substrate 100 can include a surface 102, which can be planar as shown in Figure 2 or can alternatively include one or more localized topographical features.

Referring to Figure 3, a material layer 104 can be deposited on surface 102 using, for example, HDP-CVD in the presence of one or more heavy hydrogen containing compounds. Layer 104 can be formed, for example, in a high density plasma chamber as shown in FIG. 1. The at least one compound comprising a heavy hydrogen isotope component may be supplied to the reaction chamber 12 via the one or more inlets 18. Layer 104 (Fig. 3) may comprise, for example, an insulating material such as silicon oxide, and in particular aspects may be silicon dioxide or doped silicon dioxide.

Conventional methods of forming an oxide layer such as layer 104 using HDP-CVD typically involve deposition using a gas mixture comprising one or more precursor compounds, one or more inert gases such as argon (Ar) and, in certain cases, one or more oxidant gases such as O2, O3 and NO3.
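For concreteness, such a gas mixture and its process window might be captured as a simple parameter set. The following is an editorial sketch only: the field names are invented, and the numeric values are hypothetical mid-range picks from the flow, temperature, pressure and bias ranges recited near the end of this description and in the claims, using the heavy hydrogen substitutions introduced below.

# Hypothetical HDP-CVD parameter sketch; names and values are illustrative only.
hdp_cvd_recipe = {
    "precursor":   {"compound": "SiD4 (deuterated silane)", "flow_sccm": 50},   # ~10-100 sccm
    "oxidant":     {"compound": "O2",                       "flow_sccm": 100},  # ~20-200 sccm
    "sputter_gas": {"compound": "D2",                       "flow_sccm": 800},  # ~100-2000 sccm
    "temperature_C": 600,      # about 400-800 degrees C
    "pressure_mTorr": 10,      # about 2-50 mTorr
    "hf_bias_power_kW": 3.0,   # claims recite less than about 5 kW
}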
During HDP deposition, some components of the gas mixture can contribute to sputtering of the deposited material and can be referred to as the sputter component of the mixture. In some cases, conventional methods replace some or all of the inert gas with, for example, H2 or He, which can affect sputtering of the deposited material. However, the deposition rate obtained using H2 in HDP-CVD systems may vary across the surface, resulting in unpredictable and uneven deposition.

Exemplary precursor compounds used in conventional silica deposition processes include silane (SiH4), disilane (Si2H6), dichlorosilane (SiCl2H2), and Si(OC2H5)4 ("TEOS"). Conventional doped oxide layer formation may include deposition in the presence of one or more of phosphine (PH3), diborane (B2H6), arsine (AsH3), trimethyl phosphite (C3H9O3P), and trimethyl borate (C3H9O3B).

The methodology of the present invention includes the use of one or more deuterated and/or tritiated precursor compounds, deuterated and/or tritiated dopant compounds, diatomic heavy hydrogen gases, or any combination of these. For the purposes of the present invention, diatomic heavy hydrogen gas or heavy hydrogen gas may refer to DH, DT, D2, T2, TH, and mixtures thereof.

In a particular aspect, the methodology of the present invention can include supplying one or more precursor molecules, an oxygen-containing gas, and hydrogen in a deuterated and/or tritiated form to reaction chamber 12 (FIG. 1). A high density plasma 16 can be generated in the reaction chamber 12 to perform chemical vapor deposition of layer 104 on substrate 90. The deposition of layer 104 has a total (net) deposition rate that will depend on the deposition rate that would be produced in the absence of simultaneous etching (i.e., with no bias power) and on the simultaneous etch rate when a particular bias power is used. The net deposition rate can be affected by factors such as bias power, flow rates, pressure, temperature, or other parameters. Thus, during deposition of layer 104, the net deposition rate can be altered or adjusted by changing one or more parameters during deposition.

Some or all of the H2 or other sputtering gas supplied to chamber 12 (FIG. 1) can advantageously be replaced with heavy hydrogen gas to enhance the uniformity of the total deposition rate across the surface of substrate 90. In other words, the presence of a deuterium and/or tritium sputter component during the HDP-CVD deposition of layer 104 (Fig. 3) can reduce the degree of deviation of the net deposition rate across the surface relative to the corresponding degree of deviation produced using, for example, H2 under otherwise identical conditions.

Figures 4 and 5 illustrate the improved net deposition uniformity produced when D2 (Figure 5) is substituted for H2 (Figure 4) during HDP-CVD silica deposition. The HDP-CVD data shown in Figures 4 and 5 were obtained in a Novellus SPEED(R) reactor using a 200 mm wafer 90 (Figure 3) having a flat surface 102 on which a silicon dioxide layer 104 was deposited. The measured etch-to-deposition ratio (E/D ratio) is shown in each of FIGS. 4 and 5 for the center point 108 (Fig. 3) of the wafer 90, for a point 110 at the edge of the wafer 90, for a point 112 on a first ring at a first distance from the center point 108, and for a point 114 on a second ring at a second distance from the center point 108.
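As a rough sketch of how the across-wafer uniformity of the net deposition rate might be quantified from E/D measurements at these four points (the function names and all numeric values are invented placeholders, not the measured data of FIGS. 4 and 5):

# Sketch: quantify across-wafer uniformity from etch/deposition (E/D) ratios.
# E/D values below are hypothetical placeholders, not the data of FIGS. 4 and 5.
def net_rates(blanket_dep_rate, ed_ratios):
    """Net deposition rate at each point: R_net = R_dep * (1 - E/D)."""
    return [blanket_dep_rate * (1.0 - ed) for ed in ed_ratios]

def percent_deviation(rates):
    """Spread of rates across the wafer, as a percentage of the mean."""
    mean = sum(rates) / len(rates)
    return 100.0 * (max(rates) - min(rates)) / mean

# Hypothetical E/D at center 108, edge 110, ring 112, ring 114 (one bias power):
ed_h2 = [0.20, 0.32, 0.24, 0.28]   # wider spread, as with an H2 sputter component
ed_d2 = [0.22, 0.26, 0.23, 0.25]   # tighter spread, as with a D2 sputter component

for label, ed in (("H2", ed_h2), ("D2", ed_d2)):
    print(label, round(percent_deviation(net_rates(1000.0, ed)), 1), "% deviation")

On these placeholder numbers the deviation drops from roughly 16% to roughly 5%; the description below claims an 18% or greater reduction in deviation for the actual methodology.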
A comparison of Figures 4 and 5 shows that measurably improved etch/deposition uniformity is obtained when D2 is used instead of H2 during HDP-CVD formation of the oxide layer. As shown in Figures 4 and 5, both H2 and D2 contribute to the deposition component of the E/D ratio at low bias power, enhancing net deposition and explaining the negative E/D ratios. However, even at these bias powers, D2 improves overall etch/deposition uniformity. The use of a deuterium sputter component can reduce variation in the net deposition rate across the entire flat surface of the wafer 90 relative to other sputter components, resulting in a substantially uniform layer thickness across all points on the flat wafer. The described invention can improve deposition uniformity and/or layer thickness uniformity by an 18% or greater reduction in net deposition rate deviation across the surface relative to conventional non-heavy hydrogen methodology.

Referring again to Figure 3, illustrative embodiments of layer 104 may include silicon oxide, silicon dioxide, phosphorus doped silicon dioxide (PSG), boron doped silicon oxide (BSG), borophosphorus doped silicon oxide (BPSG), or an oxide doped with an alternative dopant such as F, As, Ge or a combination thereof. In some embodiments of the invention, when layer 104 is a doped oxide layer, the heavy hydrogen (D or T) may be contributed to the plasma 16 shown in Figure 1 by providing one or more deuterated or tritiated dopants such as PH3-xRx, B2H6-yRy, AsRqH3-q, C3H9-zRzO3P and C3H9-zRzO3B; wherein R is T, D or a combination thereof; and wherein x = 1 to 3, y = 1 to 6, z = 1 to 9, and q = 1 to 3. In particular applications, one or more deuterated and/or tritiated dopant compounds may be provided in combination with one or more of the diatomic heavy hydrogen gases described above.

HDP-CVD deposition of layer 104 in accordance with the methodology of the present invention can utilize one or more deuterated and/or tritiated precursor compounds to provide one or both of deuterium and tritium in the high density plasma. The heavy hydrogen containing precursor may include one or more of SiRxH4-x, Si2RyH6-y, SiCl2RH, SiCl2R2 and SiO4C8RqH20-q, wherein R is deuterium, tritium or a combination thereof, and wherein x = 1 to 4, y = 1 to 6 and q = 1 to 20 (the SiRxH4-x family is enumerated in the sketch following this passage). The one or more heavy hydrogen containing precursors may be used alone, in combination with a deuterated and/or tritiated sputtering gas in the plasma 16, and/or in combination with one or more deuterated and/or tritiated dopants as described above.

As shown in FIG. 3, deposition of layer 104 utilizing HDP-CVD in a high density plasma in the presence of deuterium and/or tritium over the flat surface 102 can be used to produce layer 104 having a substantially planar surface 106, because the presence of heavy hydrogen provides improved net deposition rate uniformity relative to hydrogen. The method of the present invention may also be advantageously used to form a layer on non-planar surfaces, on surfaces having rugged topography, on substrates having one or more pits, openings or trenches, on substrates having one or more features protruding from the substrate surface, or on various combinations thereof.

An exemplary application in which the method of the invention can be particularly advantageous is the filling of gaps.
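Before turning to gap filling, the following editorial sketch expands the SiRxH4-x precursor family recited above for R = D (the function name is invented; tritiated and mixed D/T variants would enumerate analogously):

# Editorial sketch: enumerate the SiRxH4-x precursor family for R = D, x = 1 to 4,
# i.e., the partially to fully deuterated silanes referred to above.
def si_r_x_h_4_minus_x(r="D"):
    variants = []
    for x in range(1, 5):                       # x = 1 to 4, as recited above
        h = 4 - x                               # remaining ordinary hydrogen atoms
        name = "Si" + (f"{r}{x}" if x > 1 else r) + (f"H{h}" if h > 1 else ("H" if h == 1 else ""))
        variants.append(name)
    return variants

print(si_r_x_h_4_minus_x())   # ['SiDH3', 'SiD2H2', 'SiD3H', 'SiD4']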
Providing deuterium and/or tritium during HDP-CVD can advantageously fill pits, openings and/or trenches to provide filled regions having fewer and/or smaller void regions than would be produced if hydrogen were used in place of the heavy hydrogen. The methodology of the present invention can provide a substantially void-free gap fill, wherein the term "void free" refers to a filled region in which there is no detectable void region.

An exemplary application of the methodology of the present invention is described with reference to Figures 6 and 7. Referring to Figure 6, a wafer or wafer segment 90 is illustrated which includes a substrate material 100 having a trench or opening 120 in the substrate. The opening 120 can be described as a trench feature or a trench structure and can be, for example, an isolation region such as a trench isolation region. The opening 120 can be formed using conventional processing methods.

The opening 120 can be described as having a bottom surface 126 and opposing sidewalls 122 and 124. As shown in FIG. 6, the opening 120 can have sidewalls 122 and 124 that are substantially perpendicular to the bottom surface 126. The opening 120 can be further described as having an aspect ratio defined by the height of the opening relative to the width of the opening or, in the particular embodiment illustrated in Figure 6, the aspect ratio can be the ratio of the length of the sidewalls 122, 124 to the length of the bottom surface 126. Alternatively, the opening 120 can have a non-planar bottom surface, can have a non-rectangular shape and/or can include sidewalls that are not substantially perpendicular to the substrate surface (not shown).

Referring to FIG. 7, a material layer 104 may be deposited over the substrate 100 and the surface 102 to fill the opening 120. Alternatively, the formation of gap fill 104 can be performed such that a net deposition of material 104 occurs within opening 120 without significant net deposition on surface 102 (not shown). The deposition of material 104 may include HDP-CVD using a high density plasma containing a sputter component comprising deuterium and/or tritium. In particular applications, deuterium and/or tritium may be provided to the sputter component of the high density plasma by providing D2, T2 or any diatomic combination of H, T and D as described above.

The methodology of the present invention also includes forming layer 104 using any of the deuterated or tritiated precursor compounds described above. These precursors can be used alone, in combination with other precursors, or in combination with a supply of heavy hydrogen gas. In applications where layer 104 includes a doped material, such as a doped oxide, deuterated and/or tritiated dopants may be provided to the high density plasma. Suitable deuterated and/or tritiated dopants for forming layer 104 are as described above.

The use of one or more deuterated and/or tritiated compounds can provide improved gap fill quality for high density plasma deposition of layer 104 in gap filling applications relative to conventional gap filling techniques. With the methodology of the present invention, the opening 120 can be filled to contain fewer voids and/or a reduced void area relative to deposition conditions that are otherwise substantially identical but use only the 1H form of the compounds, wherein the term "substantially identical" refers to identical within the tolerances of process control.
Thus, the methodology of the present invention can be particularly useful for gap fill applications having a high aspect ratio (greater than about 3:1) or a very high aspect ratio (greater than about 5:1). A trench having an aspect ratio of up to about 8:1 can be filled substantially without voids using the methodology of the present invention. Furthermore, the methodology can be used in particular to fill openings having submicron widths. Openings having widths as narrow as a few nanometers can also be effectively filled, substantially without voids, using the methodology of the present invention. However, it should be understood that the present invention also contemplates utilizing the methodology to fill pits and low aspect ratio openings.

Another application of the methodology of the present invention is described with reference to Figures 8-9. Referring to FIG. 8, wafer segment 90 can be provided having an opening 130 over substrate 100 between elements 132 and 134. The opening 130 can be described as a trench feature or a trench structure. Elements 132 and 134 are not limited to any particular structural feature and may be, for example, conductive lines. Elements 132 and 134 can be formed using conventional processing methods. The opening 130 can include a bottom surface 140 and sidewalls 136, 138, and can have an aspect ratio with a value as described above with reference to the opening 120 (FIG. 6).

As shown in FIG. 9, a layer of material 104 may be deposited over the substrate 100 to substantially fill the opening 130. As is known to those skilled in the art, layer 104 can be formed across the wafer to have a substantially planar surface 106 by suitably adjusting the bias power during HDP-CVD in a particular application. Using the methodology of the present invention, material 104 can be deposited by providing deuterium and/or tritium in the high density plasma during CVD to fill gap 130 such that it is substantially free of voids. Any of the heavy hydrogen precursors, heavy hydrogen dopants, diatomic heavy hydrogen gases, or mixtures of the foregoing can be used to provide deuterium and/or tritium to the high density plasma.

When a Novellus SPEED(R) HDP-CVD reactor is used to practice the methodology of the present invention, suitable deposition conditions can include providing diatomic hydrogen and/or its heavy hydrogen forms at from about 100 standard cubic centimeters per minute (sccm) to about 2000 sccm. An oxidant comprising O2 and/or O3 can be supplied to the reactor at from about 20 sccm to about 200 sccm. One or more precursor compounds or heavy hydrogen substituted precursor compounds may be provided in a total amount of from about 10 sccm to about 100 sccm. A suitable deposition temperature is from about 400°C to about 800°C, at a pressure of from about 2 mTorr to about 50 mTorr.

It should be understood that the invention includes the use of alternative inductively coupled reactors, ECR reactors or capacitively coupled reactors. The use of these alternative reactors can include the use of alternative conditions and parameters, depending upon the particular reactor and the specific compounds utilized during deposition. |
Methods, apparatus, systems and articles of manufacture to detect spoofing attacks for video-based authentication are disclosed. Disclosed example methods to perform video-based authentication include determining whether a sequence of input images provided to perform video-based authentication of a subject exhibits a first region having fluctuating pixel values. Such example methods also include determining that the sequence of input images is associated with a spoofing attack in response to determining that the sequence of input images exhibits the first region having fluctuating pixel values. |
What Is Claimed Is:

1. A method to perform video-based authentication, the method comprising: determining, with a processor, whether a sequence of input images provided to perform video-based authentication of a subject exhibits a first region having fluctuating pixel values; and determining, with the processor, that the sequence of input images is associated with a spoofing attack in response to determining that the sequence of input images exhibits the first region having fluctuating pixel values.

2. A method as defined in claim 1, wherein the sequence of input images is a second sequence of input images, and further comprising randomly sampling a first sequence of input images to obtain the second sequence of input images.

3. A method as defined in claim 2, wherein randomly sampling the first sequence of input images comprises: capturing the first sequence of input images at a first rate higher than a second rate; grouping the first sequence of input images into successive groups of input images containing respective numbers of images based on a relationship between the first rate and the second rate; and randomly selecting one input image from respective ones of the successive groups of input images to form the second sequence of input images.

4. A method as defined in any one of claims 1 to 3, wherein determining whether the sequence of input images exhibits the first region having fluctuating pixel values comprises: determining differences between successive pairs of images in the sequence of input images to determine a sequence of difference images; and processing the sequence of difference images to determine whether the sequence of input images exhibits the first region having fluctuating pixel values.

5. A method as defined in claim 4, wherein processing the sequence of difference images comprises: processing successive pairs of difference images in the sequence of difference images to identify fluctuating pixels, the fluctuating pixels being pixels that fluctuate between at least two values across three successive images in the sequence of input images; determining a number of pixels included in a first group of neighboring fluctuating pixels; and if the number of pixels satisfies a threshold, determining that the sequence of input images exhibits the first region having fluctuating pixel values, the first region corresponding to the first group of neighboring fluctuating pixels.

6. A method as defined in any one of claims 1 to 3, further comprising triggering operation of a further access control procedure to authenticate the subject based on the sequence of input images in response to determining that the sequence of input images does not exhibit any region having fluctuating pixel values.

7. A method as defined in any one of claims 1 to 3, further comprising determining that the sequence of input images depicts a scene including content generated by a video display in response to determining that the sequence of input images exhibits the first region having fluctuating pixel values.

8. A tangible machine readable storage medium comprising machine readable instructions which, when executed, cause a machine to at least: determine whether a sequence of input images provided to perform video-based authentication of a subject exhibits a first region having fluctuating pixel values; and determine that the sequence of input images is associated with a spoofing attack in response to determining that the sequence of input images exhibits the first region having fluctuating pixel values.

9.
A storage medium as defined in claim 8, wherein the sequence of input images is a second sequence of input images, and the machine readable instructions, when executed, further cause the machine to randomly sample a first sequence of input images to obtain the second sequence of input images.

10. A storage medium as defined in claim 9, wherein to randomly sample the first sequence of input images, the machine readable instructions, when executed, further cause the machine to: capture the first sequence of input images at a first rate higher than a second rate; group the first sequence of input images into successive groups of input images containing respective numbers of images based on a relationship between the first rate and the second rate; and randomly select one input image from respective ones of the successive groups of input images to form the second sequence of input images.

11. A storage medium as defined in any one of claims 8 to 10, wherein to determine whether the sequence of input images exhibits the first region having fluctuating pixel values, the machine readable instructions, when executed, further cause the machine to: determine differences between successive pairs of images in the sequence of input images to determine a sequence of difference images; and process the sequence of difference images to determine whether the sequence of input images exhibits the first region having fluctuating pixel values.

12. A storage medium as defined in claim 11, wherein to process the sequence of difference images, the machine readable instructions, when executed, further cause the machine to: process successive pairs of difference images in the sequence of difference images to identify fluctuating pixels, the fluctuating pixels being pixels that fluctuate between at least two values across three successive images in the sequence of input images; determine a number of pixels included in a first group of neighboring fluctuating pixels; and if the number of pixels satisfies a threshold, determine that the sequence of input images exhibits the first region having fluctuating pixel values, the first region corresponding to the first group of neighboring fluctuating pixels.

13. A storage medium as defined in any one of claims 8 to 10, wherein the machine readable instructions, when executed, further cause the machine to trigger operation of a further access control procedure to authenticate the subject based on the sequence of input images in response to determining that the sequence of input images does not exhibit any region having fluctuating pixel values.

14. A storage medium as defined in any one of claims 8 to 10, wherein the machine readable instructions, when executed, further cause the machine to determine that the sequence of input images depicts a scene including content generated by a video display in response to determining that the sequence of input images exhibits the first region having fluctuating pixel values.

15. An apparatus to perform video-based authentication, the apparatus comprising: a fluctuating pixel detector to determine whether a sequence of input images provided to perform video-based authentication of a subject exhibits a first region having fluctuating pixel values; and a video sequence validator to determine that the sequence of input images is associated with a spoofing attack in response to determining that the sequence of input images exhibits the first region having fluctuating pixel values.

16.
An apparatus as defined in claim 15, wherein the sequence of input images is a second sequence of input images, and further comprising an image capturer to randomly sample a first sequence of input images to obtain the second sequence of input images.

17. An apparatus as defined in claim 16, wherein the image capturer is to randomly sample the first sequence of input images by: capturing the first sequence of input images at a first rate higher than a second rate; grouping the first sequence of input images into successive groups of input images containing respective numbers of images based on a relationship between the first rate and the second rate; and randomly selecting one input image from respective ones of the successive groups of input images to form the second sequence of input images.

18. An apparatus as defined in any one of claims 15 to 17, wherein the fluctuating pixel detector is further to: determine differences between successive pairs of images in the sequence of input images to determine a sequence of difference images; and process the sequence of difference images to determine whether the sequence of input images exhibits the first region having fluctuating pixel values.

19. An apparatus as defined in claim 18, wherein the fluctuating pixel detector is to process the sequence of difference images by: processing successive pairs of difference images in the sequence of difference images to identify fluctuating pixels, the fluctuating pixels being pixels that fluctuate between at least two values across three successive images in the sequence of input images; determining a number of pixels included in a first group of neighboring fluctuating pixels; and if the number of pixels satisfies a threshold, determining that the sequence of input images exhibits the first region having fluctuating pixel values, the first region corresponding to the first group of neighboring fluctuating pixels.

20. An apparatus as defined in any one of claims 15 to 17, wherein the video sequence validator is further to trigger operation of a further access control procedure to authenticate the subject based on the sequence of input images in response to determining that the sequence of input images does not exhibit any region having fluctuating pixel values.

21. An apparatus as defined in any one of claims 15 to 17, wherein the video sequence validator is further to determine that the sequence of input images depicts a scene including content generated by a video display in response to determining that the sequence of input images exhibits the first region having fluctuating pixel values.

22. A system to perform video-based authentication, the system comprising: means for determining whether a sequence of input images provided to perform video-based authentication of a subject exhibits a first region having fluctuating pixel values; and means for determining that the sequence of input images is associated with a spoofing attack in response to determining that the sequence of input images exhibits the first region having fluctuating pixel values.

23. A system as defined in claim 22, wherein the sequence of input images is a second sequence of input images, and further comprising means for randomly sampling a first sequence of input images to obtain the second sequence of input images.

24.
A system as defined in any one of claims 22 or 23, wherein the means for determining whether the sequence of input images exhibits the first region having fluctuating pixel values comprises: means for determining differences between successive pairs of images in the sequence of input images to determine a sequence of difference images; and means for processing the sequence of difference images to determine whether the sequence of input images exhibits the first region having fluctuating pixel values.

25. A system as defined in any one of claims 22 or 23, further comprising means for triggering operation of a further access control procedure to authenticate the subject based on the sequence of input images in response to determining that the sequence of input images does not exhibit any region having fluctuating pixel values. |
DETECTION OF SPOOFING ATTACKS FOR VIDEO-BASED AUTHENTICATION

FIELD OF THE DISCLOSURE

[0001] This disclosure relates generally to user authentication and, more particularly, to detection of spoofing attacks for video-based authentication.

BACKGROUND

[0002] Visual authentication utilizes one or more image recognition procedures, such as a facial recognition procedure, to authenticate subjects, such as employees, authorized users, etc., using images captured by a camera or other optical sensor. Visual authentication based on single images is susceptible to spoofing using photos of the subject, and/or using still images of the subject displayed by a media device. More sophisticated video-based authentication techniques, which check for motion in addition to performing image recognition, are also susceptible to spoofing using videos of the subject displayed by a media device.

BRIEF DESCRIPTION OF THE DRAWINGS

[0003] FIG. 1 is a block diagram of an example system including an example visual authentication verifier to detect spoofing attacks for video-based authentication as disclosed herein.

[0004] FIGS. 2A-B collectively illustrate example operation of the example system of FIG. 1 to detect spoofing attacks for video-based authentication.

[0005] FIG. 3 is a block diagram of an example image capturer that may be used to implement the example visual authentication verifier of FIG. 1.

[0006] FIG. 4 is a block diagram of an example fluctuating pixel detector that may be used to implement the example visual authentication verifier of FIG. 1.

[0007] FIG. 5 is a flowchart representative of example machine readable instructions that may be executed to implement the example visual authentication verifier of FIG. 1.

[0008] FIG. 6 is a flowchart representative of example machine readable instructions that may be executed to implement the example image capturer of FIG. 3.

[0009] FIG. 7 is a flowchart representative of example machine readable instructions that may be executed to implement the example fluctuating pixel detector of FIG. 4.

[0010] FIG. 8 is a block diagram of an example processor platform structured to execute the example machine readable instructions of FIG. 5 to implement the example visual authentication verifier of FIG. 1.

[0011] FIG. 9 is a block diagram of an example processor platform structured to execute the example machine readable instructions of FIG. 6 to implement the example image capturer of FIG. 3.

[0012] FIG. 10 is a block diagram of an example processor platform structured to execute the example machine readable instructions of FIG. 7 to implement the example fluctuating pixel detector of FIG. 4.

[0013] FIG. 11 depicts a sequence of three example images including an example region of fluctuating pixels.

[0014] The material disclosed herein is illustrated by way of example and not by way of limitation in the accompanying figures. For simplicity and clarity of illustration, elements illustrated in the figures are not necessarily drawn to scale. For example, the dimensions of some elements may be exaggerated relative to other elements for clarity. Furthermore, wherever possible, the same reference numbers will be used throughout the drawing(s) and accompanying written description to refer to the same or like parts, elements, etc.

DETAILED DESCRIPTION

[0015] Methods, apparatus, systems and articles of manufacture (e.g., physical storage media) to detect spoofing attacks for video-based authentication are disclosed herein.
Some example methods to perform video-based authentication disclosed herein include determining whether a sequence of input images provided to perform video-based authentication of a subject exhibits a first region having fluctuating pixel values. Some such example methods also include determining that the sequence of input images is associated with a spoofing attack in response to determining that the sequence of input images exhibits the first region having fluctuating pixel values.

[0016] In some disclosed example methods, the sequence of input images provided to perform video-based authentication of the subject is a second sequence of input images, and the example methods further include randomly sampling a first sequence of input images to obtain the second sequence of input images. In some such example methods, randomly sampling the first sequence of input images includes capturing the first sequence of input images at a first rate (e.g., an input sampling rate or input frame rate) higher than a second rate (e.g., a desired frame rate). Some such example methods also include grouping the first sequence of input images into successive groups of input images containing respective numbers of images based on a relationship (e.g., a ratio) between the first rate and the second rate. Some such example methods further include randomly selecting one input image from respective ones of the successive groups of input images to form the second sequence of input images.

[0017] Additionally or alternatively, in some disclosed example methods, determining whether the sequence of input images exhibits the first region having fluctuating pixel values includes determining differences between successive pairs of images in the sequence of input images to determine a sequence of difference images. Some such example methods also include processing the sequence of difference images to determine whether the sequence of input images exhibits the first region having fluctuating pixel values. In some such example methods, processing the sequence of difference images includes processing successive pairs of difference images in the sequence of difference images to identify fluctuating pixels. For example, fluctuating pixels are pixels that fluctuate between at least two values across three successive images in the sequence of input images. Some such example methods also include determining a number of pixels included in a first group of neighboring fluctuating pixels.
Some such example methods further include, if the number of pixels satisfies a threshold, determining that the sequence of input images exhibits the first region having fluctuating pixel values, with the first region corresponding to the first group of neighboring fluctuating pixels.

[0018] Additionally or alternatively, some disclosed example methods further include triggering operation of a further access control procedure to authenticate the subject based on the sequence of input images in response to determining that the sequence of input images does not exhibit any region having fluctuating pixel values.

[0019] Additionally or alternatively, some disclosed example methods further include determining that the sequence of input images depicts a scene including content generated by a video display in response to determining that the sequence of input images exhibits the first region having fluctuating pixel values.

[0020] These and other example methods, apparatus, systems and articles of manufacture (e.g., physical storage media) to detect spoofing attacks for video-based authentication are disclosed in further detail below.

[0021] As mentioned above, visual authentication techniques that process a single image to authenticate a subject are susceptible to spoofing using photos of the subject, and/or using still images of the subject displayed by a media device. This is because image recognition techniques, such as facial recognition techniques, employed by image-based authentication systems may be unable to distinguish between an image of the real subject captured with a camera and an image of the subject's photo or other still image of the subject captured by the camera. More sophisticated video-based authentication techniques check for motion as a way to distinguish between video taken of the real subject, which will exhibit motion characteristics, and video taken of the subject's photo or other still image of the subject, which will not have any motion. However, even these more sophisticated video-based authentication techniques are susceptible to the technical problem of being spoofed using videos of the subject, rather than a still image, displayed by a media device.

[0022] Examples of video-based authentication with spoofing detection disclosed herein provide technical solutions to such technical problems. Examples of video-based authentication with spoofing detection disclosed herein detect that a sequence of input images (e.g., obtained from a camera or other optical sensor) provided for video-based authentication is associated with a spoofing attack using a video of a subject, rather than a live subject, by determining that the sequence of input images exhibits characteristics of a video presentation rather than of video captured of a live subject. As disclosed in further detail below, in some such examples of video-based authentication with spoofing detection, the sequence of input images is analyzed to determine if it contains one or more regions having pixel fluctuations. Because such regions of pixel fluctuation are likely to be associated with the refreshing of a video display and/or the backlight flashing/scanning of the video display, and are unlikely to occur in a captured video of a live subject, detection of one or more regions having pixel fluctuations is indicative of a sequence of input images associated with a video presentation by a media device and, thus, associated with a spoofing attack.
However, if no regions of pixel fluctuation are detected in a sequence of input images, then it is unlikely the sequence is associated with a spoofing attack and, instead, it is likely the sequence of input images is associated with a video of a live subject. Thus, detection of region(s) of pixel fluctuation in a sequence of input images can determine whether the sequence of images is associated with a video presentation or with a live subject and, thus, can solve the problem of determining whether the sequence of input images is the result of a spoofing attack or a true attempt to authenticate a subject.

[0023] Furthermore, in some video-based authentication with spoofing detection examples disclosed herein, the sequence of input images to be analyzed is obtained by randomly sampling the input images captured by a camera or other sensing device positioned to view the subject being authenticated. Such random sampling can increase the likelihood that the sequence of input images will capture occurrences of a video display being refreshed and/or undergoing backlight flashing/scanning, etc. As such, obtaining the sequence of input images via random sampling can increase the accuracy of detecting whether the sequence of input images is the result of a spoofing attack or a true attempt to authenticate a subject, thereby providing a further technical benefit not available in prior video-based authentication systems.

[0024] Turning to the figures, a block diagram of an example video-based authentication system 100 including an example visual authentication verifier 105 capable of detecting spoofing attacks as disclosed herein is illustrated in FIG. 1. In the illustrated example of FIG. 1, the visual authentication verifier 105 verifies whether input video (e.g., a sequence of input images) to be used to authenticate a subject is valid or, in other words, is a true, real-time (e.g., live) depiction of the subject, or is associated with a spoofing attack (e.g., is a copy of the subject generated by a video display of a media device) and, thus, invalid. If the visual authentication verifier 105 determines the input video is valid, the visual authentication verifier 105 triggers an example access controller 110 to process the input video to authenticate the subject. However, if the visual authentication verifier 105 determines the input video is invalid (e.g., is associated with a spoofing attack), the visual authentication verifier 105 indicates that the subject is not authentic (e.g., authentication is automatically unsuccessful) and, in some examples, prevents the access controller 110 from processing the input video.

[0025] The example visual authentication verifier 105 of FIG. 1 includes an example image capturer 115 to obtain a sequence of input images from video captured by an example camera 120 in communication with the image capturer 115. As used herein, the phrase "in communication," including variances thereof, encompasses direct communication and/or indirect communication through one or more intermediary components and does not require direct physical (e.g., wired) communication and/or constant communication, but rather additionally includes selective communication at periodic or aperiodic intervals, as well as one-time events. The example camera 120 may be implemented by any number and/or type(s) of cameras, optical sensors, etc. As disclosed in further detail below in connection with FIGS. 2A-B,
the example camera 120 of the example video-based authentication system 100 is positioned to capture video of an area in which a subject to be authenticated is expected to reside. In some examples, an example actuator 125, such as a switch, a motion sensor, an infrared sensor, etc., may trigger the camera 120 to begin capturing video when a subject, such as a person (e.g., an employee, a user, etc.), an animal, an object, etc., enters the area covered by the camera 120.

[0026] The example image capturer 115 of FIG. 1 samples images from the video sequence output from the camera 120 to obtain a sequence of input images to be used to authenticate the subject positioned in front of the camera 120. In some examples, the image capturer 115 randomly samples the images from the video sequence output by the camera 120 to enhance the ability of the visual authentication verifier 105 to determine whether the sequence of input images is valid or associated with a spoofing attack. An example implementation of the image capturer 115 is illustrated in FIG. 3, which is described in further detail below.

[0027] The example visual authentication verifier 105 of FIG. 1 also includes an example fluctuating pixel detector 130 to detect one or more regions of fluctuating pixels, if present, in the sequence of input images obtained by the image capturer 115. Regions of fluctuating pixels, also referred to as blinking pixels, oscillating pixels, etc., are characteristic of a video sequence obtained by capturing (e.g., with the camera 120) a video display of a media device (e.g., a tablet computer, a smartphone, a notebook computer, a video camera, etc.) as the display is being refreshed, undergoing backlight flashing/scanning, etc. In contrast, a video sequence obtained by capturing (e.g., with the camera 120) a true subject in real-time may exhibit changes in the pixels as the subject moves, but does not usually exhibit the region(s) of fluctuating pixels associated with video captured of a video display.

[0028] For example, if a sequence of input images corresponds to video captured of the video display of a media device, an input image in the sequence may depict the video display as the display is being refreshed and/or undergoing backlight flashing/scanning. As the display is being refreshed, region(s) of such an image corresponding to portion(s) of the display that have not yet been refreshed may have lower intensity than the same region(s) in preceding and/or subsequent images in the sequence of input images. Similarly, when the display is undergoing backlight flashing or scanning (e.g., to reduce perceived motion blur), region(s) of the image corresponding to portion(s) of the display in which the backlight was turned off may be darker than the same region(s) in preceding and/or subsequent images in the sequence of input images. When successive images in the captured video sequence, which include the image depicting the video display being refreshed and/or undergoing backlight flashing or scanning, are examined, the region(s) of the image corresponding to the portion(s) of the display that have not yet been refreshed and/or in which the backlight was turned off will tend to include pixels having fluctuating (or blinking, flashing, etc.) values across the successive images.
The example fluctuating pixel detector 130 operates to detect such region(s) of fluctuating pixels which, if detected, indicate that the sequence of input images is associated with a spoofing attack because the images correspond to video captured of a video display and not of a live subject.

[0029] As mentioned above, in some examples, the image capturer 115 obtains the sequence of input images to be processed by randomly sampling the images from the video sequence output from the camera 120. Such random sampling can enhance the ability of the fluctuating pixel detector 130 to detect region(s) of fluctuating pixels in the sequence of input images. For example, without random sampling, it is possible that the frame rate of the sequence of input images obtained from the camera 120 could align with the frame rate of a video display being used to perform a spoofing attack against the video-based authentication system 100. If such alignment occurs, the sequence of input images may not capture the video display as it is being refreshed and/or undergoing backlight flashing/scanning. Random sampling of the video sequence output from the camera 120 reduces the likelihood that such an alignment will occur because, even if the frame rate of the sequence of input images and the frame rate of the captured video display are the same, the sequence of input images obtained by random sampling will vary (e.g., jitter) around the frame rate of the video display.

[0030] In the illustrated example of FIG. 1, the fluctuating pixel detector 130 detects region(s) of fluctuating pixels in the sequence of input images obtained by the image capturer 115 by determining differences between successive pairs of images in the sequence of input images to determine a sequence of difference images. The example fluctuating pixel detector 130 then processes the sequence of difference images to determine whether the sequence of input images exhibits one or more regions having fluctuating pixel values. Examples of such processing are disclosed in further detail below in connection with the description of FIG. 4, which illustrates an example implementation of the fluctuating pixel detector 130.

[0031] The example visual authentication verifier 105 of FIG. 1 further includes an example video sequence validator 135 to indicate whether the video sequence output from the camera 120 is valid for use in authenticating a subject. For example, if the fluctuating pixel detector 130 detects one or more regions of fluctuating pixels in the sequence of images obtained by the image capturer 115, the video sequence validator 135 determines that the sequence of input images depicts a scene including content generated by a video display of a media device and, thus, is associated with a spoofing attack. In some such examples, the video sequence validator 135 further prevents the access controller 110 from performing authentication using the video sequence from the camera 120 and, instead, automatically indicates that authentication of the purported subject failed or was unsuccessful.

[0032] However, if the fluctuating pixel detector 130 does not detect any regions of fluctuating pixels in the sequence of images obtained by the image capturer 115, the video sequence validator 135 determines that the sequence of input images depicts a scene including a true subject to be authenticated.
In some such examples, the video sequence validator 135 further triggers the access controller 110 to perform one or more access control procedures to authenticate the subject using the sequence of input images obtained from the image capturer 115, and/or using the video sequence output from the camera 120.

[0033] In the illustrated example of FIG. 1, the access controller 110 implements any number and/or type(s) of access control procedures capable of authenticating a subject from a video sequence. For example, the access controller 110 may implement one or more image recognition algorithms, such as a facial recognition algorithm, a target recognition algorithm, a feature identification algorithm, etc., to determine whether the input video sequence depicts a particular (e.g., previously-identified) subject. In some examples, if the access controller 110 determines that the video sequence depicts a particular (e.g., previously-identified) subject, the access controller 110 indicates that authentication of the subject was successful and, in some examples, displays an identity of the subject. In some examples, the access controller 110 then permits the subject to access the system, area, etc., protected by the video-based authentication system 100. However, if the access controller 110 is unable to authenticate the subject using the video sequence, in some examples the access controller 110 indicates that authentication of the subject was unsuccessful and prevents the subject from accessing the system, area, etc., protected by the video-based authentication system 100.

[0034] Although the example visual authentication verifier 105 has been described in the context of the example video-based authentication system 100, the example visual authentication verifier 105 is not limited thereto. For example, the visual authentication verifier 105 can be employed in any environment of use in which determining whether a captured video sequence depicts a scene including content generated by a video display of a media device would be beneficial.

[0035] Example operation of the example video-based authentication system 100 of FIG. 1 to detect spoofing attacks for video-based authentication is illustrated in FIGS. 2A-B. In the illustrated example of FIGS. 2A-B, the camera 120 of the example video-based authentication system 100 is positioned to capture an area in which subjects to be authenticated are expected to reside. For example, the camera 120 may be positioned to capture an area in front of a doorway subject to access control, an area in front of a computer terminal, etc. In the illustrated example of FIG. 2A, the camera 120 of the video-based authentication system 100 captures a video sequence depicting an example subject 205 in the monitored area. Because the video sequence captured by the camera 120 is of a real subject, the example visual authentication verifier 105 of the video-based authentication system 100 does not detect any regions of fluctuating pixels in a sequence of input images obtained from the video sequence captured by the camera 120. Accordingly, the visual authentication verifier 105 determines that the video sequence is valid and triggers the example access controller 110 to perform video-based authentication using the sequence of input images and/or the original video sequence output from the camera 120 (which is represented by the word "OK" in FIG. 2A).

[0036] In the illustrated example of FIG. 2B,
2B, the camera 120 of the video-based authentication system 100 captures a video sequence depicting content generated by a video display in the monitored area, such as a video display of an example tablet computer 210, or of an example smartphone 215, etc. Because the video sequence captured by the camera 120 depicts content generated by a video display, the example visual authentication verifier 105 of the video-based authentication system 100 detects one or more regions of fluctuating pixels in the sequence of input images obtained from the video sequence captured by the camera 120. Accordingly, the visual authentication verifier 105 determines that the video sequence is associated with a spoofing attack and prevents the example access controller 110 from performing video-based authentication using the sequence of input images and/or the original video sequence output from the camera 120 (which is represented by an "X" in FIG. 2B).[0037] A block diagram of an example implementation of the image capturer 115 of FIG. 1 is illustrated in FIG. 3. The example image capturer 115 of FIG. 3 includes an example image grabber 305 to grab or, in other words, capture a first sequence of input images from a video output of the example camera 120. The example image capturer 115 of FIG. 3 also includes an example image selector 310 to randomly select images from the first sequence of input images captured by the image grabber 305 to form a second sequence of input images. In the illustrated example, the second sequence of input images is the sequence of input images to be used by the visual authentication verifier 105 to determine whether the video captured by the camera is valid or is associated with a spoofing attack.[0038] In the illustrated example of FIG. 3, the image grabber 305 captures the first sequence of input images from the camera video output at a sampling frame rate (e.g., a first rate) that is higher than the desired frame rate (e.g., a second rate) of the second sequence of input images to be used by the visual authentication verifier 105. For example, the second (e.g., desired) rate may be S2 frames per second, where S2 is 15, 30, etc., or some other value. In such examples, the image grabber 305 captures the first sequence of input images from the camera video output at a first (e.g., sampling) rate of S1 > S2. For example, S1 may be a multiple, M, of S2 such that S1 = M * S2, where M = 2, 5, 10, 30, etc., or some other integer or non-integer value.[0039] In the illustrated example of FIG. 3, the image selector 310 randomly selects images from the first sequence of input images captured by the image grabber 305 by grouping the images in the first sequence into successive groups of input images containing respective numbers of images determined by the relationship (e.g., a ratio) between the first (e.g., sampling) rate and the second (e.g., desired) rate. For example, if the second (e.g., desired) rate is S2 frames per second, and the first (e.g., sampling) rate is S1 = M * S2, then the image selector 310 groups the images in the first sequence into successive groups of input images each containing S1 / S2 = M images. The image selector 310 of the illustrated example then downsamples (e.g., decimates) the first sequence of input images by randomly selecting one input image from each one of the successive groups of input images to form the second sequence of input images.
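For illustration only, the grouped random downsampling performed by the image selector 310 might be sketched as follows; the helper name downsample_randomly, the list-of-frames representation, and the use of NumPy's random generator are assumptions for this sketch, not details taken from the source:

```python
import numpy as np

def downsample_randomly(frames, sampling_rate, desired_rate, seed=None):
    """Randomly keep one frame per group of M = sampling_rate / desired_rate frames.

    Mirrors the described image selector 310: the first sequence, captured
    at the higher sampling rate S1, is split into successive groups of M
    frames, and one frame is drawn at random from each group, so the output
    jitters around the desired rate S2 instead of locking to the refresh
    rate of a displayed video.
    """
    m = int(sampling_rate // desired_rate)      # images per group, M = S1 / S2
    rng = np.random.default_rng(seed)
    second_sequence = []
    for start in range(0, len(frames) - m + 1, m):
        offset = rng.integers(0, m)             # random index within the group
        second_sequence.append(frames[start + offset])
    return second_sequence
```

For instance, with S1 = 150 frames per second and S2 = 30, each group holds M = 5 images and one of the five is kept at random.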
In some examples, the image selector 310 utilizes a random number generator to select one input image from a respective group of input images. In some examples, the image selector 310 utilizes a pre-defined selection pattern, a round-robin technique, etc., to select one input image from a respective group of input images.[0040] A block diagram of an example implementation of the fluctuating pixel detector 130 of FIG. 1 is illustrated in FIG. 4. The example fluctuating pixel detector 130 of FIG. 4 includes an example image comparator 405 to determine differences between successive pairs of images in a sequence of input images provided by the example image capturer 115 to determine a sequence of difference images. For example, let the sequence of input images be represented by In(x,y), where In denotes the image at frame index n, and (x,y) ranges over 0 ≤ x ≤ X-1 and 0 ≤ y ≤ Y-1 to index the pixels in the frame. Then, the image comparator 405 determines the sequence of difference images as Dn(x,y) = In(x,y) - In-1(x,y), which corresponds to the pixel-wise difference between the pair of successive input images In(x,y) and In-1(x,y).[0041] The example fluctuating pixel detector 130 of FIG. 4 also includes an example candidate region identifier 410 to process the sequence of difference images determined by the example image comparator 405 to identify candidate regions of fluctuating pixels for further evaluation. In the illustrated example, the candidate region identifier 410 processes successive pairs of difference images in the sequence of difference images to identify fluctuating pixels. For example, the candidate region identifier 410 may determine that a pixel is a fluctuating pixel if (1) a first difference image indicates that the value (e.g., luminance, chrominance, etc., or any combination thereof) of the pixel changed by at least a first threshold amount, and (2) a combination of the first difference image and a subsequent second difference image (which may be the next subsequent difference image or a later difference image in the sequence) indicates that the value of the pixel returns back to within a second threshold amount of the original pixel value. The second threshold amount may be the same or different from (e.g., lower than) the first threshold amount.[0042] For example, if Dn(x,y) = In(x,y) - In-1(x,y) represents the difference image at frame time n, then any pixel (x,y) in the difference image having a value satisfying (e.g., meeting or exceeding) the first threshold amount is determined to satisfy the first condition described above. Adding this difference image to the next difference image in the sequence, namely, Dn+1(x,y) = In+1(x,y) - In(x,y), yields Dn(x,y) + Dn+1(x,y) = In(x,y) - In-1(x,y) + In+1(x,y) - In(x,y) = In+1(x,y) - In-1(x,y), which corresponds to the difference between the original image at frame time n-1 and the image at frame time n+1. For each pixel (x,y) that satisfied the first condition in the difference image Dn(x,y), the candidate region identifier 410 examines that same pixel location in the image Dn(x,y) + Dn+1(x,y) to determine if the value at that pixel location returned to be within the second threshold amount of the original value.
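As a hedged sketch of the two-part test just described (the function name and the scaling of the thresholds by the original pixel value v(x,y) are illustrative assumptions; the source does not prescribe an implementation):

```python
import numpy as np

def fluctuating_pixel_mask(img_prev, img_cur, img_next, t1=0.5, t2=0.2):
    """Flag pixels that change by at least a first threshold and then return.

    img_prev, img_cur and img_next play the roles of In-1(x,y), In(x,y)
    and In+1(x,y). The thresholds scale the original pixel values v(x,y),
    taken here from img_prev, with t1 >= t2, matching conditions (1) and (2).
    """
    v = img_prev.astype(float)                  # original values v(x, y)
    d_n = img_cur.astype(float) - v             # Dn = In - In-1
    d_sum = img_next.astype(float) - v          # Dn + Dn+1 = In+1 - In-1
    changed = np.abs(d_n) >= t1 * v             # condition (1): large change
    returned = np.abs(d_sum) <= t2 * v          # condition (2): value returns
    return changed & returned                   # fluctuating over frames n-1..n+1
```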
If the candidate region identifier 410 determines that the value of an examined pixel location has returned to be within the second threshold amount, then the candidate region identifier 410 determines that the pixel at the examined location fluctuated between at least two values (e.g., its original value in image In-1 and its subsequent value in image In) over the three images In-1(x,y), In(x,y) and In+1(x,y) and, thus, that pixel was a fluctuating pixel during frame times n-1 to n+1.[0043] Expressed mathematically, if the value of the pixel (x,y) is v(x,y) in the image In-1(x,y), then in some examples, the candidate region identifier 410 determines that the pixel (x,y) is a fluctuating pixel if the following two equations, Equation 1 and Equation 2, are satisfied:

|Dn(x, y)| = |In(x, y) - In-1(x, y)| ≥ T1 · v(x, y) (Equation 1)

and

|Dn(x, y) + Dn+1(x, y)| = |In+1(x, y) - In-1(x, y)| ≤ T2 · v(x, y) (Equation 2)

where T1 · v(x, y) is a first threshold larger than or equal to the second threshold T2 · v(x, y).[0044] In some examples, the candidate region identifier 410 employs additional or alternative techniques to identify fluctuating pixels. For example, the candidate region identifier 410 may perform a two-dimensional Fourier transform, or other transform, on the sequence of difference images determined by the image comparator 405, and/or on the sequence of input images, to identify spectral peaks indicative of oscillation, or fluctuation, of pixels at locations in the image space.[0045] In the illustrated example of FIG. 4, for a given frame time, n, the candidate region identifier 410 further groups neighboring pixels determined to be fluctuating pixels into groups of fluctuating pixels. Neighboring fluctuating pixels may include fluctuating pixels that are adjacent to each other and/or are within a threshold distance (e.g., number of pixels) from each other. Then, the candidate region identifier 410 determines the number of pixels included in each group of neighboring fluctuating pixels. [0046] The example fluctuating pixel detector 130 of FIG. 4 further includes an example fluctuation evaluator 415 to evaluate the groups of fluctuating pixels identified by the candidate region identifier 410 to determine whether any of the groups corresponds to a region of fluctuating pixels characteristic of the captured video sequence including content generated by a video display of a media device. In the illustrated example, the fluctuation evaluator 415 compares the respective numbers of pixels included in the groups of neighboring fluctuating pixels to a threshold number (e.g., which may be a percentage of the total number of pixels in an image, such as 5%, 10%, etc., or may be some other value). If any of the groups of neighboring fluctuating pixels contains a number of pixels that satisfies (e.g., meets or exceeds) the threshold number, the fluctuation evaluator 415 determines that the sequence of input images (e.g., at that frame time) includes regions of fluctuating pixels. Additionally, in some examples, the fluctuation evaluator 415 identifies the region(s) of fluctuating pixels as corresponding to the location(s) of the group(s) of neighboring fluctuating pixels satisfying the threshold number of pixels.[0047] FIG. 11 depicts a sequence of three example images 1105, 1110 and 1115, which include an example region 1120 of fluctuating pixels that could be detected by the example fluctuating pixel detector 130. In the illustrated example of FIG.
11, the example image 1105 corresponds to the image In-1(x,y), the example image 1110 corresponds to the image In(x,y), and the example image 1115 corresponds to the image In+1(x,y) described above. In the illustrated example of FIG. 11, after processing the example images 1105, 1110 and 1115, the example candidate region identifier 410 of the fluctuating pixel detector 130 determines that the pixels included in the example region 1120 satisfy both conditions (1) and (2) described above. Assuming the example fluctuation evaluator 415 of the fluctuating pixel detector 130 determines that the number of pixels included in the example region 1120 satisfies the threshold number described above, the region 1120 is identified as a fluctuating pixel region.[0048] While an example manner of implementing the example video-based authentication system 100 is illustrated in FIGS. 1-4, one or more of the elements, processes and/or devices illustrated in FIGS. 1-4 may be combined, divided, re-arranged, omitted, eliminated and/or implemented in any other way. Further, the example visual authentication verifier 105, the example access controller 110, the example image capturer 115, the example camera 120, the example actuator 125, the example fluctuating pixel detector 130, the example video sequence validator 135, the example image grabber 305, the example image selector 310, the example image comparator 405, the example candidate region identifier 410, the example fluctuation evaluator 415 and/or, more generally, the example video-based authentication system 100 of FIGS. 1-4 may be implemented by hardware, software, firmware and/or any combination of hardware, software and/or firmware. Thus, for example, any of the example visual authentication verifier 105, the example access controller 110, the example image capturer 115, the example camera 120, the example actuator 125, the example fluctuating pixel detector 130, the example video sequence validator 135, the example image grabber 305, the example image selector 310, the example image comparator 405, the example candidate region identifier 410, the example fluctuation evaluator 415 and/or, more generally, the example video-based authentication system 100 could be implemented by one or more analog or digital circuit(s), logic circuits, programmable processor(s), application specific integrated circuit(s) (ASIC(s)), programmable logic device(s) (PLD(s)) and/or field programmable logic device(s) (FPLD(s)). When reading any of the apparatus or system claims of this patent to cover a purely software and/or firmware implementation, at least one of the example video-based authentication system 100, the example visual authentication verifier 105, the example access controller 110, the example image capturer 115, the example camera 120, the example actuator 125, the example fluctuating pixel detector 130, the example video sequence validator 135, the example image grabber 305, the example image selector 310, the example image comparator 405, the example candidate region identifier 410 and/or the example fluctuation evaluator 415 is/are hereby expressly defined to include a tangible computer readable storage device or storage disk such as a memory, a digital versatile disk (DVD), a compact disk (CD), a Blu-ray disk, etc. storing the software and/or firmware. Further still, the example video-based authentication system 100 of FIGS. 1-4 may include one or more elements, processes and/or devices in addition to, or instead of, those illustrated in FIGS.
1-4, and/or may include more than one of any or all of the illustrated elements, processes and devices.[0049] Flowcharts representative of example machine readable instructions for implementing the example video-based authentication system 100, the example visual authentication verifier 105, the example access controller 110, the example image capturer 115, the example camera 120, the example actuator 125, the example fluctuating pixel detector 130, the example video sequence validator 135, the example image grabber 305, the example image selector 310, the example image comparator 405, the example candidate region identifier 410 and/or the example fluctuation evaluator 415 are shown in FIGS. 5-7. In these examples, the machine readable instructions comprise one or more programs for execution by a processor, such as the processor(s) 812, 912 and/or 1012 shown in the example processor platforms 800, 900 and 1000 discussed below in connection with FIGS. 8, 9 and 10, respectively. The one or more programs, or portion(s) thereof, may be embodied in software stored on a tangible computer readable storage medium such as a CD-ROM, a floppy disk, a hard drive, a digital versatile disk (DVD), a Blu-ray disk™, or a memory associated with the processor(s) 812, 912 and/or 1012, but the entire program or programs and/or portions thereof could alternatively be executed by a device other than the processors 812, 912 and/or 1012, and/or embodied in firmware or dedicated hardware (e.g., implemented by an ASIC, a PLD, an FPLD, discrete logic, etc.). Further, although the example program(s) is(are) described with reference to the flowcharts illustrated in FIGS. 5-7, many other methods of implementing the example video-based authentication system 100, the example visual authentication verifier 105, the example access controller 110, the example image capturer 115, the example camera 120, the example actuator 125, the example fluctuating pixel detector 130, the example video sequence validator 135, the example image grabber 305, the example image selector 310, the example image comparator 405, the example candidate region identifier 410 and/or the example fluctuation evaluator 415 may alternatively be used. For example, with reference to the flowcharts illustrated in FIGS. 5-7, the order of execution of the blocks may be changed, and/or some of the blocks described may be changed, eliminated, combined and/or subdivided into multiple blocks.[0050] As mentioned above, the example processes of FIGS. 5-7 may be implemented using coded instructions (e.g., computer and/or machine readable instructions) stored on a tangible computer readable storage medium such as a hard disk drive, a flash memory, a read-only memory (ROM), a compact disk (CD), a digital versatile disk (DVD), a cache, a random-access memory (RAM) and/or any other storage device or storage disk in which information is stored for any duration (e.g., for extended time periods, permanently, for brief instances, for temporarily buffering, and/or for caching of the information). As used herein, the term tangible computer readable storage medium is expressly defined to include any type of computer readable storage device and/or storage disk and to exclude propagating signals and to exclude transmission media. As used herein, "tangible computer readable storage medium" and "tangible machine readable storage medium" are used interchangeably. Additionally or alternatively, the example processes of FIGS.
5-7 may be implemented using coded instructions (e.g., computer and/or machine readable instructions) stored on a non-transitory computer and/or machine readable medium such as a hard disk drive, a flash memory, a ROM, a CD, a DVD, a cache, a RAM and/or any other storage device or storage disk in which information is stored for any duration (e.g., for extended time periods, permanently, for brief instances, for temporarily buffering, and/or for caching of the information). As used herein, the term non-transitory computer readable medium is expressly defined to include any type of computer readable storage device and/or storage disk and to exclude propagating signals and to exclude transmission media. As used herein, when the phrase "at least" is used as the transition term in a preamble of a claim, it is open-ended in the same manner as the term "comprising" is open ended. Also, as used herein, the terms "computer readable" and "machine readable" are considered equivalent unless indicated otherwise.[0051] An example program 500 including machine readable instructions that may be executed to implement the example visual authentication verifier 105 of the example video-based authentication system 100 of FIG. 1 is illustrated in FIG. 5. The example program 500 may be executed when the example video-based authentication system 100 is activated (e.g., by the example actuator 125) to authenticate a subject in the field of view of the example camera 120. With reference to the preceding figures and associated written descriptions, the example program 500 of FIG. 5 executes block 505 at which the example image capturer 115 of the visual authentication verifier 105 obtains a sequence of input images to be evaluated to determine whether a video sequence output by the camera 120 is valid or is associated with a spoofing attack, as described above. An example program that may be used to implement the processing at block 505 is illustrated in FIG. 6, which is described in further detail below.[0052] At block 510, the example fluctuating pixel detector 130 of the visual authentication verifier 105 determines whether the sequence of input images obtained at block 505 exhibits one or more regions having fluctuating pixel values, as described above. An example program that may be used to implement the processing at block 510 is illustrated in FIG. 7, which is described in further detail below.[0053] At block 515, the example video sequence validator 135 of the visual authentication verifier 105 determines whether one or more regions having fluctuating pixel values were detected at block 510 in the sequence of input images obtained at block 505. If one or more regions having fluctuating pixel values were detected (block 515), then at block 520 the video sequence validator 135 determines, as described above, that the sequence of input images is associated with a spoofing attack. Accordingly, in some examples, at block 520 the video sequence validator 135 prevents the example access controller 110 of the video-based authentication system 100 from performing authentication using the video sequence from the camera 120. However, if no regions having fluctuating pixel values were detected (block 515), then at block 525 the video sequence validator 135 determines that the sequence of input images is valid.
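Before continuing, a hedged sketch tying blocks 505-525 together may help; it reuses fluctuating_pixel_mask from the earlier sketch and substitutes scipy.ndimage.label for the neighbor-grouping step of block 715, which the source leaves unspecified:

```python
import numpy as np
from scipy import ndimage

def sequence_is_spoofed(images, t1=0.5, t2=0.2, region_fraction=0.05):
    """Blocks 510-525 in miniature: look for large regions of fluctuating pixels.

    images is the sequence of input images obtained at block 505. Each frame
    triple (n-1, n, n+1) is tested per pixel, neighboring fluctuating pixels
    are grouped, and any group covering at least region_fraction of the frame
    marks the sequence as associated with a spoofing attack.
    """
    threshold_pixels = region_fraction * images[0].size
    for n in range(1, len(images) - 1):
        mask = fluctuating_pixel_mask(images[n - 1], images[n], images[n + 1], t1, t2)
        labels, num_groups = ndimage.label(mask)    # group neighbors (block 715)
        for group in range(1, num_groups + 1):
            if np.count_nonzero(labels == group) >= threshold_pixels:
                return True                         # block 520: spoofing attack
    return False                                    # block 525: sequence is valid
```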
Accordingly, as described above, at block 525 the video sequence validator 135 triggers the access controller 110 to perform one or more access control procedures using the sequence of input images obtained at block 505, and/or using the video sequence output from the camera 120.[0054] An example program 505P including machine readable instructions that may be executed to implement the example image capturer 115 of FIG. 3, and/or that may be used to perform the processing at block 505 of FIG. 5, is illustrated in FIG. 6. With reference to the preceding figures and associated written descriptions, the example program 505P of FIG. 6 executes block 605 at which the example image grabber 305 of the image capturer 115 captures a first sequence of input images from a video output of the example camera 120 at a first (sampling) rate that is higher than a second (desired) frame rate, as described above. At block 610, the example image selector 310 of the image capturer 115 groups the images of the first sequence into successive groups of input images containing respective numbers of images determined by the relationship (e.g., a ratio) between the first (e.g., sampling) rate and the second (e.g., desired) rate, as described above. At block 615, the image selector 310 randomly selects one input image from each one of the successive groups of input images to form a second sequence of input images for further evaluation, as described above. [0055] An example program 510P including machine readable instructions that may be executed to implement the example fluctuating pixel detector 130 of FIG. 4, and/or that may be used to perform the processing at block 510 of FIG. 5, is illustrated in FIG. 7. With reference to the preceding figures and associated written descriptions, the example program 510P of FIG. 7 executes block 705 at which the example image comparator 405 of the fluctuating pixel detector 130 determines a sequence of difference images from successive pairs of images in a sequence of input images being evaluated (e.g., such as a sequence of input images obtained by the example image capturer 115), as described above. At block 710, the example candidate region identifier 410 of the fluctuating pixel detector 130 processes, as described above, successive pairs of difference images in the sequence of difference images to identify fluctuating pixels, which are pixels that fluctuate between at least two values across three successive images (which may or may not be adjacent) in the sequence of input images being evaluated. At block 715, the candidate region identifier 410 groups neighboring fluctuating pixels into groups of fluctuating pixels, as described above.[0056] At block 720, the example fluctuation evaluator 415 of the fluctuating pixel detector 130 determines whether any of the groups of neighboring fluctuating pixels determined at block 715 contains a number of pixels that satisfies (e.g., meets or exceeds) a threshold number, as described above. If any group of neighboring fluctuating pixels satisfies the threshold number of pixels (block 720), then at block 725 the fluctuation evaluator 415 determines that the sequence of input images includes regions of fluctuating pixels, as described above. Otherwise, if no group of neighboring fluctuating pixels satisfies the threshold number of pixels (block 720), then at block 730 the fluctuation evaluator 415 determines that the sequence of input images does not have any regions of fluctuating pixels.[0057] FIG.
8 is a block diagram of an example processor platform 800 capable of executing the instructions of FIG. 5 to implement the example video-based authentication system 100, the example visual authentication verifier 105, the example access controller 110, the example image capturer 115, the example camera 120, the example actuator 125, the example fluctuating pixel detector 130 and/or the example video sequence validator 135 of FIG. 1. The processor platform 800 can be, for example, a server, a personal computer, a mobile device (e.g., a cell phone, a smart phone, a tablet such as an iPad™), a personal digital assistant (PDA), an Internet appliance, a DVD player, a CD player, a digital video recorder, a Blu-ray player, a gaming console, a personal video recorder, a set top box, a digital camera, or any other type of computing device.[0058] The processor platform 800 of the illustrated example includes a processor 812. The processor 812 of the illustrated example is hardware. For example, the processor 812 can be implemented by one or more integrated circuits, logic circuits, microprocessors or controllers from any desired family or manufacturer. In the illustrated example of FIG. 8, the processor 812 is configured to implement the example visual authentication verifier 105, the example access controller 110, the example image capturer 115, the example fluctuating pixel detector 130 and the example video sequence validator 135 of FIG. 1.[0059] The processor 812 of the illustrated example includes a local memory 813 (e.g., a cache). The processor 812 of the illustrated example is in communication with a main memory including a volatile memory 814 and a non-volatile memory 816 via a link 818. The link 818 may be implemented by a bus, one or more point-to-point connections, etc., or a combination thereof. The volatile memory 814 may be implemented by Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS Dynamic Random Access Memory (RDRAM) and/or any other type of random access memory device. The non-volatile memory 816 may be implemented by flash memory and/or any other desired type of memory device. Access to the main memory 814, 816 is controlled by a memory controller.[0060] The processor platform 800 of the illustrated example also includes an interface circuit 820. The interface circuit 820 may be implemented by any type of interface standard, such as an Ethernet interface, a universal serial bus (USB), and/or a PCI express interface.[0061] In the illustrated example, one or more input devices 822 are connected to the interface circuit 820. The input device(s) 822 permit(s) a user to enter data and commands into the processor 812. The input device(s) can be implemented by, for example, an audio sensor, a microphone, a camera (still or video) such as the example camera 120, a keyboard, a button, a mouse, a touchscreen, a track-pad, a trackball, a trackbar (such as an isopoint), a voice recognition system, the example actuator 125 and/or any other human-machine interface. Also, many systems, such as the processor platform 800, can allow the user to control the computer system and provide data to the computer using physical gestures, such as, but not limited to, hand or body movements, facial expressions, and face recognition. [0062] One or more output devices 824 are also connected to the interface circuit 820 of the illustrated example.
The output devices 824 can be implemented, for example, by display devices (e.g., a light emitting diode (LED), an organic light emitting diode (OLED), a liquid crystal display, a cathode ray tube (CRT) display, a touchscreen, a tactile output device, a printer and/or speakers). The interface circuit 820 of the illustrated example, thus, typically includes a graphics driver card, a graphics driver chip or a graphics driver processor.[0063] The interface circuit 820 of the illustrated example also includes a communication device such as a transmitter, a receiver, a transceiver, a modem and/or network interface card to facilitate exchange of data with external machines (e.g., computing devices of any kind) via a network 826 (e.g., an Ethernet connection, a digital subscriber line (DSL), a telephone line, coaxial cable, a cellular telephone system, etc.).[0064] The processor platform 800 of the illustrated example also includes one or more mass storage devices 828 for storing software and/or data. Examples of such mass storage devices 828 include floppy disk drives, hard drive disks, compact disk drives, Blu-ray disk drives, RAID (redundant array of independent disks) systems, and digital versatile disk (DVD) drives.[0065] Coded instructions 832 corresponding to the instructions of FIG. 5 may be stored in the mass storage device 828, in the volatile memory 814, in the non-volatile memory 816, in the local memory 813 and/or on a removable tangible computer readable storage medium, such as a CD or DVD 836.[0066] FIG. 9 is a block diagram of an example processor platform 900 capable of executing the instructions of FIG. 6 to implement the example image capturer 115, the example image grabber 305 and/or the example image selector 310 of FIG. 3. The processor platform 900 can be, for example, a server, a personal computer, a mobile device (e.g., a cell phone, a smart phone, a tablet such as an iPad™), a PDA, an Internet appliance, a DVD player, a CD player, a digital video recorder, a Blu-ray player, a gaming console, a personal video recorder, a set top box, a digital camera, or any other type of computing device. [0067] The processor platform 900 of the illustrated example includes a processor 912. The processor 912 of the illustrated example is hardware. For example, the processor 912 can be implemented by one or more integrated circuits, logic circuits, microprocessors or controllers from any desired family or manufacturer. In the illustrated example of FIG. 9, the processor 912 is configured to implement the example image capturer 115, the example image grabber 305 and the example image selector 310 of FIG. 3.[0068] The processor 912 of the illustrated example includes a local memory 913 (e.g., a cache). The processor 912 of the illustrated example is in communication with a main memory including a volatile memory 914 and a non-volatile memory 916 via a link 918. The link 918 may be implemented by a bus, one or more point-to-point connections, etc., or a combination thereof. The volatile memory 914 may be implemented by SDRAM, DRAM, RDRAM and/or any other type of random access memory device. The non-volatile memory 916 may be implemented by flash memory and/or any other desired type of memory device. Access to the main memory 914, 916 is controlled by a memory controller.[0069] The processor platform 900 of the illustrated example also includes an interface circuit 920.
The interface circuit 920 may be implemented by any type of interface standard, such as an Ethernet interface, a USB, and/or a PCI express interface.[0070] In the illustrated example, one or more input devices 922 are connected to the interface circuit 920. The input device(s) 922 permit(s) a user to enter data and commands into the processor 912. The input device(s) can be implemented by, for example, an audio sensor, a microphone, a camera (still or video), a keyboard, a button, a mouse, a touchscreen, a track-pad, a trackball, a trackbar (such as an isopoint), a voice recognition system, and/or any other human-machine interface. Also, many systems, such as the processor platform 900, can allow the user to control the computer system and provide data to the computer using physical gestures, such as, but not limited to, hand or body movements, facial expressions, and face recognition. [0071] One or more output devices 924 are also connected to the interface circuit 920 of the illustrated example. The output devices 924 can be implemented, for example, by display devices (e.g., an LED, an OLED, a liquid crystal display, a CRT display, a touchscreen, a tactile output device, a printer and/or speakers). The interface circuit 920 of the illustrated example, thus, typically includes a graphics driver card, a graphics driver chip or a graphics driver processor.[0072] The interface circuit 920 of the illustrated example also includes a communication device such as a transmitter, a receiver, a transceiver, a modem and/or network interface card to facilitate exchange of data with external machines (e.g., computing devices of any kind) via a network 926 (e.g., an Ethernet connection, a DSL, a telephone line, coaxial cable, a cellular telephone system, etc.).[0073] The processor platform 900 of the illustrated example also includes one or more mass storage devices 928 for storing software and/or data. Examples of such mass storage devices 928 include floppy disk drives, hard drive disks, compact disk drives, Blu-ray disk drives, RAID systems, and DVD drives.[0074] Coded instructions 932 corresponding to the instructions of FIG. 6 may be stored in the mass storage device 928, in the volatile memory 914, in the non-volatile memory 916, in the local memory 913 and/or on a removable tangible computer readable storage medium, such as a CD or DVD 936.[0075] FIG. 10 is a block diagram of an example processor platform 1000 capable of executing the instructions of FIG. 7 to implement the example fluctuating pixel detector 130, the example image comparator 405, the example candidate region identifier 410 and/or the example fluctuation evaluator 415 of FIG. 4. The processor platform 1000 can be, for example, a server, a personal computer, a mobile device (e.g., a cell phone, a smart phone, a tablet such as an iPad™), a PDA, an Internet appliance, a DVD player, a CD player, a digital video recorder, a Blu-ray player, a gaming console, a personal video recorder, a set top box, a digital camera, or any other type of computing device.[0076] The processor platform 1000 of the illustrated example includes a processor 1012. The processor 1012 of the illustrated example is hardware. For example, the processor 1012 can be implemented by one or more integrated circuits, logic circuits, microprocessors or controllers from any desired family or manufacturer. In the illustrated example of FIG.
10, the processor 1012 is configured to implement the example fluctuating pixel detector 130, the example image comparator 405, the example candidate region identifier 410 and the example fluctuation evaluator 415 of FIG. 4. [0077] The processor 1012 of the illustrated example includes a local memory 1013 (e.g., a cache). The processor 1012 of the illustrated example is in communication with a main memory including a volatile memory 1014 and a non-volatile memory 1016 via a link 1018. The link 1018 may be implemented by a bus, one or more point-to-point connections, etc., or a combination thereof. The volatile memory 1014 may be implemented by SDRAM, DRAM, RDRAM and/or any other type of random access memory device. The non-volatile memory 1016 may be implemented by flash memory and/or any other desired type of memory device. Access to the main memory 1014, 1016 is controlled by a memory controller.[0078] The processor platform 1000 of the illustrated example also includes an interface circuit 1020. The interface circuit 1020 may be implemented by any type of interface standard, such as an Ethernet interface, a USB, and/or a PCI express interface.[0079] In the illustrated example, one or more input devices 1022 are connected to the interface circuit 1020. The input device(s) 1022 permit(s) a user to enter data and commands into the processor 1012. The input device(s) can be implemented by, for example, an audio sensor, a microphone, a camera (still or video), a keyboard, a button, a mouse, a touchscreen, a track-pad, a trackball, a trackbar (such as an isopoint), a voice recognition system, and/or any other human-machine interface. Also, many systems, such as the processor platform 1000, can allow the user to control the computer system and provide data to the computer using physical gestures, such as, but not limited to, hand or body movements, facial expressions, and face recognition. [0080] One or more output devices 1024 are also connected to the interface circuit 1020 of the illustrated example. The output devices 1024 can be implemented, for example, by display devices (e.g., an LED, an OLED, a liquid crystal display, a CRT display, a touchscreen, a tactile output device, a printer and/or speakers). The interface circuit 1020 of the illustrated example, thus, typically includes a graphics driver card, a graphics driver chip or a graphics driver processor.[0081] The interface circuit 1020 of the illustrated example also includes a communication device such as a transmitter, a receiver, a transceiver, a modem and/or network interface card to facilitate exchange of data with external machines (e.g., computing devices of any kind) via a network 1026 (e.g., an Ethernet connection, a DSL, a telephone line, coaxial cable, a cellular telephone system, etc.).[0082] The processor platform 1000 of the illustrated example also includes one or more mass storage devices 1028 for storing software and/or data. Examples of such mass storage devices 1028 include floppy disk drives, hard drive disks, compact disk drives, Blu-ray disk drives, RAID (redundant array of independent disks) systems, and DVD drives.[0083] Coded instructions 1032 corresponding to the instructions of FIG.
7 may be stored in the mass storage device 1028, in the volatile memory 1014, in the non-volatile memory 1016, in the local memory 1013 and/or on a removable tangible computer readable storage medium, such as a CD or DVD 1036.[0084] The following further examples, which include subject matter such as a method to perform video-based authentication, means for performing video-based authentication, at least one machine-readable medium including instructions that, when performed by a machine, cause the machine to perform video-based authentication, an apparatus and/or a system to perform video-based authentication, are disclosed herein.[0085] Example 1 is a method to perform video-based authentication, which includes determining, with a processor, whether a sequence of input images provided to perform video-based authentication of a subject exhibits a first region having fluctuating pixel values. The method of example 1 also includes determining, with the processor, that the sequence of input images is associated with a spoofing attack in response to determining that the sequence of input images exhibits the first region having fluctuating pixel values.[0086] Example 2 includes the subject matter of example 1, wherein the sequence of input images is a second sequence of input images, and further including randomly sampling a first sequence of input images to obtain the second sequence of input images.[0087] Example 3 includes the subject matter of example 2, wherein randomly sampling the first sequence of input images includes capturing the first sequence of input images at a first rate higher than a second rate, grouping the first sequence of input images into successive groups of input images containing respective numbers of images based on a relationship between the first rate and the second rate, and randomly selecting one input image from respective ones of the successive groups of input images to form the second sequence of input images.[0088] Example 4 includes the subject matter of example 1, wherein determining whether the sequence of input images exhibits the first region having fluctuating pixel values includes determining differences between successive pairs of images in the sequence of input images to determine a sequence of difference images, and processing the sequence of difference images to determine whether the sequence of input images exhibits the first region having fluctuating pixel values.[0089] Example 5 includes the subject matter of example 4, wherein processing the sequence of difference images includes processing successive pairs of difference images in the sequence of difference images to identify fluctuating pixels, the fluctuating pixels being pixels that fluctuate between at least two values across three successive images in the sequence of input images, determining a number of pixels included in a first group of neighboring fluctuating pixels, and if the number of pixels satisfies a threshold, determining that the sequence of input images exhibits the first region having fluctuating pixel values, the first region corresponding to the first group of neighboring fluctuating pixels.[0090] Example 6 includes the subject matter of example 1, and further includes triggering operation of a further access control procedure to authenticate the subject based on the sequence of input images in response to determining that the sequence of input images does not exhibit any region having fluctuating pixel values.[0091] Example 7 includes the subject matter of example 1, and further
includes determining that the sequence of input images depicts a scene including content generated by a video display in response to determining that the sequence of input images exhibits the first region having fluctuating pixel values.[0092] Example 8 includes the subject matter of any one of examples 1 to 3, wherein determining whether the sequence of input images exhibits the first region having fluctuating pixel values includes determining differences between successive pairs of images in the sequence of input images to determine a sequence of difference images, and processing the sequence of difference images to determine whether the sequence of input images exhibits the first region having fluctuating pixel values. [0093] Example 9 includes the subject matter of example 8, wherein processing the sequence of difference images includes processing successive pairs of difference images in the sequence of difference images to identify fluctuating pixels, the fluctuating pixels being pixels that fluctuate between at least two values across three successive images in the sequence of input images, determining a number of pixels included in a first group of neighboring fluctuating pixels, and if the number of pixels satisfies a threshold, determining that the sequence of input images exhibits the first region having fluctuating pixel values, the first region corresponding to the first group of neighboring fluctuating pixels.[0094] Example 10 includes the subject matter of any one of examples 1 to 3, and further includes triggering operation of a further access control procedure to authenticate the subject based on the sequence of input images in response to determining that the sequence of input images does not exhibit any region having fluctuating pixel values.[0095] Example 11 includes the subject matter of any one of examples 1 to 3, and further includes determining that the sequence of input images depicts a scene including content generated by a video display in response to determining that the sequence of input images exhibits the first region having fluctuating pixel values.[0096] Example 12 is a tangible machine readable storage medium including machine readable instructions which, when executed, cause a machine to at least determine whether a sequence of input images provided to perform video-based authentication of a subject exhibits a first region having fluctuating pixel values, and determine that the sequence of input images is associated with a spoofing attack in response to determining that the sequence of input images exhibits the first region having fluctuating pixel values.[0097] Example 13 includes the subject matter of example 12, wherein the sequence of input images is a second sequence of input images, and the machine readable instructions, when executed, further cause the machine to randomly sample a first sequence of input images to obtain the second sequence of input images.[0098] Example 14 includes the subject matter of example 13, wherein to randomly sample the first sequence of input images, the machine readable instructions, when executed, further cause the machine to capture the first sequence of input images at a first rate higher than a second rate, group the first sequence of input images into successive groups of input images containing respective numbers of images based on a relationship between the first rate and the second rate, and randomly select one input image from respective ones of the successive groups of input images to form the second sequence
of input images.[0099] Example 15 includes the subject matter of example 12, wherein to determine whether the sequence of input images exhibits the first region having fluctuating pixel values, the machine readable instructions, when executed, further cause the machine to determine differences between successive pairs of images in the sequence of input images to determine a sequence of difference images, and process the sequence of difference images to determine whether the sequence of input images exhibits the first region having fluctuating pixel values.[0100] Example 16 includes the subject matter of example 15, wherein to process the sequence of difference images, the machine readable instructions, when executed, further cause the machine to process successive pairs of difference images in the sequence of difference images to identify fluctuating pixels, the fluctuating pixels being pixels that fluctuate between at least two values across three successive images in the sequence of input images, determine a number of pixels included in a first group of neighboring fluctuating pixels, and if the number of pixels satisfies a threshold, determine that the sequence of input images exhibits the first region having fluctuating pixel values, the first region corresponding to the first group of neighboring fluctuating pixels.[0101] Example 17 includes the subject matter of example 12, wherein the machine readable instructions, when executed, further cause the machine to trigger operation of a further access control procedure to authenticate the subject based on the sequence of input images in response to determining that the sequence of input images does not exhibit any region having fluctuating pixel values.[0102] Example 18 includes the subject matter of example 12, wherein the machine readable instructions, when executed, further cause the machine to determine that the sequence of input images depicts a scene including content generated by a video display in response to determining that the sequence of input images exhibits the first region having fluctuating pixel values.
[0103] Example 19 includes the subject matter of any one of examples 12 to 14, wherein to determine whether the sequence of input images exhibits the first region having fluctuating pixel values, the machine readable instructions, when executed, further cause the machine to determine differences between successive pairs of images in the sequence of input images to determine a sequence of difference images, and process the sequence of difference images to determine whether the sequence of input images exhibits the first region having fluctuating pixel values.[0104] Example 20 includes the subject matter of example 19, wherein to process the sequence of difference images, the machine readable instructions, when executed, further cause the machine to process successive pairs of difference images in the sequence of difference images to identify fluctuating pixels, the fluctuating pixels being pixels that fluctuate between at least two values across three successive images in the sequence of input images, determine a number of pixels included in a first group of neighboring fluctuating pixels, and if the number of pixels satisfies a threshold, determine that the sequence of input images exhibits the first region having fluctuating pixel values, the first region corresponding to the first group of neighboring fluctuating pixels.[0105] Example 21 includes the subject matter of any one of examples 12 to 14, wherein the machine readable instructions, when executed, further cause the machine to trigger operation of a further access control procedure to authenticate the subject based on the sequence of input images in response to determining that the sequence of input images does not exhibit any region having fluctuating pixel values.[0106] Example 22 includes the subject matter of any one of examples 12 to 14, wherein the machine readable instructions, when executed, further cause the machine to determine that the sequence of input images depicts a scene including content generated by a video display in response to determining that the sequence of input images exhibits the first region having fluctuating pixel values.[0107] Example 23 is a tangible machine readable storage medium including machine readable instructions which, when executed, cause a machine to perform a method as defined in any one of examples 1 to 11. [0108] Example 24 is an apparatus to perform video-based authentication, which includes a fluctuating pixel detector to determine whether a sequence of input images provided to perform video-based authentication of a subject exhibits a first region having fluctuating pixel values.
The apparatus of example 24 also includes a video sequence validator to determine that the sequence of input images is associated with a spoofing attack in response to determining that the sequence of input images exhibits the first region having fluctuating pixel values.[0109] Example 25 includes the subject matter of example 24, wherein the sequence of input images is a second sequence of input images, and further including an image capturer to randomly sample a first sequence of input images to obtain the second sequence of input images.[0110] Example 26 includes the subject matter of example 25, wherein the image capturer is to randomly sample the first sequence of input images by capturing the first sequence of input images at a first rate higher than a second rate, grouping the first sequence of input images into successive groups of input images containing respective numbers of images based on a relationship between the first rate and the second rate, and randomly selecting one input image from respective ones of the successive groups of input images to form the second sequence of input images.[0111] Example 27 includes the subject matter of example 24, wherein the fluctuating pixel detector is further to determine differences between successive pairs of images in the sequence of input images to determine a sequence of difference images, and process the sequence of difference images to determine whether the sequence of input images exhibits the first region having fluctuating pixel values.[0112] Example 28 includes the subject matter of example 27, wherein the fluctuating pixel detector is to process the sequence of difference images by processing successive pairs of difference images in the sequence of difference images to identify fluctuating pixels, the fluctuating pixels being pixels that fluctuate between at least two values across three successive images in the sequence of input images, determining a number of pixels included in a first group of neighboring fluctuating pixels, and if the number of pixels satisfies a threshold, determining that the sequence of input images exhibits the first region having fluctuating pixel values, the first region corresponding to the first group of neighboring fluctuating pixels.[0113] Example 29 includes the subject matter of example 24, wherein the video sequence validator is further to trigger operation of a further access control procedure to authenticate the subject based on the sequence of input images in response to determining that the sequence of input images does not exhibit any region having fluctuating pixel values.[0114] Example 30 includes the subject matter of example 24, wherein the video sequence validator is further to determine that the sequence of input images depicts a scene including content generated by a video display in response to determining that the sequence of input images exhibits the first region having fluctuating pixel values.[0115] Example 31 includes the subject matter of any one of examples 24 to 26, wherein the fluctuating pixel detector is further to determine differences between successive pairs of images in the sequence of input images to determine a sequence of difference images, and process the sequence of difference images to determine whether the sequence of input images exhibits the first region having fluctuating pixel values.[0116] Example 32 includes the subject matter of example 31, wherein the fluctuating pixel detector is to process the sequence of difference images by processing successive 
pairs of difference images in the sequence of difference images to identify fluctuating pixels, the fluctuating pixels being pixels that fluctuate between at least two values across three successive images in the sequence of input images, determining a number of pixels included in a first group of neighboring fluctuating pixels, and if the number of pixels satisfies a threshold, determining that the sequence of input images exhibits the first region having fluctuating pixel values, the first region corresponding to the first group of neighboring fluctuating pixels.[0117] Example 33 includes the subject matter of any one of examples 24 to 26, wherein the video sequence validator is further to trigger operation of a further access control procedure to authenticate the subject based on the sequence of input images in response to determining that the sequence of input images does not exhibit any region having fluctuating pixel values. [0118] Example 34 includes the subject matter of any one of examples 24 to 26, wherein the video sequence validator is further to determine that the sequence of input images depicts a scene including content generated by a video display in response to determining that the sequence of input images exhibits the first region having fluctuating pixel values.[0119] Example 35 is an apparatus including a processor configured to perform a method as defined in any one of examples 1 to 11.[0120] Example 36 is a system to perform video-based authentication, which includes means for determining whether a sequence of input images provided to perform video-based authentication of a subject exhibits a first region having fluctuating pixel values. The system of example 36 also includes means for determining that the sequence of input images is associated with a spoofing attack in response to determining that the sequence of input images exhibits the first region having fluctuating pixel values.[0121] Example 37 includes the subject matter of example 36, wherein the sequence of input images is a second sequence of input images, and further including means for randomly sampling a first sequence of input images to obtain the second sequence of input images.[0122] Example 38 includes the subject matter of example 37, wherein the means for randomly sampling the first sequence of input images includes means for capturing the first sequence of input images at a first rate higher than a second rate, means for grouping the first sequence of input images into successive groups of input images containing respective numbers of images based on a relationship between the first rate and the second rate, and means for randomly selecting one input image from respective ones of the successive groups of input images to form the second sequence of input images.[0123] Example 39 includes the subject matter of example 36, wherein the means for determining whether the sequence of input images exhibits the first region having fluctuating pixel values includes means for determining differences between successive pairs of images in the sequence of input images to determine a sequence of difference images, and means for processing the sequence of difference images to determine whether the sequence of input images exhibits the first region having fluctuating pixel values.
[0124] Example 40 includes the subject matter of example 39, wherein the means for processing the sequence of difference images includes means for processing successive pairs of difference images in the sequence of difference images to identify fluctuating pixels, the fluctuating pixels being pixels that fluctuate between at least two values across three successive images in the sequence of input images, means for determining a number of pixels included in a first group of neighboring fluctuating pixels, and means for determining that the sequence of input images exhibits the first region having fluctuating pixel values if the number of pixels satisfies a threshold, the first region corresponding to the first group of neighboring fluctuating pixels.[0125] Example 41 includes the subject matter of example 36, and further includes means for triggering operation of a further access control procedure to authenticate the subject based on the sequence of input images in response to determining that the sequence of input images does not exhibit any region having fluctuating pixel values.[0126] Example 42 includes the subject matter of example 36, and further includes means for determining that the sequence of input images depicts a scene including content generated by a video display in response to determining that the sequence of input images exhibits the first region having fluctuating pixel values.[0127] Example 43 includes the subject matter of any one of examples 36 to 39, wherein the means for determining whether the sequence of input images exhibits the first region having fluctuating pixel values includes means for determining differences between successive pairs of images in the sequence of input images to determine a sequence of difference images, and means for processing the sequence of difference images to determine whether the sequence of input images exhibits the first region having fluctuating pixel values.[0128] Example 44 includes the subject matter of example 43, wherein the means for processing the sequence of difference images includes means for processing successive pairs of difference images in the sequence of difference images to identify fluctuating pixels, the fluctuating pixels being pixels that fluctuate between at least two values across three successive images in the sequence of input images, means for determining a number of pixels included in a first group of neighboring fluctuating pixels, and means for determining that the sequence of input images exhibits the first region having fluctuating pixel values if the number of pixels satisfies a threshold, the first region corresponding to the first group of neighboring fluctuating pixels.[0129] Example 45 includes the subject matter of any one of examples 36 to 39, and further includes means for triggering operation of a further access control procedure to authenticate the subject based on the sequence of input images in response to determining that the sequence of input images does not exhibit any region having fluctuating pixel values.[0130] Example 46 includes the subject matter of any one of examples 36 to 39, and further includes means for determining that the sequence of input images depicts a scene including content generated by a video display in response to determining that the sequence of input images exhibits the first region having fluctuating pixel values.[0131] Example 47 is a system including means for performing a method as defined in any one of examples 1 to 11.[0132] Although certain example methods,
apparatus and articles of manufacture have been disclosed herein, the scope of coverage of this patent is not limited thereto. On the contrary, this patent covers all methods, apparatus and articles of manufacture fairly falling within the scope of the claims of this patent. |
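The difference-image procedure recited in examples 39/40 and 43/44 above can be illustrated concretely. Below is a minimal Python/NumPy sketch, not the patented implementation: the three-frame sign-change reading of "fluctuate between at least two values", the use of SciPy's connected-component labeling to group neighboring pixels, and the default group-size threshold are all illustrative assumptions.

```python
import numpy as np
from scipy import ndimage  # assumes SciPy is available for labeling


def exhibits_fluctuating_region(frames: np.ndarray,
                                min_group_size: int = 50) -> bool:
    """Sketch of the difference-image test of examples 39/40.

    frames: (N, H, W) grayscale sequence of input images, N >= 3.
    Returns True if some group of neighboring fluctuating pixels
    meets the size threshold.
    """
    # Differences between successive pairs of images -> sequence of
    # difference images (examples 39 / 43).
    diffs = np.diff(frames.astype(np.int32), axis=0)  # (N-1, H, W)

    # One reading of examples 40 / 44: a pixel "fluctuates" if two
    # successive difference images are both nonzero with opposite
    # signs, i.e. the pixel moves between at least two values across
    # three successive input images.
    fluct = np.zeros(frames.shape[1:], dtype=bool)
    for d0, d1 in zip(diffs[:-1], diffs[1:]):
        fluct |= (d0 != 0) & (d1 != 0) & (np.sign(d0) != np.sign(d1))

    # Group neighboring fluctuating pixels and test the largest group
    # against the threshold.
    labels, n_groups = ndimage.label(fluct)
    if n_groups == 0:
        return False
    sizes = np.bincount(labels.ravel())[1:]  # skip background label 0
    return sizes.max() >= min_group_size
```

Per examples 42/46, a True result suggests the scene includes content generated by a video display, while per examples 41/45 a False result would trigger the further access control procedure to authenticate the subject.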
In described examples, a probe card (100) includes a mechanical support fixture (105) having an inner aperture (102), with multiple probes secured to the fixture (105), the probes including probe tips (103a) that extend into the inner aperture (102) for contacting probe pads on dies of a wafer (190) to be probed. At least one magnetic shield (120, 130) includes a magnetic material that at least substantially surrounds a projected volume over an area that encloses the probe tips (103a). The magnetic material has a relative magnetic permeability of at least 800. |
1. A probe card comprising:
a mechanical support fixture having an internal aperture, wherein a plurality of probes are secured to the fixture, the plurality of probes including probe tips extending into the internal aperture to contact probe pads on a die of a wafer to be probed; and
at least one magnetic shield comprising a magnetic material at least substantially surrounding a volume projected over a region enclosing the probe tips of the probes,
wherein the magnetic material has a relative magnetic permeability of at least 800.
2. The probe card of claim 1, wherein said magnetic shield comprises a hollow sleeve positioned within said internal aperture, said hollow sleeve being secured to said fixture.
3. The probe card of claim 1, wherein said magnetic shield is physically mounted to a surface of said fixture.
4. The probe card of claim 1, wherein said magnetic shield comprises a first magnetic shield and a second magnetic shield, the first magnetic shield comprising a hollow sleeve positioned inside said internal aperture and secured to said fixture, the second magnetic shield being physically mounted to a surface of said fixture.
5. The probe card of claim 4, wherein said second magnetic shield concentrically surrounds said first magnetic shield.
6. The probe card of claim 1, wherein said magnetic permeability of said magnetic material is at least 5,000.
7. The probe card of claim 1, wherein said magnetic material has a thickness between 0.05 mm and 3 mm.
8. The probe card of claim 1, wherein said magnetic shield completely surrounds said projected volume.
9. A wafer probe system comprising:
a wafer prober comprising a test head coupled to a probe card to probe a die of a wafer disposed on a wafer chuck;
wherein the probe card comprises:
a mechanical support fixture having an internal aperture, wherein a plurality of probes are secured to the fixture, the plurality of probes including probe tips extending into the internal aperture to contact probe pads on the die of the wafer to be probed; and
at least one magnetic shield comprising a magnetic material at least substantially surrounding a volume projected over a region enclosing the probe tips of the probes,
wherein the magnetic material has a relative magnetic permeability of at least 800.
10. The probe system of claim 9, wherein said wafer chuck comprises a magnetic shield comprising a magnetic material, the magnetic material at least substantially surrounding a volume projected under the region enclosing the probe tips.
11. The probe system of claim 10, further comprising a dielectric layer on the magnetic shield on said wafer chuck.
12. The probe system of claim 9, wherein said magnetic shield comprises a hollow sleeve positioned within said internal aperture, said hollow sleeve being secured to said fixture.
13. The probe system of claim 9, wherein said magnetic shield is physically mounted to a surface of said fixture.
14. The probe system of claim 9, wherein said magnetic shield comprises a first magnetic shield and a second magnetic shield, the first magnetic shield comprising a hollow sleeve positioned inside said internal aperture and secured to said fixture, the second magnetic shield being physically mounted to a surface of said fixture.
15. The probe system of claim 14, wherein said second magnetic shield concentrically surrounds said first magnetic shield.
16. The probe system of claim 9, wherein said relative magnetic permeability of said magnetic material is at least 5,000.
17. The probe system of claim 9, wherein said magnetic material has a thickness between 0.05 mm and 3 mm.
18. The probe system of claim 9, wherein said magnetic shield completely surrounds said projected volume. |
Magnetically shielded probe card

Technical field
The present disclosure relates to a probe card for probing a semiconductor wafer having integrated circuit (IC) dies with at least one magnetically sensitive portion.

Background technique
During the semiconductor fabrication process, semiconductor dies are formed on a wafer by processing that includes photolithography, deposition, implantation, and etching. A wafer is a substrate formed substantially of a semiconductor, such as silicon or gallium arsenide. After the fabrication process is completed, and before the wafer is singulated into dies (or chips), the wafer undergoes functional tests that verify that its electrical performance is within the design specifications. Typically, the test head of the test equipment used for die probing mounts a probe card with multiple probes or other contact members for contacting probe pads (bond pads or bumps) on the die. The probe card provides an electrical connection for interfacing between the test equipment and the device under test (DUT).

One probe card arrangement includes a board (e.g., a printed circuit board (PCB)) having a hollow center, with multiple probe tips extending downwardly from the center and disposed to contact the probe pads on the die to be inspected. One type of probe arrangement is an epoxy-ring PCB probe card with a ring assembly. The ring assembly is constructed by placing preformed probes into a plastic template. Holes corresponding to the pattern of the bond pads of the die to be tested are stamped into the template. A ceramic or anodized aluminum ring is bonded to the probes with an epoxy resin. The ring and epoxy hold the probes permanently in their proper orientation.

Summary of the invention
In the depicted example, the probe card includes a mechanical support fixture having an internal aperture, wherein a plurality of probes are secured to the fixture, the plurality of probes including probe tips extending into the internal aperture to contact probe pads on a die of a wafer to be probed. At least one magnetic shield comprises a magnetic material that at least substantially surrounds a volume projected over a region enclosing the probe tips. The magnetic material has a relative magnetic permeability of at least 800.

DRAWINGS
FIG. 1A is a top plan view of an example magnetically shielded probe card that includes both an inner magnetic shield and an outer magnetic shield, in accordance with an example embodiment.
FIG. 1B is a cross-sectional view of the example shielded probe card shown in FIG. 1A over a wafer on a chuck that also has the disclosed magnetic shield, in accordance with an example embodiment.
FIG. 2 illustrates an example shielded probe card implemented as an epoxy card with a ring assembly that includes both internal and external magnetic shields, in accordance with an example embodiment.
FIG. 3 depicts an example probe system including the disclosed shielded probe card shown in FIG. 2, in accordance with an example embodiment.
FIG. 4 illustrates the magnetic field strength at the z-height of the device under test (DUT) as a function of the radial distance from the DUT when the wafer is probed using the disclosed shielded probe card, in accordance with an example embodiment.

Detailed description
The drawings are not necessarily to scale.
In the drawings, like reference numerals refer to like elements. Some of the illustrated actions or events may occur in different orders and/or concurrently with other acts or events. Also, some of the illustrated actions or events may not be required to implement a method in accordance with the present disclosure.

The terms "coupled to" or "coupled with" (and the like), as used herein and unless otherwise defined, describe indirect or direct electrical connections. For example, if a first device is "coupled" to a second device, the connection may be through a direct electrical connection in which only parasitic effects are present in the path, or through an indirect electrical connection via intervening items including other devices and connections. For indirect coupling, the intervening item generally does not modify the signal information, but may adjust its current level, voltage level, and/or power level.

The disclosed embodiments recognize that some integrated circuit (IC) devices (e.g., "fluxgate" sensors) need the ambient magnetic field lowered to a level of <50 μT to 100 μT to properly perform magnetically sensitive electrical measurements. One known arrangement for providing such reduced ambient field levels assembles a sensor device in a package and places the packaged sensor device, along with its test board, within a large concentric cylindrical magnetic shield (shield housing), typically capped at one or both ends, where it is tested while inside the shield housing. The shield housing solution is considered an expensive arrangement (construction cost plus package assembly cost) with low test throughput, because it is often necessary to manually insert a single package into the test socket of the test board and then insert the test board into the shield housing. In addition, the assembly process of the IC device eliminates any possible wafer position information, making it difficult to trace wafer processing effects and yield loss sources when that information is unavailable.

The disclosed embodiments include a magnetically shielded probe card (shielded probe card) that integrates a magnetic material (e.g., a high-permeability alloy or similar magnetic material) as a magnetic shield layer into the design of the probe card itself, where the inner shield layer is near the probe pads of the device under test (die). The disclosed shielded probe card: shields the DUT region from unintentional ambient magnetic fields, including the earth's magnetic field and the ambient tester field; and attenuates the magnetic field generated by the current flowing through the traces on the shielded probe card itself.

The disclosed shielded probe card can include a mechanical support fixture having an internal aperture, wherein a plurality of probes are secured to the fixture, the plurality of probes including probe tips extending into the internal aperture for contacting probe pads on a die of a wafer to be probed. The at least one magnetic shield comprises a magnetic material that at least substantially surrounds the volume projected 90 degrees above the area enclosing the probe tips (see FIG. 2, described below). As used herein, "substantially surrounding" refers to at least 80% of the length of a corresponding fully enclosing shape (e.g., in one particular embodiment, the perimeter of a circular (annular) magnetic shield). The magnetic material has a relative magnetic permeability of at least 800.
FIG. 1A is a top plan view of an example shielded probe card 100 that includes both an inner magnetic shield 120 and an outer magnetic shield 130, in accordance with an example embodiment. The shielded probe card 100 includes a mechanical support fixture comprising a dielectric or dielectric/conductor fixture 105 having an internal assembly 106 to secure a plurality of probes 103 (e.g., tungsten probes) having probe tips 103a. For example, the shielded probe card 100 can include a ceramic fixture having an internal aperture (or cavity) 102, in which the probes 103 are embedded within the internal assembly 106 of the fixture 105. The probe tips 103a extend into the internal aperture 102 to contact probe pads (bond pads or solder bumps) on the die of the wafer to be probed. The unshaded area depicted between the fixture 105 and the outer magnetic shield 130 represents dielectric that is included when the fixture 105 does not comprise a dielectric material (e.g., when it comprises a metal such as aluminum or stainless steel). This dielectric is not necessarily solid or continuous, and it may include a silicone film, ceramics, or the like.

Both the inner magnetic shield 120 and the outer magnetic shield 130 comprise a magnetic material that at least substantially encloses a volume projected over the area enclosing the probe tips 103a (see projected volume 240 in FIG. 2, described below); the at least substantial enclosing is shown in a concentric arrangement (with a common center) that completely surrounds the projected volume, thereby providing a continuous loop. However, a continuous magnetic shield ring is not required, although the attenuation performance of the shielded probe card typically decreases significantly with the size of any gaps or slits in the magnetic shield. A certain degree of gap or notch/slit may be created along the length of the magnetic shield during mechanical assembly.

The thickness of the magnetic material is generally between 0.05 mm and 3 mm, and is typically from 0.5 mm to 3 mm. For the inner magnetic shield 120, the magnetic material may be provided as a foil configured in shield-can form (e.g., a circular or rectangular shape); for the outer magnetic shield 130, the magnetic material may be configured in sheet form to cover the required area. For example, suitable alloys are commercially available from Magnetic Shield Corporation, Bensenville, IL, in the form of shield cans and sheets ranging in thickness from 0.36 mm to 1.57 mm.

Given the probe card embodiment shown in FIG. 1, the inner magnetic shield 120 can be physically mounted to the fixture 105 in a variety of ways, including by a press-fit assembly onto the fixture 105, by a support built into the fixture 105, or by applying an adhesive. The outer magnetic shield 130 can be physically attached to the fixture 105 by similar attachments.

As mentioned above, the magnetic material has a relative magnetic permeability of at least 800. In some embodiments, the magnetic material has a relative magnetic permeability of at least 5,000, such as a nickel-iron soft magnetic alloy having about 50% to 80% nickel, with a relative magnetic permeability in the range of from about 5,000 to about 400,000.
In other embodiments, the magnetic permeability is between 800 and 5,000, for example a commercially available ultra-low carbon steel shielding material said to have a magnetic permeability value of about 1,000.

The inner magnetic shield 120 can include a hollow sleeve positioned within the inner aperture 102 that is secured to the fixture 105, such as by a press fit. The inner magnetic shield 120 may have an outer diameter of 3 mm to 8 mm. The outer magnetic shield 130 is physically mounted to a surface of the fixture 105 or supported by a surface of the shielded probe card 100. It is also possible to embed the magnetic material of the outer magnetic shield 130 within the fixture 105 by lamination.

FIG. 1B is a cross-sectional view of the example shielded probe card 100 shown in FIG. 1A above a wafer 190 on a wafer chuck 170, the wafer chuck 170 also having a disclosed magnetic shield 160 formed below the wafer 190, in accordance with an example embodiment. Magnetic shield 160 may comprise a flat disk of magnetic material (solid or perforated) having sufficient magnetic permeability. Magnetic shield 160 may also be coated with a dielectric material (e.g., polytetrafluoroethylene (PTFE, trade name Teflon)) to electrically isolate it from the wafer and/or wafer chuck (see dielectric layer 362 shown in FIG. 3, described below). The magnetic shield 160 can be attached in a variety of ways, including mechanical clamping, a vacuum-hole design in which vias through the shield disk allow vacuum to hold the wafer on top of the shield disk, and bonding.

If the inner aperture 102 has sufficient space, and if the shield material is sufficiently thin, more than two concentric magnetic shield layers are possible. In the case of multiple shield layers (≥2), the innermost magnetic shield layer may be physically attached only to the outer magnetic shield layer (e.g., using a dielectric spacer material such as Teflon, epoxy, or silicone) without physically mounting it to any probe card surface. In this embodiment, only one of the shield layers is typically mounted to a surface of the probe card.

FIG. 2 illustrates an example shielded probe card 200, implemented as an epoxy-ring PCB probe card with a PCB 210 and a ring assembly 220, that includes both an inner magnetic shield 120 and an outer magnetic shield 130, in accordance with an example embodiment. The ring assembly 220 is disposed in the PCB aperture 210a, and the ring assembly 220 has its own annular aperture 220a that defines the internal aperture 102. The probes 103 are secured to the ring assembly 220 by an epoxy 225. The volume projected above the area enclosing the probe tips 103a of the probes 103 is shown as 240. Both the inner magnetic shield 120 and the outer magnetic shield 130 are shown enclosing the projected volume 240 and are both concentric with the projected volume 240.

The inner magnetic shield 120 should generally be as close as possible to the projected volume 240, or coincident with it, to maximize the magnetic attenuation provided. The z-height of the inner magnetic shield 120 should generally extend downwardly as close as possible to (but never touching) the probe tips 103a, to minimize the gap between the inner magnetic shield 120 and the DUT, and should extend upwardly as far as the design allows to maximize magnetic attenuation while still fitting within the limits of the probe system.
An example range of the z-height (perpendicular to the plane of the wafer) of the inner magnetic shield 120 above the wafer 190 is from 150 μm to 1.5 mm, depending at least in part on the depth of the probe tips 103a, the z-thickness of the PCB 210, and the probe card design. The outer magnetic shield 130 should generally extend beyond the z-thickness of the PCB 210 in a similar manner.

For a probe card configuration having a sufficiently small internal aperture 102, with sufficient probe length under the fixture material, the inner magnetic shield 120 can rest on top of the fixture material (e.g., on the ring assembly 220) so that the shield layer effectively encloses the inner aperture 102. This arrangement increases the distance from the inner magnetic shield to the DUT, but the design can potentially compensate for the associated reduction in rated attenuation (or may not, depending on the ambient field requirements of the test).

The disclosed shielded probe card has several advantages over known shielded probe systems. Shielded probe cards can be used on multiple different probe systems by simply repositioning the probe card. In addition, the magnetic shield design can be adapted to other probe cards designed in a similar manner. This is a much cheaper solution than purchasing a fully shielded probe system enclosure or retrofitting an existing probe system with such a shielded enclosure.

The disclosed shielded probe card also places the magnetic shield as close as possible to the DUT, thereby isolating the DUT from virtually all external fields. In contrast, conventional outer shielded enclosures still subject the DUT to the effects of the field created by the probe card and the traces inside the enclosure. The disclosed shielded probe card may be particularly important for wafer-level test systems that include optics ports above the DUT, which conventional shielded enclosures would obstruct.

FIG. 3 depicts an example probe system 300 including a wafer prober 320 that includes the disclosed magnetically shielded probe card shown in FIG. 2 as shielded probe card 200, in accordance with an example embodiment. The system is shown to include a test (or probe) head 302 on a performance board 304. In a typical parametric probe system, the test head 302 is directly docked to the probe card, so that no performance board 304 is required. The disclosed shielded probe card 200 can be applied to a production multi-probe system or a device parametric test system.

For system 300, signals are received from a test controller (e.g., automatic test equipment (ATE)) 310 via leads 312 through the performance board 304, which may include digital, high-frequency, high-precision analog, RF, and/or power paths. In a parametric probe system (without performance board 304), signals are transmitted directly from the test head 302 without the need for leads 312. Probe card 200 has contact points, provided by the probe tips 103a of the probes 103, arranged in a particular array to mirror the corresponding contact points of the DUT on wafer 190. The wafer 190 is on an optional dielectric layer 362, such as a Teflon sheet or coating, on a magnetic shield layer 160; the magnetic shield layer 160 is on the wafer chuck 170, and the wafer chuck 170 is on an X, Y, Z, θ stage 175. The probes 103 can typically be soldered to the PCB 210.
The wafer chuck 170, dielectric layer 362, and magnetic shield layer 160 may be patterned to allow vacuum to be pulled down on the back side of the wafer 190.

The probe system 300 is also shown to include a computer unit 315 for controlling the ATE 310 and the test control unit 311. A parametric probe system can operate without a control PC (e.g., computer unit 315), requiring only parametric test instruments and manual X, Y, Z, θ control of the wafer chuck 170. Test control signals and test data pass between the ATE 310 and the DUTs on wafer 190 via the leads 312 and the shielded probe card 200.

FIG. 4 illustrates the magnitude of the magnetic field at the z-height of a DUT as a function of the radial distance parallel to the wafer surface when probing using the disclosed magnetically shielded probe card, with an external B-field (Bext) of 50 μT. Simulation data are shown for combinations of the outer diameter of the inner magnetic shield 120 and the thickness of the shield layer. A B-field of <500 nT (<1% of Bext) is shown reaching the DUT surface. This high level of attenuation of Bext provided by the shielded probe card enables measurement of extremely sensitive magnetic components at the wafer level, thereby substantially increasing throughput and the ability to correlate device metrics to wafer-level processes. It also shortens the learning loop, because package assembly time is no longer needed to acquire data and process splits can be fully characterized.

The disclosed embodiments are suitable for testing a variety of different IC devices and related products. Various components may be included in the IC dies on the wafer and/or may include various layers thereon, such as barrier layers, dielectric layers, magnetic layers, device structures, active components, and passive components, including source regions, drain regions, bit lines, bases, emitters, collectors, conductive traces, and conductive vias. In addition, IC dies can be formed by a variety of processes, including bipolar, CMOS, BiCMOS, and MEMS processes.

Modifications are possible in the described embodiments, and other embodiments are possible within the scope of the claims. |
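To make the FIG. 4 attenuation discussion concrete, the sketch below estimates the residual field inside a single cylindrical shield using the textbook thin-wall transverse-attenuation approximation S ≈ μr·t/D + 1. The formula choice and the numeric values (permeability, foil thickness, shield diameter, target field) are illustrative assumptions drawn from the ranges cited above, not the patent's simulated geometry.

```python
def transverse_attenuation(mu_r: float, t_mm: float, d_mm: float) -> float:
    """Thin-wall approximation for a long cylindrical shield in a
    transverse field: S = mu_r * t / D + 1, valid for t << D."""
    return mu_r * t_mm / d_mm + 1.0


B_ext_uT = 50.0  # ambient field, as in the FIG. 4 discussion
mu_r = 5000      # lower end of the high-permeability range cited
t_mm = 0.5       # shield foil thickness (typical 0.05-3 mm range)
d_mm = 6.0       # inner-shield outer diameter (3-8 mm range cited)

S = transverse_attenuation(mu_r, t_mm, d_mm)
B_dut_nT = B_ext_uT / S * 1000.0
print(f"attenuation ~{S:.0f}x -> ~{B_dut_nT:.0f} nT at the DUT")
# ~418x -> ~120 nT, comfortably below the <500 nT level shown in FIG. 4
```

Under these assumptions, a single shell already attenuates a 50 μT ambient field to roughly 120 nT, consistent in magnitude with the <500 nT level the description attributes to the simulated shield; a second concentric shell (outer shield 130) would increase the attenuation further.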
A microelectronic product and the method for manufacturing the product are provided. A source and drain are spaced from one another in a first direction and are connected to opposing ends of a channel to provide a set voltage. First and second gates are spaced from one another in a second direction surrounding a portion of the channel to allow for application and removal of a gate voltage. Application of the gate voltage repels majority carriers in the channel to reduce the current that conducts between the source and drain. |
1. A microelectronic product, including:
a substrate extending mainly in the x and y directions;
a channel formed on the substrate;
a source and a drain, the source and the drain being spaced apart from each other in the y direction and connected to opposite sides of the channel, thereby providing a set voltage on the channel; and
first and second gate portions that are spaced apart from each other in the x direction and are located on opposite sides of the channel, thereby allowing a gate voltage to be applied to and removed from the gate portions, the application of the gate voltage repelling majority carriers in the x direction, thereby reducing the current conducted between the source and the drain.
2. The microelectronic product according to claim 1, further comprising a p-doped layer on the substrate and a p+ doped implant next to the channel, the channel being an n-doped channel located on the p-doped layer, the first and second gate portions being the p+ doped implant and a portion of the p-doped layer, respectively.
3. The microelectronic product of claim 2, further comprising an electrode located above the p-doped layer, the channel being a tip implant under the electrode.
4. The microelectronic product according to claim 2, wherein the p-doped layer forms a third gate portion under the channel.
5. The microelectronic product according to claim 1, further comprising an n-well, the channel being an upper part of the n-well, wherein the first and second gate portions repel the majority carriers in the n-well under the channel, thus preventing current leakage from occurring under the channel between the source and the drain.
6. The microelectronic product of claim 5, further comprising a third gate positioned above the channel and repelling majority carriers in the z direction.
7. The microelectronic product of claim 1, wherein the source and the drain are n+ regions.
8. The microelectronic product according to claim 7, wherein the channel is n-doped.
9. The microelectronic product of claim 8, wherein the first and second gates are p+ regions.
10. A method for manufacturing a junction field effect transistor, including:
forming a channel, a source, a drain, and first and second gate portions on a substrate extending mainly in the x and y directions, the source and the drain being spaced apart from each other in the y direction and connected to opposite sides of the channel, thereby providing a set voltage on the channel, the first and second gate portions being spaced apart from each other in the x direction and located on opposite sides of the channel, so that gate voltages applied to and removed from the gate portions respectively reduce and increase the current conducted between the source and the drain.
11. The method for manufacturing a junction field effect transistor according to claim 10, further comprising forming a p-doped layer on the substrate and forming a p+ doped implant next to the channel, the channel being an n-doped channel on the p-doped layer, the first and second gate portions being the p+ doped implant and a portion of the p-doped layer, respectively.
12. The method for manufacturing a junction field effect transistor according to claim 10, further comprising forming an n-well, the channel being an upper part of the n-well, wherein the first and second gate portions repel the majority carriers in the n-well under the channel, thus preventing current leakage under the channel between the source and the drain.
13. A method of controlling current, including:
applying a set voltage to a source and a drain connected across a channel, wherein the channel is formed on a substrate extending in the x and y directions, and the source and the drain are spaced apart from each other in the y direction and connected to opposite sides of the channel; and
alternately applying and removing gate voltages on first and second gate portions spaced apart from each other in the x direction, the application of the gate voltages repelling majority carriers in the x direction, thereby reducing the current flowing through the channel.
14. The method of controlling current according to claim 13, further comprising applying a gate voltage to a gate portion between the substrate and the channel to repel majority carriers in the z direction.
15. The method of controlling current according to claim 14, further comprising applying a gate voltage to a gate portion on the side of the channel opposite the substrate to repel majority carriers in the z direction. |
Device and manufacturing method of low noise junction field effect transistor

Technical field
Embodiments of the invention relate to a junction field effect transistor (JFET) that provides greater control of the current flowing through the channel.

Background technique
Semiconductor devices can be manufactured in the form of integrated circuits or individual devices on a semiconductor substrate. A transistor is a type of semiconductor device that can be used for switching, amplification, signal modulation, and many other functions. The type of transistor called a field-effect transistor (FET) relies on applying a voltage to a gate to control the conductivity, or current, of a "channel". The channel region of any FET can be doped with an n-type implant or a p-type implant to form an n-type device or a p-type device. Various types of FETs use different types of insulation between the channel and the gate. Probably the most common FET is the metal oxide semiconductor field effect transistor (MOSFET), which uses an insulator such as SiO2 (oxide) between the channel and the gate.

Another type of FET, called a JFET, uses a p-n junction as the gate. A conventional three-terminal JFET allows current to flow from the source to the drain while using two gates to control the current. In the absence of a gate voltage, charge carriers flow through the channel region between the source and drain terminals, so the device is "normally on". When a gate voltage is applied, a depletion region is formed by pushing mobile carriers away from the channel, "pinching off" the channel. The gate voltage can be varied so that the JFET acts as a switch, by affecting the cross-sectional area of the channel and the channel resistance, or is used to modulate the flow of current. The type of application will determine whether a JFET is the most suitable choice as a switch or as a modulator.

In one example, a JFET can be used to design a radio transceiver that uses direct conversion. In essence, the RF signal and the local oscillator signal are input into a mixer at the same carrier frequency. The signals are subtracted from each other to obtain a low-frequency baseband output signal. One of the problems with direct conversion is that the mixer must work at a very high frequency while providing a certain gain, which introduces noise and makes signal processing difficult. Ideally, the mixer transistor should be small to support frequencies in excess of 6 GHz. However, the area of the device is inversely proportional to the flicker noise generated. At lower frequencies, the main source of flicker noise in MOSFETs is the interaction of mobile charges with dopant ions at the silicon-oxide interface and in the channel.

In contrast, a JFET will reduce flicker noise because its conduction occurs through the p-n junction in the body, rather than near the surface of the oxide interface. However, there are still problems with using standard complementary metal oxide semiconductor (CMOS) processes to manufacture JFETs. Manufacturing effective JFETs using standard CMOS processes usually requires carefully designed implants to obtain the correct channel depth, which requires additional mask processing and increases the cost of the product. Many JFETs use a buried gate in the substrate material as another means to control channel current.
If a buried gate is not used, the resulting JFET will require voltages up to several hundred volts to "pinch off" the channel, which lacks efficiency.

BRIEF DESCRIPTION
The present invention is described below by way of example with reference to the drawings, in which:
FIG. 1 is a top view of a substrate containing multiple junction field effect transistors according to an embodiment of the invention;
FIG. 2 is a cross-sectional front view taken along line 2-2 of FIG. 1;
FIG. 3 is a cross-sectional side view taken along line 3-3 of FIG. 1;
FIG. 4 is a cross-sectional side view taken along line 4-4 of FIG. 1;
FIG. 5 is a cross-sectional front view similar to FIG. 2, showing the stage of the manufacturing process at which the insulator material is applied to the substrate;
FIG. 6 is a view similar to FIG. 5, showing the stage at which the conformal layer is applied to the substrate;
FIG. 7 is a view similar to FIG. 6, showing the stage at which the conformal layer is etched;
FIG. 8 is a view similar to FIG. 7, showing the stage at which the implant is inserted into the substrate;
FIG. 9 is a view similar to FIG. 8, showing the stage at which the device has been annealed;
FIG. 10 is a top view of a substrate containing a junction field effect transistor according to another embodiment of the present invention;
FIG. 11 is a cross-sectional side view taken along line 11-11 of FIG. 10; and
FIG. 12 is a cross-sectional front view taken along line 12-12 of FIG. 10.

Detailed description
FIGS. 1 to 4 of the drawings show a JFET 20 according to an embodiment of the present invention, which includes a source 22, a drain 24, a channel 26, and first and second gates 30 and 32. First, the manufacture of the junction field effect transistor will be described with reference to FIGS. 5 to 9, and then its function will be described.

FIG. 5 shows the p-substrate 36 of a wafer. The substrate material may be gallium arsenide, silicon, germanium, silicon carbide, or another well-known semiconductor substrate material. The substrate material is p-doped to form the p-substrate 36, which will later serve as the second gate 32 and the third gate 34. A thin epitaxial layer composed of an insulator material 38, such as oxide, is grown on top of the p-substrate 36, and an electrode material 40 is applied on top of the insulator material 38. Then, an unmasked portion of the p-substrate 36 is implanted with an n-type dopant, thereby obtaining an n-type region 42. The n-type dopant may be phosphorus, arsenic, antimony, or any other well-known dopant that can contribute a large number of mobile electrons to the material to which the dopant is applied.

As shown in FIG. 6, after the n-type region 42 is formed, an oxide conformal layer 44 is applied over the insulator material 38, the electrode material 40, and the n-type region 42. The material of the conformal layer 44 can be selected according to the type of etching process employed. In FIG. 7, the conformal layer 44 is anisotropically etched back, thereby forming a spacer 46 extending over a part of the n-type region 42. The etching process may be plasma etching or any well-known anisotropic etching process.

As shown in FIG. 8, a p implant 48 is then implanted next to the spacer 46, using a p-type dopant such as boron and any known p-type doping method. The spacer 46 obtained by the anisotropic etching process prevents the p implant 48 from completely covering the n-type doped region 42. Therefore, as shown in FIG. 8,
after the p implant 48 is inserted, a small N-tip implant channel 50 is formed under the spacer 46. As also shown in FIG. 8, the N-terminal channel 50 is located directly under the spacer 46 and does not extend below the electrode material 40. The p implant 48 is separated from the electrode material 40 by the N-terminal channel 50.

As shown in FIG. 9, the device assembly is then annealed, which causes activation and diffusion of the p implant 48 and the N-terminal channel 50. The high-temperature annealing process causes the N-terminal channel 50 and the p implant 48 to diffuse in both the vertical and horizontal directions, so that the final N-terminal channel 50 is positioned below the electrode material 40. The p implant 48 also diffuses to a position where the edge of the p implant 48 is aligned with the edge of the electrode material 40. In the final position, the p implant 48 is no longer separated from the electrode material 40 by the N-terminal channel 50. The p-substrate 36 still surrounds the N-terminal channel 50 on the sides not facing the p implant 48 and the oxide interface 38.

After diffusion, the p implant 48 effectively serves as the first gate 30, and portions of the p-substrate 36 serve as the second gate 32 and the third gate 34. In addition, the electrode material 40 effectively serves as the fourth gate 52. The repair during annealing of lattice damage that may have occurred during implantation also activates the doped regions 48 and 50. The N-terminal channel 50 thereby becomes the activated N-terminal channel 26.

Referring again to FIGS. 2, 3 and 4, a second insulator material is then formed along the z direction, forming a second insulator layer 54 surrounding the fourth gate 52. A contact material 56, which may be tungsten or any known contact material, is applied on top of the fourth gate 52 and is surrounded by a third insulator layer 60 formed on top of the second insulator layer 54. A final conductor layer 62 is applied to the top of the contact 56 and the third insulator layer 60 by a metallization process. The final conductor layer 62 is copper or any other acceptable conductive material.

As shown in FIG. 1, the source 22 and the drain 24 are generally spaced from each other in the y direction, and they are N+ doped. The substrate 36 extends mainly in the x and y directions. A P+ depletor electrode 28 is spaced from the source 22 and the drain 24 in the y direction and applies a gate voltage to the first gate 30, the second gate 32, the third gate 34, and the fourth gate 52.

Referring to FIG. 4, N+ doped source 22 and drain 24 regions are provided on opposite sides of the N-terminal channel 26. The source 22 and the drain 24 are placed in contact with the N-terminal channel 26. This arrangement allows current to flow between the source 22 and the drain 24 through the N-terminal channel 26. As shown in FIG. 3, a voltage may be applied to the source 22 and the drain 24 through a contact material 58, which may be selected from any known contact material, such as tungsten.

As shown in FIG. 2, the fourth gate 52 is located above the N-terminal channel 26, the first gate 30 is located at the side of the N-terminal channel 26, and the p-substrate 36 surrounds the N-terminal channel 26 and serves as the second gate 32 and the third gate 34.
The first gate 30 and the second gate 32 are separated in the x direction.

As further shown in FIGS. 1 and 2, the four N-terminal channels 26 extend in the y direction and are spaced apart from each other in the x direction. Depending on the application and current requirements, a device with one or more N-terminal channels 26 can be formed. FIG. 2 shows a total of three first gates 30, four N-terminal channels 26, and two fourth gates 52. Referring to FIG. 2, two N-terminal channels 26 are located under each fourth gate 52, with the p-substrate 36 separating the two. Two N-terminal channels 26 and a part of the p-substrate 36 are located between the two first gates 30.

In use, referring to FIGS. 1 to 4, a set voltage is applied to the source 22 and the drain 24 through the contact material 58 so that a current flows through the N-terminal channel 26. The N-terminal channel 26 is surrounded by the first gate 30 and by the p-substrate material 36 serving as the second gate 32 and the third gate 34. The fourth gate 52 is also disposed above the N-terminal channel 26 in the z direction.

Referring to FIG. 1, when a negative gate voltage is then applied through the P+ depletor 28, the first gate 30, the second gate 32, the third gate 34, and the fourth gate 52 form a reverse-biased region within the N-terminal channel 26, so that the N-terminal channel 26 is "pinched" and becomes completely depleted and non-conductive. The gates form the reverse-biased region by repelling or pushing away holes in the N-terminal channel 26, thereby terminating the flow of electrons. In this embodiment, the majority carriers are holes, but in electron-based devices the majority carriers may be electrons.

The first gate 30 and the fourth gate 52 are P+ doped to make contacting the material easier. When a gate voltage is applied to the first gate 30, the p-substrate 36 material also serves as the second gate 32 and the third gate 34. Surrounding the N-terminal channel 26 with gates limits the current through the N-terminal channel 26 more effectively. When the gate voltage is removed, current again flows between the source 22 and the drain 24.

When a positive gate voltage is applied, a typical n-channel metal oxide semiconductor field effect transistor (NMOS) with an N+ source and drain forms a channel just below the oxide layer. Because electrons are trapped along the silicon-oxide interface as they flow between the source and drain, typical NMOS devices have higher flicker noise, or 1/f (1/frequency) noise.

Compared with NMOS, the JFET described above can achieve lower 1/f flicker noise, due to the absence of a charge-trapping oxide interface, because its conduction occurs through the bulk rather than along the surface of the substrate. However, common JFET designs require carefully designed buried implants to obtain the correct channel depth and control, and additional mask processing and manufacturing are required to achieve this. The added manufacturing steps lead to increased cost and product complexity. If a buried gate is not used in a typical JFET, the resulting JFET will require a few hundred volts to turn off the deep channel, which is inefficient.
The main advantage of the embodiment of FIGS. 1 to 4 is that it uses the existing standard complementary metal oxide semiconductor manufacturing process while relying on the N-terminal channel 26 to reduce 1/f noise without a buried gate. Although there is an oxide or insulator layer 38 in the vicinity of the N-terminal channel 26, the 1/f noise is still significantly reduced in the embodiments of FIGS. 1 to 4.

Therefore, the JFET 20 of FIG. 1 is a microelectronic product having: a substrate 36 extending mainly in the x and y directions; a channel 26 formed on the substrate 36; a source 22 and a drain 24, spaced from each other in the y direction and connected to opposite sides of the channel 26 to provide a set voltage on the channel 26; and first and second gate portions 30 and 32, which are spaced from each other in the x direction and located on opposite sides of the channel 26, allowing the application and removal of a gate voltage on the gate portions, the application of the gate voltage repelling the majority carriers in the x direction, thereby reducing the current conducted between the source 22 and the drain 24.

The substrate 36 includes a part of a wafer, and the JFET 20 also has a p-doped layer on the wafer and a p+ doped implant next to the channel 26, the channel 26 being an n-doped channel on the p-doped layer, the first gate portion 30 and the second gate portion 32 being the p+ doped implant and a portion of the p-doped layer, respectively. The p-doped layer forms a portion of the third gate 34 under the channel 26. The JFET 20 also has an electrode, in the form of the gate 52, above the p-doped layer, and the channel 26 is a tip implant under the electrode.

As is apparent from the description of FIGS. 1 to 9, a method of manufacturing a junction field effect transistor is described. Specifically, a channel 26, a source 22, a drain 24, and first and second gate portions 30 and 32 are formed on the substrate 36 extending mainly in the x and y directions. The source and drain 22 and 24 are spaced from each other in the y direction and are connected to opposite sides of the channel 26 to provide a set voltage across the channel 26. The first and second gate portions 30 and 32 are spaced from each other in the x direction and are located on opposite sides of the channel 26, so that the application and removal of the gate voltage on the gate portions respectively decrease and increase the current conducted between the source 22 and the drain 24.

A method of controlling current is also described. A set voltage is applied to the source 22 and the drain 24 connected across the channel 26 formed on the substrate 36 extending in the x and y directions. Gate voltages are alternately applied and removed on the first and second gate portions 30 and 32 spaced from each other in the x direction, the application of the gate voltage repelling majority carriers in the x direction, thereby reducing the current flowing through the channel 26.

The JFET 20 formed according to the structure of FIGS. 1 to 9 includes a long channel 26 made of a semiconductor material. The material is doped so that it contains a large number of positive charge carriers (p-type) or a large number of negative charge carriers (n-type). There are contacts at each end; these are the source and drain 22 and 24.
The third control terminal (i.e., the gate) surrounds the channel 26 and is doped opposite to the doping type of the channel 26.

With no gate voltage, current flows easily when a voltage is applied between the source 22 and the drain 24. The current is modulated by applying a voltage between the gate and source terminals. The polarity of the gate voltage reverse-biases the p-n junction between the gate and the channel, thereby increasing the width of the depletion region within the junction. Because the current-carrying channel shrinks as the gate voltage increases, the current from the source to the drain also decreases. The gate controls the conduction of the channel 26 in this way, just as in a MOSFET. Unlike most MOSFETs, JFETs are usually depletion devices: they are "on" unless a gate voltage is applied.

The JFET gate has a small current load, which is the reverse leakage of the gate-to-channel junction. The advantage of a MOSFET is that its gate current is extremely low (measured in picoamperes) because of the insulating oxide between the gate and the channel. However, compared to the base current of a bipolar junction transistor, the gate current of the JFET is much lower, and the JFET has a higher transconductance than the MOSFET. Therefore, it is advantageous to use JFETs in some low-noise, high-input-impedance operational amplifier applications, and JFETs are sometimes used in switching applications.

The current in the n-JFET caused by a small voltage $V_{DS}$ is given by the following formula:

$$I_{DS} = \frac{(2a)\,W\,Q\,N_D\,\mu\,V_{DS}}{L}$$

where: $2a$ = channel thickness, $W$ = width, $L$ = length, $Q$ = electron charge = $1.6 \times 10^{-19}$ C, $N_D$ = donor concentration, and $\mu$ = electron mobility.

In the saturation zone,

$$I_{DS} = I_{DSS}\left[1 - \frac{V_{GS}}{V_P}\right]^2$$

In the linear zone,

$$I_D = \frac{(2a)\,W\,Q\,N_D\,\mu}{L}\left[1 - \left(\frac{V_{GS}}{V_P}\right)^{1/2}\right]V_{DS}$$

The second embodiment, shown in FIGS. 10 to 12, is an alternative embodiment having a source 64, a drain 66, an n-well channel 72, and first and second gates 68 and 70. As shown in FIG. 10, in the p-substrate 74, the source 64 and the drain 66 are separated in the y direction, and the first gate 68 and the second gate 70 are separated from each other in the x direction. The n-well channel 72 connects the source 64 and the drain 66 to allow current to flow between them when a voltage is applied through the contact material 78.

As shown in FIG. 12, a voltage may be applied to the first gate 68 and the second gate 70 through the gate contact material 80. As shown in FIG. 11, the n-well channel 72 has a source terminal 72a and a drain terminal 72b. The third gate 76 extends mainly in the x and y directions and is located on top of the first and second gates 68 and 70 and the n-well channel 72. The third gate 76 may be selected from any known effective conductor or gate material, such as polysilicon.

As shown in FIGS. 11 and 12, the n-well channel 72 extends below the first gate 68 and the second gate 70 to connect the source 64 and the drain 66. Specifically, referring to FIG. 12, the first gate 68 and the second gate 70 are aligned in the same plane above the n-well channel 72. Referring to FIG. 11, the source terminal 72a of the n-well channel 72 is completely in contact with the source 64 N+ region; however, the drain terminal 72b of the n-well channel 72 only slightly contacts the drain 66 N+ region. In addition, referring to FIG. 10,
the first gate 68 and the second gate 70 are slightly shifted toward the drain 66 in the y direction.

The n-well channel may have an impurity concentration of about 1 × 10^18 cm^-3, and the source and drain concentrations may be about 1 × 10^20 cm^-3. The first and second gates 68 and 70 and the source and drain 64 and 66 may be manufactured to a depth of about 0.3 μm from the top of the p-substrate 74. The n-well 72 can be manufactured to a depth of about 1.7 μm.

In use, a set voltage is applied between the source 64 and the drain 66 through the contact material 78, allowing current to flow between the source 64 and the drain 66 through the n-well channel 72. However, when a negative gate voltage is applied to the first gate 68 and the second gate 70 through the gate contact material 80, a reverse-bias region is formed by pushing away holes in the n-well channel 72 and the n-well channel terminals 72a and 72b. As shown in FIG. 12, the reverse-bias region pinches off the n-well channel 72 in the z direction. As also shown in FIG. 12, the negative voltage on the third gate 76 biases the n-well channel 72, causing further depletion. Typically, merely pinching off the current in the z direction may still not effectively prevent body current leakage from occurring at the bottom of the n-well channel 72.

However, referring to FIG. 11, the n-well channel 72 only slightly contacts the drain 66 N+ region at the drain terminal 72b. In addition, the first gate 68 and the second gate 70 are laterally disposed on both sides of the n-well channel drain terminal 72b and are shifted toward the drain 66 in the y direction. When a gate voltage is applied to all the gates 68, 70, and 76, the channel is pinched off not only in the z direction but also in the x and y directions. The pinch-off at the n-well channel drain terminal 72b isolates the drain 66, and all current flow is terminated.

The combination of providing a small drain 66 contact area with the n-well drain terminal 72b and placing the first gate 68 and the second gate 70 specifically near the n-well drain terminal 72b achieves drain isolation. Therefore, body leakage through the bottom of the n-well channel 72 is no longer a problem, because the drain 66 and the source 64 are pinched off. With this arrangement, a large voltage is no longer required to pinch off the n-well channel 72; the n-well channel 72 is sufficiently thin that only a few volts are needed to pinch off the channel.

As mentioned above, the NMOS arrangement has higher 1/f noise due to the flow of electrons along the oxide-silicon interface, and a JFET arrangement without a buried gate may require several hundred volts to deplete a deep channel. The main advantage of the embodiment of FIGS. 10 to 12 is that the device can eliminate any 1/f noise caused by the oxide-silicon interface while cutting off the current without using a buried gate and without using a voltage of several hundred volts. In addition, the unique positions of the first gate 68, the second gate 70, and the n-well channel 72 allow the device to deplete the deep channel without using a buried gate, while isolating the drain 66 and eliminating potential body leakage generated at the bottom of the n-well channel.

Although some exemplary embodiments of the present invention have been described in text and illustrated in the drawings, it should be understood that these embodiments are only exemplary and do not limit the present invention, and the present invention is not limited to the specific configurations and arrangements given in the drawings and the text, because various modifications can be conceived by those of ordinary skill in the art. |
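The drain-current expressions reproduced above are straightforward to evaluate numerically. The sketch below implements the small-V_DS and saturation-region formulas as given; every device parameter (channel thickness 2a, width, length, doping, mobility, I_DSS, V_P) is an invented example value chosen for illustration, not a figure from the patent.

```python
Q = 1.6e-19  # electron charge, C (as defined in the text above)


def jfet_ids_small_vds(a2, W, L, Nd, mu, Vds):
    """I_DS = (2a) * W * Q * N_D * mu * V_DS / L  (small V_DS).
    a2 is the full channel thickness 2a; SI units throughout."""
    return a2 * W * Q * Nd * mu * Vds / L


def jfet_ids_saturation(Idss, Vgs, Vp):
    """I_DS = I_DSS * (1 - V_GS / V_P)^2 in the saturation zone."""
    return Idss * (1.0 - Vgs / Vp) ** 2


# Invented example values, for illustration only.
a2 = 0.3e-6   # channel thickness 2a: 0.3 um (cf. the ~0.3 um depths cited)
W = 10e-6     # channel width, 10 um
L = 1e-6      # channel length, 1 um
Nd = 1e24     # donor concentration: 1e18 cm^-3 expressed in m^-3
mu = 0.135    # electron mobility of silicon, m^2/(V*s)

print(f"linear-region I_DS at V_DS = 50 mV: "
      f"{jfet_ids_small_vds(a2, W, L, Nd, mu, 0.05) * 1e3:.2f} mA")
print(f"saturation I_DS at V_GS = -1 V (I_DSS = 2 mA, V_P = -3 V): "
      f"{jfet_ids_saturation(2e-3, -1.0, -3.0) * 1e3:.2f} mA")
```

With these invented values, the linear-region expression gives about 3.24 mA at V_DS = 50 mV and the square-law expression gives about 0.89 mA at V_GS = -1 V; this is meant only to show the shape of the formulas, not to characterize the disclosed device.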
In described examples, a micro-electro-mechanical system (MEMS) (100) is located on a substrate (102). A silicon nitride (SiN) layer (104, 134) is on a portion of the substrate. A mechanical structure (120, 132) has a first end (125) embedded in the SiN layer and a second end (124) overhanging from the SiN layer. |
1. A micro-electromechanical system (MEMS), comprising:
a substrate;
a layer of silicon nitride (SiN) on a portion of said substrate; and
a mechanical structure having a first end embedded in the SiN layer and a second end cantilevered from the SiN layer.
2. The MEMS of claim 1, wherein the SiN layer has a cavity, and wherein the second end is cantilevered within the cavity.
3. The MEMS of claim 1, wherein the composition of the SiN layer is SiOxNyCz, where x is less than 0.1 and z is less than 0.3.
4. The MEMS of claim 1, wherein the mechanical structure is a matrix of interconnected metal structures, wherein interstitial spaces are distributed throughout the matrix of interconnected metal structures.
5. The MEMS of claim 4, wherein a portion of the SiN layer is within a portion of the interstitial spaces.
6. The MEMS of claim 1, wherein the mechanical structure has metallic components.
7. The MEMS of claim 6, wherein the metal component is tungsten.
8. The MEMS of claim 1, wherein the mechanical structure comprises at least one material selected from the group consisting of W, Ti, TiN, SiO2, Al, TiAl, TiN, Al, TiN, TiAl, TiW, SiOxNyCz (where x>0.1 or z>0.2), Ni, Co, NiW, Pt, Ir, IrOx, Ru, RuOx, Au, Ag, Pd, Cu, Ta, TaN, AlN, and Al2O3.
9. The MEMS of claim 1, wherein the mechanical structure is a first material, the MEMS further comprising a layer of a second material between a portion of the first material and the SiN layer.
10. An integrated circuit package comprising:
an integrated circuit (IC) die comprising semiconductor circuitry; and
a microelectromechanical system (MEMS) integrated in the IC die, wherein the MEMS includes:
a substrate;
a layer of silicon nitride (SiN) on a portion of said substrate; and
a mechanical structure having a first portion embedded in the SiN layer and a second portion cantilevered from the SiN layer.
11. The integrated circuit package of claim 10, wherein the SiN layer has a cavity, and wherein the second portion of the mechanical structure is cantilevered within the cavity.
12. The integrated circuit package of claim 10, wherein the composition of the SiN layer is SiOxNyCz, where x is less than 0.1 and z is less than 0.3.
13. The integrated circuit package of claim 10, wherein the mechanical structure is a matrix of interconnected metal features, wherein interstitial spaces are distributed throughout the matrix of interconnected metal features.
14. The integrated circuit package of claim 13, wherein a portion of the SiN layer is within a portion of the interstitial spaces.
15. The integrated circuit package of claim 10, wherein the mechanical structure is a first material, the integrated circuit package further comprising a layer of a second material between a portion of the first material and the SiN layer.
16. The integrated circuit package of claim 10, further comprising a molding compound surrounding the IC die.
17. A method of fabricating a microelectromechanical system (MEMS), the method comprising:
forming a substrate;
depositing an etch stop layer on the substrate;
depositing a SiN layer on the etch stop layer;
patterning the SiN layer to form trenches;
depositing a metallic material in the trenches using vapor deposition;
planarizing the metallic material using chemical mechanical polishing to form a mechanical structure; and
removing a portion of the SiN layer around a portion of the mechanical structure using vapor phase etching.
18.
The method of claim 17, further comprising depositing an additional SiN layer on top of the mechanical structure prior to removing the portion of the SiN layer around the portion of the mechanical structure.19. The method of claim 17, further comprising fabricating transistors on the substrate.20. The method of claim 17, further comprising encapsulating the substrate, the transistors, and the mechanical structure with a molding compound to form a packaged MEMS device. |
Microelectromechanical device with beam structure over silicon nitride undercutTECHNICAL FIELDThe present invention relates to a microelectromechanical device comprising a released mechanical structure over an undercut in a silicon nitride layer.BACKGROUNDMicroelectromechanical (MEM) relays can play an important role as devices for adding functionality and reducing power consumption in various applications, such as sensors and consumer devices for the Internet of Things (IoT) and wearable devices. One type of MEMS device is a mechanical relay. These devices are capable of quasi-ideal switching behavior with very abrupt switching transitions and zero current leakage during the off-state. The multi-terminal operation of the relay also saves energy. See, for example, "High Performance Seesaw Torsional CMOS-MEMS Relay Using Tungsten VIA Layer" by Martin Riverolo et al., 2018. A complementary metal-oxide-semiconductor (CMOS) platform can be used for the monolithic fabrication of such MEMS relays combined with classical CMOS devices.CMOS MEMS is a technology that uses Al (aluminum) metallization and chemical vapor deposition (CVD) of tungsten (W) in a VIA mask to create MEMS structures. A feature of this approach is that silicon dioxide (SiO2) between the metal layers is used as a removable spacer. SiO2 is typically removed using vaporous hydrogen fluoride (HF) or liquid HF. Some CMOS MEMS devices use silicon (Si) (single crystal or polycrystalline) as the MEMS removable layer. Silicon can be etched using a plasma fluorine (F) process or xenon difluoride (XeF2).SUMMARYIn the depicted example, a microelectromechanical system (MEMS) is located on a substrate. On a portion of the substrate there is a layer of silicon nitride (SiN). A mechanical structure has a first end and a second end. The first end is embedded in the SiN layer, and the second end is cantilevered from the SiN layer.BRIEF DESCRIPTION OF THE DRAWINGSFIG. 1 is a cross-sectional view of a portion of an example CMOS integrated circuit chip including a MEMS device with a beam formed in a silicon nitride layer.FIG. 2 is a top cross-sectional view of the MEMS device of FIG. 1.FIGS. 3A-3F illustrate fabrication steps of the MEMS device of FIGS. 1 and 2.FIG. 4A is a top cross-sectional view of another example CMOS integrated circuit chip, and FIG. 4B is a cross-sectional view of the CMOS integrated circuit chip, including MEMS devices with beams formed in a silicon nitride layer.FIG. 5 is a cross-sectional view of another example CMOS integrated circuit chip including a MEMS device with a beam formed in a silicon nitride layer.FIG. 6 is an example packaged MEMS device.DETAILED DESCRIPTIONIn the figures, identical elements are denoted by identical reference numerals for the sake of consistency.CMOS MEMS has several features that make it attractive. CMOS is quite mature, and analog circuits or other circuits can be incorporated on the same wafer as MEMS devices. CMOS wafers are relatively inexpensive and can be fabricated using a variety of known process technologies.One problem with fabricating MEMS using an example CMOS process is the silicon dioxide (SiO2) undercut process. Typically, a SiO2 dielectric layer is formed, and then a metal structure is formed on the SiO2 layer. The wet hydrofluoric acid (HF) etch typically used to undercut a portion of the SiO2 to release a portion of the metal structure to form a MEMS device creates high stresses that can constrain the MEMS structure. 
HF etching also attacks some other materials, such as titanium (Ti), that are commonly used within CMOS structures. Vapor HF is even more reactive and corrodes SiN, which is commonly used as a dielectric material. This makes it difficult to create dielectric elements that are part of MEMS structures.The example CMOS process also has a limitation in that it may not include the materials required for MEMS devices such as relays. Conductive materials such as W and titanium nitride (TiN) used in CMOS processes do not make good contacts for MEMS devices such as relays.In the example described, CMOS metals such as aluminum (Al), Ti, TiN, titanium tungsten (TiW), W, copper (Cu), tantalum (Ta), and tantalum nitride (TaN) can be used together with alternative materials such as: tantalum pentoxide (Ta2O5), titanium dioxide (TiO2), aluminum oxide (Al2O3), titanium aluminum nitride (TiAlN), chromium nitride (CrN), titanium aluminum oxynitride (TiAlON), molybdenum (Mo), aluminum nitride (AlN), aluminum scandium nitride (AlScN), hafnium zirconium oxide (HfZrOx), platinum (Pt), iridium (Ir), iridium oxide (IrOx), lead zirconate titanate (Pb(Zr,Ti)O3), palladium (Pd), palladium oxide (PdO), gold (Au), silver (Ag), nickel-iron alloy (NiFe), iron (Fe), cobalt (Co), nickel (Ni), cobalt-nickel-iron alloy (CoNiFe), ruthenium (Ru), ruthenium oxide (RuO2), etc., which can be used to create piezoelectric actuation and/or relay contacts with undercut SiN dielectrics.These alternative materials can be included in the CMOS process by using a SiN dielectric layer that is planarized during the fabrication process using a chemical mechanical polishing (CMP) step. This makes it possible to create CMOS using SiN as the dielectric between layers. A plasma process using carbon, fluorine, and oxygen (CxFy+O2) is used to provide selective gas removal of SiN without attacking most of these metals and dielectrics. For example, plasma carbon tetrafluoride plus oxygen (CF4+O2) provides a strong etch rate for SiN while only weakly etching SiO2. This selectivity applies to most other materials, although there is some corrosion of W, Mo, and Ru or RuO2. A downstream plasma with low substrate bias is used. This process is similar to that used for ashing resist. This plasma process etches SiN in an almost isotropic manner, which is a useful feature for undercut etching of MEMS structures. This is a completely different etch process than VIA etch, which requires a directional etch to create vias.In the described example, SiO2 and many other dielectrics, such as Al2O3, Ta2O5, and TiO2, can be used to create dielectric features that are not strongly corroded by the plasma etch undercut process.FIG. 1 is a cross-sectional view of a portion of a CMOS integrated circuit chip including a MEMS device 100 having a beam 120 formed in a silicon nitride layer 104. FIG. 2 is a top cross-sectional view of the MEMS device of FIG. 1. In this example, beam 120 is part of MEMS device 100. The figure shows an illustration of a simple cantilever beam with two layers. The bottom layer is W beam 120. This layer is formed using a typical W VIA damascene process flow. The VIA pattern has a maximum size per via, but a mesh structure can be used to create large features. The SiN etch may or may not stop on another patterned layer for better thickness definition. The trenches are then etched and cleaned to remove the remaining resist and any residue that may be present. The next step is to deposit a Ti adhesion and barrier layer 105, which is typically CVD TiN (actually TiCON). 
These barrier materials can be other materials, such as Ta, TaN, Ru, etc. The trenches are then filled with CVD W, and CMP is then used to remove the W outside the trenches. CMP or a subsequent clean removes the adhesion/barrier layer (Ti/TiN) 105. Since W must be protected from the SiN undercut etch, it is covered by unetched material. One technique shown in this figure is to use layer 132, which in this example is Al on TiN. In practice, layer 132 may be other metals or dielectric materials that are not corroded rapidly by the SiN undercut etch process. Example dielectric materials for layer 132 are AlOx, AlN, SiO2, TaOx, and TiO2. This layer can be patterned using an etching process with another mask. Another option is to make it a self-aligned overlay. That process typically starts with a W recess etch (dry or wet), which removes W faster relative to SiN. A protective insulating or conductive barrier layer is then deposited, and the layers above the SiN are then removed using a CMP or etch-back process. In this way, W is protected without using an additional masking step. A key point is that these materials etch much more slowly than SiN in the undercut etch process.Although the figure shows the creation of a cantilever, in practice multiple patterned and unpatterned layers can be located below or above a MEMS layer that has been created using the undercut process. These additional layers can be used to create a variety of MEMS devices using this approach.In this example, a SiN layer 104 is formed over a substrate 102, which in this example is silicon (Si). For simplicity, only a small upper portion of substrate 102 is illustrated. As is well known, CMOS processes typically form active devices in a thin epitaxial layer of silicon formed on top of a bulk silicon wafer. Also for simplicity, this example is not drawn to scale. The beam 120 is significantly longer than it is thick. Typical VIA thicknesses 121 are those used in CMOS devices, between 0.1 μm and about 5 μm. The total beam thickness in this example is the VIA thickness plus any additional layers above layer 132 or below layer 120 (not shown in this figure). The released beam length 124 is typically much longer than the beam thickness, with a typical aspect ratio (length to height) of 5 to 1 or even greater. For example, if the thickness 121 of the beam is 1 μm, the released portion 124 of the beam 120 is typically much longer than 5 μm. In other examples, the released portion of such beams may be longer than 20 μm and possibly even longer than 100 μm, depending on the material and cross-sectional design. The total length 122 of the cantilever beam feature 120 is always longer than the released beam length 124 so that a reasonable length 125 remains embedded in the remaining SiN 104.In this example, the beam 120 is formed from tungsten (W) with a thin liner of TiN. Tungsten and TiN are deposited in a damascene process using known chemical vapor deposition (CVD) techniques, and CMP of the W and TiN is used to remove unwanted metal. TiN also acts as a diffusion barrier and as an etch protection layer for the W after the SiN has been removed by the undercut etch process. Not shown is a thin layer of Ti, which is typically used as an adhesion layer and also to create a lower-resistance electrical connection to the metal below the structure. Ti is typically deposited by directional sputter deposition using an ionized metal plasma to achieve a thin layer of metal on the bottom of VIA-type features. 
Damascening, or inlay, is the technique of adorning gold, silver, or copper wires on iron, steel, bronze, or brass surfaces. Narrow undercuts are made in the surface of the metal with a chisel, and the wire is pressed into the undercuts with the aid of a hammer. In this example CMOS process, a CMP process is used to remove unwanted tungsten after the CVD process, as described in more detail below. This requires a flat surface, so either the surface has not been patterned, or it has been planarized using CMP prior to the W patterning step. Although this is similar to what is done in a CMOS process, in this case the W is surrounded on the bottom and sides by the SiN layer 104 instead of SiO2.In a damascene process, a dielectric layer is first deposited onto a substrate. The dielectric layer is then patterned and filled by metal deposition. The dual damascene process is characterized by patterning both the via and the trench so that the metal deposit fills both while leaving a gap area between the via and the trench. The damascene process for the beam 120 uses the existing interlayer dielectric in which vias and trenches for the conduction paths are etched. In this case, the dielectric layer 104 is SiN and the metal used for the beam 120 is tungsten. In another example, the metal used for beam 120 may be selected from other metals, such as TiW.In this example, W or TiW is used to create the MEMS beam structure 120 and provide a low creep rate. Creep in this context is the change in deformation of a beam after exposure to time and temperature and possibly additional stress. Typical product times/temperatures are 10 years at 85°C, 105°C, 125°C, or 150°C. Some products require longer times or even higher temperatures. Stress depends on the application. Even if there are no added external stresses, there are always built-in internal stresses that can cause the beam to change position. Many devices require that the beam position (lateral and vertical) not change when not subjected to external stress. Of course, beams always have a spring constant and do bend under stress. The thickness of the W is selected to provide greater than about 75% of the stiffness of beam 120 so that slow creep of other materials does not degrade the properties of the overall MEMS structure. In the example shown, a thin layer 105 of TiN surrounds the beam 120 and an aluminum conductor 132 is patterned on top of the beam 120. Aluminum is known to have a high creep rate, but if the W has a small relative creep and is the dominant part of the beam stiffness, the whole beam or MEMS structure will not move much. In this example, the Al layer can be used not only to protect the top of the W in the beam from the SiN undercut etch, but also as an electrical contact to the beam. The Al layer can also be on the bottom of the beam and act as an etch stop layer for the beam. In various examples, the Al layer typically also includes other materials, such as Ti, TiAl, or TiN. In various examples, other materials may be present as desired for a particular function.Sacrificial SiN layer 134 is formed over SiN layer 104 and various other elements, such as Al conductor 132. A SiO2 dielectric layer 136 is formed on top of the SiN layer 134. Openings 138 are patterned in dielectric layer 136 to guide the undercut process. 
Layer 136 may be multiple layers or even materials other than SiO2, such as AlOx or SiON.The undercut region 140 is formed using carbon tetrafluoride plus oxygen (CF4+O2) to provide a strong etch rate for SiN while only weakly etching the SiO2 etch stop layer 103 and the SiO2 top dielectric layer 136. In practice, the process requires plasma-activated fluorine plus oxygen. Various options exist for the fluorine (F) source, such as SF6 or other fluorocarbons, plus F2, NF3, HF, etc. There are also multiple options for the O2 source, such as H2O, O3, NO2, CO2, etc. CF4+O2 is a common process that has demonstrated good results, but other processes do exist that use these alternative chemistries. As shown in FIGS. 1 and 2, a portion 141 of the undercut region 140 extends across the bottom of the released portion 124 of the beam 120 and up the sides of the portion 124 of the beam 120. A portion 142 of SiN layer 134 is also etched away. Regions 140, 141, 142 together are cavity regions in SiN layers 104, 134 into which a portion of beam 120 cantilevers. In this way, the cantilevered released portion 124 of the beam 120 is separated from the SiN layers 104, 134 and thus can move in response to forces such as electrostatic forces. Another portion 125 of the beam 120 remains firmly embedded and anchored in the SiN layer 104. In this way, beam 120 can be used as part of MEMS device 100.FIGS. 3A-3F illustrate fabrication steps of the MEMS device 100 of FIGS. 1 and 2. An entire wafer 300 containing tens or hundreds of devices is fabricated as a unit. For simplicity, only the on-die portion of substrate 302 of wafer 300 is illustrated.At FIG. 3A, in this example, a SiO2 layer 303 is formed over a substrate 302, which in this example is silicon (Si). In this example, the CMOS process typically fabricates the active devices in a thin layer of epitaxial silicon formed on top of a bulk silicon wafer. During the undercut process, the SiO2 layer 303 will act as an etch stop layer. A SiN layer 304 is then formed over the SiO2 layer 303. SiN layer 304 is thick enough to allow beam 320 to be formed therein.One or more deposition steps may be required to achieve a sufficient thickness of SiN layer 304. The generic term "SiN" for silicon nitride is used herein to refer to any of the different forms of silicon nitride, such as Si3N4, Si(x)N(y)H(z), and the like. In this example, the SiN layer has a composition of SiOxNyCz, where x is less than 0.1 and z is less than 0.3, while N makes up the remainder of the material except for Si. The hydrogen (H) symbol is often omitted from chemical formulas (e.g., SiN), but hydrogen is commonly present in many such materials, including metals.In this example, SiN layer 304 is deposited using a chemical vapor deposition (CVD) process or plasma enhanced chemical vapor deposition (PECVD). Chemical vapor deposition is a coating process that uses heat-induced chemical reactions at a heated substrate surface, where the reagents are supplied in gaseous form. The most common CVD or PECVD silicon nitrides typically contain up to 8% hydrogen. Other methods of depositing SiN are sputter deposition or electron beam evaporation, but these are less common.After the deposition of the SiN layer 304 is complete, the wafer surface is typically planarized using a chemical mechanical polishing (CMP) process. This is necessary if there are underlying layers that have introduced topography. 
The CMP process uses an abrasive and corrosive chemical slurry (usually a colloid) in combination with a polishing pad and retaining ring (which usually has a larger diameter than the wafer). The polishing pad and wafer are pressed together by a dynamic polishing head and held in place by the retaining ring. The dynamic polishing head rotates with a different axis of rotation (i.e., not concentric). This removes material and tends to even out any topography irregularities, making the top surface 3041 of the wafer flat, also known as being "planar".At FIG. 3B, surface 3041 is patterned and etched to form trenches 306 within SiN layer 304 using known or later developed etching techniques. A thin layer 305 of TiN is then deposited over the wafer. This TiN layer coats the floor and walls of trench 306.At FIG. 3C, a layer of tungsten 307 is deposited over the surface of the wafer and into trench 306. Tungsten layer 307 adheres to TiN layer 305.At FIG. 3D, another CMP step has been performed to remove the tungsten layer 307 everywhere except within the trench 306. In this way, beam 320 is formed within SiN layer 304. As mentioned above, this is a damascene process. An alternative to using CMP to remove these layers is an etch-back process.TiN layer 330 is deposited over the surface of wafer 300. An aluminum layer is then deposited over the surface of wafer 300. A sacrificial layer (not shown) is then deposited, patterned, and etched to form aluminum conductors 332 that form the contacts of MEMS relay device 100 (FIG. 1).Another SiN layer 334 is then deposited over the surface of the wafer 300, followed by a CMP process to planarize the surface 3042 of the wafer 300.At FIG. 3E, a SiO2 dielectric layer 336 is deposited over the planarized surface of the wafer 300. A sacrificial layer (not shown) is then deposited, patterned, and etched to form openings 338 and 339. The opening 338 will guide the undercut process around the beam 320. The opening 339 will direct an etching process to form a via to contact the aluminum conductor 332. Although only two openings are illustrated for simplicity, other openings are made for various points of contact with other features (not shown) on wafer 300.At FIG. 3F, wafer 300 is exposed to carbon tetrafluoride plus oxygen (CF4+O2) plasma 350, 351 through openings 338, 339, thereby providing a strong etch rate for SiN layers 304, 334 while only weakly etching the SiO2 etch stop layer 303 and the SiO2 top dielectric layer 336. In this way, an undercut region 340 and a contact region 343 are formed. A portion 341 of the undercut region 340 extends across the bottom of the released portion 324 of the beam 320 and up the sides of the portion 324 of the beam 320. A portion 342 of SiN layer 334 is also etched away. Regions 340, 341, 342 together are cavity regions in SiN layers 304, 334 into which a portion of beam 320 cantilevers. The released portion 324 of the beam 320 is separated from the SiN layers 304, 334 to form a released mechanical structure and thus can move in response to a force such as an electrostatic force. Another portion 325 of the beam 320 remains firmly anchored in the SiN layer 304. In this way, beam 320 can be used, for example, as a resonator in a MEMS relay or as part of a supporting beam.Although not described herein, various CMOS transistors may also be processed on wafer 300 using known or later developed integrated circuit processing techniques. 
After completion, wafer 300 is sawed or otherwise separated into individual chips (also referred to as dies). The individual dies are then attached to a lead frame and encapsulated using known or later developed IC packaging techniques, such as molding with a molding compound, to provide a packaged MEMS device integrated with CMOS circuitry.FIG. 4A is a top cross-sectional view, and FIG. 4B is a cross-sectional view, of a portion of another example CMOS integrated circuit chip including a MEMS device 400 with a beam 420 formed in silicon nitride layers 404, 434. In this example, the beam 420 is formed with a matrix of vias and slots etched through a portion 424 of the beam 420, indicated generally at 427. The matrix 427 of vias and slots results in a matrix 427 of interconnected metal members with interstitial spaces 426 distributed throughout the beam portion 424. Initially, interstitial spaces 426 will be filled with SiN remaining from SiN layer 404. In other words, the interconnected metal members 427 of the beam portion 424 resemble a waffle pattern.MEMS device 400 is fabricated in a similar manner to that shown in FIGS. 3A-3F, but with the addition of the steps of forming vias 426 and slots 427. Referring to FIG. 4B, during the plasma etch process described in FIG. 3F, the wafer on which the MEMS device 400 is processed is exposed to carbon tetrafluoride plus oxygen (CF4+O2) plasma 450, 451 through openings 438, 439. This provides a strong etch rate for the SiN layers 404, 434, while only weakly etching the SiO2 etch stop layer 403 on the substrate 402 and the SiO2 top dielectric layer 436. In another example, it is possible to replace or add other dielectrics, such as Al2O3, to further reduce the etching of the dielectric. Furthermore, metals that are not strongly corroded by the SiN undercut etch process can be used above and below the MEMS beams, as long as SiN is above and below these new structural layers. In this way, an undercut region 440 and a contact region 443 are formed. A portion 441 of the undercut region 440 extends across the bottom of a portion 424 of the beam 420 and up the sides of the portion 424 of the beam 420. A portion 442 of the SiN layer 434 is also etched away such that the portion 424 of the beam 420 separates from the SiN layers 404, 434 to form a released mechanical structure and thus can move in response to forces such as electrostatic forces. Another portion 425 of the beam 420 remains firmly anchored in the SiN layer 404. In this way, beam 420 can act as a MEMS relay.In this case, plasma etch 450 removes SiN from the interstitial spaces 426 and then diffuses through the interstitial spaces 426 to form portions of undercut region 441. In this manner, a large-area undercut region 441 may be formed under the beam that is larger than could be formed under a solid beam such as beam 120 (FIG. 1).FIG. 5 is a cross-sectional view of a portion of another example CMOS integrated circuit chip including a MEMS device 500 with a released mechanical structure 520 formed in silicon nitride layers 504, 534 on a silicon substrate 502. In this example, W VIA features 520 are surrounded by CVD TiN 505 on all sides except the top, and are initially embedded in SiN, landing on patterned metal features. In this example, the bottom patterned feature includes an iridium (Ir) layer 560 that is located on the bottom of the beam 520 during the fabrication process. 
The bottom patterned feature can be made of any dielectric material or metallic material that is not strongly corroded by the SiN undercut etch. In this example, the bottom layer consists of a TiAlN layer 561 on top of Ir 560. In this example, Ir 560 is on the bottom of the moving MEMS beam 520. This could be used as the top contact, over a bottom layer not shown in this figure, to create a relay that closes when the beam bends down to make an electrical connection with a bottom electrode that does not move in this example. As mentioned earlier, the SiO2 536 on top of the W protects the W from the SiN undercut etch process. CVD TiN can protect the W on the sides and bottom if needed. In this case, this protective top layer is patterned and etched prior to the SiN undercut process.In this example, the undercut region 540 is processed using a carbon, fluorine, and oxygen plasma (CxFy+O2) to provide selective gas removal of the SiN layers 504, 534 through openings in the dielectric SiO2 layer 536 without etching the Ir layer 560 or the SiO2 etch stop layer 503.FIG. 6 is an example packaged MEMS device 600. In this example, integrated circuit chip 671 is fabricated using known or later developed CMOS fabrication techniques. CMOS circuitry 672 is formed in IC 671 and includes CMOS transistors, passive devices, and interconnecting conductors. One or more MEMS devices 673 are formed in IC 671. MEMS device 673 may be similar to any of devices 100 (FIG. 1), 400 (FIGS. 4A, 4B), 500 (FIG. 5), or other MEMS devices fabricated within a SiN layer using a plasma etch process, as described in detail above.IC 671 is attached to lead frame 670 including contacts 674. Bond wires 675 connect bond pads on IC 671 to contacts 674 using known or later developed wire bonding techniques.Molding compound 676 encapsulates IC 671 using known or later developed encapsulation techniques. In this example, the completed MEMS device 600 is packaged as a surface mount device.OTHER EXAMPLESIn the example described, CVD tungsten protected by CVD TiN on three sides was used to form beam structures within SiN. The top side can be protected by another patterning and etching layer or by a self-aligned process, such as recessing the W, then forming a barrier metal such as TiN and using more CMP. In other examples, physical vapor deposition (PVD) of Ti, Ta, TiW, TiN, or TaN may be used to form beam structures within SiN. In each case, using carbon, fluorine, and oxygen plasmas to undercut SiN has less impact on other materials. SiN is stronger and has higher thermal conductivity than SiO2.SiN undercut etching results in a clean surface where residual carbon or fluorine can be removed using a plasma or vapor cleaning process with H2, H2O, O2, N2, NH3, NO, etc.In the example described, W is used as the low-creep material. However, there are other low-creep materials, examples of which are included in Table 1. All materials in Table 1 have extremely high melting temperatures above 1500°C, and most of them have melting temperatures above 2000°C. Materials commonly used in semiconductor processes that qualify as low-creep materials are C, Ta, Mo, Ir, Ru, Ti, and Pd. In addition, compounds such as TiN or TaN can also be used as high-melting-point, low-creep materials. Alloys, including commonly used W alloys such as TiW and NiW, also have high melting points and low creep. 
With a few exceptions, most of the materials listed in Table 1 are not typically used in CMOS processes and therefore do not have well-established material handling facilities for deposition, etching, and cleaning. Most of these materials can be deposited by sputter deposition, and thus need to be patterned by a patterning and etching process rather than the damascene process discussed in the described examples. Some of these materials will be etched by the SiN undercut etch process and therefore need to be protected. This can be done using protective layers, such as Ta or even a thin layer of SiO2, TiN, Al, or AlOx, at the top and bottom of the stack. If necessary, the sides can also be protected by depositing a protective material (CVD TiN, AlOx), followed by an etch-back process to remove material on the flat exposed surfaces. One advantage of using an etch process instead of a damascene process to create beams is that solid beams can be created.Table 1 - Example Low Creep MaterialsIn the described example, a simple beam structure that can be used as a relay is described, the beam having a generally rectangular shape. In other examples, more complex structures can be formed in SiN using the plasma etch process described herein. These structures can be used in an extremely wide variety of different MEMS devices. A variety of materials compatible with the plasma etch undercut process can be used to create the wide variety of possible device types. These can be simple structures using electrostatic, magnetic, piezoelectric, or thermal effects to create actuators. High-temperature metals such as Pt, Ir, W, Ru, Mo, and Ti can be used to create high-temperature heaters with various applications, such as gas flow sensors and IR sources. MEMS structures with these heaters can be used to create resonators, IR detectors, and thermal detectors. Electrical structures such as relays or RF switches are possible with reliable contact materials. Variable capacitor devices can be fabricated using flexible beams as described herein.In the example described, a part of the beam is released from the SiN layer, while another part remains embedded in the SiN layer. In other examples, a released mechanical structure with no remainder in the SiN layer may be fabricated. In some examples, the released mechanical structure may be supported by torsion bars or similar support mechanisms attached to the SiN layer. As used herein, the term "mechanical structure" refers to fully and partially released structures of various shapes and sizes.In the example described, the cantilever beams are positioned within cavities in the SiN layer. In other examples, the SiN layer may be configured such that it does not completely surround the cantilevered mechanical structure. For example, there may not be a top layer above the mechanical structure. In another example, a substantial portion of the SiN layer may be removed, in which case the cantilevered mechanical structure may protrude from the edge of the SiN layer into a substantially open space.In the example described, the completed packaged device is a surface mount device with multiple contacts on the bottom side of the package. However, in other examples, IC packages may have any number of known or later developed configurations and may have various forms, materials, shapes, dimensions, numbers of contacts, contact shapes, and the like. Furthermore, the MEMS resonator and/or any other components may be packaged, mounted, etc. in the IC package in various configurations. 
Other examples of IC packages include wafer-level packages and die-level packages.Many devices are packaged with epoxy plastic that adequately protects the semiconductor device and has the mechanical strength to support the leads and packaging process. Some integrated circuits have leadless packages, such as quad flat no-lead (QFN) and dual flat no-lead (DFN) devices, which physically and electrically couple the integrated circuit to a printed circuit board. Flat no-lead devices (also known as micro-leadframe (MLF) and small-outline no-lead (SON) devices) are based on surface-mount technology that connects integrated circuits to the surface of the printed circuit board without the need for through-holes in the printed circuit board. Perimeter pads on the package provide electrical coupling to the printed circuit board. Another example may include a package completely encapsulated in a molding compound, such as a dual in-line package (DIP).In this specification, the term "couple" and its derivatives mean an indirect, direct, optical, and/or radio connection. Thus, if a first device couples to a second device, that connection may be through a direct electrical connection, through an indirect electrical connection via other devices and connections, through an optical connection, and/or through a radio connection.Modifications to the described embodiments are possible, and other embodiments are possible, within the scope of the claims. |
Embodiments of the disclosure are directed to controlling an endpoint device using a central control server. The central control server is configured to communicate with the endpoint device across a communications interface compliant with a remote direct memory access (RDMA) compliant protocol. The central control server includes an RDMA network interface controller and a control process. The control process can execute an endpoint device algorithm to identify read and write commands to be sent across the RDMA protocol-compliant interface to the endpoint device. The RDMA network interface controller can convert messages into RDMA-compliant messages that include direct read or write commands and memory location information. The endpoint device can also include a network interface controller that can understand the RDMA message, identify the memory location from the message, and execute the direct read or write access command. |
CLAIMS:1. A control server apparatus comprising:a processor implemented at least in hardware to execute a control process representing an endpoint device to identify a next action for the endpoint device;a network interface controller implemented at least in hardware to communicate a message across a communications interface compliant with a remote direct memory access (RDMA) protocol with an endpoint device, the message comprising a steering tag, a steering tag offset, and a command for direct memory access of the endpoint.2. The control server apparatus of claim 1, wherein the processor identifies a steering tag value for the direct memory access for the endpoint device based on executing a control process for the endpoint device, and wherein the memory location comprises a steering tag offset value.3. The control server apparatus of claim 1, further comprising an integrated switch connecting the network interface controller with the endpoint.4. The control server apparatus of claim 3, wherein the processor identifies a MAC address of the endpoint device based on executing the control process and the integrated switch routes the message to the endpoint device based on the MAC address.5. The control server apparatus of claim 1, wherein the network interface controller comprises an RDMA controller to configure an RDMA message for transmission to the endpoint device, the RDMA message comprising a direct memory access command and the memory location.6. The control server apparatus of claim 1, further comprising a steering tag table that includes steering tag values that correspond to memory locations of the endpoint device, and wherein the processor executes a control process corresponding to the endpoint device to identify a steering tag that corresponds to a memory location for a direct memory access of the endpoint device.7. A computer program product tangibly embodied on non-transitory computer readable media, the computer program product including instructions that when executed are operable to:execute, at a central server, a control process of an endpoint device;identify a memory location for direct memory access of the endpoint device based on the control process of the endpoint device;construct a remote direct memory access (RDMA) message that includes the memory location and a direct memory access command; andtransmit the RDMA message to the endpoint device across a communications interface compliant with an RDMA protocol.8. The computer program product of claim 7, the instructions further operable to identify, based on the control process, a steering tag value that corresponds to the memory location of the endpoint device for the direct memory access command.9. The computer program product of claim 7 or 8, the instructions further operable to identify a machine address for the endpoint device, and wherein constructing the RDMA message comprises adding the machine address of the endpoint device to the RDMA message.10. The computer program product of claim 7, the instructions further operable to receive, from the endpoint device across the communications interface compliant with the RDMA protocol, a read response from the endpoint device.11. The computer program product of claim 7, wherein transmitting the RDMA message to the endpoint device across a communications interface compliant with an RDMA protocol comprises transmitting the RDMA message to an endpoint control interface associated with the endpoint device.12. 
An endpoint device in communication with a central control server across a communications interface compliant with a remote direct memory access (RDMA) protocol, the endpoint device comprising:a memory mapped register; anda network interface controller implemented at least in hardware to:receive an RDMA message from the central control server across the communications interface;identify a memory location in the memory mapped register for direct memory access from the RDMA message;identify a command for the direct memory access from the RDMA message; anddirectly access the memory location to satisfy the command.13. The endpoint device of claim 12, wherein the RDMA message identifies a memory location in the memory mapped register, and wherein the network interface controller is configured to directly access the memory location in the memory mapped register.14. The endpoint device of claim 13, wherein the memory location of the message comprises a steering tag offset value that corresponds to a memory location in the memory of the endpoint device.15. The endpoint device of claim 14, wherein the network interface controller comprises a hardwired memory register address, and the network interface controller is configured to:identify the memory register address in the memory based on comparing the memory register address with a steering tag offset value.16. The endpoint device of claim 12, wherein the network interface controller comprises an RDMA network interface controller.17. The endpoint device of claim 12, wherein the endpoint device lacks one or both of a microcontroller or a network processor.18. A computer program product tangibly embodied on non-transitory computer readable media, the computer program product including instructions that when executed are operable to:receive a message from across a communications interface compliant with a remote direct memory access (RDMA) protocol;identify a memory location from the message for a direct memory access;identify a command from the message; andexecute the direct memory access based on the command from the message.19. The computer program product of claim 18, wherein the message comprises a steering tag offset value that identifies a memory location of a memory of the endpoint device.20. The computer program product of claim 19, the instructions further operable to compare the steering tag value in the message with a steering tag value at the endpoint device, the steering tag value at the endpoint device corresponding to a memory location of the memory at the endpoint device.21. 
A system comprising:a central control server comprising:a processor implemented at least in hardware to execute a control process representing an endpoint device to identify a next action for the endpoint device, anda network interface controller implemented at least in hardware to communicate a message across a communications interface compliant with a remote direct memory access (RDMA) protocol with an endpoint device, the message comprising a steering tag, a steering tag offset, and a command for direct memory access of the endpoint; andan endpoint device in communication with a central control server across a communications interface compliant with a remote direct memory access (RDMA) protocol, the endpoint device comprising:a memory mapped register; anda network interface controller implemented at least in hardware to:receive an RDMA message from the central control server across the communications interface;identify a memory location in the memory mapped register for direct memory access from the RDMA message;identify a command for the direct memory access from the RDMA message; anddirectly access the memory location to satisfy the command.22. The system of claim 21, wherein the endpoint device lacks one or both of a microcontroller or a network processor.23. The system of claim 21, wherein the network interface controller comprises an RDMA controller to configure an RDMA message for transmission to the endpoint device, the RDMA message comprising a direct memory access command and the memory location.24. The system of claim 21, wherein the network interface controller comprises a hardwired memory register address, and the network interface controller is configured to:identify the memory register address in the memory based on comparing the memory register address with a steering tag offset value.25. The system of claim 21, wherein the network interface controller comprises an RDMA network interface controller. |
DIRECT MEMORY ACCESS FOR ENDPOINT DEVICESCROSS-REFERENCE TO RELATED APPLICATION[0001] This application claims the benefit of priority to U.S. Nonprovisional (Utility) Patent Application No. 14/953,750 filed 30 November 2015 entitled, "DIRECT MEMORY ACCESS FOR ENDPOINT DEVICES", which is incorporated herein by reference in its entirety.FIELD[0002] This disclosure pertains to direct memory accesses, and more particularly, to direct memory accesses for endpoint devices.BACKGROUND[0003] Communicating with remote hardware applications may include the use of network packet processing at the remote hardware. The use of additional processing of incoming and outgoing packets may result in increased resource requirements, increased latency, and cost.[0004] Complex allocations of resources for sending and/or receiving packets are used to schedule communications between a controller and the endpoint devices. Scheduling of transactions by preallocated transmit time windows can result in complications, such as increased latency and overhead, decreased usefulness of the communications link, and requiring specialized hardware.BRIEF DESCRIPTION OF THE DRAWINGS[0005] FIG. 1 is a schematic block diagram for a remote direct memory access control system in accordance with embodiments of the present disclosure.[0006] FIG. 2 is a schematic block diagram for an apparatus for executing an endpoint device in accordance with embodiments of the present disclosure.[0007] FIG. 3 is a process flow diagram for communicating with an endpoint device across a remote direct memory access compliant protocol in accordance with embodiments of the present disclosure.[0008] FIG. 4 is a process flow diagram for performing direct memory accesses based on a command received across a remote direct memory access compliant protocol in accordance with embodiments of the present disclosure.DETAILED DESCRIPTION[0009] Automation systems can include autonomously operating subsystems. The protocols used by some automation systems are designed for serial communications. TCP/IP connections and Ethernet are used by others. To maintain serial protocol compatibility, the Ethernet media uses time domain multiplexing to mimic the legacy serial protocol. Using Ethernet reduces latency, but automation protocols often do not take full advantage of the features that Ethernet provides.[0010] This disclosure describes a central control server that monitors and controls one or more endpoint devices, such as those in a workflow for an automation system. The central control server uses the RDMA protocol to directly read and/or write endpoint device operational parameters, e.g., via memory mapped control registers of the endpoint device state machine. This allows for low latency real time control over the managed system. Examples of endpoint devices include automatons, robots, machines, process flows, industrial processes, mechanical devices, power systems, etc.[0011] Instead of each endpoint device being controlled by a micro-controller or network processor local to the endpoint, this disclosure describes moving control to a central control server. By moving control to a central control server, end-to-end workflow analysis and optimization can be realized. Endpoint devices would no longer be limited to data within their immediate subsystem. 
Endpoint device subsystems can be repurposed as needed because specific control functionality for applications is moved to the central control server, while endpoint devices retain functionality to implement direct read or write access to execute commands received from the central control server. For example, assembly robots can automatically change extensions to perform different tasks, which can reduce idle time.[0012] Additionally, the endpoint devices no longer need a micro-controller or network processor. Instead, an endpoint controller interface in communication with the endpoint device can include an RDMA network interface controller (RNIC) that can be used to parse commands received from the central control server across the RDMA interface. Moreover, a network interface controller with reduced complexity can further reduce the costs and latency for receiving and executing commands (e.g., an rNIC with a lower case "r" is introduced). This less complex rNIC can further reduce the unit cost and points of failure while maintaining functionality to parse RDMA messages.[0013] An additional advantage of the present disclosure is end-to-end safety. Instead of the subsystems operating independently, in which case the subsystems have a limited ability to detect or adapt to other systems, the present disclosure improves overall system safety by centralizing control of the entire line. Interactions can be determined before commands are issued by, e.g., confirming machine addresses in RDMA messages received from the central control server. Additionally, instead of each endpoint device holding its own state information, the central control server can hold the state information for each endpoint device, and therefore, for the entire system. Changes to the state of an endpoint device can compromise safety; by having the central control server monitor the state information for each endpoint device and respond to errant states or changes to states, the central control server can address issues, shut down endpoint devices, or shut down the entire workflow. The central control server would also be able to alert emergency responders, alert other workflows of upstream issues, and track valuable metrics. Further, state information can be updated quickly and often without burdening the communications interface or the central control server processing.[0014] This disclosure can utilize auto-configuration of the endpoint and controller utilizing an XML file to exchange capabilities. These can include sensor types, number of axes, range of motion, extension limits, attachment types, security protocols supported, power levels, belt rates, and other parameters (e.g., endpoint device parameters, etc.).[0015] FIG. 1 is a schematic block diagram for a remote direct memory access (RDMA) control system 100 in accordance with embodiments of the present disclosure. RDMA control system 100 includes a central control server 102 in communication with one or more endpoint control interfaces 122 across an RDMA protocol compliant communications interface 130 (in short, an RDMA interface 130). In some embodiments, each endpoint control interface 122 can be part of a process workflow 120 (e.g., an industrial workflow or manufacturing plant). In some embodiments, each endpoint control interface 122 can be autonomous from the others and/or part of different workflows. Each endpoint control interface 122 can be connected to, integrated into, or otherwise in communication with an endpoint device 124. 
Endpoint device 124 can be an automaton, robot, machine, process flow, industrial process, mechanical device, power system, etc. Endpoint device 124 can be implemented at least partially in hardware. Each endpoint device 124 can be the same or can be different.[0016] The central control server 102 can execute a control process 119 that models the processes of the endpoint device 124, or can simulate the endpoint device 124. The processor 104 can execute the control process 119 using state information 107 and endpoint device models 116. In some embodiments, the control process can include one or more endpoint device models 116 that model the control processes for each process, procedure, or action associated with an endpoint device. Endpoint device models 116 can include a local or internal model of each endpoint device 124. The endpoint device models 116 can include each process or procedure that the endpoint device 124 would perform to derive a next state of the endpoint device 124. Endpoint device models 116 can make use of state information 107 received from the endpoint control interface 122 across the RDMA interface 130 and/or state information 107 stored in memory 106.[0017] Processor 104 can be implemented at least partially in hardware, and can include software and firmware. The processor 104 can include any processor or processing device, such as a microprocessor, an embedded processor, a digital signal processor (DSP), a network processor, a handheld processor, an application processor, a co-processor, a system on a chip (SOC), or other device to execute code.[0018] The control process 119 uses the endpoint device model 116 and state information 107 to identify a next action or next state for the endpoint device 124, essentially running the model at the central control server 102 to mimic the processes or procedures of the endpoint device 124. To achieve the next desired state, a command may need to be sent to the endpoint device, indicating what needs to change in order to achieve that state. The next state information can correspond to a read or write command, which can include a read length and memory location or a write length, value, and memory location. The RNIC 108 can convert the command into a message compliant with the RDMA protocol. The message can include, among other things, a machine address for the endpoint device and a steering tag. The steering tag represents a memory region for the read/write command at the endpoint device. The steering tag can include a steering tag offset value to specify the memory location for the read/write command. The steering tag offset may be associated with a control register in the endpoint device state machine. The message can also include the command and a value.
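As a concrete illustration, a minimal C sketch of such a control message follows. The field names, widths, and ordering are assumptions made for illustration only; the disclosure specifies the message contents (machine address, steering tag, steering tag offset, command, length, and value) but not a wire format.
#include <stdint.h>

/* Hypothetical, illustrative layout only: the disclosure describes the
 * message contents but does not prescribe field names or widths. */
enum ctrl_op {
    CTRL_READ      = 0,  /* direct read of a memory mapped register  */
    CTRL_WRITE     = 1,  /* direct write of a memory mapped register */
    CTRL_READ_RESP = 2   /* read response returned by the endpoint   */
};

struct ctrl_msg {
    uint8_t  dst_mac[6];   /* machine address of the endpoint device       */
    uint32_t conn_addr;    /* connection address used to verify the source */
    uint32_t stag;         /* steering tag: registered memory region       */
    uint32_t stag_offset;  /* tagged offset: register within the region    */
    uint32_t op;           /* one of enum ctrl_op                          */
    uint32_t length;       /* read length or write length in bytes         */
    uint64_t value;        /* value to write, or data returned on a read   */
};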
[0019] In some embodiments, the central control server 102 includes a virtual machine 110. In some implementations, the control process 119 can reside in the virtual machine 110 or be implemented by the virtual machine 110. While only one virtual machine is shown, central control server 102 may include more virtual machines than those illustrated. Virtual machine 110 may make use of hardware resources, including processor 104 and memory 106. Hardware resources may be virtualized, meaning that a single physical hardware resource may be partitioned into multiple virtual hardware resources to enable system 100 to use the single physical hardware resource in multiple virtual machines 110. Virtualization may be implemented using virtual machine monitor (VMM) 112. In an embodiment, VMM 112 includes software that imposes a virtualization layer in central control server 102 in which hardware resources may be virtualized into a virtual machine 110. The virtual machine 110 can make use of state information 107 and endpoint device models 116 to, e.g., determine a next state for the endpoint device 124.[0020] The virtual machine 110 can execute endpoint device models 116 to execute operations associated with the endpoint device 124. The virtual machine 110 can use state information received from the endpoint control interface 122 across the RDMA interface 130 to execute algorithms, thereby moving processing of algorithms from the endpoint control interface 122 to the central control server 102. The virtual machine 110 can execute commands to alter the state of the endpoint control interface 122. This state information is communicated across the RDMA interface 130 in an RDMA message that includes a write command, a memory location indicator (e.g., a steering tag), and other information, such as a machine address and connection address. Essentially, the virtual machine 110 can perform processing for the endpoint device 124; the RNIC 108 and the RDMA interface 130 allow for low latency communications between the central control server 102 and the endpoint control interface 122 so that the virtual machine 110 can read the state of the endpoint device 124, process that information, and send write information to the endpoint control interface 122 with low latency.[0021] The memory 106 may include memory location information 114 about one or more endpoint devices 124 to which the central control server 102 is connected. The memory 106 can include memory location information 114 for each of the one or more endpoint devices 124. The memory location information can include a steering tag value mapped to a memory location. For example, the memory location information 114 can include a steering tag value that maps to a memory location in endpoint device 124. The memory location at the endpoint device 124 can be associated with a function of the endpoint device 124.[0022] The memory 106 can also store a lookup table. After the control process 119 determines the next state for the endpoint device 124, the lookup table can be used by the RNIC 108 to determine which memory mapped register(s) (in FIG. 2, memory mapped register 206) at the endpoint device 124 need to be accessed to execute the next stage, and the endpoint machine address, steering tag, and steering tag offset for accessing the register(s). A sketch of such a lookup appears below.[0023] RDMA network interface controller (RNIC) 108 can be used to encapsulate information from the control process 119 or virtual machine 110 into an RDMA message that is compliant with the RDMA protocol. RDMA facilitates direct memory access to memory on a remote system (e.g., endpoint device 124) in a manner that bypasses the system CPU and operating system of the receiving device. The bypassing of CPU and operating system means that RDMA messaging can be low latency. RDMA supports zero-copy networking by enabling an RNIC to transfer data directly to or from application memory (i.e., a memory space in system memory allocated to an application) that is maintained separate from kernel memory used by an operating system, eliminating the need to copy data between application memory and data buffers in kernel memory employed by the operating system.
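A minimal sketch of the lookup described in paragraph [0022], assuming hypothetical type and field names; the disclosure describes the table's role but does not prescribe a data structure.
#include <stddef.h>
#include <stdint.h>

/* Hypothetical lookup-table entry: maps a next state chosen by the
 * control process to the endpoint machine address, steering tag, and
 * steering tag offset needed for the direct memory access. */
struct reg_map_entry {
    uint32_t next_state;   /* next state decided by the control process */
    uint8_t  dst_mac[6];   /* endpoint machine address                  */
    uint32_t stag;         /* steering tag for the register window      */
    uint32_t stag_offset;  /* offset of the memory mapped register      */
};

/* A linear scan suffices for the modest number of endpoints described. */
static const struct reg_map_entry *
lookup_register(const struct reg_map_entry *table, size_t n, uint32_t state)
{
    for (size_t i = 0; i < n; i++) {
        if (table[i].next_state == state)
            return &table[i];
    }
    return NULL;  /* no register access needed for this state */
}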
[0024] The central control server 102 can use a mechanism for allocating memory called Memory Registration. Memory registration facilitates access to a Memory Region by the RNIC 108. Binding a Memory Window allows the RNIC 108 to access memory represented by that Memory Window. Memory registration provides mechanisms that allow the RNIC 108 to access a memory mapped register at the endpoint device 124 using a Steering Tag (STag) and a Tagged Offset. Memory registration provides the RNIC 108 with a mapping between an STag and a memory location at the endpoint device 124. The memory registration also provides the RNIC 108 with a description of the access control associated with the memory location 114. The set of memory locations that have been registered is referred to as a Memory Region. Before an RNIC 108 can use a Memory Region, the resources associated with the Memory Region and the Memory Region itself are registered with the RNIC 108.[0025] There are local STags, which represent registered memory on this system, and there are remote STags, which represent memory that the system on the other side of the connection has registered. The remote memory is abstract in the sense that the local side is not aware of its exact location.[0026] As mentioned previously, the message transmitted by the RNIC 108 to the endpoint control interface 122 is a message that is compliant with an RDMA protocol. The message includes memory information, such as a steering tag (STag), as well as a machine address for the destination endpoint device and data that represents a read or write operation. In some embodiments, the message may also include a connection address so the endpoint device can verify that the source of the message is a known connection and not an intruder.[0027] Central control server 102 can also include a system supervisor 117 implemented at least in hardware to supervise each of the endpoint devices 124. The system supervisor 117 can monitor state information for each of the endpoint devices 124. Based on the state information, the system supervisor 117 can identify errant states for each endpoint device. The system supervisor 117 can shut down the endpoint device 124 if an errant state is detected. The system supervisor 117 can also shut down a whole workflow 120 if warranted (e.g., by the identification of an errant state of one or more endpoint devices 124). A sketch of such a supervisor check appears below.[0028] System 100 also includes a switch 118. Switch 118 can be an integrated switch in the central control server. An integrated switch can include multi-host Ethernet controller silicon with integrated Ethernet switching resources. An example of an integrated switch includes RED ROCK CANYON™. Since the traffic is primarily top-down, congestion is minimal and little flow control is needed. A free-standing switch can also be used in some implementations.
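A minimal sketch of the supervision logic attributed to system supervisor 117 in paragraph [0027], under the assumption of hypothetical types and shutdown callbacks; the disclosure describes the behavior, not an implementation.
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical per-endpoint status record: the state read back over the
 * RDMA interface versus the state the endpoint model expects. */
struct endpoint_status {
    uint32_t id;              /* endpoint device identifier               */
    uint32_t reported_state;  /* state read back over the RDMA interface  */
    uint32_t expected_state;  /* state predicted by the endpoint model    */
};

extern void shutdown_endpoint(uint32_t id);  /* assumed control actions */
extern void shutdown_workflow(void);

static bool supervise(struct endpoint_status *eps, size_t n)
{
    bool workflow_ok = true;
    for (size_t i = 0; i < n; i++) {
        if (eps[i].reported_state != eps[i].expected_state) {
            shutdown_endpoint(eps[i].id);    /* errant state detected */
            workflow_ok = false;
        }
    }
    if (!workflow_ok)
        shutdown_workflow();                 /* escalate if warranted */
    return workflow_ok;
}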
[0029] FIG. 2 is a schematic block diagram 200 for an endpoint control interface 202 for controlling an endpoint device 212 in accordance with embodiments of the present disclosure. The endpoint control interface 202 may be implemented at least in hardware. Endpoint control interface 202 may be integrated into or otherwise in communication with an endpoint device 212. Endpoint device 212 can be an automaton, robot, machine, or other component of an automated or remotely controlled/monitored process architecture. Endpoint control interface 202 may include logic that includes a network interface controller 204 and a memory mapped register 206. The memory mapped register 206 includes register addresses corresponding to pins on the endpoint device 212 and allows for direct access to the endpoint device 212.[0030] The network interface controller (NIC) in FIG. 2 may be a full RDMA NIC (RNIC) 204B or may be a modified version of an RNIC (referred to as rNIC 204A, labeled with a lowercase "r" to denote a simplified or limited implementation of the RNIC or RDMA protocol).[0031] The rNIC 204A implements a subset of the full RDMA protocols. For example, the rNIC 204A can support a single connection or several connections to a central control server, as opposed to supporting thousands or millions of connections. Instead of building a large table or lookup mechanism, the rNIC 204A can perform a direct comparison of received addressing and memory location values in parallel (e.g., by hardcoding the values within the rNIC 204A). In some instances, the rNIC 204A can be specifically tailored to the endpoint device 212, and the machine address values and memory locations can be hardcoded in the rNIC 204A.[0032] The rNIC 204A can be configured to handle three types of messages: a write command, a read command, and a read response. Additionally, the rNIC 204A can forgo retransmit operations, such as TCP/IP retransmit protocols. The central control server 102 can be configured to send additional read requests if prior read requests go unanswered within a predetermined amount of time (microseconds, milliseconds, seconds, minutes, etc.).[0033] The rNIC 204A is configured to receive an RDMA message that includes a direct access command, such as a read or write, and includes a memory location identifier. The memory location identifier can be a steering tag value that maps to a memory location in the memory mapped register 206. For an rNIC 204A, the RDMA message can include a reduced number of steering tag values as compared to an RDMA message for an RNIC 204B. For example, the rNIC 204A may be configured to communicate with a single peer or, at most, several peers.[0034] In some embodiments, the endpoint control interface 202 can include memory location information 210, which can be a library or table of information or a hardcoded set of information. In some embodiments, the memory location information 210 can include specific memory locations to allow the rNIC 204A to translate steering tag offset values into memory locations in the memory mapped register 206.[0035] The endpoint control interface 202 can also include machine identifiers 214. The endpoint control interface 202 can compare machine identifiers in the message received from the central control server 102 with the machine identifier 214 of the endpoint device to confirm that the message is intended for the endpoint control interface 202.[0036] In some embodiments, the RNIC 204B would include an interface to a table of values for connection addresses and memory addresses. The rNIC 204A does not need to include the interface to a table of values because there is less information in the received RDMA message, and the rNIC 204A can compare the information in the RDMA message against one or two values. Further, the rNIC 204A can transmit read responses back to the RNIC in the central control server. The rNIC 204A may not need to retransmit messages (including partial messages). Rather, the central control server can resend read requests after the expiration of a predetermined time. Additionally, the rNIC 204A generally does not initiate a connection with the central control server 102. Rather, the rNIC 204A accepts messages from the central control server 102 and can respond to read requests, using an existing connection established by the central control server 102.
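The sketch below, reusing the illustrative `RdmaControlMessage` and opcode constants from the earlier snippet, shows how such a simplified rNIC might dispatch the three message types against hardcoded values. The class name, register model, and field widths are assumptions for illustration, not a description of any particular rNIC implementation.

```python
# Hypothetical rNIC 204A: one peer, three message types, no retransmit logic.
class SimplifiedRnic:
    def __init__(self, machine_address: int, connection_id: int, stag: int,
                 num_registers: int = 16):
        # Values "hardcoded" at build time for this specific endpoint device.
        self.machine_address = machine_address
        self.connection_id = connection_id
        self.stag = stag
        self.registers = [0] * num_registers  # stands in for register 206

    def handle(self, msg: RdmaControlMessage):
        # Filter: drop messages not matching the hardcoded address/connection/STag.
        if (msg.machine_address != self.machine_address
                or msg.connection_id != self.connection_id
                or msg.steering_tag != self.stag):
            return None
        reg = msg.tagged_offset                  # offset selects the register
        if msg.opcode == WRITE:                  # a write alters endpoint state
            self.registers[reg] = int.from_bytes(msg.payload, "big")
            return None
        if msg.opcode == READ:                   # a read reports current state
            value = self.registers[reg].to_bytes(8, "big")
            return RdmaControlMessage(msg.machine_address, msg.connection_id,
                                      msg.steering_tag, msg.tagged_offset,
                                      READ_RESPONSE, value)
        return None                              # READ_RESPONSE not expected here
```

The direct comparisons in `handle()` stand in for the parallel, hardcoded comparisons described in the text; there is no connection table to consult.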
[0037] Because the endpoint device functionality is simulated on the central control server, short (e.g., up to one maximum transmission unit) RDMA messages using only a handful of RDMA STags can be used to read or write memory mapped register contents. This allows the endpoint control interface 202 to implement only a fraction of the RDMA and TCP functionality while maintaining low latency read and write operations. The rNIC 204A can connect to a fully implemented RNIC on the central control server, thereby reducing the hardware requirements of the rNIC 204A.[0038] The memory location information 210 can point to memory locations in the memory mapped register 206. Memory locations may represent access points to the endpoint device 212 for directly accessing command functions. A read from a memory location can indicate a present state of the endpoint control interface 202 (or, more specifically, a state of a function of the endpoint control interface 202 from a state machine 208). A write to a memory location can cause the endpoint device to change its state in the state machine 208 or perform a function.[0039] The rNIC 204A can receive a message from across a communications interface compliant with an RDMA protocol. The message may include a steering tag (sTag) that represents a window or region of the memory mapped register 206. The message also includes an sTag offset value that represents a specific portion of the window or region of the memory mapped register 206 to be accessed. For example, the sTag can indicate a window of memory registers, say registers 1-10, and the offset can represent register 1+x, where x is the offset from register 1.[0040] The memory mapped register 206 can include silicon logic that directly interfaces with the endpoint device 212. An rNIC 204A reads or writes to the memory mapped register 206. Control of the endpoint device 212 is based on the values at each register of the memory mapped register 206. For example, the rNIC 204A can write to the memory mapped register to cause the endpoint device 212 to change its state. Similarly, state information of the endpoint device 212 can be read from a memory mapped register location.[0041] The endpoint device 212 can also include a state machine 208. The state machine 208 can include silicon logic. The state machine 208 can interface with the endpoint device 212. In the state machine 208, every value has an address. The rNIC 204A can write to or read from the state machine 208 through the memory mapped register 206.
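As one way to picture a state machine in which every value has an address, the sketch below models a register-addressable state machine: a remote write to a command register drives a transition, and a remote read of a state register reports the present state. The register map, state names, and transition table are hypothetical stand-ins.

```python
# Hypothetical register map for the state machine 208.
STATE_REG = 0     # register holding the current state, readable remotely
COMMAND_REG = 1   # register a remote write lands in to request a transition

IDLE, MOVING, HALTED = 0, 1, 2
TRANSITIONS = {   # (current state, command value) -> next state
    (IDLE, 1): MOVING,
    (MOVING, 0): IDLE,
    (IDLE, 9): HALTED,    # e.g., a supervisor-initiated shutdown command
    (MOVING, 9): HALTED,
}

class EndpointStateMachine:
    def __init__(self) -> None:
        self.registers = [IDLE, 0]

    def write(self, address: int, value: int) -> None:
        """A remote write to COMMAND_REG requests a state transition."""
        self.registers[address] = value
        if address == COMMAND_REG:
            key = (self.registers[STATE_REG], value)
            self.registers[STATE_REG] = TRANSITIONS.get(key, self.registers[STATE_REG])

    def read(self, address: int) -> int:
        """A remote read of STATE_REG reports the present state."""
        return self.registers[address]
```

In this model, unrecognized (state, command) pairs leave the state unchanged, which is one plausible way an errant write could be tolerated at the endpoint.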
[0042] FIG. 3 is a process flow diagram 300 for communicating with an endpoint device across a remote direct memory access compliant protocol in accordance with embodiments of the present disclosure. A central control server can receive, from across a communications interface compliant with a remote direct memory access (RDMA) protocol, an RDMA message containing state information for an endpoint device from an endpoint control interface (302). The central control server can receive the RDMA message via an RDMA network interface controller (RNIC). The central control server can run a control process or virtual machine representing the endpoint device using the state information received from across the RDMA communications interface (304). The output of the control process or virtual machine can include an identification of a desired state of the endpoint device (306). For example, the result may include a command to change the state of the endpoint device (e.g., a write command) or a command to provide further state information (e.g., a read command).[0043] The central control server can identify a memory location for the read or write command (308). The memory location can be identified based on a memory information library, which can include one or more steering tag values mapped to memory locations in a memory mapped register at the endpoint control interface controlling the endpoint device. Additionally, the central control server can identify a machine address identifier and a connection identifier for the endpoint device.[0044] The central control server, via the RNIC, can encapsulate the command and the memory location information into a message, such as an RDMA message (310). The RNIC can transmit the RDMA message to the endpoint device across a communications interface compliant with the RDMA protocol (312). In some embodiments, the central control server can, via the RNIC, receive a read response from the endpoint device, which is indicated as the dotted arrow returning to (302).[0045] FIG. 4 is a process flow diagram 400 for performing direct memory accesses based on a command received across a remote direct memory access (RDMA) compliant protocol in accordance with embodiments of the present disclosure. An RDMA-compliant network interface controller (rNIC) on an endpoint device can receive an RDMA message (402) from a central control server from across an RDMA compliant communications interface. The rNIC can identify a machine address from the RDMA message (404) to confirm that the message is meant for the endpoint device. In some implementations, the rNIC includes a filter that can filter out packets that do not have MAC addresses configured for the receiving rNIC. The rNIC can identify a command from the RDMA message (406). For example, the command can be a read command or a write command. The rNIC can identify a memory location for the command (408). The rNIC can be hardcoded with a memory location mapping to a memory mapped register. The memory location can be identified by a memory location identifier, such as a steering tag value. The rNIC can directly access the memory mapped register based on the memory location from the message (410). The access can be a write operation, in which the rNIC directly writes to a location in the memory mapped register. The access can be a read operation, in which the rNIC reads from a location in the memory mapped register. The rNIC can send a read response to the central control server across the RDMA compliant communications interface (412).
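Loosely composing the hypothetical pieces from the previous sketches, one iteration of the FIG. 3 flow might look as follows: the server derives the next state from the endpoint's reported state, consults a lookup table for the target register and addressing information, and encapsulates a write that the rNIC applies per FIG. 4. The `control_process()` body and the lookup table contents are invented for illustration.

```python
def control_process(current_state: int) -> int:
    """Stand-in for the simulated endpoint model (306): pick the next state."""
    return MOVING if current_state == IDLE else IDLE

# Hypothetical lookup table: desired next state -> endpoint machine address,
# connection, STag, register offset, and the value to write.
LOOKUP = {
    MOVING: dict(machine_address=0xA1, connection_id=7, stag=3,
                 offset=COMMAND_REG, value=1),
    IDLE:   dict(machine_address=0xA1, connection_id=7, stag=3,
                 offset=COMMAND_REG, value=0),
}

def control_step(rnic: SimplifiedRnic, reported_state: int) -> None:
    next_state = control_process(reported_state)        # (304)-(306)
    entry = LOOKUP[next_state]                          # (308)
    msg = RdmaControlMessage(entry["machine_address"],  # (310)
                             entry["connection_id"], entry["stag"],
                             entry["offset"], WRITE,
                             entry["value"].to_bytes(8, "big"))
    rnic.handle(msg)                                    # (312), then (402)-(410)

rnic = SimplifiedRnic(machine_address=0xA1, connection_id=7, stag=3)
control_step(rnic, reported_state=IDLE)   # writes 1 into the command register
```

Here the call to `rnic.handle()` stands in for transmission across the RDMA interface; in the described system the message would of course traverse the switch 118 rather than a function call.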
[0046] This disclosure allows for multiple security options to be used:[0047] 1. A key embedded in the device NVM, scanned via QR code at installation time to load the other side of the key pair onto the server (public/private key).[0048] 2. A one-time initial handshake with the endpoint connected to the server directly.[0049] 3. MACsec/LinkSec.[0050] 4. IPsec.[0051] This disclosure also includes the ability to have the equipment halt or return to a safe position (depending on the machine type) if/when network connectivity is lost. This can be implemented with a simple periodic heartbeat packet, detection of link loss, or another mechanism.[0052] The systems and apparatuses described herein can reduce the computation power needed on the equipment side by an order of magnitude, making it extremely simple. This is important because industrial components are designed with durability as one of the topmost priorities; they need to withstand vibration, heat, and other harsh environments with minimal maintenance over their service life.[0053] The present disclosure may also apply to Internet of Things (IoT) devices. As shown in FIG. 1, the central control server 102 can transmit RDMA messages (or messages that conform at least in part to the RDMA protocol). The messages can be transmitted across a wireless network (e.g., a cellular network, a Wi-Fi network, or another wireless technology). The network interface controller on the endpoint device can receive the messages from across the wireless network.[0054] This disclosure describes the use of RDMA protocols. Among the various RDMA protocols contemplated by this disclosure are Internet Wide Area RDMA Protocol (iWARP), RDMA over Converged Ethernet (RoCE), and INFINIBAND™.[0055] It should be appreciated that the examples presented above are non-limiting examples provided merely for purposes of illustrating certain principles and features and not necessarily limiting or constraining the potential embodiments of the concepts described herein. For instance, a variety of different embodiments can be realized utilizing various combinations of the features and components described herein, including combinations realized through the various implementations of components described herein. Other implementations, features, and details should be appreciated from the contents of this specification.[0056] In example 1, aspects of the embodiments are directed to a control server that includes a central processor implemented at least in hardware to execute a control process representing an endpoint device to identify a memory location for direct memory access for the endpoint device and a network interface controller implemented at least in hardware to communicate a message across a communications interface compliant with a remote direct memory access (RDMA) protocol with an endpoint executing the endpoint device, the message comprising the memory location for direct memory access of the endpoint.[0057] In example 2, the subject matter of example 1 further includes that the processor identifies a steering tag value for the direct memory access for the endpoint device based on executing the control process, and wherein the memory location comprises a steering tag value.
[0058] In example 3, the subject matter of examples 1 or 2 may include an integrated switch connecting the network interface controller with the endpoint.[0059] In example 4, the subject matter of examples 1 or 2 or 3 may include that the central processor identifies a routing address of the endpoint based on executing the control process and the integrated switch routes the message to the endpoint based on the routing address.[0060] In example 5, the subject matter of any of examples 1 or 2 or 3 or 4 may also include that the network interface controller comprises an RDMA controller to configure an RDMA message for transmission to the endpoint, the RDMA message comprising a direct memory access command and the memory location.[0061] In example 6, the subject matter of any of examples 1 or 2 or 3 or 4 or 5 may also include a steering tag library that includes steering tag values that correspond to memory locations of the endpoint, and wherein the processor executes a control process corresponding to the endpoint device to identify a steering tag that corresponds to a memory location for a direct memory access of the endpoint.[0062] In example 7, aspects of the embodiments are directed to receiving, at a central control server, state information for an endpoint device of an endpoint from across a communications interface compliant with a remote direct memory access (RDMA) protocol; executing, at the central server, a simulation of the endpoint device based on the state information; identifying a memory location for direct memory access of the endpoint device based on the simulation of the endpoint device; constructing an RDMA message that includes the memory location and a direct memory access command; and transmitting the RDMA message to the endpoint device across a communications interface compliant with an RDMA protocol.[0063] In example 8, the subject matter of example 7 can also include identifying, based on the simulation, a steering tag value that corresponds to the memory location of the endpoint device for the direct memory access command.[0064] In example 9, the subject matter of any of examples 7 or 8 can also include identifying a machine address for the endpoint device and wherein constructing the RDMA message comprises adding the machine address of the endpoint device to the RDMA message.[0065] In example 10, the subject matter of example 7 can also include receiving, from the endpoint device across the communications interface compliant with the RDMA protocol, a read response from the endpoint device.
[0066] In example 11, aspects of the embodiments are directed to a computer program product tangibly embodied on non-transitory computer readable media, the computer program product including instructions that when executed are operable to execute, at a central server, a simulation of an endpoint device of an endpoint; identify a memory location for direct memory access of the endpoint device based on the simulation of the endpoint device; construct a remote direct memory access (RDMA) message that includes the memory location and a direct memory access command; and transmit the RDMA message to the endpoint device across a communications interface compliant with an RDMA protocol.[0067] In example 12, the subject matter of example 11 can also include instructions further operable to identify, based on the simulation, a steering tag value that corresponds to the memory location of the endpoint device for the direct memory access command.[0068] In example 13, the subject matter of example 11 or 12 can also include instructions further operable to identify a machine address for the endpoint device and wherein constructing the RDMA message comprises adding the machine address of the endpoint device to the RDMA message.[0069] In example 14, the subject matter of example 11 can also include instructions further operable to receive, from the endpoint device across the communications interface compliant with the RDMA protocol, a read response from the endpoint device.[0070] In example 15, aspects of the embodiments are directed to an endpoint device in communication with a central control server across a communications interface compliant with a remote direct memory access (RDMA) protocol. The endpoint device can include a memory mapped register and a network interface controller implemented at least in hardware. The network interface controller can be configured to receive an RDMA message from the central control server across the communications interface; identify a memory location in the memory mapped register for direct memory access from the RDMA message; identify a command for the direct memory access from the RDMA message; and directly access the memory location to satisfy the command.[0071] In example 16, the subject matter of example 15 may include that the RDMA message identifies a memory location in the memory mapped register, and wherein the network interface controller is configured to directly access the memory location in the memory mapped register.[0072] In example 17, the subject matter of example 15 or 16 may include that the memory location of the message includes a steering tag value that corresponds to a memory location in the memory of the endpoint device. [0073] In example 18, the subject matter of example 15 or 16 or 17 may include that the network interface controller includes a hardwired steering tag value, and the network interface controller is configured to identify the memory location in the memory based on comparing the memory location in the message with the hardwired steering tag value.[0074] In example 19, the subject matter of example 15 or 16 or 17 or 18 may include that the network interface controller includes at least a portion of an RDMA controller.[0075] In example 20, aspects of the embodiments are directed to a method performed in an endpoint device.
The method may include receiving, by a network interface controller, a message from across a communications interface compliant with a remote direct memory access (RDMA) protocol; identifying, by the network interface controller, a memory location from the message for a direct memory access; identifying, by the network interface controller, a command from the message; and executing, by the network interface controller, the direct memory access based on the command from the message.[0076] In example 21, the subject matter of example 20 can also include that the message includes a steering tag value that identifies a memory location of a memory of the endpoint device.[0077] In example 22, the subject matter of example 20 can also include comparing, by the network interface controller, the steering tag value in the message with a steering tag value at the endpoint device, the steering tag value at the endpoint device corresponding to a memory location of the memory at the endpoint device.[0078] In example 23, the subject matter of example 20 can also include identifying a machine address from the message and confirming the machine address from the message matches a machine address of the endpoint device.[0079] In example 24, aspects of the embodiments are directed to a computer program product tangibly embodied on non-transitory computer readable media, the computer program product including instructions that when executed are operable to receive a message from across a communications interface compliant with a remote direct memory access (RDMA) protocol; identify a memory location from the message for a direct memory access; identify a command from the message; and execute the direct memory access based on the command from the message.[0080] In example 25, the subject matter of example 24 can also include that the message comprises a steering tag value that identifies a memory location of a memory of the endpoint device. [0081] In example 26, the subject matter of example 24 can also include instructions further operable to compare the steering tag value in the message with a steering tag value at the endpoint device, the steering tag value at the endpoint device corresponding to a memory location of the memory at the endpoint device.[0082] In example 27, aspects of the embodiments are directed to an endpoint device in communication with a central control server across a communications interface compliant with a remote direct memory access (RDMA) protocol. The endpoint device can include a memory mapped register means and a network interface controller means implemented at least in hardware. The network interface controller means can be configured to receive an RDMA message from the central control server across the communications interface; identify a memory location in the memory mapped register means for direct memory access from the RDMA message; identify a command for the direct memory access from the RDMA message; and directly access the memory location to satisfy the command.[0083] In example 28, aspects of the embodiments are directed to an endpoint device in communication with a central control server across a communications interface compliant with a remote direct memory access (RDMA) protocol. The endpoint device can include a memory mapped register and a network interface controller implemented at least in hardware.
The network interface controller can be configured to receive an RDMA message from the central control server across the communications interface; identify a memory location in the memory mapped register for direct memory access from the RDMA message; identify a command for the direct memory access from the RDMA message; and directly access the memory location to satisfy the command. In some embodiments, the endpoint device does not include a microcontroller or a network processor, but rather includes an rNIC or RNIC for parsing messages sent by the central control server over the RDMA protocol.[0084] In example 29, aspects of the embodiments are directed to a system that includes a central control server that includes a central processor implemented at least in hardware to execute a control process representing an endpoint device to identify a memory location for direct memory access for the endpoint device and a network interface controller implemented at least in hardware to communicate a message across a communications interface compliant with a remote direct memory access (RDMA) protocol with an endpoint executing the endpoint device, the message comprising the memory location for direct memory access of the endpoint. The system also includes one or more endpoint devices. Each endpoint device can include a memory mapped register and a network interface controller implemented at least in hardware. The network interface controller can be configured to receive an RDMA message from the central control server across the communications interface; identify a memory location in the memory mapped register for direct memory access from the RDMA message; identify a command for the direct memory access from the RDMA message; and directly access the memory location to satisfy the command. The endpoint device does not include a microcontroller or a network processor, but rather includes an rNIC or RNIC for parsing messages sent by the central control server over the RDMA protocol.[0085] Example 30 may include the subject matter of example 29, wherein the endpoint device lacks one or both of a microcontroller or a network processor.[0086] Example 31 may include the subject matter of example 29 or 30, wherein the network interface controller comprises an RDMA controller to configure an RDMA message for transmission to the endpoint device, the RDMA message comprising a direct memory access command and the memory location.[0087] Example 32 may include the subject matter of any of examples 29 or 30 or 31, wherein the network interface controller comprises a hardwired memory register address, and the network interface controller is configured to identify the memory register address in the memory based on comparing the memory register address with a steering tag offset value.[0088] Example 33 may include the subject matter of any of examples 29 or 30 or 31 or 32, wherein the network interface controller comprises an RDMA network interface controller.[0089] Example 34 may include the subject matter of any of examples 29 or 30 or 31 or 32 or 33, wherein the endpoint device does not include a microcontroller or a network processor, but rather includes an rNIC or RNIC for parsing messages sent by the central control server over the RDMA protocol.[0090] Although this disclosure has been described in terms of certain implementations and generally associated methods, alterations and permutations of these implementations and methods will be apparent to those skilled in the art.
For example, the actions described herein can be performed in a different order than described and still achieve desirable results. As one example, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve the desired results. In certain implementations, multitasking and parallel processing may be advantageous. Additionally, other user interface layouts and functionality can be supported. Other variations are within the scope of the claims.[0091] While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any inventions or of what may be claimed, but rather as descriptions of features specific to particular embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.[0092] Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.[0093] Thus, particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results.
Methods and apparatuses for constructing a grammar to describe interactions among a plurality of devices in a network are disclosed. An aspect receives, by a network interface of a device, device capabilities of each of the plurality of devices, generates, by a reduced device list generator of the device, a reduced device list representing groupings of the plurality of devices based on the device capabilities, models, by an interaction sequence modeler of the device, one or more sequences of interactions among the plurality of devices using the reduced device list, constructs, by a grammar construction module of the device, the grammar based on the modeled one or more sequences of interactions, and stores the grammar in a memory of the device.
CLAIMSWhat is claimed is:1. A method for constructing a grammar to describe interactions among a plurality of devices in a network, comprising: receiving, by a network interface of a device, device capabilities of each of the plurality of devices; generating, by a reduced device list generator of the device, a reduced device list representing groupings of the plurality of devices based on the device capabilities; modeling, by an interaction sequence modeler of the device, one or more sequences of interactions among the plurality of devices using the reduced device list; constructing, by a grammar construction module of the device, the grammar based on the modeled one or more sequences of interactions; and storing the grammar in a memory of the device.2. The method of claim 1, wherein the generating the reduced device list comprises clustering the device capabilities to generate the reduced list.3. The method of claim 2, further comprising: receiving one or more observation logs, the one or more observation logs including information about one or more interactions among a subset of the plurality of devices.4. The method of claim 3, wherein the clustering comprises: generating one or more feature vectors representing the received one or more observation logs; and clustering the one or more feature vectors to generate the reduced device list.5. The method of claim 1, further comprising: receiving a log of one or more interactions among a subset of the plurality of devices.6. The method of claim 5, wherein the log is received from a device of the plurality of devices that is not involved in the one or more interactions.7. The method of claim 5, wherein the modeling the one or more sequences of interactions comprises: assigning a sequence of the one or more interactions to a sequence of one or more centroids to model the one or more sequences of interactions.8. The method of claim 1, wherein the device comprises an Internet of Things (IoT) server.9. The method of claim 1, wherein the device comprises a supervisor device in the network other than an IoT server.10. The method of claim 1, wherein the plurality of devices comprises a plurality of IoT devices.11. An apparatus for constructing a grammar to describe interactions among a plurality of devices in a network, comprising: a network interface configured to receive device capabilities of each of the plurality of devices; a reduced device list generator configured to generate a reduced device list representing groupings of the plurality of devices based on the device capabilities; an interaction sequence modeler configured to model one or more sequences of interactions among the plurality of devices using the reduced device list; a grammar construction module configured to construct the grammar based on the modeled one or more sequences of interactions; and a memory configured to store the grammar.12. The apparatus of claim 11, wherein the reduced device list generator being configured to generate the reduced device list comprises the reduced device list generator being configured to cluster the device capabilities to generate the reduced list.13. The apparatus of claim 12, wherein the network interface is further configured to receive one or more observation logs, the one or more observation logs including information about one or more interactions among a subset of the plurality of devices.14. 
The apparatus of claim 13, wherein the reduced device list generator being configured to cluster the device capabilities comprises the reduced device list generator being configured to: generate one or more feature vectors representing the received one or more observation logs; and cluster the one or more feature vectors to generate the reduced device list.15. The apparatus of claim 11, wherein the network interface is further configured to receive a log of one or more interactions among a subset of the plurality of devices.16. The apparatus of claim 15, wherein the log is received from a device of the plurality of devices that is not involved in the one or more interactions.17. The apparatus of claim 15, wherein the interaction sequence modeler being configured to model one or more sequences of interactions comprises the interaction sequence modeler being configured to assign a sequence of the one or more interactions to a sequence of one or more centroids to model the one or more sequences of interactions.18. The apparatus of claim 11, wherein the apparatus comprises an Internet of Things (IoT) server.19. The apparatus of claim 11, wherein the apparatus comprises a supervisor device in the network other than an IoT server.20. The apparatus of claim 11, wherein the plurality of devices comprises a plurality of IoT devices.21. An apparatus for constructing a grammar to describe interactions among a plurality of devices in a network, comprising: means for receiving, by a network interface of a device, device capabilities of each of the plurality of devices; means for generating, by a reduced device list generator of the device, a reduced device list representing groupings of the plurality of devices based on the device capabilities; means for modeling, by an interaction sequence modeler of the device, one or more sequences of interactions among the plurality of devices using the reduced device list; means for constructing, by a grammar construction module of the device, the grammar based on the modeled one or more sequences of interactions; and means for storing the grammar in a memory of the apparatus.22. The apparatus of claim 21, wherein the means for generating the reduced device list comprises means for clustering the device capabilities to generate the reduced list.23. The apparatus of claim 22, further comprising: means for receiving one or more observation logs, the one or more observation logs including information about one or more interactions among a subset of the plurality of devices.24. The apparatus of claim 23, wherein the means for clustering comprises: means for generating one or more feature vectors representing the received one or more observation logs; and means for clustering the one or more feature vectors to generate the reduced device list.25. 
A non-transitory computer-readable medium for constructing a grammar to describe interactions among a plurality of devices in a network, comprising: at least one instruction to receive, by a network interface of a device, device capabilities of each of the plurality of devices; at least one instruction to generate, by a reduced device list generator of the device, a reduced device list representing groupings of the plurality of devices based on the device capabilities; at least one instruction to model, by an interaction sequence modeler of the device, one or more sequences of interactions among the plurality of devices using the reduced device list; at least one instruction to construct, by a grammar construction module of the device, the grammar based on the modeled one or more sequences of interactions; and at least one instruction to store the grammar in a memory of the device.26. The non-transitory computer-readable medium of claim 25, wherein the at least one instruction to generate the reduced device list comprises at least one instruction to cluster the device capabilities to generate the reduced list.27. The non-transitory computer-readable medium of claim 26, further comprising: at least one instruction to receive one or more observation logs, the one or more observation logs including information about one or more interactions among a subset of the plurality of devices.28. The non-transitory computer-readable medium of claim 27, wherein the at least one instruction to cluster comprises: at least one instruction to generate one or more feature vectors representing the received one or more observation logs; and at least one instruction to cluster the one or more feature vectors to generate the reduced device list.29. The non-transitory computer-readable medium of claim 25, further comprising: at least one instruction to receive a log of one or more interactions among a subset of the plurality of devices.30. The non-transitory computer-readable medium of claim 29, wherein the at least one instruction to model the one or more sequences of interactions comprises: at least one instruction to assign a sequence of the one or more interactions to a sequence of one or more centroids to model the one or more sequences of interactions.
METHODS AND APPARATUSES FOR QUANTIFYING THE HOLISTIC VALUE OF AN EXISTING NETWORK OF DEVICES BY MEASURING THE COMPLEXITY OF A GENERATED GRAMMARCROSS-REFERENCE TO RELATED APPLICATIONS[0001] The present Application for Patent claims the benefit of U.S. Provisional Application No. 61/926,162, entitled "METHOD FOR QUANTIFYING THE HOLISTIC VALUE OF AN EXISTING NETWORK OF DEVICES BY MEASURING THE COMPLEXITY OF A GENERATED GRAMMAR," filed January 10, 2014, assigned to the assignee hereof, and expressly incorporated herein by reference in its entirety.TECHNICAL FIELD[0002] The disclosure relates to quantifying the holistic value of an existing network of devices by measuring the complexity of a generated grammar.BACKGROUND[0003] The Internet is a global system of interconnected computers and computer networks that use a standard Internet protocol suite (e.g., the Transmission Control Protocol (TCP) and Internet Protocol (IP)) to communicate with each other. The Internet of Things (IoT) is based on the idea that everyday objects, not just computers and computer networks, can be readable, recognizable, locatable, addressable, and controllable via an IoT communications network (e.g., an ad-hoc system or the Internet).[0004] A number of market trends are driving development of IoT devices. For example, increasing energy costs are driving governments' strategic investments in smart grids and support for future consumption, such as for electric vehicles and public charging stations. Increasing health care costs and aging populations are driving development for remote/connected health care and fitness services. A technological revolution in the home is driving development for new "smart" services, including consolidation by service providers marketing 'N' play (e.g., data, voice, video, security, energy management, etc.) and expanding home networks. Buildings are getting smarter and more convenient as a means to reduce operational costs for enterprise facilities.[0005] There are a number of key applications for the IoT. For example, in the area of smart grids and energy management, utility companies can optimize delivery of energy to homes and businesses while customers can better manage energy usage. In the area of home and building automation, smart homes and buildings can have centralized control over virtually any device or system in the home or office, from appliances to plug-in electric vehicle (PEV) security systems. In the field of asset tracking, enterprises, hospitals, factories, and other large organizations can accurately track the locations of high-value equipment, patients, vehicles, and so on. In the area of health and wellness, doctors can remotely monitor patients' health while people can track the progress of fitness routines.SUMMARY[0006] The following presents a simplified summary relating to one or more aspects and/or embodiments associated with the mechanisms disclosed herein. As such, the following summary should not be considered an extensive overview relating to all contemplated aspects and/or embodiments, nor should the following summary be regarded to identify key or critical elements relating to all contemplated aspects and/or embodiments or to delineate the scope associated with any particular aspect and/or embodiment. 
Accordingly, the following summary has the sole purpose to present certain concepts relating to one or more aspects and/or embodiments relating to the mechanisms disclosed herein in a simplified form to precede the detailed description presented below.[0007] The disclosure is related to constructing a grammar to describe interactions among a plurality of devices in a network. A method for constructing a grammar to describe interactions among a plurality of devices in a network includes receiving, by a network interface of a device, device capabilities of each of the plurality of devices, generating, by a reduced device list generator of the device, a reduced device list representing groupings of the plurality of devices based on the device capabilities, modeling, by an interaction sequence modeler of the device, one or more sequences of interactions among the plurality of devices using the reduced device list, constructing, by a grammar construction module of the device, the grammar based on the modeled one or more sequences of interactions, and storing the grammar in a memory of the device.[0008] An apparatus for constructing a grammar to describe interactions among a plurality of devices in a network includes a network interface configured to receive device capabilities of each of the plurality of devices, a reduced device list generator configured to generate a reduced device list representing groupings of the plurality of devices based on the device capabilities, an interaction sequence modeler configured to model one or more sequences of interactions among the plurality of devices using the reduced device list, a grammar construction module configured to construct the grammar based on the modeled one or more sequences of interactions, and a memory configured to store the grammar.[0009] An apparatus for constructing a grammar to describe interactions among a plurality of devices in a network includes means for receiving, by a network interface of a device, device capabilities of each of the plurality of devices, means for generating, by a reduced device list generator of the device, a reduced device list representing groupings of the plurality of devices based on the device capabilities, means for modeling, by an interaction sequence modeler of the device, one or more sequences of interactions among the plurality of devices using the reduced device list, means for constructing, by a grammar construction module of the device, the grammar based on the modeled one or more sequences of interactions, and means for storing the grammar in a memory of the apparatus.[0010] An apparatus for constructing a grammar to describe interactions among a plurality of devices in a network includes logic configured to receive, by a network interface of a device, device capabilities of each of the plurality of devices, logic configured to generate, by a reduced device list generator of the device, a reduced device list representing groupings of the plurality of devices based on the device capabilities, logic configured to model, by an interaction sequence modeler of the device, one or more sequences of interactions among the plurality of devices using the reduced device list, logic configured to construct, by a grammar construction module of the device, the grammar based on the modeled one or more sequences of interactions, and logic configured to store the grammar in a memory of the apparatus.[0011] A non-transitory computer-readable medium for constructing a grammar to describe interactions among a 
plurality of devices in a network includes at least one instruction to receive, by a network interface of a device, device capabilities of each of the plurality of devices, at least one instruction to generate, by a reduced device list generator of the device, a reduced device list representing groupings of the plurality of devices based on the device capabilities, at least one instruction to model, by an interaction sequence modeler of the device, one or more sequences of interactions among the plurality of devices using the reduced device list, at least one instruction to construct, by a grammar construction module of the device, the grammar based on the modeled one or more sequences of interactions, and at least one instruction to store the grammar in a memory of the device.[0012] Other objects and advantages associated with the mechanisms disclosed herein will be apparent to those skilled in the art based on the accompanying drawings and detailed description.BRIEF DESCRIPTION OF THE DRAWINGS[0013] A more complete appreciation of aspects of the disclosure and many of the attendant advantages thereof will be readily obtained as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings which are presented solely for illustration and not limitation of the disclosure, and in which:[0014] FIG. 1A illustrates a high-level system architecture of a wireless communications system in accordance with an aspect of the disclosure.[0015] FIG. 1B illustrates a high-level system architecture of a wireless communications system in accordance with another aspect of the disclosure.[0016] FIG. 1C illustrates a high-level system architecture of a wireless communications system in accordance with an aspect of the disclosure.[0017] FIG. 1D illustrates a high-level system architecture of a wireless communications system in accordance with an aspect of the disclosure.[0018] FIG. 1E illustrates a high-level system architecture of a wireless communications system in accordance with an aspect of the disclosure.[0019] FIG. 2A illustrates an exemplary Internet of Things (IoT) device in accordance with aspects of the disclosure, while FIG. 2B illustrates an exemplary passive IoT device in accordance with aspects of the disclosure.[0020] FIG. 3 illustrates a communication device that includes logic configured to perform functionality in accordance with an aspect of the disclosure.[0021] FIG. 4A illustrates an exemplary server according to various aspects of the disclosure.[0022] FIG. 4B illustrates an exemplary processor of the server illustrated in FIG. 4A according to various aspects of the disclosure.[0023] FIGS. 5A-5D illustrate an example of converting a scattergram to a state machine.[0024] FIG. 6A illustrates an exemplary sequence of interactions between devices in a first IoT network that can be used to construct a grammar of interactions.[0025] FIG. 6B illustrates an exemplary sequence of interactions between devices in a second IoT network that can be used to construct a grammar of interactions.[0026] FIG. 7 illustrates an exemplary sequence of proximity detections between a first user, a second user, and a third user. [0027] FIG. 8 illustrates an exemplary flowchart for constructing a grammar to describe interactions among a plurality of devices in a network.[0028] FIG. 
9 is a simplified block diagram of several sample aspects of an apparatus configured to support communication as taught herein.DETAILED DESCRIPTION[0029] The disclosure is directed to constructing a grammar to describe interactions among a plurality of devices in a network. An aspect receives, by a network interface of a device, device capabilities of each of the plurality of devices, generates, by a reduced device list generator of the device, a reduced device list representing groupings of the plurality of devices based on the device capabilities, models, by an interaction sequence modeler of the device, one or more sequences of interactions among the plurality of devices using the reduced device list, constructs, by a grammar construction module of the device, the grammar based on the modeled one or more sequences of interactions, and stores the grammar in a memory of the device.[0030] These and other aspects are disclosed in the following description and related drawings to show specific examples relating to exemplary embodiments of quantifying the holistic value of an existing network of devices by measuring the complexity of a generated grammar. Alternate embodiments will be apparent to those skilled in the pertinent art upon reading this disclosure, and may be constructed and practiced without departing from the scope or spirit of the disclosure. Additionally, well-known elements will not be described in detail or may be omitted so as to not obscure the relevant details of the aspects and embodiments disclosed herein.[0031] The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments. Likewise, the term "embodiments" does not require that all embodiments include the discussed feature, advantage or mode of operation.[0032] The terminology used herein describes particular embodiments only and should not be construed to limit any embodiments disclosed herein. As used herein, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes," and/or "including," when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.[0033] Further, many aspects are described in terms of sequences of actions to be performed by, for example, elements of a computing device. It will be recognized that various actions described herein can be performed by specific circuits (e.g., an application specific integrated circuit (ASIC)), by program instructions being executed by one or more processors, or by a combination of both. Additionally, these sequences of actions described herein can be considered to be embodied entirely within any form of computer readable storage medium having stored therein a corresponding set of computer instructions that upon execution would cause an associated processor to perform the functionality described herein. Thus, the various aspects of the disclosure may be embodied in a number of different forms, all of which have been contemplated to be within the scope of the claimed subject matter. In addition, for each of the aspects described herein, the corresponding form of any such aspects may be described herein as, for example, "logic configured to" perform the described action.
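A minimal sketch of the grammar-construction flow recapped above follows, assuming simple numeric capability vectors, k-means clustering, and a bigram-production grammar; the feature encoding, the value of k, the device names, and the grammar form are all assumptions chosen for illustration, not a statement of the disclosed method's exact algorithms.

```python
# Group devices by capability into a reduced device list (cluster centroids),
# re-express an observed interaction sequence in terms of the groups, and
# derive a grammar from the re-expressed sequence.
import random
from collections import Counter

def kmeans(vectors, k, iterations=20, seed=0):
    rng = random.Random(seed)
    centroids = rng.sample(vectors, k)
    for _ in range(iterations):
        groups = [[] for _ in range(k)]
        for v in vectors:
            i = min(range(k), key=lambda c: sum((a - b) ** 2
                                                for a, b in zip(v, centroids[c])))
            groups[i].append(v)
        centroids = [tuple(sum(col) / len(g) for col in zip(*g)) if g
                     else centroids[i] for i, g in enumerate(groups)]
    return centroids

def nearest(v, centroids):
    return min(range(len(centroids)), key=lambda c: sum((a - b) ** 2
                                                        for a, b in zip(v, centroids[c])))

# Hypothetical device capabilities encoded as feature vectors.
capabilities = {"lamp": (0, 1), "switch": (0, 1),
                "thermostat": (1, 0), "furnace": (1, 1)}
centroids = kmeans(list(capabilities.values()), k=2)   # the reduced device list
groups = {name: nearest(v, centroids) for name, v in capabilities.items()}

# An observed interaction log, re-expressed as a sequence of group labels.
log = ["switch", "lamp", "thermostat", "furnace", "thermostat"]
sequence = [groups[name] for name in log]

# Grammar as bigram production counts over group labels; the complexity of
# this grammar is one possible measure of the network's holistic value.
grammar = Counter(zip(sequence, sequence[1:]))
```

In this sketch, assigning each log entry to its nearest centroid corresponds to modeling an interaction sequence as a sequence of centroids, and the `Counter` of adjacent pairs is one deliberately simple stand-in for a constructed grammar.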
[0034] As used herein, the term "Internet of Things device" (or "IoT device") may refer to any object (e.g., an appliance, a sensor, etc.) that has an addressable interface (e.g., an Internet protocol (IP) address, a Bluetooth identifier (ID), a near-field communication (NFC) ID, etc.) and can transmit information to one or more other devices over a wired or wireless connection. An IoT device may have a passive communication interface, such as a quick response (QR) code, a radio-frequency identification (RFID) tag, an NFC tag, or the like, or an active communication interface, such as a modem, a transceiver, a transmitter-receiver, or the like. An IoT device can have a particular set of attributes (e.g., a device state or status, such as whether the IoT device is on or off, open or closed, idle or active, available for task execution or busy, and so on, a cooling or heating function, an environmental monitoring or recording function, a light-emitting function, a sound-emitting function, etc.) that can be embedded in and/or controlled/monitored by a central processing unit (CPU), microprocessor, ASIC, or the like, and configured for connection to an IoT network such as a local ad-hoc network or the Internet. For example, IoT devices may include, but are not limited to, refrigerators, toasters, ovens, microwaves, freezers, dishwashers, dishes, hand tools, clothes washers, clothes dryers, furnaces, air conditioners, thermostats, televisions, light fixtures, vacuum cleaners, sprinklers, electricity meters, gas meters, etc., so long as the devices are equipped with an addressable communications interface for communicating with the IoT network. IoT devices may also include cell phones, desktop computers, laptop computers, tablet computers, personal digital assistants (PDAs), etc. Accordingly, the IoT network may be comprised of a combination of "legacy" Internet-accessible devices (e.g., laptop or desktop computers, cell phones, etc.) in addition to devices that do not typically have Internet-connectivity (e.g., dishwashers, etc.).[0035] FIG. 1A illustrates a high-level system architecture of a wireless communications system 100A in accordance with an aspect of the disclosure. The wireless communications system 100A contains a plurality of IoT devices, which include a television 110, an outdoor air conditioning unit 112, a thermostat 114, a refrigerator 116, and a washer and dryer 118.[0036] Referring to FIG. 1A, IoT devices 110-118 are configured to communicate with an access network (e.g., an access point 125) over a physical communications interface or layer, shown in FIG. 1A as air interface 108 and a direct wired connection 109. The air interface 108 can comply with a wireless Internet protocol (IP), such as IEEE 802.11. Although FIG. 1A illustrates IoT devices 110-118 communicating over the air interface 108 and IoT device 118 communicating over the direct wired connection 109, each IoT device may communicate over a wired or wireless connection, or both.[0037] The Internet 175 includes a number of routing agents and processing agents (not shown in FIG. 1A for the sake of convenience). 
The Internet 175 is a global system of interconnected computers and computer networks that uses a standard Internet protocol suite (e.g., the Transmission Control Protocol (TCP) and IP) to communicate among disparate devices/networks. TCP/IP provides end-to-end connectivity specifying how data should be formatted, addressed, transmitted, routed and received at the destination.[0038] In FIG. 1A, a computer 120, such as a desktop or personal computer (PC), is shown as connecting to the Internet 175 directly (e.g., over an Ethernet connection or Wi-Fi or 802.11-based network). The computer 120 may have a wired connection to the Internet 175, such as a direct connection to a modem or router, which, in an example, can correspond to the access point 125 itself (e.g., for a Wi-Fi router with both wired and wireless connectivity). Alternatively, rather than being connected to the access point 125 and the Internet 175 over a wired connection, the computer 120 may be connected to the access point 125 over air interface 108 or another wireless interface, and access the Internet 175 over the air interface 108. Although illustrated as a desktop computer, computer 120 may be a laptop computer, a tablet computer, a PDA, a smart phone, or the like. The computer 120 may be an IoT device and/or contain functionality to manage an IoT network/group, such as the network/group of IoT devices 110-118.[0039] The access point 125 may be connected to the Internet 175 via, for example, an optical communication system, such as FiOS, a cable modem, a digital subscriber line (DSL) modem, or the like. The access point 125 may communicate with IoT devices 110-120 and the Internet 175 using the standard Internet protocols (e.g., TCP/IP).[0040] Referring to FIG. 1A, an IoT server 170 is shown as connected to the Internet 175. The IoT server 170 can be implemented as a plurality of structurally separate servers, or alternately may correspond to a single server. In an aspect, the IoT server 170 is optional (as indicated by the dotted line), and the group of IoT devices 110-120 may be a peer-to-peer (P2P) network. In such a case, the IoT devices 110-120 can communicate with each other directly over the air interface 108 and/or the direct wired connection 109. Alternatively, or additionally, some or all of IoT devices 110-120 may be configured with a communication interface independent of air interface 108 and direct wired connection 109. For example, if the air interface 108 corresponds to a Wi-Fi interface, one or more of the IoT devices 110-120 may have Bluetooth or NFC interfaces for communicating directly with each other or other Bluetooth or NFC-enabled devices.[0041] In a peer-to-peer network, service discovery schemes can multicast the presence of nodes, their capabilities, and group membership. The peer-to-peer devices can establish associations and subsequent interactions based on this information.[0042] In accordance with an aspect of the disclosure, FIG. 1B illustrates a high-level architecture of another wireless communications system 100B that contains a plurality of IoT devices. In general, the wireless communications system 100B shown in FIG. 1B may include various components that are the same and/or substantially similar to the wireless communications system 100A shown in FIG. 
In general, the wireless communications system 100B shown in FIG. 1B may include various components that are the same and/or substantially similar to the wireless communications system 100A shown in FIG. 1A, which was described in greater detail above (e.g., various IoT devices, including a television 110, outdoor air conditioning unit 112, thermostat 114, refrigerator 116, and washer and dryer 118, that are configured to communicate with an access point 125 over an air interface 108 and/or a direct wired connection 109, a computer 120 that directly connects to the Internet 175 and/or connects to the Internet 175 through access point 125, and an IoT server 170 accessible via the Internet 175, etc.). As such, for brevity and ease of description, various details relating to certain components in the wireless communications system 100B shown in FIG. 1B may be omitted herein to the extent that the same or similar details have already been provided above in relation to the wireless communications system 100A illustrated in FIG. 1A. [0043] Referring to FIG. 1B, the wireless communications system 100B may include a supervisor device 130, which may alternatively be referred to as an IoT manager 130 or IoT manager device 130. As such, where the following description uses the term "supervisor device" 130, those skilled in the art will appreciate that any references to an IoT manager, group owner, or similar terminology may refer to the supervisor device 130 or another physical or logical component that provides the same or substantially similar functionality.[0044] In one embodiment, the supervisor device 130 may generally observe, monitor, control, or otherwise manage the various other components in the wireless communications system 100B. For example, the supervisor device 130 can communicate with an access network (e.g., access point 125) over air interface 108 and/or a direct wired connection 109 to monitor or manage attributes, activities, or other states associated with the various IoT devices 110-120 in the wireless communications system 100B. The supervisor device 130 may have a wired or wireless connection to the Internet 175 and optionally to the IoT server 170 (shown as a dotted line). The supervisor device 130 may obtain information from the Internet 175 and/or the IoT server 170 that can be used to further monitor or manage attributes, activities, or other states associated with the various IoT devices 110-120. The supervisor device 130 may be a standalone device or one of IoT devices 110-120, such as computer 120. The supervisor device 130 may be a physical device or a software application running on a physical device. The supervisor device 130 may include a user interface that can output information relating to the monitored attributes, activities, or other states associated with the IoT devices 110-120 and receive input information to control or otherwise manage the attributes, activities, or other states associated therewith. Accordingly, the supervisor device 130 may generally include various components and support various wired and wireless communication interfaces to observe, monitor, control, or otherwise manage the various components in the wireless communications system 100B.[0045] The wireless communications system 100B shown in FIG. 1B may include one or more passive IoT devices 105 (in contrast to the active IoT devices 110-120) that can be coupled to or otherwise made part of the wireless communications system 100B.
In general, the passive IoT devices 105 may include barcoded devices, Bluetooth devices, radio frequency (RF) devices, RFID tagged devices, infrared (IR) devices, NFC tagged devices, or any other suitable device that can provide its identifier and attributes to another device when queried over a short range interface. Active IoT devices may detect, store, communicate, act on, and/or the like, changes in attributes of passive IoT devices. [0046] For example, passive IoT devices 105 may include a coffee cup and a container of orange juice that each have an RFID tag or barcode. A cabinet IoT device and the refrigerator IoT device 116 may each have an appropriate scanner or reader that can read the RFID tag or barcode to detect when the coffee cup and/or the container of orange juice passive IoT devices 105 have been added or removed. In response to the cabinet IoT device detecting the removal of the coffee cup passive IoT device 105 and the refrigerator IoT device 116 detecting the removal of the container of orange juice passive IoT device, the supervisor device 130 may receive one or more signals that relate to the activities detected at the cabinet IoT device and the refrigerator IoT device 116. The supervisor device 130 may then infer that a user is drinking orange juice from the coffee cup and/or likes to drink orange juice from a coffee cup.[0047] Although the foregoing describes the passive IoT devices 105 as having some form of RFID tag or barcode communication interface, the passive IoT devices 105 may include one or more devices or other physical objects that do not have such communication capabilities. For example, certain IoT devices may have appropriate scanner or reader mechanisms that can detect shapes, sizes, colors, and/or other observable features associated with the passive IoT devices 105 to identify the passive IoT devices 105. In this manner, any suitable physical object may communicate its identity and attributes and become part of the wireless communication system 100B and be observed, monitored, controlled, or otherwise managed with the supervisor device 130. Further, passive IoT devices 105 may be coupled to or otherwise made part of the wireless communications system 100A in FIG. 1A and observed, monitored, controlled, or otherwise managed in a substantially similar manner.[0048] In accordance with another aspect of the disclosure, FIG. 1C illustrates a high-level architecture of another wireless communications system 100C that contains a plurality of IoT devices. In general, the wireless communications system 100C shown in FIG. 1C may include various components that are the same and/or substantially similar to the wireless communications systems 100A and 100B shown in FIGS. 1A and 1B, respectively, which were described in greater detail above. As such, for brevity and ease of description, various details relating to certain components in the wireless communications system 100C shown in FIG. 1C may be omitted herein to the extent that the same or similar details have already been provided above in relation to the wireless communications systems 100A and 100B illustrated in FIGS. 1A and 1B, respectively.[0049] The communications system 100C shown in FIG. 1C illustrates exemplary peer-to-peer communications between the IoT devices 110-118 and the supervisor device 130. As shown in FIG. 1C, the supervisor device 130 communicates with each of the IoT devices 110-118 over an IoT supervisor interface.
Further, IoT devices 110 and 114, IoT devices 112, 114, and 116, and IoT devices 116 and 118, communicate directly with each other.[0050] The IoT devices 110-118 make up an IoT device group 160. An IoT device group 160 is a group of locally connected IoT devices, such as the IoT devices connected to a user's home network. Although not shown, multiple IoT device groups may be connected to and/or communicate with each other via an IoT SuperAgent 140 connected to the Internet 175. At a high level, the supervisor device 130 manages intra-group communications, while the IoT SuperAgent 140 can manage inter-group communications. Although shown as separate devices, the supervisor device 130 and the IoT SuperAgent 140 may be, or reside on, the same device (e.g., a standalone device or an IoT device, such as computer 120 in FIG. 1A). Alternatively, the IoT SuperAgent 140 may correspond to or include the functionality of the access point 125. As yet another alternative, the IoT SuperAgent 140 may correspond to or include the functionality of an IoT server, such as IoT server 170. The IoT SuperAgent 140 may encapsulate gateway functionality 145.[0051] Each IoT device 110-118 can treat the supervisor device 130 as a peer and transmit attribute/schema updates to the supervisor device 130. When an IoT device needs to communicate with another IoT device, it can request the pointer to that IoT device from the supervisor device 130 and then communicate with the target IoT device as a peer. The IoT devices 110-118 communicate with each other over a peer-to-peer communication network using a common messaging protocol (CMP). As long as two IoT devices are CMP-enabled and connected over a common communication transport, they can communicate with each other. In the protocol stack, the CMP layer 154 is below the application layer 152 and above the transport layer 156 and the physical layer 158.[0052] In accordance with another aspect of the disclosure, FIG. 1D illustrates a high-level architecture of another wireless communications system 100D that contains a plurality of IoT devices. In general, the wireless communications system 100D shown in FIG. 1D may include various components that are the same and/or substantially similar to the wireless communications systems 100A-C shown in FIGS. 1A-C, respectively, which were described in greater detail above. As such, for brevity and ease of description, various details relating to certain components in the wireless communications system 100D shown in FIG. 1D may be omitted herein to the extent that the same or similar details have already been provided above in relation to the wireless communications systems 100A-C illustrated in FIGS. 1A-C, respectively.[0053] The Internet 175 is a "resource" that can be regulated using the concept of the IoT. However, the Internet 175 is just one example of a resource that is regulated, and any resource could be regulated using the concept of the IoT. Other resources that can be regulated include, but are not limited to, electricity, gas, storage, security, and the like. An IoT device may be connected to the resource and thereby regulate it, or the resource could be regulated over the Internet 175.
FIG. 1D illustrates several resources 180, such as natural gas, gasoline, hot water, and electricity, wherein the resources 180 can be regulated in addition to and/or over the Internet 175.[0054] IoT devices can communicate with each other to regulate their use of a resource 180. For example, IoT devices such as a toaster, a computer, and a hairdryer may communicate with each other over a Bluetooth communication interface to regulate their use of electricity (the resource 180). As another example, IoT devices such as a desktop computer, a telephone, and a tablet computer may communicate over a Wi-Fi communication interface to regulate their access to the Internet 175 (the resource 180). As yet another example, IoT devices such as a stove, a clothes dryer, and a water heater may communicate over a Wi-Fi communication interface to regulate their use of gas. Alternatively, or additionally, each IoT device may be connected to an IoT server, such as IoT server 170, which has logic to regulate their use of the resource 180 based on information received from the IoT devices.[0055] In accordance with another aspect of the disclosure, FIG. 1E illustrates a high-level architecture of another wireless communications system 100E that contains a plurality of IoT devices. In general, the wireless communications system 100E shown in FIG. 1E may include various components that are the same and/or substantially similar to the wireless communications systems 100A-D shown in FIGS. 1A-D, respectively, which were described in greater detail above. As such, for brevity and ease of description, various details relating to certain components in the wireless communications system 100E shown in FIG. 1E may be omitted herein to the extent that the same or similar details have already been provided above in relation to the wireless communications systems 100A-D illustrated in FIGS. 1A-D, respectively.[0056] The communications system 100E includes two IoT device groups 160A and 160B. Multiple IoT device groups may be connected to and/or communicate with each other via an IoT SuperAgent connected to the Internet 175. At a high level, an IoT SuperAgent may manage inter-group communications among IoT device groups. For example, in FIG. 1E, the IoT device group 160A includes IoT devices 116A, 122A, and 124A and an IoT SuperAgent 140A, while IoT device group 160B includes IoT devices 116B, 122B, and 124B and an IoT SuperAgent 140B. As such, the IoT SuperAgents 140A and 140B may connect to the Internet 175 and communicate with each other over the Internet 175 and/or communicate with each other directly to facilitate communication between the IoT device groups 160A and 160B. Furthermore, although FIG. 1E illustrates two IoT device groups 160A and 160B communicating with each other via IoT SuperAgents 140A and 140B, those skilled in the art will appreciate that any number of IoT device groups may suitably communicate with each other using IoT SuperAgents.[0057] FIG. 2A illustrates a high-level example of an IoT device 200A in accordance with aspects of the disclosure. While external appearances and/or internal components can differ significantly among IoT devices, most IoT devices will have some sort of user interface, which may comprise a display and a means for user input. IoT devices without a user interface can be communicated with remotely over a wired or wireless network, such as air interface 108 in FIGS. 1A-B.
[0058] As shown in FIG. 2A, in an example configuration for the IoT device 200A, an external casing of IoT device 200A may be configured with a display 226, a power button 222, and two control buttons 224A and 224B, among other components, as is known in the art. The display 226 may be a touchscreen display, in which case the control buttons 224A and 224B may not be necessary. While not shown explicitly as part of IoT device 200A, the IoT device 200A may include one or more external antennas and/or one or more integrated antennas that are built into the external casing, including but not limited to Wi-Fi antennas, cellular antennas, satellite positioning system (SPS) antennas (e.g., global positioning system (GPS) antennas), and so on.[0059] While internal components of IoT devices, such as IoT device 200A, can be embodied with different hardware configurations, a basic high-level configuration for internal hardware components is shown as platform 202 in FIG. 2A. The platform 202 can receive and execute software applications, data and/or commands transmitted over a network interface, such as air interface 108 in FIGS. 1A-B and/or a wired interface. The platform 202 can also independently execute locally stored applications. The platform 202 can include one or more transceivers 206 configured for wired and/or wireless communication (e.g., a Wi-Fi transceiver, a Bluetooth transceiver, a cellular transceiver, a satellite transceiver, a GPS or SPS receiver, etc.) operably coupled to one or more processors 208, such as a microcontroller, microprocessor, application specific integrated circuit, digital signal processor (DSP), programmable logic circuit, or other data processing device, which will be generally referred to as processor 208. The processor 208 can execute application programming instructions within a memory 212 of the IoT device. The memory 212 can include one or more of read-only memory (ROM), random-access memory (RAM), electrically erasable programmable ROM (EEPROM), flash cards, or any memory common to computer platforms. One or more input/output (I/O) interfaces 214 can be configured to allow the processor 208 to communicate with and control various I/O devices such as the display 226, power button 222, control buttons 224A and 224B as illustrated, and any other devices, such as sensors, actuators, relays, valves, switches, and the like associated with the IoT device 200A.[0060] Accordingly, an aspect of the disclosure can include an IoT device (e.g., IoT device 200A) including the ability to perform the functions described herein. As will be appreciated by those skilled in the art, the various logic elements can be embodied in discrete elements, software modules executed on a processor (e.g., processor 208) or any combination of software and hardware to achieve the functionality disclosed herein. For example, transceiver 206, processor 208, memory 212, and I/O interface 214 may all be used cooperatively to load, store and execute the various functions disclosed herein and thus the logic to perform these functions may be distributed over various elements. Alternatively, the functionality could be incorporated into one discrete component. Therefore, the features of the IoT device 200A in FIG. 2A are to be considered merely illustrative and the disclosure is not limited to the illustrated features or arrangement.[0061] FIG. 2B illustrates a high-level example of a passive IoT device 200B in accordance with aspects of the disclosure.
In general, the passive IoT device 200B shown in FIG. 2B may include various components that are the same and/or substantially similar to the IoT device 200A shown in FIG. 2A, which was described in greater detail above. As such, for brevity and ease of description, various details relating to certain components in the passive IoT device 200B shown in FIG. 2B may be omitted herein to the extent that the same or similar details have already been provided above in relation to the IoT device 200A illustrated in FIG. 2A.[0062] The passive IoT device 200B shown in FIG. 2B may generally differ from the IoT device 200A shown in FIG. 2A in that the passive IoT device 200B may not have a processor, internal memory, or certain other components. Instead, in one embodiment, the passive IoT device 200B may only include an I/O interface 214 or other suitable mechanism that allows the passive IoT device 200B to be observed, monitored, controlled, managed, or otherwise known within a controlled IoT network. For example, in one embodiment, the I/O interface 214 associated with the passive IoT device 200B may include a barcode, Bluetooth interface, radio frequency (RF) interface, RFID tag, IR interface, NFC interface, or any other suitable I/O interface that can provide an identifier and attributes associated with the passive IoT device 200B to another device when queried over a short range interface (e.g., an active IoT device, such as IoT device 200A, that can detect, store, communicate, act on, or otherwise process information relating to the attributes associated with the passive IoT device 200B).[0063] Although the foregoing describes the passive IoT device 200B as having some form of RF, barcode, or other I/O interface 214, the passive IoT device 200B may comprise a device or other physical object that does not have such an I/O interface 214. For example, certain IoT devices may have appropriate scanner or reader mechanisms that can detect shapes, sizes, colors, and/or other observable features associated with the passive IoT device 200B to identify the passive IoT device 200B. In this manner, any suitable physical object may communicate its identity and attributes and be observed, monitored, controlled, or otherwise managed within a controlled IoT network.[0064] FIG. 3 illustrates a communication device 300 that includes logic configured to perform functionality. The communication device 300 can correspond to any of the above-noted communication devices, including but not limited to IoT devices 110-120, IoT device 200A, any components coupled to the Internet 175 (e.g., the IoT server 170), and so on. Thus, communication device 300 can correspond to any electronic device that is configured to communicate with (or facilitate communication with) one or more other entities over the wireless communications systems 100A-B of FIGS. 1A-B.[0065] Referring to FIG. 3, the communication device 300 includes logic configured to receive and/or transmit information 305. In an example, if the communication device 300 corresponds to a wireless communications device (e.g., IoT device 200A and/or passive IoT device 200B), the logic configured to receive and/or transmit information 305 can include a wireless communications interface (e.g., Bluetooth, Wi-Fi, Wi-Fi Direct, Long-Term Evolution (LTE) Direct, etc.) such as a wireless transceiver and associated hardware (e.g., an RF antenna, a MODEM, a modulator and/or demodulator, etc.).
In another example, the logic configured to receive and/or transmit information 305 can correspond to a wired communications interface (e.g., a serial connection, a USB or Firewire connection, an Ethernet connection through which the Internet 175 can be accessed, etc.). Thus, if the communication device 300 corresponds to some type of network-based server (e.g., the IoT server 170), the logic configured to receive and/or transmit information 305 can correspond to an Ethernet card, in an example, that connects the network-based server to other communication entities via an Ethernet protocol. As an example, where the communication device 300 is configured to construct a grammar to describe interactions among a plurality of devices in a network, as described herein, the logic configured to receive and/or transmit information 305 may include logic configured to receive device capabilities of each of the plurality of devices. In a further example, the logic configured to receive and/or transmit information 305 can include sensory or measurement hardware by which the communication device 300 can monitor its local environment (e.g., an accelerometer, a temperature sensor, a light sensor, an antenna for monitoring local RF signals, etc.). The logic configured to receive and/or transmit information 305 can also include software that, when executed, permits the associated hardware of the logic configured to receive and/or transmit information 305 to perform its reception and/or transmission function(s). However, the logic configured to receive and/or transmit information 305 does not correspond to software alone, and the logic configured to receive and/or transmit information 305 relies at least in part upon hardware to achieve its functionality.[0066] Referring to FIG. 3, the communication device 300 further includes logic configured to process information 310. In an example, the logic configured to process information 310 can include at least a processor. Example implementations of the type of processing that can be performed by the logic configured to process information 310 include, but are not limited to, performing determinations, establishing connections, making selections between different information options, performing evaluations related to data, interacting with sensors coupled to the communication device 300 to perform measurement operations, converting information from one format to another (e.g., between different protocols such as .wmv to .avi, etc.), and so on. For example, where the communication device 300 is configured to construct a grammar to describe interactions among a plurality of devices in a network, as described herein, the logic configured to process information 310 may include logic configured to generate a reduced device list representing groupings of the plurality of devices based on the device capabilities, logic configured to model one or more sequences of interactions among the plurality of devices using the reduced device list, and/or logic configured to construct the grammar based on the modeled one or more sequences of interactions. The processor included in the logic configured to process information 310 can correspond to a general purpose processor, a DSP, an ASIC, a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein.
A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration). The logic configured to process information 310 can also include software that, when executed, permits the associated hardware of the logic configured to process information 310 to perform its processing function(s). However, the logic configured to process information 310 does not correspond to software alone, and the logic configured to process information 310 relies at least in part upon hardware to achieve its functionality.[0067] Referring to FIG. 3, the communication device 300 further includes logic configured to store information 315. In an example, the logic configured to store information 315 can include at least a non-transitory memory and associated hardware (e.g., a memory controller, etc.). For example, where the communication device 300 is configured to construct a grammar to describe interactions among a plurality of devices in a network, as described herein, the logic configured to store information 315 may include logic configured to store the grammar in a memory of the communication device 300. The non-transitory memory included in the logic configured to store information 315 can correspond to RAM, flash memory, ROM, erasable programmable ROM (EPROM), EEPROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. The logic configured to store information 315 can also include software that, when executed, permits the associated hardware of the logic configured to store information 315 to perform its storage function(s). However, the logic configured to store information 315 does not correspond to software alone, and the logic configured to store information 315 relies at least in part upon hardware to achieve its functionality.[0068] Referring to FIG. 3, the communication device 300 further optionally includes logic configured to present information 320. In an example, the logic configured to present information 320 can include at least an output device and associated hardware. For example, the output device can include a video output device (e.g., a display screen, a port that can carry video information such as USB, HDMI, etc.), an audio output device (e.g., speakers, a port that can carry audio information such as a microphone jack, USB, HDMI, etc.), a vibration device and/or any other device by which information can be formatted for output or actually outputted by a user or operator of the communication device 300. For example, if the communication device 300 corresponds to the IoT device 200A as shown in FIG. 2A and/or the passive IoT device 200B as shown in FIG. 2B, the logic configured to present information 320 can include the display 226. In a further example, the logic configured to present information 320 can be omitted for certain communication devices, such as network communication devices that do not have a local user (e.g., network switches or routers, remote servers, etc.). The logic configured to present information 320 can also include software that, when executed, permits the associated hardware of the logic configured to present information 320 to perform its presentation function(s). 
However, the logic configured to present information 320 does not correspond to software alone, and the logic configured to present information 320 relies at least in part upon hardware to achieve its functionality.[0069] Referring to FIG. 3, the communication device 300 further optionally includes logic configured to receive local user input 325. In an example, the logic configured to receive local user input 325 can include at least a user input device and associated hardware. For example, the user input device can include buttons, a touchscreen display, a keyboard, a camera, an audio input device (e.g., a microphone or a port that can carry audio information such as a microphone jack, etc.), and/or any other device by which information can be received from a user or operator of the communication device 300. For example, if the communication device 300 corresponds to the IoT device 200A as shown in FIG. 2A and/or the passive IoT device 200B as shown in FIG. 2B, the logic configured to receive local user input 325 can include the buttons 222, 224A, and 224B, the display 226 (if a touchscreen), etc. In a further example, the logic configured to receive local user input 325 can be omitted for certain communication devices, such as network communication devices that do not have a local user (e.g., network switches or routers, remote servers, etc.). The logic configured to receive local user input 325 can also include software that, when executed, permits the associated hardware of the logic configured to receive local user input 325 to perform its input reception function(s). However, the logic configured to receive local user input 325 does not correspond to software alone, and the logic configured to receive local user input 325 relies at least in part upon hardware to achieve its functionality.[0070] Referring to FIG. 3, while the configured logics of 305 through 325 are shown as separate or distinct blocks in FIG. 3, it will be appreciated that the hardware and/or software by which the respective configured logic performs its functionality can overlap in part. For example, any software used to facilitate the functionality of the configured logics of 305 through 325 can be stored in the non-transitory memory associated with the logic configured to store information 315, such that the configured logics of 305 through 325 each perform their functionality (i.e., in this case, software execution) based in part upon the operation of software stored by the logic configured to store information 315. Likewise, hardware that is directly associated with one of the configured logics can be borrowed or used by other configured logics from time to time. For example, the processor of the logic configured to process information 310 can format data into an appropriate format before being transmitted by the logic configured to receive and/or transmit information 305, such that the logic configured to receive and/or transmit information 305 performs its functionality (i.e., in this case, transmission of data) based in part upon the operation of hardware (i.e., the processor) associated with the logic configured to process information 310.[0071] Generally, unless stated otherwise explicitly, the phrase "logic configured to" as used throughout this disclosure is intended to invoke an aspect that is at least partially implemented with hardware, and is not intended to map to software-only implementations that are independent of hardware.
Also, it will be appreciated that the configured logic or "logic configured to" in the various blocks are not limited to specific logic gates or elements, but generally refer to the ability to perform the functionality described herein (either via hardware or a combination of hardware and software). Thus, the configured logics or "logic configured to" as illustrated in the various blocks are not necessarily implemented as logic gates or logic elements despite sharing the word "logic." Other interactions or cooperation between the logic in the various blocks will become clear to one of ordinary skill in the art from a review of the aspects described below in more detail.[0072] The various embodiments may be implemented on any of a variety of commercially available server devices or supervisor devices, such as IoT server 170 or supervisor device 130, respectively, in FIG. 1B. For simplicity, the functionality described herein is described as being performed by the IoT server 170, but it will be apparent that the supervisor device 130 may perform the functions described herein. [0073] FIG. 4A illustrates a simplified diagram of IoT server 170 according to an aspect of the disclosure. In FIG. 4A, the IoT server 170 includes a processor 401 coupled to volatile memory 402 and a large capacity nonvolatile memory, such as a disk drive 403. The IoT server 170 may also include a floppy disc drive, compact disc (CD) or digital video disc (DVD) disc drive 406 coupled to the processor 401. The IoT server 170 may also include network access ports 404 coupled to the processor 401 for establishing data connections with a network 407, such as a local area network coupled to other broadcast system computers and servers or to the Internet.[0074] In context with FIG. 3, it will be appreciated that the IoT server 170 of FIG. 4A illustrates one example implementation of the communication device 300, whereby the logic configured to transmit and/or receive information 305 corresponds to the network access ports 404 (which may be wired or wireless) used by the IoT server 170 to communicate with the network 407, the logic configured to process information 310 corresponds to the processor 401, and the logic configured to store information 315 corresponds to any combination of the volatile memory 402, the disk drive 403 and/or the disc drive 406. The optional logic configured to present information 320 and the optional logic configured to receive local user input 325 are not shown explicitly in FIG. 4A and may or may not be included therein. Thus, FIG. 4A helps to demonstrate that the communication device 300 may be implemented as a server, in addition to an IoT device implementation as in FIG. 2A.[0075] FIG. 4B illustrates exemplary components of the processor 401 of IoT server 170 in FIG. 4A according to an embodiment of the disclosure. Specifically, the processor 401 includes a reduced device list generator 412 configured to generate a reduced device list representing groupings of a plurality of devices in an IoT network based on device capabilities received via the network access ports 404, as described herein. The processor 401 also includes an interaction sequence modeler 414 configured to model one or more sequences of interactions among the plurality of devices using the reduced device list, as described herein. The processor 401 further includes a grammar construction module 416 configured to construct a grammar based on the modeled one or more sequences of interactions, as described herein. The grammar may then be stored in memory 403.
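By way of illustration only, the following Python sketch shows how the three components of the processor 401 might be wired together into a single flow. The function names, the capability and interaction formats, and the dictionary-based grouping are assumptions made for this sketch, not details taken from the disclosure.

```python
# Illustrative sketch of the pipeline formed by the reduced device list
# generator 412, the interaction sequence modeler 414, and the grammar
# construction module 416. All names and data formats are hypothetical.
from collections import Counter

def generate_reduced_device_list(device_capabilities):
    # Group devices that report identical capability sets; a stand-in for
    # the statistical clustering described in paragraph [0082] below.
    groups = {}
    for device, capabilities in device_capabilities.items():
        groups.setdefault(frozenset(capabilities), []).append(device)
    return list(groups.values())

def model_interaction_sequences(interactions, reduced_device_list):
    # Map each interacting device to the index of its capability group,
    # yielding a group-level sequence of interactions.
    group_of = {d: i for i, g in enumerate(reduced_device_list) for d in g}
    return [(group_of[src], group_of[dst]) for src, dst in interactions]

def construct_grammar(sequence):
    # Placeholder for a SEQUITUR-style construction (sketched further
    # below); here it simply counts recurring group-level interactions.
    return Counter(sequence)

capabilities = {"tv": {"display"}, "remote": {"ir"}, "lamp": {"light"}}
interactions = [("remote", "tv"), ("tv", "lamp"), ("remote", "tv")]
groups = generate_reduced_device_list(capabilities)
sequence = model_interaction_sequences(interactions, groups)
grammar = construct_grammar(sequence)
```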
[0076] In an embodiment, the reduced device list generator 412 may generate the reduced device list based on device capabilities retrieved from memory 403, where memory 403 stored the device capabilities after the IoT server 170 received the device capabilities via the network access ports 404. Similarly, rather than merely store the generated grammar in memory 403, the IoT server 170 can transmit the grammar to one or more other IoT devices or servers via network access ports 404.[0077] "Devices" are becoming smaller and are embedded in many different products through the "Internet of Things." Networks of devices communicating with each other are therefore more dynamic and harder to identify. As the IoT evolves, and as devices work together in ways that are far more sophisticated and powerful than any single device could possibly act on its own, it would be beneficial to understand the meaning and implementation of the concept of "value." In evaluating data points in a conventional statistical system, it is difficult to discern, and thus mine, the real value of that data because it is largely detached from reference points that would put it in context, such as time and space. Thus, it is difficult to truly understand the value inherent in an IoT ecosystem and how to leverage devices accordingly. To this end, the question can be asked: what quantitative measure captures by how much the whole exceeds the sum of its parts?[0078] When attempting to tap into the holistic aspect of the IoT, there is a question of how to capture added value. This requires the ability to measure the significance of a holistic system where the sum is greater than its individual parts. It also requires the ability to quantify and define "value" in a way that can be measured. This is particularly complicated because the value of IoT devices cannot be measured as A + B + C = D. Instead, because these devices are functioning together as a whole, they have some additional value.[0079] For example, if five IoT devices are acting as speakers for a surround sound system, and a sixth IoT device is added, there is an added value by going from five to six devices that is greater than the value of each individual speaker. That is, the added value is the value of a surround sound system having six speakers versus a surround sound system having five speakers. In addition, when a single IoT device is functioning as a single speaker in a room and then a second IoT device is added, creating surround sound, the added value accrued to the system may be a different additional value than adding the sixth device to the five devices. That is, the value added by going from a single-speaker system to a dual-speaker system is different than the value added by going from a surround sound system having five speakers to a surround sound system having six speakers.[0080] To quantify the value of each IoT device, the classic formula of A + B should give way to the new formula of A + delta. This raises the question of how to measure "delta." The proposed system creates a function that includes time and space as parameters to assign and quantify the delta. Bringing time-space parameters into play allows for a new way of quantifying value.[0081] For example, given six mobile phones, each valued at $100, their combined value is a total of $600, as each device is evaluated independently, irrespective of its position in time or space.
However, referring to the surround sound system example above, if the six mobile phones work together in an IoT network to create a surround-sound system, their combined value may be different than $600. Assuming a surround-sound system is valued at $1,000, the six mobile phones are now worth $1,000. Their combined value has increased by $400 by virtue of the time-space factor, as all devices must be present in the same time and space in order to provide surround sound.[0082] Accordingly, the various aspects of the disclosure are related to quantifying the holistic value of an existing network of devices by measuring the complexity of a generated grammar. Initially, the capabilities of the devices in an IoT network are sent to the server for the IoT network, such as IoT server 170 or supervisor device 130 in FIG. 1B. The IoT server 170, specifically the reduced device list generator 412, clusters the capabilities, reducing the devices into groups based on their capabilities. For example, each device in the IoT network can log various observations, such as the number of packets sent and received in a 60 second window and the time of transmission of each packet. The devices can send the observation log files to the IoT server 170. The IoT server 170, specifically the reduced device list generator 412, can construct a feature vector for each observation in the observation logs, such as a three-dimensional feature vector including time, number of packets sent, and number of packets received. The feature vectors can then be statistically clustered as is known in the art, assigning each feature vector to a centroid, and thereby grouping the devices into a reduced device list based on their capabilities (a minimal sketch of this clustering step appears below).[0083] Next, one or more "spy" devices can monitor interactions between devices in the IoT network and report the interactions to the IoT server 170. The spy device(s) may monitor interactions explicitly by, for example, packet sniffing in the network, or implicitly by, for example, inferring that a first device sent a packet to a second device. For example, if the first device sends a packet at a first time and the second device receives the packet at a second time, a spy device may infer that the first device sent the packet to the second device. Spy devices may be one or more of the interacting devices or any device that can detect and report the interactions.[0084] Using the reduced device list, the IoT server 170, specifically the interaction sequence modeler 414, can model sequences of interactions. Interactions may manifest in the clustered space. The IoT server 170 can analyze the observations in the received observation logs in sequence, sorted by time. At each time step, the IoT server 170 / interaction sequence modeler 414 can assign an observation to a centroid. The sequence of centroids associated with each observation is referred to as a "sequence of interactions."
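The following is a minimal sketch of the clustering referenced in paragraph [0082] and the centroid assignment of paragraph [0084]; it assumes scikit-learn's KMeans is available, and the observation values and the choice of two clusters are illustrative only.

```python
# Minimal sketch of clustering observation-log feature vectors; the data
# and the cluster count are illustrative, not taken from the disclosure.
import numpy as np
from sklearn.cluster import KMeans

# One row per logged observation: (time, packets sent, packets received).
observations = np.array([
    [0.0, 12, 3],
    [60.0, 11, 4],
    [120.0, 2, 40],
    [180.0, 1, 38],
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(observations)

# Each feature vector is assigned to a centroid; devices whose observations
# share a cluster are grouped together in the reduced device list, and the
# time-ordered centroid labels form a "sequence of interactions."
print(kmeans.labels_)           # e.g., array([0, 0, 1, 1])
print(kmeans.cluster_centers_)  # one centroid per capability group
```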
[0085] The IoT server 170, specifically the grammar construction module 416, can then construct a "grammar" based on these sequences of interactions. The IoT server 170 / grammar construction module 416 evaluates the reported interactions and generates grammars that characterize particular interaction sequences from the reported interactions. For example, a first grammar may comprise the sequence of interactions consisting of 1) Remote Control Signal detected from Remote Control Device, 2) TV signal indicating that the TV is turning on, and 3) Light Adjustment Signal indicating that lighting has been changed to TV mode based on detection of TV turning on.[0086] To generate the reduced device list and model the sequences of interactions, the IoT server 170 can first generate a scattergram and then convert it to a state model. A scattergram depicts structured knowledge, while a state diagram depicts a narrative. As such, a state diagram captures more information. The relative value of different data sets can be compared in the narrative space.[0087] Any set of data can be structured and mapped to a scattergram. The IoT server 170 clusters the data, finds the centroids, defines the axes, and maps the data. Any data that is mapped to clusters can then be mapped to a state machine, where each centroid is a state and each data point is a transition.[0088] FIGS. 5A-D illustrate an example of converting a scattergram to a state machine. In FIG. 5A, scattergram 500 includes one point, which corresponds to one centroid/state C1 510. In FIG. 5B, a second point has been added to scattergram 500. The second point is in a separate cluster with a second centroid/state C2 520. The second data point indicates a transition "d2" from centroid/state C1 510 to C2 520.[0089] In FIG. 5C, a third point has been added to scattergram 500. Centroid/state C1 510 now includes two points, and is shifted to the mean of the two points. The third data point indicates a transition "d3" from centroid/state C1 510 back to centroid/state C1 510.[0090] In FIG. 5D, a fourth point has been added to scattergram 500. Centroid/state C1 510 now includes three points, and is shifted to the mean of the three points. The fourth data point indicates a transition "d4" from centroid/state C1 510 back to centroid/state C1 510. The transition d4 reflects the distance between the fourth data point and centroid/state C1 510. [0091] After generating the reduced device list and modeling the sequences of interactions, the IoT server 170, specifically the grammar construction module 416, constructs a grammar to describe the interactions among the devices in the IoT network. FIG. 6A illustrates an exemplary sequence of interactions between devices in an IoT network 600A that can be used to construct a grammar of interactions. In FIG. 6A, Device A 610, Device B 620, and Device C 630 form the exemplary IoT network 600A. The various arrows between devices 610-630 illustrate sequences of interactions between the devices. A spy device, which may be, but need not be, one of devices 610-630, detects and logs the various interactions between devices 610-630. The spy device sends the logged interactions to the IoT server, such as the IoT server 170 or the supervisor device 130 in FIG. 1B.[0092] In FIG. 6A, the sequence of interactions is [A] [B] [C] [A] [B] [A] [C] [A]. Using the well-known SEQUITUR algorithm, the following grammar may be constructed based on this sequence of interactions: S -> 1 2 3 2, where "1" indicates the interaction "AB," "2" indicates the interaction "CA," and "3" indicates the interaction "BA."
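The full SEQUITUR algorithm enforces digram-uniqueness and rule-utility properties online; purely as an illustration, the following simplified, SEQUITUR-inspired sketch reproduces the grammar above (and the FIG. 6B grammar discussed next) by first encoding consecutive interactions as numbered pair rules and then folding repeated digrams into new rules. The function names and this two-step simplification are assumptions made for the sketch.

```python
# Simplified, SEQUITUR-inspired sketch; not the full SEQUITUR algorithm.

def construct_grammar(events):
    rules = {}                          # rule number -> right-hand side
    def rule_for(rhs):
        for num, existing in rules.items():
            if existing == rhs:
                return num              # reuse a rule for a repeated RHS
        rules[len(rules) + 1] = rhs
        return len(rules)

    # Step 1: encode consecutive interactions as numbered pair rules.
    seq = [rule_for((events[i], events[i + 1]))
           for i in range(0, len(events) - 1, 2)]

    # Step 2: repeatedly fold any digram that occurs more than once.
    changed = True
    while changed:
        changed = False
        digrams = [tuple(seq[i:i + 2]) for i in range(len(seq) - 1)]
        for digram in digrams:
            if digrams.count(digram) > 1:
                num = rule_for(digram)
                new_seq, i = [], 0
                while i < len(seq):
                    if tuple(seq[i:i + 2]) == digram:
                        new_seq.append(num)
                        i += 2
                    else:
                        new_seq.append(seq[i])
                        i += 1
                seq, changed = new_seq, True
                break
    return seq, rules

print(construct_grammar(list("ABCABACA")))  # S -> 1 2 3 2 (FIG. 6A)
print(construct_grammar(list("AEDEAEDE")))  # S -> 3 3, 3 -> (1, 2) (FIG. 6B)
```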
[0093] FIG. 6B illustrates an exemplary sequence of interactions between devices in an IoT network 600B that can be used to construct a grammar of interactions. In FIG. 6B, Device A 610, Device D 650, and Device E 660 form the exemplary IoT network 600B. The various arrows between devices 610, 650, 660 illustrate sequences of interactions between the devices. A spy device, which may be, but need not be, one of devices 610, 650, 660, detects and logs the various interactions between devices 610, 650, 660. The spy device sends the logged interactions to IoT server 170.[0094] In FIG. 6B, the sequence of interactions is [A] [E] [D] [E] [A] [E] [D] [E]. Using the well-known SEQUITUR algorithm, the following grammar may be constructed based on this sequence of interactions: S -> 3 3, where "1" indicates the interaction "AE," "2" indicates the interaction "DE," and "3" indicates the interaction "1 2," i.e., "AEDE."[0095] The IoT server 170 can then compare the constructed grammars to determine similarities or derive other information, as described below:

[A] [B] [C] [A] [B] [A] [C] [A]    [A] [E] [D] [E] [A] [E] [D] [E]
S -> 1 2 3 2                       S -> 3 3
1 -> AB                            1 -> AE
2 -> CA                            2 -> DE
3 -> BA                            3 -> 1 2

[0096] The above-described techniques can also be used to determine relationships between users based on their proximity to each other. FIG. 7 illustrates an exemplary sequence of proximity detections between User A 710, User B 720, and User C 730. The proximity detections may be determined by the users' IoT devices, such as the users' smartphones. In FIG. 7, the various arrows between users 710-730 illustrate the users coming into proximity with each other. A spy device, which may be, but need not be, one of the IoT devices belonging to users 710-730, detects and logs the various proximity detections between users 710-730. The spy device sends the logged proximity detections to an IoT server, such as the IoT server 170 or the supervisor device 130 in FIG. 1B.[0097] In FIG. 7, the sequence of proximity detections is [A] [B] [C] [A] [B] [C] [A] [B] [C] [A]. The following grammar may be constructed based on this sequence of proximity detections: S -> 1 2 3 1 2, where "1" indicates the interaction "AB," "2" indicates the interaction "CA," and "3" indicates the interaction "BC."[0098] After generating the reduced device list, modeling the sequences of interactions, and constructing the grammar, the IoT server 170 can then define one or more actions to occur in the IoT network in response to detecting the constructed grammar. The IoT server 170 may determine the one or more actions to perform through prior knowledge, expert system analysis, and/or previous examples from other IoT networks. Referring to the example above of detecting that a TV is turning on (2) and that the lighting has been changed to TV mode (3), the action triggered by the first grammar may be to 4) turn on rear speakers whenever Grammar #1 + #2 + #3 is detected. The grammar may be detected by the IoT server 170 or the spy device. The IoT server 170 may instruct the appropriate device(s) to initiate the one or more defined actions (e.g., action #4) in response to detecting the constructed grammar (e.g., Grammar #1 + #2 + #3).[0099] The IoT server 170 can compare the grammars of different interactions to each other to derive the value of different networks of IoT devices. A number of techniques are possible to quantify a grammar and facilitate grammar comparison, such as the depth of the grammar or the complexity of the grammar (e.g., context-free grammars versus context-sensitive grammars). For example, a metric for the SEQUITUR algorithm could be defined, such as how many numbers versus how many letters are in the grammar. Two grammars can then be compared using the result.
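As one possible formalization of that comparison, the following sketch applies the numbers-versus-letters metric to the two grammars constructed above; the rule sets are copied from the FIGS. 6A and 6B examples, while expressing the metric as a ratio is an assumption of this sketch.

```python
# Sketch of the "numbers versus letters" metric suggested in paragraph
# [0099], applied to the grammars of FIGS. 6A and 6B.
grammar_a = {"S": "1 2 3 2", "1": "A B", "2": "C A", "3": "B A"}
grammar_b = {"S": "3 3", "1": "A E", "2": "D E", "3": "1 2"}

def complexity(grammar):
    symbols = [s for rhs in grammar.values() for s in rhs.split()]
    numbers = sum(s.isdigit() for s in symbols)   # references to sub-rules
    letters = sum(s.isalpha() for s in symbols)   # raw device interactions
    return numbers / letters

print(complexity(grammar_a))  # 4 numbers / 6 letters ~= 0.67
print(complexity(grammar_b))  # 4 numbers / 4 letters = 1.0
```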
[00100] Optionally, once the IoT server 170 constructs the various grammars, it can send the set of grammars to the spy device. In that case, the spy device can report grammar detections instead of interaction detections to the IoT server 170. [00101] FIG. 8 illustrates an exemplary flowchart for constructing a grammar to describe interactions among a plurality of devices in a network. The flow illustrated in FIG. 8 may be performed by the IoT server 170 (or the supervisor device 130 in FIG. 1B).[00102] At 810, the IoT server 170, specifically the network access ports 404, receives device capabilities of each of the plurality of devices in the network. The IoT server 170 may also receive one or more observation logs including information about one or more interactions among a subset of the plurality of devices.[00103] At 820, the IoT server 170, specifically the reduced device list generator 412, generates a reduced device list representing groupings of the plurality of devices based on the device capabilities. Generating the reduced device list may include clustering the device capabilities to generate the reduced device list. Clustering the device capabilities may include generating one or more feature vectors representing the received one or more observation logs and clustering the one or more feature vectors to generate the reduced device list.[00104] At 830, the IoT server 170, specifically the interaction sequence modeler 414, models one or more sequences of interactions among the plurality of devices using the reduced device list. At some point during or after the flow illustrated in FIG. 8, the IoT server 170 may receive a log of one or more interactions among a subset of the plurality of devices. The IoT server 170 may receive the log from a device of the plurality of devices that is not involved in the one or more interactions, i.e., a "spy" device. In this case, modeling at 830 may include assigning a sequence of the one or more interactions to a sequence of one or more centroids to model the sequence of one or more interactions.[00105] At 840, the IoT server 170, specifically the grammar construction module 416, constructs the grammar based on the modeled one or more sequences of interactions. At 850, the IoT server 170 stores the grammar in a memory of the IoT server 170, such as memory 403.[00106] FIG. 9 illustrates an example server apparatus 900 represented as a series of interrelated functional modules. A module for receiving 902 may correspond at least in some aspects to, for example, a communication device, such as network access ports 404 in FIG. 4B, as discussed herein. A module for generating 904 may correspond at least in some aspects to, for example, a processing system, such as processor 401 in conjunction with the reduced device list generator 412 in FIG. 4B, as discussed herein. A module for modeling 906 may correspond at least in some aspects to, for example, a processing system, such as processor 401 in conjunction with the interaction sequence modeler 414 in FIG. 4B, as discussed herein. A module for constructing 908 may correspond at least in some aspects to, for example, a processing system, such as processor 401 in conjunction with the grammar construction module 416 in FIG. 4B, as discussed herein. A module for storing 910 may correspond at least in some aspects to, for example, a memory, such as memory 403 in FIG. 4B, as discussed herein.[00107] The functionality of the modules of FIG.
9 may be implemented in various ways consistent with the teachings herein. In some designs, the functionality of these modules may be implemented as one or more electrical components. In some designs, the functionality of these blocks may be implemented as a processing system including one or more processor components. In some designs, the functionality of these modules may be implemented using, for example, at least a portion of one or more integrated circuits (e.g., an ASIC). As discussed herein, an integrated circuit may include a processor, software, other related components, or some combination thereof. Thus, the functionality of different modules may be implemented, for example, as different subsets of an integrated circuit, as different subsets of a set of software modules, or a combination thereof. Also, it will be appreciated that a given subset (e.g., of an integrated circuit and/or of a set of software modules) may provide at least a portion of the functionality for more than one module.[00108] In addition, the components and functions represented by FIG. 9, as well as other components and functions described herein, may be implemented using any suitable means. Such means also may be implemented, at least in part, using corresponding structure as taught herein. For example, the components described above in conjunction with the "module for" components of FIG. 9 also may correspond to similarly designated "means for" functionality. Thus, in some aspects one or more of such means may be implemented using one or more of processor components, integrated circuits, or other suitable structure as taught herein.[00109] Those skilled in the art will appreciate that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.[00110] Further, those skilled in the art will appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the aspects disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted to depart from the scope of the present disclosure.[00111] The various illustrative logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. 
A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration).[00112] The methods, sequences and/or algorithms described in connection with the aspects disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM, flash memory, ROM, EPROM, EEPROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in an IoT device. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.[00113] In one or more exemplary aspects, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, includes CD, laser disc, optical disc, DVD, floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.[00114] While the foregoing disclosure shows illustrative aspects of the disclosure, it should be noted that various changes and modifications could be made herein without departing from the scope of the disclosure as defined by the appended claims. The functions, steps and/or actions of the method claims in accordance with the aspects of the disclosure described herein need not be performed in any particular order. Furthermore, although elements of the disclosure may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated. |
Provided are devices having at least three and at least four different types of transistors, wherein the transistors are distinguished at least by the thicknesses and/or compositions of the gate dielectric regions. Methods for making devices having at least three and at least four different types of transistors that are distinguished at least by the thicknesses and/or compositions of the gate dielectric regions are also provided. |
CLAIMS We claim: 1. A device comprising, at least four different types of transistors on a substrate, wherein the transistors each comprise a gate structure, wherein the gate structure of a first transistor comprises a high-k dielectric layer having a first thickness, the gate structure of a second transistor comprises a high-k dielectric layer having a second thickness, wherein the first and the second high-k dielectric layer thicknesses are not the same, wherein the gate structure of a third transistor comprises a silicon dioxide layer having a first thickness, the gate structure of a fourth transistor comprises a silicon dioxide layer having a second thickness, wherein the first and the second silicon dioxide layer thicknesses are not the same, and wherein the gate structure of each of the transistors additionally comprises an electrode disposed proximate to the gate dielectric layer so that at least a portion of the gate dielectric layer is between a channel region of the transistor and the electrode and wherein the thickness of gate dielectric layers is measured as the thickness of the dielectric layer between the gate electrode and the channel region of a transistor. 2. The device of claim 1 wherein the gate structure of the third transistor additionally comprises a high-k dielectric layer disposed so that at least a portion of the high-k dielectric layer is between the electrode and the channel region of the transistor. 3. The device of claim 1 or 2 wherein the gate structure of the fourth transistor additionally comprises a high-k dielectric layer disposed so that at least a portion of the high-k dielectric layer is between the electrode and the channel region of the transistor. 4. The device of claim 1 wherein the substrate additionally comprises a source and a drain for each transistor. 5. The device of claim 1 wherein a high-k dielectric layer material is selected from the group consisting of hafnium dioxide, hafnium silicon oxide, lanthanum oxide, lanthanum aluminum oxide, zirconium dioxide, zirconium silicon oxide, titanium dioxide, tantalum oxide, barium strontium titanium oxide, barium titanium oxide, strontium titanium oxide, yttrium oxide, aluminum oxide, lead scandium tantalum oxide, and lead zinc niobate. 6. The device of claim 1 wherein the electrode is comprised of a material selected from the group consisting of hafnium, zirconium, titanium, tantalum, aluminum, titanium nitride, titanium carbide, zirconium carbide, tantalum carbide, hafnium carbide, aluminum carbide, ruthenium, palladium, platinum, cobalt, nickel, and ruthenium oxide. 7. The device of claim 1 wherein the first high-k dielectric layer has a thickness of between 1 nm and 4 nm. 8. The device of claim 1 wherein the second high-k dielectric layer has a thickness of between 1 nm and 4 nm. 9. The device of claim 1 wherein the first silicon dioxide layer has a thickness of between 1 nm and 6 nm. 10. The device of claim 1 wherein the second silicon dioxide layer has a thickness of between 1 nm and 6 nm. 11. 
A device comprising, at least three different types of transistors on a substrate, wherein the transistors each comprise a gate structure, wherein the gate structure of a first transistor comprises a high-k dielectric layer having a first thickness, the gate structure of a second transistor comprises a high-k dielectric layer having a second thickness, wherein the first and the second high-k dielectric layer thicknesses are not the same, wherein the gate structure of a third transistor comprises a silicon dioxide layer having a first thickness, and wherein the gate structure of each of the transistors additionally comprises an electrode disposed proximate to the gate dielectric layer so that at least a portion of the gate dielectric layer is between a channel region of the transistor and the electrode and wherein the thickness of gate dielectric layers is measured as the thickness of the dielectric layer between the gate electrode and the channel region of a transistor. 12. The device of claim 11 wherein the gate structure of the third transistor additionally comprises a high-k dielectric layer disposed so that at least a portion of the high-k dielectric layer is between the electrode and the channel region of the transistor. 13. The device of claim 11 wherein the gate structure of the second transistor additionally comprises a silicon dioxide layer disposed so that at least a portion of the silicon dioxide layer is between the electrode and the channel region of the transistor. 14. The device of claim 11 wherein the substrate additionally comprises a source and a drain for each transistor. 15. The device of claim 11 wherein a high-k dielectric layer material is selected from the group consisting of hafnium dioxide, hafnium silicon oxide, lanthanum oxide, lanthanum aluminum oxide, zirconium dioxide, zirconium silicon oxide, titanium dioxide, tantalum oxide, barium strontium titanium oxide, barium titanium oxide, strontium titanium oxide, yttrium oxide, aluminum oxide, lead scandium tantalum oxide, and lead zinc niobate. 16. The device of claim 11 wherein the electrode is comprised of a material selected from the group consisting of hafnium, zirconium, titanium, tantalum, aluminum, titanium nitride, titanium carbide, zirconium carbide, tantalum carbide, hafnium carbide, aluminum carbide, ruthenium, palladium, platinum, cobalt, nickel, and ruthenium oxide. 17. The device of claim 11 wherein the first high-k dielectric layer has a thickness of between 1 nm and 4 nm. 18. The device of claim 11 wherein the second high-k dielectric layer has a thickness of between 1 nm and 4 nm. 19. The device of claim 11 wherein the silicon dioxide layer has a thickness of between 1 nm and 6 nm. |
MULTI-GATE TRANSISTORS BACKGROUND OF THE INVENTION FIELD OF THE INVENTION The embodiments of the present invention relate generally to semiconductor microelectronic devices, semiconductor logic devices, and transistors. BACKGROUND INFORMATION The desire for ever-smaller, more highly integrated circuit (IC) devices places enormous demands on the techniques and materials used to construct the devices. In general, an integrated circuit chip is also known as a microchip, a silicon chip, or a chip. IC chips are found in a variety of common devices, such as the microprocessors in computers, cars, televisions, CD players, and cellular phones. A plurality of IC chips are typically built on a silicon wafer (a thin silicon disk, having a diameter, for example, of 300 mm) and after processing the wafer is diced apart to create individual chips. A 1 cm² IC chip having feature sizes of about 90 nm can comprise hundreds of millions of components. Current technologies are pushing feature sizes even smaller than 45 nm. Components of IC chips include solid-state logic devices (transistors) such as CMOS (complementary metal-oxide-semiconductor) devices. Generally, computing devices associate a computational state (information) with electronic charge. Logic operations within the computing device are then performed by manipulating, detecting, and storing electronic charges. BRIEF DESCRIPTION OF THE FIGURES FIGURES 1A-C illustrate integrated circuit devices having at least four different types of transistors that are distinguished by the thickness and composition of the gate dielectric employed. FIGURES 2A-C show integrated circuit devices having at least three different types of transistors that are distinguished by the thickness and composition of the gate dielectric employed. FIGURES 3A-C show methods for forming transistor gates having silicon dioxide gate dielectric regions. FIGURES 4A-B describe additional methods for forming silicon dioxide transistor gate dielectric regions. FIGURES 5A-B illustrate a process for the formation of four different types of transistors on a substrate. FIGURE 6 illustrates a method for forming two different SiO2 transistor gate thicknesses. FIGURES 7A-B show a method for forming transistors on a substrate having two different SiO2 gate thicknesses. DETAILED DESCRIPTION OF THE INVENTION Embodiments of the present invention provide devices containing a plurality of different types of transistors having different composite gate dielectric stacks, and methods for manufacturing these devices. The formation of devices having a plurality of transistor types can address divergent circuit requirements, such as, for example, high-speed logic operation, low power usage, high-voltage input/output (I/O), and extremely high voltage, which are desirable attributes for components of system-on-a-chip (SOC) integrated circuits. System-on-a-chip devices integrate a wide variety of circuit functions, such as processor cores, analog functions, and mixed-signal blocks, onto a single integrated circuit chip. Embodiments of the invention provide devices and methods of forming devices comprised of different types of transistors having two or three high-k gate dielectric thicknesses, one or two silicon oxide (SiO2) thicknesses, and gate dielectric combinations thereof. Transistors having varied gate dielectrics are capable of providing performance characteristics that span a wide range of operating speeds, leakage characteristics, and high voltage tolerances. 
Figure 1A illustrates transistors located in an integrated circuit device. The integrated circuit device has at least four different transistors, 101, 102, 103, and 104, that are distinguished at least by the thickness and composition of the gate dielectric employed. Transistors 101, 102, 103, and 104 may have other distinguishing features. Typically a device having a plurality of different transistors will have a large number of instances of each type of transistor arranged in various formats (e.g., arrays). For simplicity, one instance of each type of transistor 101, 102, 103, and 104 is shown in Figure 1A as an isolated transistor, although the transistors illustrated 101, 102, 103, and 104 are typically found in various places and arrangements in the integrated circuit chip in which they are located. In Figure 1A, a semiconductor substrate 105 has sources 110 and drains 115 associated with the proximate transistor gate structures. Other shapes and sizes for the source 110 and drain 115 are possible. The channel region 106 proximate to the gate dielectric and between the source 110 and the drain 115 can be a p-type channel or an n-type channel. The transistor gate structures include high-k dielectric layers 120 having heights h1, h2, h4, and h6 (as shown in Figure 1A) that correspond to the thickness of the high-k layer between the gate electrode 125 and the channel region 106 of the substrate 105. The first height h1 is less than the second height h2. In some embodiments, the fourth height, h4, is less than the sixth height, h6. In some embodiments, the fourth height, h4, and the sixth height, h6, are the same. In some embodiments, the fourth height, h4, is the same as the first height, h1, and/or the sixth height, h6, is the same as the first height, h1, or the second height, h2. Transistors 103 and 104 additionally include SiO2 layers 121 in the gate dielectric structure. Oxide layers 121 have associated heights, h3 and h5, that correspond to the thickness of the SiO2 layer between the gate electrode 125 and the channel region 106 of the substrate 105. The third height, h3, is less than the fifth height, h5. A range of values for h1, h2, h4, and h6 is 1 nm to 10 nm. In embodiments of the invention, the range of values for h1, h2, h4, and h6 is 1 nm to 4 nm. A range of values for h3 and h5 (a silicon dioxide dielectric layer) is 1 nm to 11 nm. In embodiments of the invention, the range of values for h3 and h5 is 1 nm to 6 nm. In embodiments of the invention, h5 is larger than h3 by an amount that is from 1 nm to 3 nm. Typically, transistor structures 101, 102, 103, and 104 are at least partially surrounded by a dielectric material 130, which in some embodiments is an interlayer dielectric (ILD) material. Posts or spacers 135 are located on sides of the transistor gate. Spacers 135 are comprised of a dielectric material, such as, for example, silicon nitride, silicon dioxide, silicon oxynitride, or other material known in the semiconductor art. One or more layers of a dielectric material 140 (an etch stop layer, for example) may be located between the spacers 135 and the dielectric layer 130. The dielectric layer(s) 140 are comprised, for example, of silicon nitride, silicon oxynitride, silicon carbide, or other material known in the art.
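The thickness relationships recited for Figure 1A can be restated as a set of executable checks. The following Python sketch is an illustration only, not part of the disclosure; the function name, the dictionary layout, and the example thicknesses are assumptions.

def check_figure_1a(h):
    """Validate one Figure 1A embodiment's gate dielectric heights h1..h6 (nm)."""
    return (
        h["h1"] < h["h2"]                                    # first high-k layer thinner than second
        and h["h3"] < h["h5"]                                # first SiO2 layer thinner than second
        and all(1.0 <= h[k] <= 4.0 for k in ("h1", "h2", "h4", "h6"))  # high-k range, one embodiment
        and all(1.0 <= h[k] <= 6.0 for k in ("h3", "h5"))    # SiO2 range, one embodiment
        and 1.0 <= (h["h5"] - h["h3"]) <= 3.0                # h5 exceeds h3 by 1 nm to 3 nm
    )

# Example: one self-consistent assignment of the six heights.
print(check_figure_1a({"h1": 1.5, "h2": 3.0, "h3": 2.0,
                       "h4": 1.5, "h5": 4.0, "h6": 3.0}))  # True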
Figure 1B illustrates transistors located in an integrated circuit device. The integrated circuit device has at least four different transistors, 151, 152, 153, and 154, that are distinguished at least by the thickness and composition of the gate dielectric employed. Transistors 151, 152, 153, and 154 may have other distinguishing features. Typically a device having a plurality of different transistors will have a large number of instances of each type of transistor arranged in various formats (e.g., arrays). For simplicity, one instance of each type of transistor 151, 152, 153, and 154 is shown in Figure 1B as an isolated transistor, although the transistors illustrated 151, 152, 153, and 154 are typically found in various places and arrangements in the integrated circuit chip in which they are located. Elements of transistors 151, 152, 153, and 154 in Figure 1B are the same as the elements of transistor structures 101, 102, 103, and 104 in Figure 1A, except as discussed below. In Figure 1B, the transistor gate structures include high-k dielectric layers 120 having heights h1, h2, h4, and h5 that correspond to the thickness of the high-k layer between the gate electrode 125 and the channel region 106 of the substrate 105. The first height, h1, is less than the second height, h2. The fourth height, h4, is less than the fifth height, h5. In some embodiments, the fourth height, h4, is the same as the first height, h1, and/or the fifth height, h5, is the same as the second height, h2. A range of values for h1, h2, h4, and h5 is 1 nm to 10 nm. In embodiments of the invention, the range of values for h1, h2, h4, and h5 is 1 nm to 4 nm. Differences between heights are in the nanometer range. Transistors 153 and 154 additionally include SiO2 layers 121 in the gate dielectric structure. Oxide layers 121 have an associated height, h3, which corresponds to the thickness of the SiO2 layer 121 between the gate electrode 125 and the channel region 106 of the substrate 105. A range of heights, h3, for the silicon dioxide dielectric layer 121 is 1 nm to 10 nm. In embodiments of the invention, the range of thicknesses is between 2 nm and 6 nm. Figure 1C illustrates transistors located in an integrated circuit device. The integrated circuit device has at least four different transistors, 161, 162, 163, and 164, that are distinguished at least by the thickness and composition of the gate dielectric employed. Transistors 161, 162, 163, and 164 may have other distinguishing features. Typically a device having a plurality of different transistors will have a large number of instances of each type of transistor arranged in various formats (e.g., arrays). For simplicity, one instance of each type of transistor 161, 162, 163, and 164 is shown in Figure 1C as an isolated transistor, although the transistors illustrated 161, 162, 163, and 164 are typically found in various places and arrangements in the integrated circuit chip in which they are located. Elements of transistors 161, 162, 163, and 164 in Figure 1C are the same as the elements of transistor structures 101, 102, 103, and 104 in Figure 1A, except as discussed below. In Figure 1C, the transistor gate structures include high-k dielectric layers 120 having heights h1, h2, h3, and h5 that represent the thickness of the high-k layer between the gate electrode 125 and the channel region 106 of the substrate 105. The first height h1 is less than the second height h2. The third height, h3, is greater than both the first height, h1, and the second height, h2. 
In embodiments of the invention, the fifth height, h5, is the same as the height h1, h2, or h3. A range of values for h1, h2, h3, and h5 is 1 nm to 10 nm. In embodiments of the invention, the range of values for h1, h2, h3, and h5 is 1 nm to 4 nm. Differences between heights are in the nanometer range. Transistor 164 additionally includes a SiO2 layer 121 in the gate dielectric structure. Oxide layer 121 has an associated height, h4, that corresponds to the thickness of the SiO2 layer 121 between the gate electrode 125 and the channel region 106 of the substrate 105. A range of heights, h4, for the silicon dioxide dielectric layer 121 is 1 nm to 11 nm. In embodiments of the invention, the range of values for h4 is 1 nm to 6 nm or 2 nm to 5 nm. Figure 2A illustrates transistors located in an integrated circuit device. The integrated circuit device has at least three different types of transistors, 201, 202, and 203, that are distinguished at least by the thickness and composition of the gate dielectric employed. Transistors 201, 202, and 203 may have other distinguishing features. Typically a device having a plurality of different transistors will have a large number of instances of each type of transistor arranged in various formats (e.g., arrays). For simplicity, one instance of each type of transistor 201, 202, and 203 is shown in Figure 2A as an isolated transistor, although the transistors illustrated 201, 202, and 203 are typically found in various places and arrangements in the integrated circuit chip in which they are located. In Figure 2A, a semiconductor substrate 205 has sources 210 and drains 215 associated with the proximate transistor gate structures. Other shapes and sizes for the sources 210 and drains 215 are possible. The channel region 206 proximate to the gate dielectric and between a source 210 and a drain 215 can be a p-type channel or an n-type channel. The transistor gate structures include high-k dielectric layers 220 having heights h1, h3, and h5 (as shown in Figure 2A) that correspond to the thickness of the high-k layer between the gate electrode 225 and the channel region 206 of the substrate 205. In embodiments of the invention, heights h1, h3, and h5 are not all the same value. In other embodiments of the invention, two of the three heights, h1, h3, and h5, are the same. In embodiments of the invention, the first height h1 is less than the fifth height, h5. In some embodiments, the first height, h1, is the same as the third height, h3, and both h1 and h3 are less than the fifth height, h5. A range of values for h1, h3, and h5 is 1 nm to 10 nm. In embodiments of the invention, the range of values for h1, h3, and h5 is 1 nm to 4 nm. Differences between heights are in the nanometer range. Transistors 202 and 203 additionally include SiO2 layers 221 in the gate dielectric structure. Oxide layers 221 have associated heights, h2 and h4, which correspond to the thickness of the SiO2 layer between the gate electrode 225 and the channel region 206 of the substrate 205. In embodiments of the invention, the second height, h2, is less than the fourth height, h4. In other embodiments, h2 and h4 are equal and h1, h3, and h5 are not equal. A range of values for h2 and h4 is 1 nm to 11 nm. In embodiments of the invention, the range of values for h2 and h4 is 1 nm to 6 nm or 2 nm to 5 nm. Differences between heights are in the nanometer range. 
Typically, transistor structures 201, 202, and 203 are at least partially surrounded by a dielectric material 230, which in some embodiments is an interlayer dielectric (ILD) material. Posts or spacers 235 are located on sides of the transistor gate. Spacers 235 are comprised of a dielectric material, such as, for example, silicon nitride, silicon dioxide, silicon oxynitride, or other material known in the art. One or more layers of a dielectric material 240 (an etch stop material, for example) may be located between the spacers 235 and the dielectric layer 230. The dielectric layer(s) 240 are comprised, for example, of silicon nitride, silicon oxynitride, silicon carbide, or other material known in the art. Figure 2B illustrates transistors located in an integrated circuit device. The integrated circuit device has at least three different types of transistors, 251, 252, and 253, that are distinguished at least by the thickness and composition of the gate dielectric employed. Transistors 251, 252, and 253 may have other distinguishing features. Typically a device having a plurality of different transistors will have a large number of instances of each type of transistor arranged in various formats (e.g., arrays). For simplicity, one instance of each type of transistor 251, 252, and 253 is shown in Figure 2B as an isolated transistor, although the transistors illustrated 251, 252, and 253 are typically found in various places and arrangements in the integrated circuit chip in which they are located. Elements of transistors 251, 252, and 253 in Figure 2B are the same as the elements of transistor structures 201, 202, and 203 in Figure 2A, except as discussed below. In Figure 2B, the transistor gate structures include high-k dielectric layers 220 having heights h1, h2, and h4 that correspond to the thickness of the high-k layer between the gate electrode 225 and the channel region 206 of the substrate 205. The second height, h2, is greater than the first height, h1. In embodiments of the invention, h4 is the same as either h1 or h2. A range of values for h1, h2, and h4 is 1 nm to 10 nm. In embodiments of the invention, the range of values for h1, h2, and h4 is 1 nm to 4 nm. Differences between heights are in the nanometer range. Transistor 253 additionally includes a SiO2 layer 221 in the gate dielectric structure. Oxide layer 221 has an associated height, h3, which corresponds to the thickness of the SiO2 layer between the gate electrode 225 and the channel region 206 of the substrate 205. A range of values for h3 is 1 nm to 11 nm. In embodiments of the invention, the range of values for h3 is 1 nm to 6 nm or 2 nm to 5 nm. Figure 2C illustrates transistors located in an integrated circuit device. The integrated circuit device has at least three different types of transistors, 261, 262, and 263, that are distinguished at least by the thickness and composition of the gate dielectric employed. Transistors 261, 262, and 263 may have other distinguishing features. Typically a device having a plurality of different transistors will have a large number of instances of each type of transistor arranged in various formats (e.g., arrays). For simplicity, one instance of each type of transistor 261, 262, and 263 is shown in Figure 2C as an isolated transistor, although the transistors illustrated 261, 262, and 263 are typically found in various places and arrangements in the integrated circuit chip in which they are located. 
Elements of transistors 261, 262, and 263 in Figure 2C are the same as the elements of transistor structures 201, 202, and 203 in Figure 2A, except as discussed below. In Figure 2C, the transistor gate structures include high-k dielectric layers 220 having heights h1, h2, and h3 that correspond to the thickness of the high-k layer between the gate electrode 225 and the channel region 206 of the substrate 205. The first height, h1, is different from the second height, h2, and the third height, h3, is different from the first and second heights, h1 and h2. A range of values for h1, h2, and h3 is 1 nm to 11 nm. In embodiments of the invention, the range of values for h1, h2, and h3 is 1 nm to 4 nm. Differences between heights are in the nanometer range. With respect to the previously described embodiments, it should be noted that it is also possible to vary characteristics such as the width of the gate, the width of the channel region, and the types of sources and drains used, among other device characteristics, as is understood by those of skill in the art. In general, a high-k dielectric is a dielectric material having a dielectric constant greater than that of SiO2. The dielectric constant of SiO2 is 3.9. Exemplary high-k dielectric materials include hafnium dioxide (HfO2), hafnium silicon oxide, lanthanum oxide, lanthanum aluminum oxide, zirconium dioxide (ZrO2), zirconium silicon oxide, titanium dioxide (TiO2), tantalum oxide, barium strontium titanium oxide, barium titanium oxide, strontium titanium oxide, yttrium oxide, aluminum oxide, lead scandium tantalum oxide, lead zinc niobate, and other materials known in the semiconductor art. Materials that can comprise the gate electrode include, for example, metal gate materials, such as hafnium, zirconium, titanium, TiN, tantalum, aluminum, and combinations thereof. Additional materials include metal carbides, such as, for example, titanium carbide, zirconium carbide, tantalum carbide, hafnium carbide, and aluminum carbide. Further materials that are used include ruthenium, palladium, platinum, cobalt, nickel, and conductive metal oxides, such as, for example, ruthenium oxide. Other materials are possible. Typical dielectric materials used for dielectric layers, features, and/or interlayer dielectrics (ILD) include silicon dioxide and low-k dielectric materials. Additional dielectric materials that may be used include carbon doped oxide (CDO), silicon nitride, organic polymers such as perfluorocyclobutane or polytetrafluoroethylene, fluorosilicate glass (FSG), and organosilicates such as silsesquioxane, siloxane, or organosilicate glass. The dielectric layer may include pores to further reduce the dielectric constant. In manufactured devices, layers of materials can deviate in appearance from the simplified illustrations provided herein for clarity, and can be, for example, slightly thicker or thinner in areas. Additionally, what is described here as a "layer" of material may be made up of a plurality of layers of the material that essentially function as one layer.
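Because a high-k dielectric is defined here by a dielectric constant above the 3.9 of SiO2, a physically thicker high-k film can be electrically equivalent to a much thinner oxide. The short Python sketch below computes the textbook equivalent oxide thickness (EOT); EOT is standard background rather than a quantity recited in this disclosure, and the HfO2 permittivity used is an assumed round value.

K_SIO2 = 3.9  # dielectric constant of SiO2, as stated above

def eot_nm(physical_thickness_nm, k):
    """Equivalent oxide thickness of a gate dielectric layer, in nm."""
    return physical_thickness_nm * K_SIO2 / k

# A 3 nm HfO2 film (k of roughly 20-25; 22 assumed here) behaves electrically
# like about half a nanometer of SiO2, which is why high-k gates allow
# thicker, lower-leakage physical layers.
print(round(eot_nm(3.0, 22.0), 2))  # ~0.53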
Figures 3A-C describe methods for the formation of transistor gates comprised of SiO2 gate dielectric material. The methods of Figures 3A-C are useful for forming integrated circuit devices comprising different types of transistors that have either three or four different gate dielectric structures. In Figure 3A, a substrate 305 having a SiO2 layer 310 on a surface is provided. A photoresist material is deposited on the substrate 305. The photoresist is photolithographically patterned so that the photoresist 315 covers the area in which a transistor gate having a thicker SiO2 gate is to be formed. The silicon dioxide layer 310 is etched from the surface of the substrate in the areas that are not covered by the photoresist 315 and the photoresist 315 is removed. Silicon dioxide is then grown on the substrate 305, creating regions on the substrate 305 comprising two different thicknesses of silicon dioxide 310. The silicon dioxide tends to grow faster in areas in which there is not already existing oxide. In Figure 3B, a substrate 305 having a SiO2 layer 310 on a surface is provided. A photoresist material is deposited on the substrate 305. The photoresist is photolithographically patterned so that the photoresist 315 covers the area in which a transistor gate having a thicker SiO2 gate is to be formed. The silicon dioxide layer 310 is partially etched from the surface of the substrate in the areas that are not covered by the photoresist 315 and the photoresist 315 is removed. The resulting substrate 305 has regions comprising two different thicknesses of silicon dioxide 310. In Figure 3C, a substrate 305 having a SiO2 layer 310 on a surface is provided. A photoresist material is deposited on the substrate 305. The photoresist is photolithographically patterned so that the photoresist 315 covers the area in which a transistor gate having a thicker SiO2 gate is to be formed. The silicon dioxide layer 310 is partially etched from the surface of the substrate in the areas that are not covered by the photoresist 315 and the photoresist 315 is removed. A partial layer of silicon dioxide is then grown on the substrate 305, creating regions on the substrate 305 comprising two different thicknesses of silicon dioxide 310. Figures 4A-B provide additional methods for forming transistor gates comprised of SiO2. The methods of Figures 4A-B are useful for forming integrated circuit devices comprising different types of transistors that have either three or four different gate dielectric structures. In Figure 4A, a substrate 305 is provided and a photoresist material is deposited and lithographically patterned to create photoresist layer 315. An ion implant process implants ions into the substrate 305 in the regions of the substrate 305 that are not masked by the photoresist layer 315. The species implanted is a species that, once implanted, enhances the oxidation rate of Si, for example, an inert species or a Group IV element, such as Ar, O, As, Ge, or Si, or other species known in the art. The ion implant process forms implant region 320. The photoresist 315 is removed and silicon dioxide 310 is grown on the substrate surface. The silicon dioxide 310 growth rate is enhanced in the area in which ions have been implanted into the substrate and two different thicknesses of silicon dioxide 310 are produced. The two different thicknesses of silicon dioxide form two different gates for transistors after additional processing. In Figure 4B, a substrate 305 has silicon dioxide layer 310. A photoresist is deposited and patterned, creating patterned photoresist 315. An ion implant process implants ions into the silicon dioxide layer 310 in the regions where the oxide layer 310 is not masked by the photoresist layer 315. The ion implant process forms implant region 325. The species implanted is a species that, once implanted, enhances the etch rate of SiO2, for example, carbon or a heavy ion, or other species known in the art. The photoresist 315 is removed and the silicon dioxide layer 310 is etched. The silicon dioxide 310 etch rate is enhanced in the area in which ions have been implanted into the silicon dioxide layer 325 and two different thicknesses of silicon dioxide are produced. The two different thicknesses of silicon dioxide form two different gates for transistors after additional processing.
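The Figure 3A flow relies on the stated observation that oxide regrows more slowly where oxide already exists, leaving the masked region thicker. As a rough illustration only, the following Python sketch applies the textbook Deal-Grove oxidation model; the rate coefficients A and B are invented placeholders, not process data from this disclosure.

import math

def grown_oxide_nm(time_min, initial_nm, A=30.0, B=100.0):
    """Deal-Grove thickness after time_min, starting from initial_nm of oxide.

    Solves x**2 + A*x = B*(t + tau), with tau chosen so t = 0 reproduces
    the preexisting thickness initial_nm. A (nm) and B (nm**2/min) are
    placeholder coefficients.
    """
    tau = (initial_nm**2 + A * initial_nm) / B
    return 0.5 * (math.sqrt(A**2 + 4.0 * B * (time_min + tau)) - A)

bare = grown_oxide_nm(10.0, initial_nm=0.0)    # region etched open before regrowth
masked = grown_oxide_nm(10.0, initial_nm=4.0)  # region kept under the photoresist
print(round(bare, 2), round(masked, 2))        # 20.0 21.89: masked region stays thicker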
Figures 5A-B illustrate the formation of four different types of transistors on a substrate surface. The transistor regions in Figures 5A and 5B are labeled T1, T2, T3, and T4. Transistors T1 and T2 have, at the end of the process, high-k dielectric gates of different thicknesses, and transistors T3 and T4 have different composite SiO2 and high-k dielectric gates. An integrated circuit chip typically comprises multiple copies of the same transistor in various locations; however, one of each type of transistor is shown in Figures 5A-B for clarity. In the structure labeled (i) in Figure 5A, a substrate 505 having a silicon dioxide layer 510 with two regions of different thicknesses is formed, for example, according to the methods described with respect to Figures 3A-C or Figures 4A-B. The substrate 505 comprises a source and a drain (not shown) for each transistor region. In embodiments of the invention, the substrate is a silicon substrate. In the structure labeled (ii), structural components for four different transistors have been formed on the substrate 505. Methods for forming structure (ii) are known in the art of semiconductor manufacturing. In Figure 5A(ii), the gate regions of the transistors comprise a polysilicon region 515 and a silicon dioxide region 510. The structure in Figure 5A(ii) additionally comprises a spacer layer 520, a first dielectric layer 525, and a second dielectric layer 530. The spacer layer 520 and the first dielectric layer 525 are comprised, for example, of silicon nitride. The second dielectric layer 530 is, for example, an interlayer dielectric, comprising a dielectric material, such as, for example, SiO2, silicon nitride, or a low-k dielectric material. In Figure 5A, structure (ii) is then given a chemical mechanical polish, removing material down to the polysilicon layer 515 so that the polysilicon layer 515 is exposed. The polysilicon layer 515 is removed using a wet or dry etch process and the surface of the structure is cleaned, yielding structure (iii). A further wet etch process partially removes the silicon dioxide 510 in the gate regions of the transistors, so that transistors T1 and T2 no longer have SiO2 in the gate region. The wet etch comprises, for example, HF. Structure (iii) additionally comprises a spacer layer 520, a first dielectric layer 525, and a second dielectric layer 530, as from structure (ii), but having been modified in shape by the polish process. A high-k film 535 and a hard mask 540 are then deposited, creating the structure illustrated in Figure 5A(iv). The high-k dielectric material 535 is deposited, for example, by chemical vapor deposition (CVD), atomic layer deposition (ALD), metal organic chemical vapor deposition (MOCVD), or physical vapor deposition (PVD). The hard mask 540 comprises, for example, an organic or an inorganic hard mask material, such as, for example, SiC, SiO2, SiON, TiN, carbon, or other material known in the art. A photoresist layer 545 is deposited and patterned, providing structure (v) of Figure 5A. 
In Figure 5B, structure (vi), the hard mask 540 is etched away in regions not covered by the photoresist, exposing the underlying high-k dielectric material 535. The hard mask 540 is etched with either a dry etch or a wet etch process. The photoresist 545 is then removed, for example, using a wet or dry etch. The exposed high-k dielectric 535 is then etched to provide structure (vii). High-k dielectric layers are etched, for example, using a dry etch. The remaining hard mask 540 is then etched away, yielding structure (viii) having a high-k dielectric layer in transistor regions T2 and T4. The hard mask is selectively etched away, using, for example, a wet etch, for embodiments in which the underlying high-k dielectric is resistant to the wet etch. A layer of high-k dielectric is deposited on the surface of structure (viii), yielding structure (ix). In structure (ix), transistor regions T1 and T3 have thinner layers of high-k dielectric 535 than transistor regions T2 and T4. In alternate embodiments, devices having transistors with three different thicknesses of high-k dielectric material are formed in a method that is similar to the method associated with structures (iv) through (ix). In this alternate embodiment, transistor regions T2-T4 are masked, the high-k dielectric is etched from transistor region T1 (for example, although a different transistor region could be chosen), and an additional layer of high-k dielectric is deposited, yielding three regions having different thicknesses of high-k dielectric. In this embodiment, it is also possible to begin the method of Figures 5A-B with a substrate having a thicker region of SiO2 associated with transistor T4 and thinner regions associated with transistors T1-T3, so that the resulting device has three different transistors that do not have SiO2 layers in the gate dielectric regions. Other similar modifications to the method are possible to produce transistors having desired gate dielectric regions, as can be understood by one of skill in the art, such as, for example, beginning the process without a SiO2 layer, in order to produce a device having three different types of transistors each of which has a gate dielectric region having a different thickness of high-k dielectric. The metal of the metal gate 550 is deposited and the surface is given a chemical-mechanical polish, forming structure (x). In structure (x), substrate 505 comprises a source and a drain region (not shown) proximate to each transistor, and the transistors T1 and T2 comprise a high-k dielectric layer 535 in which the high-k dielectric layer of T1 is thinner than the high-k dielectric layer of T2. Transistors T3 and T4 comprise a SiO2 layer 510, and the SiO2 layer 510 of transistor T3 is thinner than the SiO2 layer of transistor T4. Other configurations are possible for the dielectric layers that make up the gate regions of the transistors and can be achieved by modifications to the described procedures which are capable of being made by one of skill in the art. Figure 6 provides an additional method for forming a device having transistor regions T1 and T2 with different gate SiO2 layer thicknesses. In Figure 6, a polysilicon gate structure (i) is formed. The polysilicon gate structure (i) comprises a substrate 605 having an associated proximate source and drain (not shown) for each transistor gate. 
The polysilicon gate structure (i) additionally comprises SiO2 gate dielectric layer 610, a polysilicon gate region 615, a first dielectric spacer layer 620, a second dielectric layer 625, and a third dielectric layer 630. The third dielectric layer 630 is, for example, an interlayer dielectric layer (ILD). The polysilicon gate structure (i) is chemically mechanically polished, exposing the polysilicon layer; the polysilicon is etched away, and a photoresist layer 635 (or other masking layer) is deposited and patterned, yielding structure (ii). The polysilicon layer is etched out, for example, using a combination of a wet and dry etch. The exposed SiO2 gate material 610 is then etched away and the photoresist layer 635 is removed, yielding structure (iii). The SiO2 gate material 610 is etched using, for example, an HF etchant. Structure (iii) of Figure 6 is usable, for example, in the method of Figures 5A-B, such that structure (iii) of Figure 6 is usable as structure (iii) of Figure 5A. Figures 7A-B illustrate a method for forming transistors having gate dielectric structures comprising two different thicknesses of SiO2 and a high-k dielectric layer. In Figure 7A, a polysilicon gate structure (i) is formed. The polysilicon gate structure (i) comprises a substrate 705 having an associated proximate source and drain (not shown) for each transistor gate. The polysilicon gate structure (i) additionally comprises a SiO2 gate dielectric layer 710, a polysilicon gate region 715, a first dielectric spacer layer 720, a second dielectric layer 725, and a third dielectric layer 730. The third dielectric layer 730 is, for example, an interlayer dielectric layer. The polysilicon gate structure (i) is chemically mechanically polished, exposing the polysilicon layer, and the polysilicon is etched away, yielding structure (ii). The polysilicon layer is etched out, for example, using a combination of a wet and dry etch. A photoresist layer 735 is deposited and patterned and an ion implant process is used to implant ions into a SiO2 layer 710 of one of the two gate regions (in this case, T2), yielding structure (iii) having implant region 711. The species implanted are, for example, Si, O, N, or C. It is also possible to use other species, as understood in the art. The patterned photoresist 735 is removed and the SiO2 layer is removed from the gate regions with an HF etch, yielding structure (iv) having implant region 711. A high-k dielectric layer 740 is deposited, yielding structure (v). A metal gate 745 is deposited and the structure is chemically and mechanically polished, yielding structure (vi) of Figure 7B. After annealing, a structure is formed having two different gate regions, T1 and T2, in which T1 has a high-k dielectric gate region, and T2 has both a high-k dielectric 740 gate region and a SiO2 dielectric gate region 712. The SiO2 gate region 712 is formed through the interaction of the high-k gate region 740 with the implanted region 711 of the substrate 705. The process of Figures 7A-B is compatible with integration into the process of Figures 5A-B, in which a device having transistors with different thicknesses of high-k dielectric and SiO2 in the transistor gate regions is formed. Structure (iv) of Figure 7B is used for structure (iii) of Figure 5A. Annealing of structure (x) of Figure 5B forms a SiO2 region. In general, photoresists are removed by processes used in the semiconductor industry. Photoresists can be removed, for example, through dry plasma processes. 
The resist is removed in an oxygen plasma in a process, frequently called ashing, which is designed to remove organic residues. The plasma is generated, for example, by microwave, rf (radio frequency), or UV-ozone sources. Alternately, the photoresist can be removed using a solvent or mixture of solvents. Persons skilled in the relevant art will appreciate that modifications and variations are possible throughout the disclosure, as are combinations of and substitutions for the various components shown and described. Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, material, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention, but does not necessarily denote that it is present in every embodiment. Furthermore, the particular features, structures, materials, or characteristics may be combined in any suitable manner in one or more embodiments. Various additional layers and/or structures may be included and/or described features may be omitted in other embodiments. |
A processor (210 in Figure 2) comprises: digital signal processing (DSP) cores (211 in Figure 2), each comprising a decoder, core execution circuitry, and a core register file 328 to store context data for a first thread and a second thread; shared vector processing circuitry 314 coupled to the DSP cores, comprising: first registers (e.g. 328) to store first context data associated with the first thread; second registers (e.g. 340) to store second context data associated with the second thread; and vector execution circuitry 316 to execute single instruction multiple data (SIMD) instructions of the first and second threads; and memory management circuitry (331, 341) to translate virtual addresses, within a virtual address space shared by the DSP cores and the shared vector processing circuitry, to physical addresses of a system memory. Sharing circuitry between cores may be more efficient than replication of logic for each core, in terms of reduced die area, manufacturing cost, and power consumption, for example. |
CLAIMS 1. A processor comprising: a plurality of digital signal processing, DSP, cores, each DSP core comprising: a decoder to decode instructions of a first thread and a second thread; core execution circuitry to execute one or more instructions of the first thread and the second thread; a core register file to store context data for the first thread and the second thread; shared vector processing circuitry coupled to and shared by the plurality of DSP cores, the shared vector processing circuitry comprising: a first plurality of registers to store first context data associated with the first thread, and a second plurality of registers to store second context data associated with the second thread; vector execution circuitry to execute single instruction multiple data (SIMD) instructions of the first thread and the second thread; and memory management circuitry to translate virtual addresses within a virtual address space shared by the plurality of DSP cores and the shared vector processing circuitry, the virtual addresses to be translated to physical addresses of a system memory. 2. The processor of claim 1, wherein the memory management circuitry comprises a translation lookaside buffer, TLB, to cache virtual to physical address translations. 3. The processor of claim 1 or claim 2, wherein the first and second context data comprise portions of first context data and second context data, respectively, of the first thread and the second thread, respectively, the DSP to store at least some of the first context data and second context data in the core register file. 4. The processor of any one of claims 1 to 3, wherein the SIMD instructions executed by the vector execution circuitry comprise an extension to an instruction set architecture, ISA. 5. The processor of claim 4, wherein the SIMD instructions include 1024-bit operands. 6. The processor of any one of claims 1 to 5, wherein the first and second pluralities of registers are architecturally-visible registers of an instruction set architecture, ISA. 7. The processor of any one of claims 1 to 6, wherein the shared vector processing circuitry comprises at least one of: histogram circuitry to perform a histogram computation, matrix multiplication circuitry, and sum of absolute differences circuitry. 8. The processor of any one of claims 1 to 7, wherein the shared vector processing circuitry comprises matrix multiplication circuitry. 9. The processor of any one of claims 1 to 8, wherein the shared vector processing circuitry comprises sum of absolute differences circuitry. 10. The processor of any one of claims 1 to 9, further comprising a first interconnect to couple the plurality of DSP cores to the shared vector processing circuitry. 11. The processor of claim 10, further comprising a second interconnect to couple the shared vector processing circuitry to a memory subsystem including one or more levels of shared memory. 12. A system comprising the processor of any one of claims 1 to 11 and a system memory to store instructions and data. 13. The system of claim 12, further comprising a storage device coupled to the processor to store instructions and data. 14. The system of claim 12 or claim 13, further comprising an input/output (I/O) interconnect to couple the processor to one or more I/O devices. 15. The system of any one of claims 12 to 14, wherein the system memory comprises a dynamic random access, DRAM, memory. 16. 
The system of any one of claims 12 to 15, further comprising a graphics processor coupled to the processor to perform graphics processing operations. 17. The system of any one of claims 12 to 16, further comprising a network processor coupled to the processor. 18. The system of any one of claims 12 to 17, further comprising an audio input/output device coupled to the processor. 19. The system of any one of claims 13 to 18, wherein the processor comprises a first processor and the instructions comprise a first type of instructions, the system further comprising a second processor coupled to the DSP cores to process a second type of instructions not processed by the first processor. 20. The system of claim 19, further comprising a shared cache to be shared by the first processor and the second processor. 21. The system of any one of claims 12 to 20, further comprising compression circuitry coupled to the processor. |
PROCESSOR HAVING MULTIPLE CORES, SHARED CORE EXTENSION LOGIC, AND SHARED CORE EXTENSION UTILIZATION INSTRUCTIONS BACKGROUND Field Embodiments relate to processors. In particular, embodiments relate to processors having multiple cores. Background Information Figure 1 is a block diagram of a prior art processor 100. The processor has multiple cores 101. In particular, the illustrated processor has a core 0 101-0, a core 1 101-1, through a core M 101-M. By way of example, there may be two, four, seven, ten, sixteen, or any other appropriate number of cores. Each of the cores includes corresponding Single Instruction Multiple Data (SIMD) execution logic 102. In particular, core 0 includes SIMD execution logic 102-0, core 1 includes SIMD execution logic 102-1, and core M includes SIMD execution logic 102-M. That is, the SIMD execution logic is replicated per-core. Each SIMD execution logic is operable to process SIMD, vector, or packed data operands. Each of the operands may have multiple smaller data elements, such as 8-bit, 16-bit, 32-bit, or 64-bit data elements, which are packed together in the operands and processed in parallel by the SIMD execution logic. In some processors, each of the SIMD execution logic may represent a relatively large amount of logic. For example, this may be the case when each of the SIMD execution logic is to process wide SIMD operands. Some processors are able to process vector or packed data operands having relatively wide widths, such as, for example, 128-bit operands, 256-bit operands, 512-bit operands, 1024-bit operands, or the like. Commonly, the SIMD execution logic needed to process such wide operands tends to be relatively large, to consume a relatively large amount of die area, to increase the cost of manufacturing the processor, and to consume a relatively large amount of power during use. Replicating the relatively large SIMD execution logic per-core tends to exacerbate such problems. Moreover, in many applications or workload scenarios, the replicated SIMD execution logic per-core tends to be underutilized at least some of the time. If the number of cores continues to increase in the future, such problems may become even more significant. Still further, in the prior art processor of Figure 1, each of the cores also has conventional flow control logic. In particular, core 0 has flow control logic 103-0, core 1 has flow control logic 103-1, and core M has flow control logic 103-M. Commonly, the flow control logic may be designed or optimized to cover a wide range of usage models, for example, introducing speculative execution. However, this generally tends to have a relatively small benefit for SIMD and various other high throughput computations, but tends to be accompanied by relatively high power consumption.
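The background's description of packed operands, several narrow elements processed in parallel within one wide register, can be made concrete with a small emulation. The following pure-Python sketch (an illustration, not code from this disclosure) performs a lane-wise 8-bit add across a 64-bit operand.

LANES, LANE_BITS = 8, 8
MASK = (1 << LANE_BITS) - 1

def packed_add_u8(a, b):
    """Lane-wise modular add of eight unsigned 8-bit elements held in 64-bit ints."""
    out = 0
    for lane in range(LANES):
        shift = lane * LANE_BITS
        s = (((a >> shift) & MASK) + ((b >> shift) & MASK)) & MASK  # each lane wraps mod 256
        out |= s << shift
    return out

a = 0x0102030405060708
b = 0x10101010101010FF
print(hex(packed_add_u8(a, b)))  # 0x1112131415161707: the lowest lane wraps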
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS The invention may best be understood by referring to the following description and accompanying drawings that are used to illustrate embodiments of the invention. In the drawings: Figure 1 is a block diagram of a prior art processor. Figure 2 is a block diagram of an embodiment of a system having an embodiment of a processor and an embodiment of a memory. Figure 3 is a block diagram of an embodiment of a processor having a core 0 including an embodiment of shared core extension interface logic and having an embodiment of shared core extension logic including an embodiment of core interface logic. Figure 4 is a block flow diagram of an embodiment of a method of processing an embodiment of a shared core extension call instruction. Figure 5 is a block diagram of an example embodiment of a shared core extension command register. Figure 6 is a block flow diagram of an embodiment of a method of processing an embodiment of a shared core extension read instruction. Figure 7 is a block flow diagram of an embodiment of a method of processing an embodiment of a shared core extension abort instruction. Figure 8A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the invention. Figure 8B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the invention. Figure 9A is a block diagram of a single processor core, along with its connection to the on-die interconnect network and with its local subset of the Level 2 (L2) cache 904, according to embodiments of the invention. Figure 9B is an expanded view of part of the processor core in Figure 9A according to embodiments of the invention. Figure 10 is a block diagram of a processor that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to embodiments of the invention. Figure 11 is a block diagram of a system in accordance with one embodiment of the present invention. Figure 12 is a block diagram of a first more specific exemplary system in accordance with an embodiment of the present invention. Figure 13 is a block diagram of a second more specific exemplary system in accordance with an embodiment of the present invention. Figure 14 is a block diagram of a SoC in accordance with an embodiment of the present invention. Figure 15 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the invention. DETAILED DESCRIPTION Disclosed herein are embodiments of processors having multiple cores and shared core extension logic that is shared by the multiple cores (e.g., is operable to perform data processing for each of the cores). 
Also disclosed herein are shared core extension utilization instructions, processors to execute the shared core extension utilization instructions, methods performed by the processors when processing or executing the shared core extension utilization instructions, and systems incorporating one or more processors to process or execute the shared core extension utilization instructions. In the following description, numerous specific details are set forth, such as particular micro-architectural details, particular command register formats, particular shared core extension utilization instruction functionalities, particular groups of shared core extension utilization instructions, particular types and interrelationships of system components, and particular logic partitioning/integration details. However, it is understood that embodiments of the invention may be practiced without these specific details. In other instances, well-known circuits, structures and techniques have not been shown in detail in order not to obscure the understanding of this description. Figure 2 is a block diagram of an embodiment of a system 209 having an embodiment of a processor 210 and an embodiment of a memory 218. The processor and the memory are coupled, or otherwise in communication with one another, through one or more buses or other interconnects 219. In various embodiments, the system 209 may represent a desktop computer system, a laptop computer system, a server computer system, a network element, a cellular phone, or another type of electronic device having a multi-core processor and a memory. The processor 210 has multiple cores 211. The illustrated processor has a core 0 211-0 through a core M 211-M. By way of example, there may be two, four, seven, ten, sixteen, thirty-two, sixty-four, one hundred twenty-eight, or more cores, or any other reasonably appropriate number of cores that is desired for the particular implementation. In some embodiments, each of the cores may be able to operate substantially independently of the other cores. Each of the cores is able to process at least one thread. As shown in the illustration, core 0 has a thread 0 212-0 and may optionally include up to a thread P 212-P. Similarly, core M has a thread 0 212-0 and may optionally include up to a thread P 212-P. The number of threads P may be any reasonably appropriate number of threads. The scope of the invention is not limited to any particular number of cores or any particular number of threads that those cores are able to process. The processor may be any of various complex instruction set computing (CISC) processors, various reduced instruction set computing (RISC) processors, various very long instruction word (VLIW) processors, various hybrids thereof, or other types of processors entirely. In some embodiments, the cores may be general-purpose cores of a general-purpose processor of the type used in desktop, laptop, server, and like computer systems. In some embodiments, the cores may be special-purpose cores. Examples of suitable special-purpose cores include, but are not limited to, graphics processor cores, digital signal processor (DSP) cores, and network processor cores, to name just a few examples. In some embodiments, the processor may be a System-on-Chip (SoC) having multiple general-purpose or special-purpose cores and one or more of a graphics unit, a media block, and system memory integrated on chip with the cores. The processor also includes an embodiment of shared core extension logic 214. 
The shared core extension logic is shared by each of the cores 211 (e.g., is operable to perform data processing for each of the cores). The shared core extension logic includes shared data processing logic 216 that is operable to perform the data processing for each of the cores. The shared core extension logic and the cores are coupled with one another by one or more buses or other interconnects 217 of the processor. The cores and the shared core extension logic include corresponding interface logics 213, 215 to allow one or more physical threads on each of the cores and the shared core extension logic to interface or interact with one another (e.g., for the threads of the cores to call the shared core extension logic to have data processing performed, to check on the status of the data processing, to abort data processing, to synchronize virtual memory attributes on context switches, to route page faults occurring during the data processing, etc.). The computational tasks executed by the shared core extension logic on behalf of each physical thread may run under the logical process of that specific physical thread. As will be described further below, the context used for the interface may be provided per physical thread. In particular, core 0 includes an embodiment of shared core extension interface logic 213-0 including at least some logic specific to thread 0 on core 0 and at least some logic specific to thread P on core 0. Likewise, core M includes an embodiment of shared core extension interface logic 213-M including at least some logic specific to thread 0 on core M and at least some logic specific to thread P on core M. Each of the other cores (if any) may similarly include such shared core extension interface logic. The shared core extension logic 214 includes an embodiment of corresponding core interface logic 215. Each core 211 may interface or interact, through its corresponding shared core extension interface logic 213, with the core interface logic 215 of the shared core extension logic 214. In some embodiments, the shared core extension interface logic 213 and the core interface logic 215 may provide an architectural interface (e.g., new architectural macroinstructions and new architectural registers), as well as a micro-architectural interface or hardware mechanism (e.g., data processing scheduling logic, memory management unit (MMU) synchronization logic, page fault routing logic, etc.), to allow the cores to share the shared core extension logic (e.g., share data processing by the shared data processing logic 216). Detailed example embodiments of the shared core extension interface logic 213 and the core interface logic 215 will be discussed further below.
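The interface described above lets a thread call the shared logic, read the status of submitted work, and abort it; the figure list above names corresponding call, read, and abort instructions. The Python sketch below models that call/read/abort protocol as a simple state machine. It is a schematic illustration; the class, method, and state names are invented and do not reflect the patent's microarchitecture.

from enum import Enum

class TaskState(Enum):
    IDLE = 0
    RUNNING = 1
    DONE = 2
    ABORTED = 3

class SharedExtensionPort:
    """One physical thread's view of the shared core extension interface."""

    def __init__(self):
        self.state = TaskState.IDLE
        self.task = None

    def call(self, task):
        # Analogous to a shared core extension call instruction: submit work.
        self.task = task
        self.state = TaskState.RUNNING

    def read(self):
        # Analogous to a shared core extension read instruction: poll status.
        return self.state

    def abort(self):
        # Analogous to a shared core extension abort instruction.
        if self.state is TaskState.RUNNING:
            self.state = TaskState.ABORTED

port = SharedExtensionPort()
port.call("matrix_multiply")
print(port.read())   # TaskState.RUNNING
port.abort()
print(port.read())   # TaskState.ABORTED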
In some embodiments, such relatively large and/or commonly underutilized data processing logic, which is conventionally replicated per core, may be extracted from the multiple cores into the shared core extension logic, as a single shared copy of the data processing logic. Moreover, the shared core extension logic 214 may employ flow control logic that is designed or optimized for high throughput, as opposed to being designed or optimized to cover a wide range of usage models (for example, by introducing speculative execution), as was the case for the conventional flow control logic of the cores of Figure 1. This generally tends to provide a higher level of power-performance efficiency for throughput-oriented algorithms.

In various embodiments, the shared data processing logic may represent throughput-oriented hardware computation function logic, a high throughput computation engine, matrix multiplication logic, matrix transpose logic, finite filter logic, sum of absolute difference logic, histogram computation logic, gather-scatter instruction implementation logic, transcendental vector execution logic, or the like. In some embodiments, the shared data processing logic may include execution units, such as, for example, SIMD execution units (e.g., potentially relatively wide SIMD execution units). In some embodiments, the shared core extension logic may interact with shared core extension data structures 208 (e.g., matrices, tables, etc.), for example in the memory 218.

Advantageously, as compared to replicating logic, the shared core extension logic may help to reduce one or more of the overall die area needed to implement the logic, the cost of manufacturing the logic, and/or the power consumed by the logic. That is, the shared core extension logic may allow multiple cores to share common data processing function evaluation hardware resources without incurring the generally high integration costs of replicating such resources per-core. For clarity, it is not required that the particular shared data processing logic be large, although the greatest benefits of size, cost, and power reductions will often be achieved when relatively large logic is shared by the cores instead of being replicated per core. Moreover, the greatest benefits will often be achieved when the shared logic would otherwise, if it had been replicated per core, have been relatively underutilized, since the sharing may tend to increase the utilization of the logic, whereby underutilized or unnecessary logic may be consolidated to reduce die area and manufacturing cost. As a further advantage, the shared core extension logic may also potentially be used to allow the cores to be customized or optimized for one type of processing (e.g., for scalar workload performance, power and area), while allowing the shared core extension logic to be customized or optimized for another type of processing (e.g., for throughput-oriented workload performance, power and area).
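To make the notion of a throughput-oriented routine concrete, the following is a minimal sketch, in plain C and for illustration only, of the kind of short, data-parallel kernel (here, a histogram) that might be consolidated into shared data processing logic rather than replicated per core; the text describes dedicated hardware logic, not software, so this merely illustrates the workload shape.

    #include <stdint.h>
    #include <stddef.h>

    /* Illustrative only: a short, throughput-oriented kernel of the kind
     * the shared data processing logic might implement in hardware. */
    void histogram_u8(const uint8_t *in, size_t n, uint32_t bins[256])
    {
        for (size_t i = 0; i < 256; i++)
            bins[i] = 0;               /* clear all 256 bins */
        for (size_t i = 0; i < n; i++)
            bins[in[i]]++;             /* data-parallel inner loop with no
                                          need for speculative execution */
    }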
Figure 3 is a block diagram of an embodiment of a processor 310 having a core 0 311-0 including an example embodiment of shared core extension interface logic 313 and having an embodiment of shared core extension logic 314 including an example embodiment of core interface logic 315. As previously described, the processor may also include one or more other cores through a core M (not shown).

In addition to the shared core extension interface logic 313, the core 0 also has conventional logic 334 of the type conventionally found in cores (e.g., one or more execution units, architectural registers, one or more caches, microarchitectural logic, etc.). The scope of the invention is not limited to any known such conventional logic. In addition to the core interface logic 315, the shared core extension logic 314 also has shared data processing logic 316 and a scheduler 344 to schedule data processing or tasks from multiple cores on the shared data processing logic.

Each of one or more physical threads running on the core 0 311-0 may use the shared core extension interface logic 313 to interface with the shared core extension logic 314. The shared core extension interface logic 313 includes shared core extension utilization instructions 323 of an instruction set 322 of the core 0. The instruction set is part of an instruction set architecture (ISA) of the core. The ISA represents the part of the architecture of the core related to programming. The ISA commonly includes the native instructions, architectural registers, data types, addressing modes, memory architecture, interrupt and exception handling, and the like, of the processor. The ISA is distinguished from the micro-architecture, which generally represents the particular design techniques selected to implement the ISA. Processors or cores with different micro-architectures may share a common ISA. The instructions of the instruction set, including the shared core extension utilization instructions 323, represent machine instructions, macroinstructions, or higher-level instructions (e.g., instructions provided to the core for execution), as opposed to microinstructions, micro-ops, or lower-level instructions (e.g., those which result from decode logic decoding machine instructions or macroinstructions).

The shared core extension interface logic 313 also includes a core 0, thread 0 set of shared core extension command registers (SCECRs) 328. Each physical thread may have a set of SCECRs associated with it as part of its context, to be saved and restored irrespective of the progress of other threads. In some embodiments, for the core 0 there may be multiple sets of SCECRs provided per-thread for each of one or more physical threads that run on the core 0. For example, in the illustrated embodiment, the core 0, thread 0 SCECRs may belong to a thread 0. Similarly, each physical thread running on the core 0 may have a set of core 0, thread-specific SCECRs to interface with the shared core extension logic 314. Alternatively, there may be a single set of core 0 SCECRs for the core 0. In such cases, there may be time sharing of the SCECRs between the physical threads at the hardware level. Context may be swapped out of the core 0 SCECRs on context switches and saved and restored.

In the illustration, an SCECR 0 328-0 through an SCECR N 328-N are shown. That is, there are N+1 registers. The number N+1 may be any desired number, such as two, four, eight, sixteen, thirty-two, sixty-four, or some other number. There is no requirement for the number N+1 to be a power of two, although this generally tends to provide efficient register addressing. A given one of these registers is generically represented herein as SCECR x, where the x may represent any one of register SCECR 0 through SCECR N.
In some embodiments, the shared core extension command registers may be architecturally-visible registers of the ISA of the core and/or the processor. The architectural registers generally represent on-die processor storage locations. The architectural registers may also be referred to herein simply as registers. Unless otherwise specified or apparent, the phrases architectural registers and registers are used herein to refer to registers that are visible to the software and/or programmer (e.g., software-visible) and/or the registers that are specified by macroinstructions. These registers are contrasted with other non-architectural or non-architecturally visible registers in a given microarchitecture (e.g., temporary registers used by instructions, reorder buffers, retirement registers, etc.).

The shared core extension utilization instructions 323 are used to submit, monitor, and abort calls to the shared core extension logic 314 for data processing to be performed. By way of example, the shared core extension utilization instructions may be used for parallel programming and may be included in an instruction set (e.g., as an extension of the instruction set) to increase the efficiency and/or throughput of parallel programming workloads. The shared core extension utilization instructions may explicitly specify (e.g., through bits or one or more fields) or otherwise indicate (e.g., implicitly indicate) a shared core extension command register (SCECR x) of the core 0 shared core extension command registers 328. The shared core extension command registers may provide an architectural hardware interface of the processor to the shared core extension logic.

In the illustrated embodiment, the shared core extension utilization instructions 323 include a shared core extension (SCE) call instruction 324 that has a format SCE call (SCECR x, parameters). The SCECR x indicates one of the core 0 shared core extension command registers 328 and the parameters indicate one or more parameters associated with the call, which will be discussed further below. The illustrated shared core extension utilization instructions also include an SCE read instruction 325 having a format SCE read (SCECR x). Another shared core extension utilization instruction is an SCE abort instruction 326 having a format SCE abort (SCECR x). Yet another shared core extension utilization instruction is an SCE wait instruction 327 having a format SCE wait (SCECR x). Each of these instructions may include an operation code or opcode (e.g., a plurality of bits or one or more fields) that is operable to identify the instruction and/or the operation to be performed. The functionality of each of these illustrative shared core extension utilization instructions will be discussed further below.

It is to be appreciated that this is just one illustrative example of a suitable set of shared core extension utilization instructions. For example, in other embodiments, some of the illustrated instructions may optionally be omitted and/or additional instructions may optionally be added to the shared core extension utilization instructions. Moreover, other shared core extension utilization instructions and sets of them are contemplated and will be apparent to those skilled in the art and having the benefit of the present disclosure.
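By way of illustration only, the following minimal C sketch models the submit/monitor/abort/wait pattern of these four instructions as ordinary functions operating on a software copy of the status field; all names, signatures, and status encodings here are hypothetical stand-ins for the macroinstructions, not an actual instruction set binding.

    #include <stdio.h>

    /* A minimal software model, for illustration only, of the submit/
     * monitor/abort/wait pattern of the four SCE utilization instructions.
     * All names, signatures, and status values are hypothetical; the real
     * interface is a set of macroinstructions operating on the SCECRs. */
    enum sce_status { SCE_FREE, SCE_VALID, SCE_COMPLETED, SCE_ERROR };

    #define SCECR_COUNT 8                       /* example: N+1 = 8 registers */
    static enum sce_status scecr[SCECR_COUNT];  /* models the status field */

    void sce_call(unsigned x, const void *params)
    {
        (void)params;               /* parameters omitted in this model */
        scecr[x] = SCE_VALID;       /* call accepted and now in progress */
    }

    enum sce_status sce_read(unsigned x)
    {
        return scecr[x];            /* read back the completion status */
    }

    void sce_abort(unsigned x)
    {
        scecr[x] = SCE_FREE;        /* stop work, release the register */
    }

    int sce_wait(unsigned x, unsigned long timeout_cycles)
    {
        while (timeout_cycles--)    /* block until done or timeout */
            if (scecr[x] != SCE_VALID)
                return 0;
        return -1;                  /* timeout elapsed: report failure */
    }

    int main(void)
    {
        sce_call(0, NULL);                    /* SCE call  (SCECR 0, params) */
        printf("status: %d\n", sce_read(0));  /* SCE read  (SCECR 0) */
        sce_abort(0);                         /* SCE abort (SCECR 0) */
        return 0;
    }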
One of the physical threads running on the core 0 311-0 may issue one of the shared core extension utilization instructions 323. The shared core extension utilization instruction issued by that thread may indicate an appropriate one of the core 0 shared core extension command registers 328. The appropriate core 0 shared core extension command register may correspond to the thread (e.g., thread 0) and provide context per thread.

Referring again to Figure 3, the core 0 includes decode logic 348. The decode logic may also be referred to as a decoder or decode unit. The decode logic may receive and decode higher-level machine instructions or macroinstructions, and output one or more lower-level micro-operations, micro-code entry points, microinstructions, or other lower-level instructions or control signals that reflect and/or are derived from the original higher-level instruction. The one or more lower-level control signals may implement the operation of the higher-level instruction through one or more lower-level (e.g., circuit-level or hardware-level) operations. The decoder may be implemented using various different mechanisms including, but not limited to, microcode read only memories (ROMs), look-up tables, hardware implementations, programmable logic arrays (PLAs), and other mechanisms used to perform instruction decoding known in the art. Moreover, in some embodiments, an instruction emulator, translator, morpher, interpreter, or other instruction conversion logic may be used either instead of and/or in addition to the decode logic.

SCE instruction execution logic 330 is coupled with the decode logic 348 and with the core 0 shared core extension command registers 328. The shared core extension instruction execution logic may receive from the decoder one or more micro-operations, micro-code entry points, microinstructions, other instructions, or other control signals, which reflect, or are derived from, the shared core extension utilization instructions. The shared core extension instruction execution logic is operable to perform actions in response to and/or as specified by the shared core extension utilization instructions (e.g., in response to the control signals from the decoder). In some embodiments, the shared core extension instruction execution logic and/or the processor may include specific or particular logic (e.g., circuitry or other hardware potentially combined with software and/or firmware) operable to execute and/or process the shared core extension utilization instructions and perform actions in response to and/or as specified by the shared core extension utilization instructions.

In the illustrated embodiment, the shared core extension instruction execution logic is included within shared core extension control logic 329. The shared core extension control logic is coupled with the shared core extension command registers 328, the decode logic 348, and a memory management unit 331, which will be discussed further below. The shared core extension control logic may assist with various control, management, coordination, timing, and related implementation aspects of the shared core extension interface logic 313.

As mentioned above, the instruction set of the core 0 includes the SCE call instruction 324. The SCE call instruction may be used to submit a call to the shared core extension logic 314 to have data processing performed on behalf of the core (e.g., on behalf of a thread running on the core). By way of example, a physical or logical thread running on the core 0 may issue an SCE call instruction in order to send a call or command to the shared core extension logic for data processing to be performed.
In some embodiments, the call or command may be passed through one or more of the shared core extension command registers 328 to the shared core extension logic. For example, the shared core extension call instruction of an embodiment may specify or otherwise indicate one of the core 0 shared core extension command registers 328 (e.g., SCECR x). That is, the shared core extension command registers may be accessible from the thread(s) on the cores using the new SCE call macroinstruction. In some embodiments, the SCE call instruction may also specify or otherwise indicate one or more parameters to further specify, qualify, or define the data processing that is to be performed. Data may be written or stored in the indicated shared core extension command register (e.g., SCECR x) based on the SCE call instruction (e.g., based on the one or more parameters of the SCE call instruction). If a current SCE call is made to a shared core extension command register that is already dedicated to or occupied by a previous SCE call, then the current SCE call may be blocked until the occupied shared core extension command register is released (e.g., when the associated call completes or is aborted). Subsequently, the shared core extension logic may access the indicated shared core extension command register (e.g., SCECR x), including the data written or stored therein, and may implement the call or command (e.g., perform the requested data processing).

Figure 4 is a block flow diagram of an embodiment of a method 450 of processing an embodiment of an SCE call instruction. In embodiments, the method may be performed by a processor, a core, or another type of instruction processing apparatus. In some embodiments, the method 450 may be performed by the processor 210 of Figure 2, or the core 0 311-0 of Figure 3, or a similar processor or core. Alternatively, the method 450 may be performed by an entirely different processor, core, or instruction processing apparatus. Moreover, the processor 210 and the core 311-0 may perform embodiments of operations and methods that are the same as, similar to, or different from those of the method 450.

An SCE call instruction is received within a core of a processor having a plurality of cores, at block 451. In various aspects, the SCE call instruction may be received at the core from an off-core source (e.g., from a main memory, a disc, or a bus or interconnect), or may be received at a portion of the core (e.g., at decode logic, scheduling logic, etc.) from other logic within the core (e.g., an instruction cache, queue, scheduling logic, etc.). The SCE call instruction is to cause the core to call shared core extension logic to have data processing performed. The shared core extension logic is shared by the plurality of cores. The SCE call instruction indicates a shared core extension command register, and also indicates one or more parameters.
The one or more parameters specify the data processing that is to be performed by the shared core extension logic. In some embodiments, the one or more parameters may provide one or more of a pointer (e.g., an explicit virtual memory pointer) to a command attribute data structure in memory having command attributes associated with the call, one or more pointers (e.g., one or more explicit virtual memory pointers) to one or more input data operands in memory upon which data processing is to be performed, and one or more pointers (e.g., one or more explicit virtual memory pointers) to one or more output data operands in memory where results of the data processing are to be stored. For example, in some embodiments, the one or more parameters may provide information to be stored in and/or used to derive the fields shown in Figure 5, which will be discussed further below. Alternatively, in other embodiments, one or more fields may have direct encodings of opcodes and arguments instead of memory pointers.

The shared core extension logic is called, in response to the SCE call instruction, to have the data processing performed, at block 452. In some embodiments, calling the shared core extension logic may include writing or otherwise storing data in the shared core extension command register indicated by the instruction based on the one or more parameters indicated by the instruction.
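A minimal sketch, assuming the pointer-based parameter variant described above, of how the one or more parameters might be grouped; all field names are hypothetical, and each pointer stands for an explicit virtual memory pointer in the calling thread's address space.

    #include <stddef.h>

    /* A minimal sketch, assuming the pointer-based variant above; all field
     * names are hypothetical. Each pointer stands for an explicit virtual
     * memory pointer in the calling thread's address space. */
    struct sce_call_params {
        const void *cmd_attr;       /* -> command attribute data structure */
        const void *inputs[4];      /* -> input data operand(s) */
        void       *outputs[4];     /* -> output data operand(s) */
        size_t      num_inputs;     /* how many input pointers are valid */
        size_t      num_outputs;    /* how many output pointers are valid */
    };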
Figure 5 is a block diagram of an example embodiment of a shared core extension command register 528. The shared core extension command register has a number of fields. In the illustrated embodiment, these fields include, from left to right, a status field 553, a progress field 554, a command pointer field 555, an input data operand(s) pointer(s) field 556, and an output data operand(s) pointer(s) field 557. Each of these fields may include a number of bits sufficient to convey the information desired for the particular implementation.

The status field 553 may be used to provide a status of the call corresponding to the shared core extension command register. Examples of such status include, but are not limited to, the call is valid (e.g., it is in progress), the call has been completed, the call has an error, or the like. By way of example, two bits may be used to specify any of the aforementioned three status conditions. In another example, a single bit may be used to encode either of two status conditions, such as valid and invalid. The valid may represent that the call is currently in progress. The invalid may indicate that an error has occurred.

The progress field 554 may be used to provide a progress of the call corresponding to the shared core extension command register. The progress may represent a level of completion progress, or how far the call or command has progressed toward completion. The progress field may effectively implement a counter of sorts that counts the amount of work completed so far in executing the call. In some embodiments, the progress may be represented by atomic commit points. For example, the counter may be incremented whenever an atomic sub-operation is completed by the SCE logic. The atomic sub-operation may vary from one type of data processing to another (e.g., in one example, when a certain number of cache lines of data have been processed). In some embodiments, the progress field may be used to provide progress atomicity with respect to the data processing of the shared core extension logic and an ability to pre-empt and re-schedule a running command on the shared core extension logic. When execution of a call is interrupted (e.g., on a context switch from one thread to another or on a fault), the progress field may be saved. Later, the progress field may be restored and the data processing associated with the call resumed (e.g., when the thread resubmits). Restoring the progress field may allow the data processing to resume where it left off. This is especially useful when the amount of data processing to be performed by the SCE logic is relatively large and/or takes a relatively large amount of time to complete.

The command pointer field 555 may be used to provide a pointer that points to call or command attribute information 558 of the call corresponding to the shared core extension command register. In some embodiments, the call attribute information may be included in a call attribute data structure. In some embodiments, the call attribute information may be stored at one or more memory locations in a memory 518. In some embodiments, the pointer may be an explicit virtual memory pointer. The call attribute information may further specify, qualify, define, or characterize the attributes of the call. For example, the call attribute information may further specify, qualify, define, or characterize the precise type of data processing that is to be performed by the shared core extension logic. In some embodiments, the command attributes may describe processing that represents relatively simple or short processing routines or functions, such as, for example, operations to transpose a matrix, operations to generate a histogram, operations to perform a filter, or the like. The command attributes may describe a sequence of operations to perform on one or more input data operands (e.g., one or more input data structures) to produce one or more output data operands (e.g., one or more output data structures). In some embodiments, they may be any of various such relatively simple algorithms or routines typically performed in hardware accelerators or graphics processing units or the like.

The input data operand(s) pointer(s) field 556 may be used to provide one or more pointers that point to one or more input data operands. The input data operands are those on which data processing is to be performed by the shared core extension logic. In some embodiments, the one or more input data operands may represent one or more data structures, such as, for example, matrices, tables, etc. As shown, in some embodiments, the pointer(s) may point to input data operand(s) in memory location(s) in the memory 518. In some embodiments, the pointer(s) may be explicit virtual memory pointer(s). In other embodiments, the pointers may point to one or more input data operands in one or more registers or other storage locations.

The output data operand(s) pointer(s) field 557 may be used to provide one or more pointers that point to one or more output data operands. The output data operands are those used to convey results of the data processing that has been performed by the shared core extension logic at the completion of the call. In some embodiments, the one or more output data operands may represent one or more data structures, such as, for example, matrices, tables, etc. As shown, in some embodiments, the pointer(s) may point to output data operand(s) in memory location(s) in the memory. In some embodiments, the pointer(s) may be explicit virtual memory pointer(s).
In other embodiments, the pointers may point to one or more output data operands in one or more registers or other storage locations.

It is to be appreciated that this is just one example embodiment of a suitable format for a shared core extension command register. Alternate embodiments may omit some of the illustrated fields or may add additional fields. For example, one or more of the fields may be provided through an implicit location that need not be explicitly specified in the shared core extension command register. As another example, an input data operand storage location may be reused as an output data operand storage location, such that it need not be specified twice but one of the specifications may be implicit. As yet another example, one or more fields may have direct encodings of opcodes and arguments instead of memory pointers. Moreover, the illustrated order/arrangement of the fields is not required; rather, the fields may be rearranged. Furthermore, fields need not include contiguous sequences of bits (as suggested in the illustration) but rather may be composed of non-contiguous or separated bits.
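Pulling the fields of Figure 5 together, a minimal sketch of the example register format follows, assuming 64-bit explicit virtual memory pointers; the field widths, ordering, and names are illustrative only and, as just noted, may be rearranged, made implicit, or encoded differently in any real implementation.

    #include <stdint.h>

    /* A minimal sketch of the example register format of Figure 5, assuming
     * 64-bit explicit virtual memory pointers; field widths, ordering, and
     * names are illustrative only. */
    struct scecr_format {
        uint8_t  status;         /* e.g., valid / completed / error */
        uint32_t progress;       /* atomic commit points completed so far */
        uint64_t cmd_ptr;        /* -> command attribute information 558 */
        uint64_t input_ptr;      /* -> input data operand(s) */
        uint64_t output_ptr;     /* -> output data operand(s) */
    };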
Referring again to Figure 3, after the execution of the SCE call instruction, a shared core extension command register indicated by the SCE call instruction (e.g., SCECR x) may store data corresponding to the SCE call instruction. After the thread or core submits the task or call, the thread or core may proceed to prepare and submit additional calls or tasks to the shared core extension logic before the earlier submitted calls or tasks complete. Additionally, the thread or core may proceed to perform other processing while the previously submitted calls or tasks complete. The shared core extension command registers, together with a scheduler (which will be discussed further below), may help to provide fine-grain control flow, which may allow multiple threads and/or multiple cores to submit tasks or calls and then proceed to submit other tasks or calls or perform other processing while and until the tasks or calls complete on the shared core extension logic.

The shared core extension logic 314 includes core interface logic 315 to access the core 0 shared core extension command registers 328. The core interface logic may also be used to access the core M shared core extension command registers 340, as well as those for any other cores (if any). That is, in some embodiments, the shared core extension logic and/or the core interface logic may access a separate set of shared core extension command registers for each of the cores. The shared core extension logic may use the shared core extension command registers 328. For example, the shared core extension logic may access the command attribute information pointed to by the command field (e.g., field 555), may access the input data operands pointed to by the input data operands field (e.g., field 556), may update progress as a result of data processing in the progress field (e.g., field 554), when the operation is done or encounters an error may update the status field (e.g., field 553) to reflect complete or an error, and in the event of completion without an error may access the output data operands through the pointer in the output data operands field (e.g., field 557).

To facilitate the description, the shared core extension logic is shown as having a copy of the core 0, thread 0 shared core extension command registers. However, the shared core extension command registers of the shared core extension logic are shown in dashed lines to indicate that there may not actually be two sets of the core 0, thread 0 shared core extension command registers. Rather, both the core 0 and the shared core extension logic may logically view the same set of core 0, thread 0 shared core extension command registers. Similarly, the shared core extension logic may view the corresponding shared core extension command registers of threads of other cores through, potentially, a core M, thread P set 340. Also, for clarity, the physical core 0, thread 0 shared core extension command registers may be located in the core 0, in the shared core extension logic, in a location outside the core 0 and outside the shared core extension logic, or in a combination of different locations.

The shared core extension logic 314 includes an embodiment of a scheduler 344. The scheduler may be implemented in hardware, software, firmware, or some combination. In one aspect, the scheduler may be a hardware scheduler. The scheduler may be operable to access the core 0 shared core extension command registers 328 through the core M shared core extension command registers 340, and to schedule data processing associated with calls conveyed through these registers on the shared data processing logic 316. In some embodiments, the scheduler may represent a programmable hardware scheduler or programmable hardware scheduling logic to schedule the data processing for the cores according to a programmable scheduling algorithm or objective. In some embodiments, the hardware scheduler may be implemented as a state machine that is operable to rotate between command registers and between physical threads. Arbitration policies may potentially be exposed to software through a set of machine specific registers (MSRs). In other embodiments, the hardware scheduler may be implemented as a firmware block, for example incorporating both fixed read only memory (ROM) and patchable random access memory (RAM) domains. This may potentially allow the hardware scheduler to use more elaborate scheduling algorithms, which may rely on operating system directives, application programming interfaces (APIs), run-time compiler directives, real-time hardware signals, or a combination of such controls. By way of example, the scheduling may follow a fair scheduling algorithm, or a weighted scheduling algorithm favoring some of the cores over others (e.g., based on core load, time criticality of the thread or data being processed, thread priority, or according to other objectives). Many different types of scheduling algorithms known in the art are suitable for different implementations depending upon the particular objectives of those implementations. The scheduler may also monitor the completion of the calls or tasks scheduled on the shared data processing logic.
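A minimal software sketch of the rotating state-machine scheduler described above follows; the types and the fair round-robin policy are hypothetical stand-ins for the hardware scheduler, which might instead implement any of the weighted or programmable policies just mentioned.

    /* A minimal model of a state-machine scheduler that rotates fairly over
     * per-core command register sets and dispatches the first pending
     * command it finds. Types and names are hypothetical. */
    enum { NUM_CORES = 4, NUM_SCECRS = 8 };

    struct scecr_slot { int valid; int running; }; /* only what the model needs */

    static struct scecr_slot regs[NUM_CORES][NUM_SCECRS]; /* per-core SCECR sets */

    static void dispatch(struct scecr_slot *r)
    {
        r->running = 1;                 /* start on the shared logic */
    }

    static void schedule_next(void)
    {
        static int core = 0, reg = 0;   /* rotation state of the machine */
        for (int i = 0; i < NUM_CORES * NUM_SCECRS; i++) {
            struct scecr_slot *r = &regs[core][reg];
            if (++reg == NUM_SCECRS) {  /* advance to the next register... */
                reg = 0;
                core = (core + 1) % NUM_CORES;  /* ...then the next core */
            }
            if (r->valid && !r->running) {
                dispatch(r);            /* one command per invocation */
                return;
            }
        }
    }

    int main(void)
    {
        regs[2][5].valid = 1;           /* core 2 submits a call on SCECR 5 */
        schedule_next();                /* the rotating scheduler picks it up */
        return regs[2][5].running ? 0 : 1;
    }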
The shared core extension logic 314 also includes status and/or progress update logic 349. The status and/or progress update logic may monitor the status and/or progress of the calls being handled by the shared data processing logic 316. The status and/or progress update logic may also update shared core extension command registers corresponding to the calls based on the monitored status and/or progress. For example, the status field 553 and the progress field 554 of Figure 5 may be updated. By way of example, when a call completes on the shared core extension logic, the status may be updated to reflect completed, or when processing of a call on the shared core extension logic encounters an error, the status may be updated to reflect an error condition. As another example, throughout the data processing associated with a call, the status and/or progress update logic may update the progress of the completion of the call (e.g., may update atomic commit points in the progress field 554).

In some embodiments, an operating system may use a state save/state restore functionality (e.g., xsave/xrestore in Intel Architecture) to manage the state of shared core extension command registers on context switches. Calls or commands that have not yet been completed by the shared core extension logic may be saved and then restored and re-launched by the physical thread on a context switch. In some embodiments, to support context switch and operating system pre-emption, the shared core extension command registers may have the aforementioned progress field to record the progress (e.g., atomic progress) of the data processing task being handled by the shared core extension logic. The progress field may be saved on context switch as part of the thread context, and used for task resumption when the operating system reschedules the thread.

The shared core extension logic 314 also includes shared core extension control logic 343. The shared core extension control logic is coupled with the scheduler 344, the shared data processing logic 316, the status/progress update logic 349, the core 0-M shared core extension command registers 328, 340, and a shared core extension memory management unit (MMU) 341, which will be discussed further below. The shared core extension control logic may assist with various control, management, coordination, timing, and related implementation aspects of the shared core extension logic 314.

Referring again to the SCE call instruction 324 of Figure 3 and/or the SCE call instruction of the method of Figure 4, in some embodiments the SCE call instruction may be a non-blocking SCE call instruction. In some embodiments, the non-blocking SCE call instruction may be sent non-speculatively from a thread (e.g., a physical thread), and may retire at a core on which the issuing thread is running after the non-blocking SCE call instruction has been accepted for execution at the shared core extension logic (e.g., stored in the SCE command register).

In other embodiments, the SCE call instruction may be a blocking SCE call instruction. In some embodiments, the blocking SCE call instruction may be sent non-speculatively from a thread (e.g., a physical thread), and may retire at a core on which the issuing thread is running after execution of the call or task has completed at the shared core extension logic (e.g., when the status field of the shared core extension command register is updated to reflect completed). In some embodiments, both non-blocking and blocking variants of SCE call instructions may be included in the instruction set.

In some embodiments, a blocking SCE call instruction may specify or otherwise indicate a timeout value (e.g., a number of cycles) to wait for a shared core extension command register release. For example, this number of cycles or other timeout value may be specified in one of the parameters of the SCE call instruction.
In some embodiments, a failure, fault, error, or the like may be returned in response to the call if the timeout value is reached without the shared core extension command register being released.

Following the retirement of an SCE call instruction, the shared core extension logic may modify memory state according to the assigned task or call. In a multi-threaded environment, software synchronization may be performed to maintain cache coherency and memory ordering between logical threads that may use the shared core extension and have shared operands. Alternatively, hardware synchronization may also optionally be performed.

Figure 6 is a block flow diagram of an embodiment of a method 662 of processing an embodiment of an SCE read instruction. In embodiments, the method may be performed by a processor, a core, or another type of instruction processing apparatus. In some embodiments, the method 662 may be performed by the processor 210 of Figure 2, or the core 0 311-0 of Figure 3, or a similar processor or core. Alternatively, the method 662 may be performed by an entirely different processor, core, or instruction processing apparatus. Moreover, the processor 210 and the core 311-0 may perform embodiments of operations and methods that are the same as, similar to, or different from those of the method 662.

A shared core extension (SCE) read instruction is received within a core of a processor having a plurality of cores, at block 663. In various aspects, the SCE read instruction may be received at the core from an off-core source (e.g., from a main memory, a disc, or a bus or interconnect), or may be received at a portion of the core (e.g., at decode logic, scheduling logic, etc.) from other logic within the core (e.g., an instruction cache, queue, scheduling logic, etc.). The SCE read instruction is to cause the core to read a status of a previously made call to shared core extension logic. The shared core extension logic is shared by the plurality of cores. The SCE read instruction indicates a shared core extension command register.

The status of the previously made call to the shared core extension logic is read, in response to the SCE read instruction, at block 664. In some embodiments, reading the status may include reading data from the shared core extension command register indicated by the instruction. In some embodiments, the status may include completion status. For example, a status field (e.g., the status field 553 in Figure 5) may be read. In some embodiments, the read status may be selected from completed, error, and valid, although the scope of the invention is not so limited. In other embodiments, the SCE read instruction may read other information from the indicated shared core extension command register. Examples of such information include, but are not limited to, progress (e.g., from the progress field 554 of Figure 5), an output data operand or a portion thereof (e.g., as indicated by field 557), and command attribute information (e.g., as indicated by field 555). In some embodiments, the shared core extension command register corresponds to a previous call to the shared core extension logic to have data processing performed on behalf of the core receiving the SCE read instruction.

Figure 7 is a block flow diagram of an embodiment of a method 766 of processing an embodiment of an SCE abort instruction. In embodiments, the method may be performed by a processor, a core, or another type of instruction processing apparatus.
In some embodiments, the method 766 may be performed by the processor 210 of Figure 2, or the core 0 311-0 of Figure 3, or a similar processor or core. Alternatively, the method 766 may be performed by an entirely different processor, core, or instruction processing apparatus. Moreover, the processor 210 and the core 311-0 may perform embodiments of operations and methods that are the same as, similar to, or different from those of the method 766.

A shared core extension (SCE) abort instruction is received within a core of a processor having a plurality of cores, at block 767. In various aspects, the SCE abort instruction may be received at the core from an off-core source (e.g., from a main memory, a disc, or a bus or interconnect), or may be received at a portion of the core (e.g., at decode logic, scheduling logic, etc.) from other logic within the core (e.g., an instruction cache, queue, scheduling logic, etc.). The SCE abort instruction is to cause the core to abort a previously made call to shared core extension logic. The shared core extension logic is shared by the plurality of cores. The SCE abort instruction indicates a shared core extension command register.

The previously made call to the shared core extension logic is aborted, in response to the SCE abort instruction, at block 768. In some embodiments, aborting the call may include stopping data processing by the shared core extension logic that corresponds to the previously made call and/or that corresponds to the indicated shared core extension command register. In some embodiments, aborting the call may also include releasing the occupied shared core extension command register indicated by the SCE abort instruction.

In some embodiments, a blocking SCE call instruction may specify or otherwise indicate a timeout value (e.g., a number of cycles) to wait for an SCECR release, and the call may return a failure if the timeout elapses. The failure may occur if the timeout is reached without the release, or if the timeout expires during in-progress command execution that has not completed. For a non-blocking call, an SCE wait instruction may be used to block on shared core extension execution. The SCE wait instruction may similarly include a timeout value (e.g., a number of cycles) to wait for a shared core extension command register release. A failure, error, or the like may be returned if the timeout elapses without the shared core extension command register release. In some embodiments, the timeout value of the blocking SCE call instruction and/or the SCE wait instruction may be encoded as a variable parameter that the instruction may specify. In other embodiments, the timeout may be a fixed implicit value. In some embodiments, the SCE wait instruction may be used in conjunction with a non-blocking SCE call instruction to reduce power consumption. For example, when a blocking SCE call instruction blocks and/or when an SCE wait instruction blocks, the physical thread may optionally be halted and put to sleep (assuming there is no other work that is desired to be done) until the shared core extension logic wakes it on the relevant SCE call being completed. However, this is optional and not required.
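A hedged usage sketch of this pattern, reusing the hypothetical stand-in functions from the earlier model: submit a non-blocking call, wait with a timeout, and abort the command (releasing its register) if the timeout elapses.

    /* Hypothetical stand-ins introduced in the earlier sketch. */
    void sce_call(unsigned x, const void *params);
    int  sce_wait(unsigned x, unsigned long timeout_cycles);
    void sce_abort(unsigned x);

    #define SCE_TIMEOUT_CYCLES 100000UL    /* example timeout value */

    /* Submit on register 'x', then block with a timeout; abort on expiry. */
    int run_task(unsigned x, const void *params)
    {
        sce_call(x, params);               /* non-blocking submit */
        if (sce_wait(x, SCE_TIMEOUT_CYCLES) != 0) {
            sce_abort(x);                  /* stop work, release register */
            return -1;                     /* report failure to the caller */
        }
        return 0;                          /* command completed */
    }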
Moreover, other methods for aborting a call or command that runs for an unexpectedly or undesirably long duration are also contemplated, besides the aforementioned approach of indicating a timeout value through a blocking SCE call instruction and/or an SCE wait instruction.

In some embodiments, the SCE logic may operate on the same virtual memory as the core 0. Referring again to Figure 3, the core 0 311-0 has a memory management unit (MMU) 331. The MMU includes shared core extension MMU interface logic 332. The MMU 331 may be substantially conventional except for the shared core extension MMU interface logic 332. The shared core extension logic 314 has a shared core extension MMU 341. The SCE MMU may maintain the page mapping of the core 0 (e.g., cache or preserve the translations from virtual or linear memory to system memory that are cached or preserved by the core 0). In addition to maintaining TLB entries corresponding to those of the TLB of core 0, the SCE MMU may also maintain TLB entries for each of the other cores. The shared core extension MMU has core MMU interface logic 342. The shared core extension MMU interface logic 332 and the core MMU interface logic 342 interface with one another to perform synchronization 346 between the MMU 331 and the shared core extension MMU 341. In some embodiments, the shared core extension MMU interface logic 332 and the core MMU interface logic 342 may represent a hardware mechanism or hardware support for synchronization of the MMU 331 and the shared core extension MMU 341.

In some embodiments, synchronization between the MMU and the SCE MMU may be performed to maintain consistency in this page mapping. For example, when a page is invalidated by the core 0, the core 0 may invalidate a corresponding TLB entry of the core 0 MMU. In some embodiments, synchronization may also be performed between the core 0 and the SCE logic, in which a corresponding TLB entry on the SCE MMU of the SCE logic may also be correspondingly invalidated. By way of example, a physical thread running on the core 0 may use the hardware interface provided by the shared core extension MMU interface logic 332 and the core MMU interface logic 342 to signal the SCE logic to invalidate the corresponding TLB entry of the SCE MMU through bus cycles on the processor. That is, in some embodiments, the synchronization of the shared core extension MMU 341 may be performed by hardware from within a physical thread running on the core 0. As another example, if a thread is swapped out by the operating system (e.g., on a context switch), then the SCE logic may be signaled and/or notified of the context switch so that the context associated with the thread may be saved and later restored. In some embodiments, such synchronization signaling may be at the hardware level (e.g., through bus cycles or bus transactions through a hardware mechanism). That is, the synchronization may be performed at the hardware level (e.g., through the hardware of the MMU and SCE MMU and bus transactions) rather than through software involvement (e.g., without involvement of the operating system).

In some embodiments, the MMU 331 and the shared core extension MMU 341 may also interact through the interface logic 332, 342 to route or communicate page faults that occur when the shared core extension logic is processing calls for the core 0. In some embodiments, the shared core extension MMU may use the core 0 to notify the operating system of a page fault that has occurred while processing a call from the core 0.
Similarly, the shared core extension MMU may notify other cores of page faults that occur while processing calls from those other cores. The cores may then notify the operating system of the page faults. The operating system may not have any reason to know that the page fault actually originated at the SCE logic rather than at the core that reported the page fault. In some embodiments, for a non-blocking SCE call instruction, the instruction pointer on the core reporting the fault may be arbitrary. In some embodiments, for a blocking SCE call instruction, the instruction pointer for the faulting shared core extension logic may point to the SCE call instruction corresponding to the call that faulted on the calling thread.

The shared core extension logic offers a number of advantages over other approaches for offloading processing known in the art. Conventionally, with hardware accelerators (e.g., graphics processing units) and the like, a software-based paradigm is used to interact with the hardware accelerators. The hardware accelerators are commonly managed by software device drivers. System calls are used by applications to utilize the processing of the hardware accelerators. Intervention of software (e.g., the operating system) is often needed to provide fair utilization of the hardware accelerator by different threads running on the cores. As compared to such hardware accelerators, the shared core extension logic may allow a traditional programming paradigm of the cores utilizing the shared core extension logic (e.g., general-purpose cores), without shifting to the software paradigm of driver-based hardware accelerator access. Moreover, in embodiments where the SCE logic operates on the same virtual memory as the associated physical threads, it can be utilized without an accompanying overhead of data copying and/or data marshaling. Furthermore, as compared to a hardware accelerator, the shared core extension logic generally involves a smaller number of open pages for making forward progress. In addition, as compared to a hardware accelerator, the shared core extension logic generally tends to reduce the latency overhead of submitting a command substantially, to approximately the latency of a non-speculative core bus cycle. Also, the SCE logic may use a scheduling unit in hardware or other on-processor logic to provide fair or distributed utilization among different threads running on the cores, rather than relying on intervention of software (e.g., the operating system).

In the description above, for simplicity of illustration and description, embodiments have shown and described a single instance of shared core extension logic (e.g., logic 214, logic 314, etc.). However, in some embodiments there may be more than one shared core extension logic. Each shared core extension logic may be shared by multiple cores, which may be either the same cores or different cores, and which may be either all of the cores or some of the cores. In some embodiments, different types of shared core extension logic (e.g., to perform different types of data processing) may be included and shared among the cores. In other cases, multiple instances of the same general type of shared core extension logic may be included and shared either among all of the cores (e.g., their threads), or each shared core extension logic may be shared by a subset of the total number of cores (e.g., a different subset).
Various arrangements are contemplated, as will be appreciated by those skilled in the art and having the benefit of the present disclosure.

Components, features, and specific details described for Figure 5 may optionally be used with those of Figures 3, 4, or 6. The features and/or details described herein for an apparatus also optionally apply to the methods described herein, which are performed by and/or with an apparatus. For example, components, features, and specific details described for Figure 3 may optionally be used with those of Figures 4 or 6.

Exemplary Core Architectures, Processors, and Computer Architectures

Processor cores may be implemented in different ways, for different purposes, and in different processors. For instance, implementations of such cores may include: 1) a general purpose in-order core intended for general-purpose computing; 2) a high performance general purpose out-of-order core intended for general-purpose computing; 3) a special purpose core intended primarily for graphics and/or scientific (throughput) computing. Implementations of different processors may include: 1) a CPU including one or more general purpose in-order cores intended for general-purpose computing and/or one or more general purpose out-of-order cores intended for general-purpose computing; and 2) a coprocessor including one or more special purpose cores intended primarily for graphics and/or scientific (throughput) computing. Such different processors lead to different computer system architectures, which may include: 1) the coprocessor on a separate chip from the CPU; 2) the coprocessor on a separate die in the same package as a CPU; 3) the coprocessor on the same die as a CPU (in which case, such a coprocessor is sometimes referred to as special purpose logic, such as integrated graphics and/or scientific (throughput) logic, or as special purpose cores); and 4) a system on a chip that may include on the same die the described CPU (sometimes referred to as the application core(s) or application processor(s)), the above described coprocessor, and additional functionality. Exemplary core architectures are described next, followed by descriptions of exemplary processors and computer architectures.

Exemplary Core Architectures

In-order and out-of-order core block diagram

Figure 8A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the invention. Figure 8B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the invention. The solid lined boxes in Figures 8A-B illustrate the in-order pipeline and in-order core, while the optional addition of the dashed lined boxes illustrates the register renaming, out-of-order issue/execution pipeline and core.
Given that the in-order aspect is a subset of the out-of-order aspect, the out-of-order aspect will be described.

In Figure 8A, a processor pipeline 800 includes a fetch stage 802, a length decode stage 804, a decode stage 806, an allocation stage 808, a renaming stage 810, a scheduling (also known as a dispatch or issue) stage 812, a register read/memory read stage 814, an execute stage 816, a write back/memory write stage 818, an exception handling stage 822, and a commit stage 824.

Figure 8B shows processor core 890 including a front end unit 830 coupled to an execution engine unit 850, and both are coupled to a memory unit 870. The core 890 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As yet another option, the core 890 may be a special-purpose core, such as, for example, a network or communication core, compression engine, coprocessor core, general purpose computing graphics processing unit (GPGPU) core, graphics core, or the like.

The front end unit 830 includes a branch prediction unit 832 coupled to an instruction cache unit 834, which is coupled to an instruction translation lookaside buffer (TLB) 836, which is coupled to an instruction fetch unit 838, which is coupled to a decode unit 840. The decode unit 840 (or decoder) may decode instructions, and generate as an output one or more micro-operations, micro-code entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions. The decode unit 840 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), etc. In one embodiment, the core 890 includes a microcode ROM or other medium that stores microcode for certain macroinstructions (e.g., in decode unit 840 or otherwise within the front end unit 830). The decode unit 840 is coupled to a rename/allocator unit 852 in the execution engine unit 850.

The execution engine unit 850 includes the rename/allocator unit 852 coupled to a retirement unit 854 and a set of one or more scheduler unit(s) 856. The scheduler unit(s) 856 represents any number of different schedulers, including reservations stations, central instruction window, etc. The scheduler unit(s) 856 is coupled to the physical register file(s) unit(s) 858. Each of the physical register file(s) units 858 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, status (e.g., an instruction pointer that is the address of the next instruction to be executed), etc. In one embodiment, the physical register file(s) unit 858 comprises a vector registers unit, a write mask registers unit, and a scalar registers unit. These register units may provide architectural vector registers, vector mask registers, and general purpose registers.
The physical register file(s) unit(s) 858 is overlapped by the retirement unit 854 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using a reorder buffer(s) and a retirement register file(s); using a future file(s), a history buffer(s), and a retirement register file(s); using register maps and a pool of registers; etc.). The retirement unit 854 and the physical register file(s) unit(s) 858 are coupled to the execution cluster(s) 860. The execution cluster(s) 860 includes a set of one or more execution units 862 and a set of one or more memory access units 864. The execution units 862 may perform various operations (e.g., shifts, addition, subtraction, multiplication) on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point). While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions. The scheduler unit(s) 856, physical register file(s) unit(s) 858, and execution cluster(s) 860 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline that each have their own scheduler unit, physical register file(s) unit, and/or execution cluster - and in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s) 864). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order.

The set of memory access units 864 is coupled to the memory unit 870, which includes a data TLB unit 872 coupled to a data cache unit 874 coupled to a level 2 (L2) cache unit 876. In one exemplary embodiment, the memory access units 864 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 872 in the memory unit 870. The instruction cache unit 834 is further coupled to a level 2 (L2) cache unit 876 in the memory unit 870.
The L2 cache unit 876 is coupled to one or more other levels of cache and eventually to a main memory.

By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement the pipeline 800 as follows: 1) the instruction fetch unit 838 performs the fetch and length decoding stages 802 and 804; 2) the decode unit 840 performs the decode stage 806; 3) the rename/allocator unit 852 performs the allocation stage 808 and renaming stage 810; 4) the scheduler unit(s) 856 performs the schedule stage 812; 5) the physical register file(s) unit(s) 858 and the memory unit 870 perform the register read/memory read stage 814, and the execution cluster 860 performs the execute stage 816; 6) the memory unit 870 and the physical register file(s) unit(s) 858 perform the write back/memory write stage 818; 7) various units may be involved in the exception handling stage 822; and 8) the retirement unit 854 and the physical register file(s) unit(s) 858 perform the commit stage 824.

The core 890 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, CA; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, CA), including the instruction(s) described herein. In one embodiment, the core 890 includes logic to support a packed data instruction set extension (e.g., AVX1, AVX2), thereby allowing the operations used by many multimedia applications to be performed using packed data.

It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways including time sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that physical core is simultaneously multithreading), or a combination thereof (e.g., time sliced fetching and decoding and simultaneous multithreading thereafter, such as in the Intel® Hyperthreading technology).

While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor also includes separate instruction and data cache units 834/874 and a shared L2 cache unit 876, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a Level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor.

Specific Exemplary In-Order Core Architecture

Figures 9A-B illustrate a block diagram of a more specific exemplary in-order core architecture, which core would be one of several logic blocks (including other cores of the same type and/or different types) in a chip. The logic blocks communicate through a high-bandwidth interconnect network (e.g., a ring network) with some fixed function logic, memory I/O interfaces, and other necessary I/O logic, depending on the application.

Figure 9A is a block diagram of a single processor core, along with its connection to the on-die interconnect network 902 and with its local subset of the Level 2 (L2) cache 904, according to embodiments of the invention.
In one embodiment, an instruction decoder 900 supports the x86 instruction set with a packed data instruction set extension. An L1 cache 906 allows low-latency accesses to cache memory into the scalar and vector units. While in one embodiment (to simplify the design), a scalar unit 908 and a vector unit 910 use separate register sets (respectively, scalar registers 912 and vector registers 914) and data transferred between them is written to memory and then read back in from a level 1 (L1) cache 906, alternative embodiments of the invention may use a different approach (e.g., use a single register set or include a communication path that allows data to be transferred between the two register files without being written and read back).

The local subset of the L2 cache 904 is part of a global L2 cache that is divided into separate local subsets, one per processor core. Each processor core has a direct access path to its own local subset of the L2 cache 904. Data read by a processor core is stored in its L2 cache subset 904 and can be accessed quickly, in parallel with other processor cores accessing their own local L2 cache subsets. Data written by a processor core is stored in its own L2 cache subset 904 and is flushed from other subsets, if necessary. The ring network ensures coherency for shared data. The ring network is bi-directional to allow agents such as processor cores, L2 caches and other logic blocks to communicate with each other within the chip. Each ring data-path is 1012-bits wide per direction.

Figure 9B is an expanded view of part of the processor core in Figure 9A according to embodiments of the invention. Figure 9B includes an L1 data cache 906A, part of the L1 cache 906, as well as more detail regarding the vector unit 910 and the vector registers 914. Specifically, the vector unit 910 is a 16-wide vector processing unit (VPU) (see the 16-wide ALU 928), which executes one or more of integer, single-precision float, and double-precision float instructions. The VPU supports swizzling the register inputs with swizzle unit 920, numeric conversion with numeric convert units 922A-B, and replication with replication unit 924 on the memory input. Write mask registers 926 allow predicating resulting vector writes.

Processor with integrated memory controller and graphics

Figure 10 is a block diagram of a processor 1000 that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to embodiments of the invention. The solid lined boxes in Figure 10 illustrate a processor 1000 with a single core 1002A, a system agent 1010, and a set of one or more bus controller units 1016, while the optional addition of the dashed lined boxes illustrates an alternative processor 1000 with multiple cores 1002A-N, a set of one or more integrated memory controller unit(s) 1014 in the system agent unit 1010, and special purpose logic 1008.

Thus, different implementations of the processor 1000 may include: 1) a CPU with the special purpose logic 1008 being integrated graphics and/or scientific (throughput) logic (which may include one or more cores), and the cores 1002A-N being one or more general purpose cores (e.g., general purpose in-order cores, general purpose out-of-order cores, a combination of the two); 2) a coprocessor with the cores 1002A-N being a large number of special purpose cores intended primarily for graphics and/or scientific (throughput); and 3) a coprocessor with the cores 1002A-N being a large number of general purpose in-order cores.
Thus, the processor 1000 may be a general-purpose processor, coprocessor or special-purpose processor, such as, for example, a network or communication processor, compression engine, graphics processor, GPGPU (general purpose graphics processing unit), a high-throughput many integrated core (MIC) coprocessor (including 30 or more cores), embedded processor, or the like. The processor may be implemented on one or more chips. The processor 1000 may be a part of and/or may be implemented on one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS.

The memory hierarchy includes one or more levels of cache within the cores, a set of one or more shared cache units 1006, and external memory (not shown) coupled to the set of integrated memory controller units 1014. The set of shared cache units 1006 may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof. While in one embodiment a ring based interconnect unit 1012 interconnects the integrated graphics logic 1008, the set of shared cache units 1006, and the system agent unit 1010/integrated memory controller unit(s) 1014, alternative embodiments may use any number of well-known techniques for interconnecting such units. In one embodiment, coherency is maintained between one or more cache units 1006 and cores 1002A-N.

In some embodiments, one or more of the cores 1002A-N are capable of multi-threading. The system agent 1010 includes those components coordinating and operating cores 1002A-N. The system agent unit 1010 may include, for example, a power control unit (PCU) and a display unit. The PCU may be or include logic and components needed for regulating the power state of the cores 1002A-N and the integrated graphics logic 1008. The display unit is for driving one or more externally connected displays.

The cores 1002A-N may be homogenous or heterogeneous in terms of architecture instruction set; that is, two or more of the cores 1002A-N may be capable of executing the same instruction set, while others may be capable of executing only a subset of that instruction set or a different instruction set.

Exemplary Computer Architectures

Figures 11-14 are block diagrams of exemplary computer architectures. Other system designs and configurations known in the arts for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, micro controllers, cell phones, portable media players, hand held devices, and various other electronic devices are also suitable. In general, a huge variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are generally suitable.

Referring now to Figure 11, shown is a block diagram of a system 1100 in accordance with one embodiment of the present invention. The system 1100 may include one or more processors 1110, 1115, which are coupled to a controller hub 1120. In one embodiment the controller hub 1120 includes a graphics memory controller hub (GMCH) 1190 and an Input/Output Hub (IOH) 1150 (which may be on separate chips); the GMCH 1190 includes memory and graphics controllers to which are coupled memory 1140 and a coprocessor 1145; the IOH 1150 couples input/output (I/O) devices 1160 to the GMCH 1190.
Alternatively, one or both of the memory and graphics controllers are integrated within the processor (as described herein), the memory 1140 and the coprocessor 1145 are coupled directly to the processor 1110, and the controller hub 1120 is in a single chip with the IOH 1150.

The optional nature of additional processors 1115 is denoted in Figure 11 with broken lines. Each processor 1110, 1115 may include one or more of the processing cores described herein and may be some version of the processor 1000.

The memory 1140 may be, for example, dynamic random access memory (DRAM), phase change memory (PCM), or a combination of the two. For at least one embodiment, the controller hub 1120 communicates with the processor(s) 1110, 1115 via a multi-drop bus, such as a frontside bus (FSB), point-to-point interface such as QuickPath Interconnect (QPI), or similar connection 1195.

In one embodiment, the coprocessor 1145 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like. In one embodiment, the controller hub 1120 may include an integrated graphics accelerator.

There can be a variety of differences between the physical resources 1110, 1115 in terms of a spectrum of metrics of merit including architectural, microarchitectural, thermal, power consumption characteristics, and the like.

In one embodiment, the processor 1110 executes instructions that control data processing operations of a general type. Embedded within the instructions may be coprocessor instructions. The processor 1110 recognizes these coprocessor instructions as being of a type that should be executed by the attached coprocessor 1145. Accordingly, the processor 1110 issues these coprocessor instructions (or control signals representing coprocessor instructions) on a coprocessor bus or other interconnect to coprocessor 1145. Coprocessor(s) 1145 accept and execute the received coprocessor instructions.

Referring now to Figure 12, shown is a block diagram of a first more specific exemplary system 1200 in accordance with an embodiment of the present invention. As shown in Figure 12, multiprocessor system 1200 is a point-to-point interconnect system, and includes a first processor 1270 and a second processor 1280 coupled via a point-to-point interconnect 1250. Each of processors 1270 and 1280 may be some version of the processor 1000. In one embodiment of the invention, processors 1270 and 1280 are respectively processors 1110 and 1115, while coprocessor 1238 is coprocessor 1145. In another embodiment, processors 1270 and 1280 are respectively processor 1110 and coprocessor 1145.

Processors 1270 and 1280 are shown including integrated memory controller (IMC) units 1272 and 1282, respectively. Processor 1270 also includes as part of its bus controller units point-to-point (P-P) interfaces 1276 and 1278; similarly, second processor 1280 includes P-P interfaces 1286 and 1288. Processors 1270, 1280 may exchange information via a point-to-point (P-P) interface 1250 using P-P interface circuits 1278, 1288. As shown in Figure 12, IMCs 1272 and 1282 couple the processors to respective memories, namely a memory 1232 and a memory 1234, which may be portions of main memory locally attached to the respective processors.

Processors 1270, 1280 may each exchange information with a chipset 1290 via individual P-P interfaces 1252, 1254 using point-to-point interface circuits 1276, 1294, 1286, 1298.
Chipset 1290 may optionally exchange information with the coprocessor 1238 via a high-performance interface 1239. In one embodiment, the coprocessor 1238 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like.

A shared cache (not shown) may be included in either processor or outside of both processors, yet connected with the processors via P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode.

Chipset 1290 may be coupled to a first bus 1216 via an interface 1296. In one embodiment, first bus 1216 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the present invention is not so limited.

As shown in Figure 12, various I/O devices 1214 may be coupled to first bus 1216, along with a bus bridge 1218 which couples first bus 1216 to a second bus 1220. In one embodiment, one or more additional processor(s) 1215, such as coprocessors, high-throughput MIC processors, GPGPUs, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processor, are coupled to first bus 1216. In one embodiment, second bus 1220 may be a low pin count (LPC) bus. Various devices may be coupled to a second bus 1220 including, for example, a keyboard and/or mouse 1222, communication devices 1227 and a storage unit 1228 such as a disk drive or other mass storage device which may include instructions/code and data 1230, in one embodiment. Further, an audio I/O 1224 may be coupled to the second bus 1220. Note that other architectures are possible. For example, instead of the point-to-point architecture of Figure 12, a system may implement a multi-drop bus or other such architecture.

Referring now to Figure 13, shown is a block diagram of a second more specific exemplary system 1300 in accordance with an embodiment of the present invention. Like elements in Figures 12 and 13 bear like reference numerals, and certain aspects of Figure 12 have been omitted from Figure 13 in order to avoid obscuring other aspects of Figure 13.

Figure 13 illustrates that the processors 1270, 1280 may include integrated memory and I/O control logic ("CL") 1272 and 1282, respectively. Thus, the CL 1272, 1282 include integrated memory controller units and include I/O control logic. Figure 13 illustrates that not only are the memories 1232, 1234 coupled to the CL 1272, 1282, but also that I/O devices 1314 are also coupled to the control logic 1272, 1282. Legacy I/O devices 1315 are coupled to the chipset 1290.

Referring now to Figure 14, shown is a block diagram of a SoC 1400 in accordance with an embodiment of the present invention. Similar elements in Figure 10 bear like reference numerals. Also, dashed lined boxes are optional features on more advanced SoCs.
In Figure 14, an interconnect unit(s) 1402 is coupled to: an application processor 1410 which includes a set of one or more cores 202A-N and shared cache unit(s) 1006; a system agent unit 1010; a bus controller unit(s) 1016; an integrated memory controller unit(s) 1014; a set of one or more coprocessors 1420 which may include integrated graphics logic, an image processor, an audio processor, and a video processor; a static random access memory (SRAM) unit 1430; a direct memory access (DMA) unit 1432; and a display unit 1440 for coupling to one or more external displays. In one embodiment, the coprocessor(s) 1420 include a special-purpose processor, such as, for example, a network or communication processor, compression engine, GPGPU, a high-throughput MIC processor, embedded processor, or the like.

Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches. Embodiments of the invention may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.

Program code, such as code 1230 illustrated in Figure 12, may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices, in known fashion. For purposes of this application, a processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor.

The program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired. In fact, the mechanisms described herein are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language.

One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein.
Such representations, known as "IP cores," may be stored on a tangible, machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.

Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks, any other type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs), static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), phase change memory (PCM), magnetic or optical cards, or any other type of media suitable for storing electronic instructions.

Accordingly, embodiments of the invention also include non-transitory, tangible machine-readable media containing instructions or containing design data, such as Hardware Description Language (HDL), which defines structures, circuits, apparatuses, processors and/or system features described herein. Such embodiments may also be referred to as program products.

Emulation (including binary translation, code morphing, etc.)

In some cases, an instruction converter may be used to convert an instruction from a source instruction set to a target instruction set. For example, the instruction converter may translate (e.g., using static binary translation, dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction to one or more other instructions to be processed by the core. The instruction converter may be implemented in software, hardware, firmware, or a combination thereof. The instruction converter may be on processor, off processor, or part on and part off processor.

Figure 15 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the invention. In the illustrated embodiment, the instruction converter is a software instruction converter, although alternatively the instruction converter may be implemented in software, firmware, hardware, or various combinations thereof. Figure 15 shows a program in a high level language 1502 may be compiled using an x86 compiler 1504 to generate x86 binary code 1506 that may be natively executed by a processor with at least one x86 instruction set core 1516. The processor with at least one x86 instruction set core 1516 represents any processor that can perform substantially the same functions as an Intel processor with at least one x86 instruction set core by compatibly executing or otherwise processing (1) a substantial portion of the instruction set of the Intel x86 instruction set core or (2) object code versions of applications or other software targeted to run on an Intel processor with at least one x86 instruction set core, in order to achieve substantially the same result as an Intel processor with at least one x86 instruction set core.
The x86 compiler 1504 represents a compiler that is operable to generate x86 binary code 1506 (e.g., object code) that can, with or without additional linkage processing, be executed on the processor with at least one x86 instruction set core 1516.

Similarly, Figure 15 shows the program in the high level language 1502 may be compiled using an alternative instruction set compiler 1508 to generate alternative instruction set binary code 1510 that may be natively executed by a processor without at least one x86 instruction set core 1514 (e.g., a processor with cores that execute the MIPS instruction set of MIPS Technologies of Sunnyvale, CA and/or that execute the ARM instruction set of ARM Holdings of Sunnyvale, CA). The instruction converter 1512 is used to convert the x86 binary code 1506 into code that may be natively executed by the processor without an x86 instruction set core 1514. This converted code is not likely to be the same as the alternative instruction set binary code 1510 because an instruction converter capable of this is difficult to make; however, the converted code will accomplish the general operation and be made up of instructions from the alternative instruction set. Thus, the instruction converter 1512 represents software, firmware, hardware, or a combination thereof that, through emulation, simulation or any other process, allows a processor or other electronic device that does not have an x86 instruction set processor or core to execute the x86 binary code 1506.

In the description and claims, the terms "coupled" and/or "connected," along with their derivatives, have been used. It should be understood that these terms are not intended as synonyms for each other. Rather, in particular embodiments, "connected" may be used to indicate that two or more elements are in direct physical or electrical contact with each other. "Coupled" may mean that two or more elements are in direct physical or electrical contact. However, "coupled" may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other. For example, an execution unit may be coupled with a register or a decoder through one or more intervening components. In the figures, arrows are used to show couplings and/or connections.

In the description and claims, the term "logic" may have been used. As used herein, the term logic may include hardware, firmware, software, or various combinations thereof. Examples of logic include integrated circuitry, application specific integrated circuits, analog circuits, digital circuits, programmed logic devices, memory devices including instructions, etc. In some embodiments, the logic may include transistors and/or gates potentially along with other circuitry components (e.g., embedded in semiconductor materials).

In the description above, specific details have been set forth in order to provide a thorough understanding of the embodiments. However, other embodiments may be practiced without some of these specific details. The scope of the invention is not to be determined by the specific examples provided above but only by the claims below. All equivalent relationships to those illustrated in the drawings and described in the specification are encompassed within embodiments. In other instances, well-known circuits, structures, devices, and operations have been shown in block diagram form or without detail in order to avoid obscuring the understanding of the description.
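As a toy illustration of the instruction converter idea discussed above in connection with Figure 15, the following C sketch maps opcodes of an invented source instruction set onto one or more opcodes of an invented target instruction set. Every opcode and name here is a made-up assumption; a real binary translator must additionally handle registers, memory, and control flow, so this only sketches the "one instruction to one or more other instructions" mapping mentioned above:

#include <stddef.h>
#include <stdio.h>

/* Invented one-byte source opcodes and their target-set expansions. */
enum { SRC_ADD = 0x01, SRC_SUB = 0x02, SRC_MUL = 0x03 };

static size_t convert_one(unsigned char src, unsigned char *out) {
    switch (src) {
    case SRC_ADD: out[0] = 0x10;                return 1; /* direct mapping */
    case SRC_SUB: out[0] = 0x10; out[1] = 0x20; return 2; /* add + negate   */
    case SRC_MUL: out[0] = 0x30;                return 1;
    default:      out[0] = 0xFF;                return 1; /* trap/emulate   */
    }
}

int main(void) {
    unsigned char prog[] = { SRC_ADD, SRC_MUL, SRC_SUB };
    unsigned char target[16];
    size_t n = 0;
    for (size_t i = 0; i < sizeof prog; i++)
        n += convert_one(prog[i], target + n);  /* 1 source -> >=1 target */
    printf("translated %zu source ops into %zu target ops\n",
           sizeof prog, n);
    return 0;
}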
Where multiple components have been shown, in some cases they may be integrated into a single component. Where a single component has been shown and described, in some cases this single component may be separated into two or more components.

Certain methods disclosed herein have been shown and described in a basic form, although operations may optionally be added to and/or removed from the methods. In addition, a particular order of the operations may have been shown and/or described, although alternate embodiments may perform certain operations in different order, combine certain operations, overlap certain operations, etc. Certain operations may be performed by hardware components and/or may be embodied in a machine-executable instruction that may be used to cause and/or result in a hardware component (e.g., a processor, portion of a processor, etc.) programmed with the instruction performing the operations. The hardware component may include a general-purpose or special-purpose hardware component. The operations may be performed by a combination of hardware, software, and/or firmware. The hardware component may include specific or particular logic (e.g., circuitry potentially combined with software and/or firmware) that is operable to execute and/or process the instruction and perform an action in response to the instruction (e.g., in response to one or more microinstructions or other control signals derived from the instruction).

Reference throughout this specification to "one embodiment," "an embodiment," "one or more embodiments," or "some embodiments," for example, indicates that a particular feature may be included in the practice of the invention but is not necessarily required to be. Similarly, in the description various features are sometimes grouped together in a single embodiment, Figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single disclosed embodiment. Thus, the claims following the Detailed Description are hereby expressly incorporated into this Detailed Description, with each claim standing on its own as a separate embodiment of the invention.

The following clauses and/or examples pertain to further embodiments. Specifics in the clauses and/or examples may be used anywhere in one or more embodiments.

In one embodiment, a first apparatus includes a plurality of cores and a shared core extension logic coupled with each of the plurality of cores. The shared core extension logic has shared data processing logic that is shared by each of the plurality of cores. The first apparatus also includes instruction execution logic, for each of the cores, that in response to a shared core extension call instruction, is to call the shared core extension logic.
The call is to have data processing performed by the shared data processing logic, on behalf of a corresponding core.

Embodiments include the first apparatus further including a plurality of shared core extension command registers coupled with the instruction execution logic and the shared core extension logic, where the shared core extension call instruction is to indicate one of the shared core extension command registers and a plurality of parameters.

Embodiments include any of the above first apparatus in which the instruction execution logic, in response to the shared core extension call instruction, is to store data in the indicated shared core extension command register based on the indicated parameters.

Embodiments include any of the above first apparatus in which the instruction execution logic, in response to the shared core extension call instruction, is to store in the indicated shared core extension command register: a pointer in a call attribute pointer field to point to call attribute information; a pointer in an input data operand pointer field to point to an input data operand; and a pointer in an output data operand pointer field to point to an output data operand.

Embodiments include any of the above first apparatus in which the shared core extension logic is to store, based on data processing associated with the call, in the indicated shared core extension command register: a status field to provide a status of the call; and a progress field to provide a progress of the call.

Embodiments include any of the above first apparatus in which the shared core extension call instruction includes a macroinstruction of an instruction set of the cores.

Embodiments include any of the above first apparatus in which the shared data processing logic includes at least one vector execution unit.

Embodiments include any of the above first apparatus in which the shared data processing logic includes data processing logic that is not found in the plurality of cores.

Embodiments include any of the above first apparatus in which the instruction execution logic, in response to the shared core extension call instruction, is to call the shared core extension logic to have data processing performed on at least one input data structure in memory according to a routine to produce at least one output data structure in memory.

Embodiments include any of the above first apparatus further including: a memory management unit (MMU) of a first core of the plurality; a shared core extension MMU of the shared core extension logic; and a hardware interface between the MMU of the first core and the shared core extension MMU to exchange synchronization signals in hardware to synchronize the MMU of the first core and the shared core extension MMU.

Embodiments include any of the above first apparatus further including: a memory management unit (MMU) of a first core of the plurality; a shared core extension MMU of the shared core extension logic; and an interface between the MMU of the first core and the shared core extension MMU to route a page fault, which corresponds to a call from the first core, from the shared core extension MMU to the MMU of the first core.

Embodiments include any of the above first apparatus further including hardware scheduling logic on die with the shared core extension logic to schedule calls from the plurality of cores on the shared data processing logic.

In one embodiment, a first method includes receiving, within a core of a processor having a plurality of cores, a shared core extension call instruction.
The shared core extension call instruction is to cause the core to call a shared core extension logic, which is shared by the plurality of cores. The call is to have data processing performed. The shared core extension call instruction indicates a shared core extension command register and indicates a plurality of parameters that specify the data processing to be performed. The shared core extension logic is called, in response to the shared core extension call instruction, to have the data processing performed. Calling the shared core extension logic includes storing data in the shared core extension command register indicated by the instruction based on the parameters indicated by the instruction.

Embodiments include the first method in which receiving the instruction includes receiving a non-blocking shared core extension call instruction, and further including retiring the non-blocking shared core extension call instruction at the core after the shared core extension logic has accepted the data processing to be performed.

Embodiments include the first method in which receiving the instruction includes receiving a blocking shared core extension call instruction, and further including retiring the blocking shared core extension call instruction at the core after the shared core extension logic has completed the data processing.

Embodiments include the first method in which receiving the instruction includes receiving a blocking shared core extension call instruction, where the blocking shared core extension call instruction indicates a timeout value for a release of the indicated shared core extension command register.

Embodiments include any of the above first methods in which the shared core extension call instruction includes a macroinstruction of an instruction set of the core, and where the shared core extension command register comprises an architectural register.

Embodiments include any of the above first methods in which storing the data in the indicated shared core extension command register based on the parameters includes: storing a pointer in a call attribute pointer field to point to call attribute information; storing a pointer in an input data operand pointer field to point to an input data operand; and storing a pointer in an output data operand pointer field to point to an output data operand.

Embodiments include any of the above first methods further including the shared core extension logic storing data in the indicated shared core extension register based on data processing associated with the call, the storing of the data including: storing a status in a status field of the indicated register to provide a status of the call; and storing a progress in a progress field of the indicated register to provide a progress of the call.

Embodiments include any of the above first methods in which calling includes calling the shared core extension logic to have data processing performed on at least one input data structure in memory according to a routine to produce at least one output data structure in memory.

Embodiments include any of the above first methods further including synchronizing a memory management unit (MMU) of the core and a shared core extension MMU of the shared core extension logic by exchanging synchronization signals in hardware between the MMU and the shared core extension MMU.

Embodiments include any of the above first methods further including routing a page fault corresponding to the call from a shared core extension memory management unit (MMU) to an MMU of the core.
Embodiments include any of the above first methods further including, before receiving the shared core extension call instruction: receiving a shared core extension abort instruction indicating the shared core extension command register; and stopping data processing corresponding to the shared core extension command register indicated by the shared core extension abort instruction, in response to the shared core extension abort instruction, and releasing the shared core extension command register.

Embodiments include any of the above first methods further including, after receiving the shared core extension call instruction: receiving a shared core extension read instruction indicating the shared core extension command register; and reading a data processing completion status from the shared core extension command register indicated by the shared core extension read instruction in response to the shared core extension read instruction.

In one embodiment, a machine-readable storage medium stores one or more instructions that, if executed by a machine, cause the machine to perform any of the above first methods.

In one embodiment, an apparatus is configured or operable to perform any of the above first methods.

Embodiments include a first system including a processor and a dynamic random access memory (DRAM) coupled with the processor. The processor includes a plurality of cores and shared core extension logic coupled with each of the plurality of cores. The shared core extension logic has shared data processing logic that is shared by each of the plurality of cores. The processor also includes instruction execution logic, for each of the cores, that in response to a shared core extension call instruction, is to call the shared core extension logic. The call is to have data processing performed by the shared data processing logic, on behalf of a corresponding core.

Embodiments include the first system in which the shared core extension call instruction includes a macroinstruction of an instruction set of the cores.

Embodiments include any of the above first systems further including a plurality of architectural shared core extension command registers coupled with the instruction execution logic and the shared core extension logic, where the shared core extension call instruction is to indicate one of the shared core extension command registers and a plurality of parameters.

The following section of the description consists of numbered paragraphs simply providing statements of the invention already described herein. The numbered paragraphs in this section are not claims. The claims are set forth below in the later section headed "claims".

1. An apparatus comprising: a plurality of cores; a shared core extension logic coupled with each of the plurality of cores, the shared core extension logic having shared data processing logic that is shared by each of the plurality of cores; and instruction execution logic, for each of the cores, that in response to a shared core extension call instruction, is to call the shared core extension logic to have data processing performed by the shared data processing logic, on behalf of a corresponding core.

2. The apparatus of clause 1, wherein the shared core extension call instruction comprises a macroinstruction of an instruction set of the cores.
3. The apparatus of clause 1, further comprising a plurality of shared core extension command registers coupled with the instruction execution logic and the shared core extension logic, wherein the shared core extension call instruction is to indicate one of the shared core extension command registers and a plurality of parameters.

4. The apparatus of clause 3, wherein the instruction execution logic, in response to the shared core extension call instruction, is to store data in the indicated shared core extension command register based on the indicated parameters.

5. The apparatus of clause 4, wherein the instruction execution logic, in response to the shared core extension call instruction, is to store in the indicated shared core extension command register: a pointer in a call attribute pointer field to point to call attribute information; a pointer in an input data operand pointer field to point to an input data operand; and a pointer in an output data operand pointer field to point to an output data operand.

6. The apparatus of clause 4, wherein the shared core extension logic is to store, based on data processing associated with the call, in the indicated shared core extension command register: a status field to provide a status of the call; and a progress field to provide a progress of the call.

7. The apparatus of clause 1, wherein the shared data processing logic comprises at least one vector execution unit.

8. The apparatus of clause 1, wherein the shared data processing logic comprises data processing logic that is not found in the plurality of cores.

9. The apparatus of clause 1, wherein the instruction execution logic, in response to the shared core extension call instruction, is to call the shared core extension logic to have data processing performed on at least one input data structure in memory according to a routine to produce at least one output data structure in memory.

10. The apparatus of clause 1, further comprising: a memory management unit (MMU) of a first core of the plurality; a shared core extension MMU of the shared core extension logic; and a hardware interface between the MMU of the first core and the shared core extension MMU to exchange synchronization signals in hardware to synchronize the MMU of the first core and the shared core extension MMU.

11. The apparatus of clause 1, further comprising: a memory management unit (MMU) of a first core of the plurality; a shared core extension MMU of the shared core extension logic; and an interface between the MMU of the first core and the shared core extension MMU to route a page fault, which corresponds to a call from the first core, from the shared core extension MMU to the MMU of the first core.

12. The apparatus of clause 1, further comprising hardware scheduling logic on die with the shared core extension logic to schedule calls from the plurality of cores on the shared data processing logic.
13. A method comprising: receiving, within a core of a processor having a plurality of cores, a shared core extension call instruction, the shared core extension call instruction to cause the core to call a shared core extension logic, which is shared by the plurality of cores, to have data processing performed, the shared core extension call instruction indicating a shared core extension command register and indicating a plurality of parameters that specify the data processing to be performed; and calling the shared core extension logic, in response to the shared core extension call instruction, to have the data processing performed, wherein calling the shared core extension logic includes storing data in the shared core extension command register indicated by the instruction based on the parameters indicated by the instruction.

14. The method of clause 13, wherein the shared core extension call instruction comprises a macroinstruction of an instruction set of the core, and wherein the shared core extension command register comprises an architectural register.

15. The method of clause 13, wherein storing the data in the indicated shared core extension command register based on the parameters comprises: storing a pointer in a call attribute pointer field to point to call attribute information; storing a pointer in an input data operand pointer field to point to an input data operand; and storing a pointer in an output data operand pointer field to point to an output data operand.

16. The method of clause 13, further comprising the shared core extension logic storing data in the indicated shared core extension register based on data processing associated with the call, the storing of the data including: storing a status in a status field of the indicated register to provide a status of the call; and storing a progress in a progress field of the indicated register to provide a progress of the call.

17. The method of clause 13, wherein receiving the instruction comprises receiving a non-blocking shared core extension call instruction, and further comprising retiring the non-blocking shared core extension call instruction at the core after the shared core extension logic has accepted the data processing to be performed.

18. The method of clause 13, wherein receiving the instruction comprises receiving a blocking shared core extension call instruction, and further comprising retiring the blocking shared core extension call instruction at the core after the shared core extension logic has completed the data processing.

19. The method of clause 13, wherein receiving the instruction comprises receiving a blocking shared core extension call instruction, wherein the blocking shared core extension call instruction indicates a timeout value for a release of the indicated shared core extension command register.

20. The method of clause 13, wherein calling comprises calling the shared core extension logic to have data processing performed on at least one input data structure in memory according to a routine to produce at least one output data structure in memory.

21. The method of clause 13, further comprising synchronizing a memory management unit (MMU) of the core and a shared core extension MMU of the shared core extension logic by exchanging synchronization signals in hardware between the MMU and the shared core extension MMU.

22. The method of clause 13, further comprising routing a page fault corresponding to the call from a shared core extension memory management unit (MMU) to an MMU of the core.
23. The method of clause 13, further comprising, before receiving the shared core extension call instruction: receiving a shared core extension abort instruction indicating the shared core extension command register; and stopping data processing corresponding to the shared core extension command register indicated by the shared core extension abort instruction, in response to the shared core extension abort instruction, and releasing the shared core extension command register.

24. The method of clause 13, further comprising, after receiving the shared core extension call instruction: receiving a shared core extension read instruction indicating the shared core extension command register; and reading a data processing completion status from the shared core extension command register indicated by the shared core extension read instruction in response to the shared core extension read instruction.

25. A system comprising: a processor comprising: a plurality of cores; a shared core extension logic coupled with each of the plurality of cores, the shared core extension logic having shared data processing logic that is shared by each of the plurality of cores; and instruction execution logic, for each of the cores, that in response to a shared core extension call instruction, is to call the shared core extension logic to have data processing performed by the shared data processing logic, on behalf of a corresponding core; and a dynamic random access memory (DRAM) coupled with the processor.

26. The system of clause 25, wherein the shared core extension call instruction comprises a macroinstruction of an instruction set of the cores.

27. The system of clause 25, further comprising a plurality of architectural shared core extension command registers coupled with the instruction execution logic and the shared core extension logic, wherein the shared core extension call instruction is to indicate one of the shared core extension command registers and a plurality of parameters.

28. The apparatus of clause 1, further comprising a hardware interface to route operating system preemption and core exception conditions that trigger context switches between a core and the shared core extension logic.
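To make the command-register protocol of the clauses above more concrete, the following C sketch models, purely in software and under assumed field names and status codes, a shared core extension command register with call attribute, input, and output pointer fields plus status and progress fields, together with a core-side call that stores parameters into it. It is an illustrative model only, not the patented encoding or hardware behavior:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical status codes for a shared core extension (SCE) call. */
enum sce_status { SCE_IDLE, SCE_ACCEPTED, SCE_RUNNING, SCE_DONE, SCE_ERROR };

/* Illustrative layout of one SCE command register (field names assumed). */
struct sce_command_reg {
    uint64_t call_attr_ptr;    /* -> call attribute information       */
    uint64_t input_ptr;        /* -> input data operand/structure     */
    uint64_t output_ptr;       /* -> output data operand/structure    */
    uint32_t status;           /* written by the SCE logic            */
    uint32_t progress;         /* e.g., elements processed so far     */
};

/* Core side: a non-blocking call stores the parameters and returns once
 * the SCE logic has accepted the work (in the spirit of clauses 4-5, 17). */
static void sce_call(struct sce_command_reg *r,
                     uint64_t attrs, uint64_t in, uint64_t out) {
    r->call_attr_ptr = attrs;
    r->input_ptr     = in;
    r->output_ptr    = out;
    r->progress      = 0;
    r->status        = SCE_ACCEPTED;   /* in hardware, the SCE logic sets this */
}

int main(void) {
    struct sce_command_reg reg = {0};
    sce_call(&reg, 0x1000, 0x2000, 0x3000);
    /* An SCE read instruction (clause 24) would return this status. */
    printf("status=%u progress=%u\n",
           (unsigned)reg.status, (unsigned)reg.progress);
    return 0;
}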
Apparatus, methods, and computer systems are disclosed for coupling to a link partner over at least one communication link, wherein the apparatus comprises communication circuitry for performing negotiation with the link partner at least in part over the at least one communication link. The negotiation is to determine pooled credits and dedicated credits for use in association with communication between the communication circuitry and the link partner over the at least one communication link. The pooled credits are shared by communication traffic of a virtual communication channel between the communication circuitry and the link partner over the at least one communication link, the communication traffic belonging to a plurality of different flow control categories and/or a plurality of different traffic types. The dedicated credits are dedicated to other communication traffic of another virtual communication channel between the communication circuitry and the link partner over the at least one communication link, the other communication traffic being of a single flow control category and a single traffic type.
1. An apparatus coupled to a link partner via at least one communication link, the apparatus comprising:
a communication circuit for performing a negotiation with the link partner at least in part over the at least one communication link, the negotiation to determine a pooled credit and a dedicated credit for use in association with communication between the communication circuit and the link partner over the at least one communication link, the pooled credit being shared by communication traffic of a virtual communication channel between the communication circuit and the link partner over the at least one communication link, the communication traffic belonging to a plurality of different flow control categories and/or a plurality of different traffic types, the dedicated credit being dedicated to other communication traffic of another virtual communication channel between the communication circuit and the link partner over the at least one communication link, the other communication traffic being of a single flow control category and a single traffic type;
wherein:
the pooled credit corresponds, at least in part, to shared queue storage space shared by communication traffic belonging to the plurality of different flow control categories and/or the plurality of different traffic types;
the dedicated credit corresponds, at least in part, to dedicated queue storage space dedicated to the other communication traffic of the single flow control category and single traffic type;
the plurality of different flow control categories include a posted category, a non-posted category, and a completion category; and
the plurality of different traffic types include header types and data types;
wherein the negotiation includes advertising a capability to support use of pooled credits; and
the advertising includes transmitting data encoding the capability to support use of pooled credits.
2. The apparatus of claim 1, wherein:
the negotiation includes advertisement of:
one or more pooled credits; and/or
one or more dedicated credits.
3. The apparatus of claim 2, wherein:
the one or more pooled credits and/or the one or more dedicated credits are tracked and released based at least in part on processing of received packet data corresponding to the one or more pooled credits and/or the one or more dedicated credits.
4. The apparatus of claim 3, wherein the communication circuit comprises a transceiver and a network interface controller coupled via one or more buses.
5. The apparatus of claim 4, wherein the one or more buses include the at least one communication link.
6. The apparatus of claim 5, wherein:
the communication circuit is comprised in a rack server computer device or a blade server computer device; and
the rack server computer device or the blade server computer device comprises:
a physical central processing unit (CPU) processor circuit;
a hardware accelerator;
solid-state memory; and
a sensor to provide sensor data to the physical CPU processor circuit for processing.
7. A method implemented using a communication circuit coupled to a link partner via at least one communication link, the method comprising:
performing, at least in part via the communication circuit, a negotiation with the link partner over the at least one communication link, the negotiation to determine a pooled credit and a dedicated credit for use in association with communication between the communication circuit and the link partner over the at least one communication link, the pooled credit being shared by communication traffic of a virtual communication channel between the communication circuit and the link partner over the at least one communication link, the communication traffic belonging to a plurality of different flow control categories and/or a plurality of different traffic types, the dedicated credit being dedicated to other communication traffic of another virtual communication channel between the communication circuit and the link partner over the at least one communication link, the other communication traffic being of a single flow control category and a single traffic type;
wherein:
the pooled credit corresponds, at least in part, to shared queue storage space shared by communication traffic belonging to the plurality of different flow control categories and/or the plurality of different traffic types;
the dedicated credit corresponds, at least in part, to dedicated queue storage space dedicated to the other communication traffic of the single flow control category and single traffic type;
the plurality of different flow control categories include a posted category, a non-posted category, and a completion category; and
the plurality of different traffic types include header types and data types;
wherein the negotiation includes advertising a capability to support use of pooled credits; and
the advertising includes transmitting data encoding the capability to support use of the pooled credits.
8. The method of claim 7, wherein:
the negotiation includes advertisement of:
one or more pooled credits; and/or
one or more dedicated credits.
9. The method of claim 8, wherein:
the one or more pooled credits and/or the one or more dedicated credits are tracked and released based at least in part on processing of received packet data corresponding to the one or more pooled credits and/or the one or more dedicated credits.
10. The method of claim 9, wherein:
the communication circuit comprises a transceiver and a network interface controller coupled via one or more buses.
11. The method of claim 10, wherein:
the one or more buses include the at least one communication link.
12. The method of claim 11, wherein:
the communication circuit is comprised in a rack server computer device or a blade server computer device; and
the rack server computer device or the blade server computer device comprises:
a physical central processing unit (CPU) processor circuit;
a hardware accelerator;
solid-state memory; and
a sensor to provide sensor data to the physical CPU processor circuit for processing.
13. Machine-readable memory storing instructions which, when executed by at least one machine, cause the method of any one of claims 7 to 12 to be carried out.
14. A rack-mounted computer system, comprising:
a communication circuit for performing, at least in part, a negotiation with a link partner over at least one communication link, the negotiation to determine a pooled credit and a dedicated credit for use in association with communication between the communication circuit and the link partner over the at least one communication link, the pooled credit being shared by communication traffic of a virtual communication channel between the communication circuit and the link partner over the at least one communication link, the communication traffic belonging to a plurality of different flow control categories and/or a plurality of different traffic types, the dedicated credit being dedicated to other communication traffic of another virtual communication channel between the communication circuit and the link partner over the at least one communication link, the other communication traffic being of a single flow control category and a single traffic type;
a physical central processing unit (CPU) processor circuit;
a hardware accelerator;
solid-state memory; and
a sensor to provide sensor data to the physical CPU processor circuit for processing;
wherein:
the communication circuit comprises a transceiver and a network interface controller coupled via one or more buses;
the one or more buses include the at least one communication link;
the pooled credit corresponds, at least in part, to shared queue storage space shared by communication traffic belonging to the plurality of different flow control categories and/or the plurality of different traffic types;
the dedicated credit corresponds, at least in part, to dedicated queue storage space dedicated to the other communication traffic of the single flow control category and single traffic type;
the plurality of different flow control categories include a posted category, a non-posted category, and a completion category; and
the plurality of different traffic types include header types and data types;
wherein the negotiation includes advertising a capability to support use of pooled credits; and
the advertising includes transmitting data encoding the capability to support use of the pooled credits.
15. The rack-mounted computer system of claim 14, wherein:
the one or more buses are PCI Express buses.
16. The rack-mounted computer system of claim 15, wherein:
the negotiation includes advertisement of:
one or more pooled credits; and/or
one or more dedicated credits.
17. The rack-mounted computer system of claim 16, wherein:
the one or more pooled credits and/or the one or more dedicated credits are tracked and released based at least in part on processing of received packet data corresponding to the one or more pooled credits and/or the one or more dedicated credits.
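To make the pooled-versus-dedicated credit accounting of the claims above concrete, the following C sketch models one virtual channel backed by pooled credits that are shared across flow control categories and traffic types, and another virtual channel backed by credits dedicated to a single (category, type) pair. The counts, names, and policies are illustrative assumptions only:

#include <stdbool.h>
#include <stdio.h>

/* Flow control categories and traffic types named in the claims. */
enum fc_class { FC_POSTED, FC_NON_POSTED, FC_COMPLETION };
enum trf_type { TRF_HEADER, TRF_DATA };

/* One VC backed by pooled credits shared across categories and types. */
struct vc_pooled { int pooled; };

/* Another VC backed by credits dedicated to a single (category, type). */
struct vc_dedicated { enum fc_class c; enum trf_type t; int credits; };

static bool pooled_consume(struct vc_pooled *vc) {
    if (vc->pooled > 0) { vc->pooled--; return true; }
    return false;                  /* transmitter must wait for a release */
}

static bool dedicated_consume(struct vc_dedicated *vc,
                              enum fc_class c, enum trf_type t) {
    if (c != vc->c || t != vc->t) return false;  /* category not covered */
    if (vc->credits > 0) { vc->credits--; return true; }
    return false;
}

int main(void) {
    struct vc_pooled    vc0 = { .pooled = 4 };
    struct vc_dedicated vc1 = { FC_POSTED, TRF_HEADER, 2 };

    /* VC0: any category/type draws from the same pool. */
    pooled_consume(&vc0);          /* e.g., a posted header   */
    pooled_consume(&vc0);          /* e.g., a completion data */

    /* VC1: only posted headers are covered by its dedicated credits. */
    dedicated_consume(&vc1, FC_POSTED, TRF_HEADER);             /* ok    */
    bool ok = dedicated_consume(&vc1, FC_COMPLETION, TRF_DATA); /* false */

    printf("pool left=%d dedicated left=%d uncovered=%d\n",
           vc0.pooled, vc1.credits, (int)!ok);   /* 2, 1, 1 */
    return 0;
}

As the receiver drains its shared or dedicated queue storage, it would release the corresponding credits back to the transmitter; that release path is omitted here for brevity.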
Shared resources for multiple communication services

This application is a divisional application of an invention patent application with an application date of March 26, 2020, a priority date of April 26, 2019, an application number of 202010221725.9, and the invention title "Shared resources for multiple communication services".

Technical Field

Various embodiments may generally relate to the fields of communications and computing, and in particular may relate to computer buses and devices coupled via computer buses.

Background

The background description provided in the present disclosure is for the purpose of generally presenting the context of the disclosure. Unless otherwise indicated in the disclosure, the materials described in this section are not prior art to the claims in this application and are not admitted to be prior art by inclusion in this section.

A computer system or platform may include many components, such as a host including a central processing unit (CPU), a memory, a chipset, and/or many other devices coupled together by a computer bus. A computer bus or communication bus is a communication system that can transmit data between devices or components inside a computer or between computers. A computing system or platform may make wide use of various devices coupled to a computer bus. A computer bus may include related hardware components (wires, optical fibers, etc.) and software, including communication protocols. There may be many kinds of computer buses, such as serial buses or parallel buses. Examples of serial buses include, but are not limited to, Peripheral Component Interconnect (PCI) buses, including PCI-X and PCI Express (PCIe), and the Universal Serial Bus (USB).

BRIEF DESCRIPTION OF THE DRAWINGS

The embodiments will be readily understood from the detailed description below in conjunction with the accompanying drawings. To facilitate this description, like reference numerals designate like structural elements. In the figures of the accompanying drawings, the embodiments are shown by way of example and not by way of limitation.

FIGS. 1(a)-1(b) illustrate an example apparatus including a device coupled to another device via a computer bus, according to various embodiments.

FIGS. 2(a)-2(d) illustrate example resources shared among multiple communication traffic and virtual channels of different flow control classes in a computer bus, according to various embodiments.

FIGS. 3(a)-3(b) illustrate example protocols between a transmitter and a receiver for resources shared among multiple communication traffic and virtual channels of different flow control classes in a computer bus, according to various embodiments.

FIGS. 4(a)-4(b) illustrate example processes and data structures for sharing resources among multiple communication services and virtual channels of different flow control classes in a computer bus, according to various embodiments.

FIG. 5 illustrates an example device suitable for practicing various aspects of the present disclosure, according to various embodiments.

FIG. 6 illustrates a storage medium having instructions for practicing the methods described with reference to FIGS. 1-5, according to various embodiments.

Detailed Description

The following detailed description refers to the accompanying drawings. The same reference numerals may be used in different figures to identify the same or similar elements.
In the following description, for purposes of explanation rather than limitation, specific details such as particular structures, architectures, interfaces, and techniques are set forth in order to provide a thorough understanding of various aspects of the various embodiments. However, it will be apparent to those skilled in the art having the benefit of the present disclosure that the various aspects of the various embodiments may be practiced in other examples that depart from these specific details. In some instances, descriptions of well-known devices, circuits, and methods are omitted so as not to obscure the description of the various embodiments with unnecessary detail.

A computing system or platform may use a wide variety of devices coupled to a computer bus, communication bus, or bus. A computer bus may include hardware components (wires, optical fibers, etc.) and associated software, including communication protocols. A Peripheral Component Interconnect (PCI) bus or PCI Express (PCIe, PCI-E) bus may be a computer bus based on the PCI specification, which provides mechanisms, including system software or system drivers, to perform various communication operations between devices coupled to the PCI or PCIe bus. Devices or components coupled to a computer bus may have multiple functions and/or be accessed by applications. PCIe may operate as a motherboard-level interconnect (to link peripherals mounted on the motherboard), a passive backplane interconnect, and an expansion card interface for add-in boards in consumer, server, and industrial applications. PCIe devices communicate via logical connections called interconnects or links. A link is a point-to-point communication channel between two PCIe ports that allows both ports to send and receive ordinary PCI requests (configuration, input/output (I/O), or memory read/write) and interrupts. At the physical level, a link may consist of one or more lanes. Low-speed peripherals (such as 802.11 Wi-Fi cards) use a single-lane (x1) link, while graphics adapters typically use a wider and faster 16-lane (x16) link.

In the following description, a PCI bus or PCIe bus may be used as an example of a computer bus, communication bus, or bus. Similarly, a PCI device or PCIe device may be used as an example of a device coupled to a computer bus, communication bus, or bus. However, the present disclosure is not limited to PCI devices or buses. The description of a PCIe device may be applicable to any other device coupled to any computer bus, communication bus, or bus.

Embodiments disclosed herein include a device for communication, where the device includes a queue and a controller coupled to the queue to manage operation of the queue. The device is coupled to another device via a communication bus. A first communication entity communicates with a second communication entity via the two devices and the communication bus, and a third communication entity communicates with a fourth communication entity via the same two devices and communication bus. The queue has multiple storage spaces. For example, the queue includes a first space for storing first information of a first traffic type having a first flow control class and belonging to a first virtual channel (VC) for communication between the first communication entity and the second communication entity.
The queue also includes a second space for storing second information of a second traffic type having a second flow control class and belonging to a second virtual channel for communication between the third communication entity and the fourth communication entity. The first traffic type is different from the second traffic type, the first flow control class is different from the second flow control class, or the first virtual channel is different from the second virtual channel.

Embodiments disclosed herein include a method for communicating between a transmitter and a receiver coupled to each other via a bus. The method includes sending, from the transmitter to the receiver, a request for a certain amount of reserved storage space of a queue within the receiver. The queue has a plurality of storage spaces, each storage space being used to store information of a traffic type having a flow control class and belonging to a virtual channel for communication between the transmitter and the receiver. The method also includes receiving, by the transmitter, an indication of the certain amount of reserved space from the receiver in response to the sent request.

Embodiments disclosed herein include a device for computing. The device includes a printed circuit board (PCB) having a selected one of a peripheral component interconnect (PCI) bus, a PCI Extended (PCI-X) bus, or a PCI Express bus. The device also includes: a first bus agent disposed on the PCB and coupled to the bus; and a second bus agent disposed on the PCB and coupled to the bus. At least one bus agent selected from the first bus agent or the second bus agent includes a queue, and a controller coupled to the queue to manage operation of the queue. The queue includes a plurality of storage spaces. Specifically, the queue includes a first space for storing first information of a first traffic type having a first flow control class and belonging to a first virtual channel for communication between the first bus agent and the second bus agent. The queue also includes a second space for storing second information of a second traffic type having a second flow control class and belonging to a second virtual channel for communication between the first bus agent and the second bus agent. The first traffic type is different from the second traffic type, the first flow control class is different from the second flow control class, or the first virtual channel is different from the second virtual channel.

Various operations will be described as multiple discrete operations, in turn, in a manner that is most helpful for understanding the illustrative embodiments; however, the order of description should not be construed as implying that these operations are necessarily order dependent. In particular, these operations need not be performed in the order of presentation.

The phrases "in various embodiments," "in some embodiments," and the like are used repeatedly. Such phrases do not necessarily refer to the same embodiment, although they may. Unless the context indicates otherwise, the terms "comprising," "having," and "including" are synonymous. The phrase "A and/or B" means (A), (B), or (A and B). The phrases "A/B" and "A or B" are similar to the phrase "A and/or B" and mean (A), (B), or (A and B). For the purposes of the present disclosure, the phrase "at least one of A and B" means (A), (B), or (A and B).
The description may use the phrases "in one embodiment," "in an embodiment," "in some embodiments," and/or "in various embodiments," which may each refer to one or more of the same or different embodiments. Furthermore, the terms "comprising," "including," "having," and the like, as used with respect to embodiments of the present disclosure, are synonymous.

Example embodiments may be described as a process depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations may be performed in parallel, concurrently, or simultaneously. In addition, the order of the operations may be rearranged. A process may be terminated when its operations are completed, but may also have additional steps not included in the figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, and so forth. When a process corresponds to a function, its termination may correspond to the function returning to the calling function and/or to the main function.

Example embodiments may be described in the general context of computer-executable instructions (such as program code, software modules, and/or functional processes) executed by one or more of the aforementioned circuits. The program code, software modules, and/or functional processes may include routines, programs, objects, components, data structures, etc., that perform particular tasks or implement particular data types. The program code, software modules, and/or functional processes discussed in the present disclosure may be implemented using existing hardware in existing communication networks. For example, the program code, software modules, and/or functional processes discussed in the present disclosure may be implemented using existing hardware at existing network elements or control nodes.

As used in the present disclosure, the term "circuit" refers to, is part of, or includes hardware components such as an electronic circuit, a logic circuit, a processor (shared, dedicated, or group) and/or memory (shared, dedicated, or group), an application specific integrated circuit (ASIC), a field-programmable device (FPD) (e.g., a field-programmable gate array (FPGA), a programmable logic device (PLD), a complex PLD (CPLD), a high-capacity PLD (HCPLD), a structured ASIC, or a programmable system-on-a-chip (SoC)), a digital signal processor (DSP), etc., that is configured to provide the described functionality. In some embodiments, the circuit may execute one or more software or firmware programs to provide at least some of the described functionality.

As used in the present disclosure, the term "processor circuit" may refer to a circuit, or part of a circuit as defined above, that can sequentially and automatically carry out a sequence of arithmetic or logical operations, or record, store, and/or transfer digital data. The term "processor circuit" may refer to one or more application processors, one or more baseband processors, a physical central processing unit (CPU), a single-core processor, a dual-core processor, a triple-core processor, a quad-core processor, and/or any other device capable of executing or otherwise operating computer-executable instructions (such as program code, software modules, and/or functional processes).
As used in the present disclosure, the term "interface circuit" may refer to a circuit, or part of a circuit as defined above, that provides for the exchange of information between two or more components or devices. The term "interface circuit" may refer to one or more hardware interfaces (e.g., a bus, an input/output (I/O) interface, a peripheral component interface, a network interface card, and the like). As used in the present disclosure, the term "instantiate" and the like may refer to the creation of an instance, while an "instance" may refer to a concrete occurrence of an object, which may occur, for example, during execution of program code.

As used in the present disclosure, the term "computer device" may describe any physical hardware device capable of sequentially and automatically carrying out a sequence of arithmetic or logical operations, equipped to record/store data on a machine-readable medium, and to transmit and receive data from one or more other devices in a communication network. A computer device may be considered synonymous with, and may sometimes hereinafter be referred to as, a computer, a computing platform, a computing device, etc. The term "computer system" may include any type of interconnected electronic devices, computer devices, or components thereof. Additionally, the terms "computer system" and/or "system" may refer to various components of a computer that are communicatively coupled to one another. Furthermore, the terms "computer system" and/or "system" may refer to multiple computer devices and/or multiple computing systems that are communicatively coupled to one another and configured to share computing resources and/or network resources. As used in the present disclosure, the term "user equipment" or "UE" may refer to a device with radio communication capabilities, such as a computer device, and may describe a remote user of network resources in a communication network. The term "user equipment" or "UE" may be considered synonymous with, and may sometimes hereinafter be referred to as, a client, a mobile, a mobile device, a mobile terminal, a user terminal, a mobile unit, a mobile station, a mobile user, a subscriber, a user, a remote station, an access agent, a user agent, a receiver, a radio device, a reconfigurable radio device, a reconfigurable mobile device, etc.

Examples of "computer devices," "computer systems," "UEs," etc. may include a cellular or smart phone, a feature phone, a tablet personal computer, a wearable computing device, an autonomous sensor, a laptop computer, a desktop personal computer, a video game console, a digital media player, a handheld messaging device, a personal data assistant, an e-book reader, an augmented reality device, a server computer device (e.g., standalone, rack-mounted, blade servers, etc.), a cloud computing service/system, a network element, an in-vehicle infotainment (IVI) system, an in-car entertainment (ICE) device, an instrument cluster (IC), a head-up display (HUD) device, an on-board diagnostic (OBD) device, dashtop mobile equipment (DME), a mobile data terminal (MDT), an electronic engine management system (EEMS), an electronic/engine control unit (ECU), an electronic/engine control module (ECM), an embedded system, a microcontroller, a control module, an engine management system (EMS), a networked or "smart" appliance, a machine-type communication (MTC) device, a machine-to-machine (M2M) device, an Internet of Things (IoT) device, and/or any other similar electronic device.
Moreover, the term "vehicle-embedded computer device" may refer to any computer device and/or computer system that is physically mounted on, built into, or otherwise embedded in a vehicle.

As used in the present disclosure, the term "network element" may be considered synonymous with, and/or referred to as, a networked computer, networking hardware, network equipment, a router, a switch, a hub, a bridge, a radio network controller, radio access network equipment, a gateway, a server, and/or any other similar device. The term "network element" may describe a physical computing device of a wired or wireless communication network that is configured to host a virtual machine. Furthermore, the term "network element" may describe equipment that provides radio baseband functionality for data and/or voice connectivity between a network and one or more users. The term "network element" may be considered synonymous with, and/or referred to as, a "base station." As used in the present disclosure, the term "base station" may be considered synonymous with, and/or referred to as, a node B, an enhanced or evolved node B (eNB), a next-generation node B (gNB), a base transceiver station (BTS), an access point (AP), a roadside unit (RSU), etc., and may describe equipment that provides radio baseband functionality for data and/or voice connectivity between a network and one or more users. The term "RSU" may refer to any transportation infrastructure entity implemented in a gNB/eNB or a stationary (or relatively stationary) UE. An RSU implemented in a UE may be referred to as a "UE-type RSU," and an RSU implemented in an eNB may be referred to as an "eNB-type RSU." As used in the present disclosure, the term "vehicle-to-vehicle" or "V2V" may refer to any communication involving a vehicle as the source or destination of a message. Additionally, the terms "vehicle-to-vehicle" and "V2V" as used in the present disclosure may also encompass or be equivalent to vehicle-to-infrastructure (V2I) communication, vehicle-to-network (V2N) communication, vehicle-to-pedestrian (V2P) communication, or V2X communication.

As used in the present disclosure, the term "channel" may refer to any tangible or intangible transmission medium for transmitting data or a data stream. The term "channel" may be synonymous with and/or equivalent to "communication channel," "data communication channel," "transmission channel," "data transmission channel," "access channel," "data access channel," "link," "data link," "carrier," "radio frequency carrier," and/or any other similar term denoting a pathway or medium through which data is transmitted. Additionally, the term "link" may refer to a connection between two devices over a radio access technology (RAT) for transmitting and receiving information.

FIGS. 1(a)-1(b) show an example apparatus 100, according to various embodiments, that includes a device 101 coupled to another device 103 via a computer bus 105. For clarity, the following description is based on the features of the apparatus 100, the device 101, the device 103, and the computer bus 105. It should be understood that more or fewer components may be included in the apparatus 100, the device 101, the device 103, and the computer bus 105.
In addition, it should be understood that one or more of the devices and components within the apparatus 100 may include additional and/or varying features from the description below, and may include any device that a person of ordinary skill in the art would consider and/or refer to as a host, a device, or a computer bus. In some embodiments, the apparatus 100 is a computer or computing device, and the device 101 and the device 103 are both within a computer enclosed by a common housing or cover. For these embodiments, the device 101 and the device 103 may also be referred to as components. In some other embodiments, the device 101 and the device 103 may be in different computers. In any case, the processor 111 and the interface 131 incorporate the shared-resources-for-communication technology of the present disclosure, e.g., in the form of a device 110 or a device 120, which will be described more fully below after further general description of the device 101 and the device 103.

In an embodiment, as shown in FIG. 1(a), the device 101 may include a processor 111 and a memory 115. An operating system 113 may run on the processor 111, and the operating system 113 may include a system driver 114. The device 103 may be coupled to the processor 111 via the computer bus 105. The device 103 may include an interface 131, a buffer 141, and a memory 143 coupled to the computer bus 105. The interface 131 may include one or more registers, such as a function header register, an authentication header register, an authentication function register, an authentication status register, an authentication control register, a write data mailbox register, a read data mailbox register, or some other register.

In an embodiment, the apparatus 100 may be any computing system or platform, such as a laptop computer, an ultra-laptop computer, a tablet computer, a touchpad computer, a portable computer, a handheld computer, a wearable device, a palmtop computer, a personal digital assistant (PDA), an e-reader, a cellular telephone, a combination cellular telephone/PDA, a mobile smart device (e.g., a smartphone, a smart tablet, etc.), a mobile Internet device (MID), a mobile messaging device, a mobile data communication device, a mobile media playback device, a camera, a mobile game console, and so forth. In an embodiment, the apparatus 100 may also be a non-mobile device, which may include, but is not limited to, for example, a personal computer (PC), a television, a smart TV, a data communication device, a media playback device, a game console, a gateway, an Internet of Things (IoT) device, etc. The apparatus 100 may include a controller (or processor) and other components that execute software or control hardware to run local programs or consumer services provided by an external service provider over a network. For example, the apparatus 100 may include one or more software clients or applications that run locally and/or utilize or access network-based services (e.g., online stores or services, social networking services, etc.). The apparatus 100 may also, or alternatively, include a network interface running in a browser from which such network-based services may be accessed. The apparatus 100 may also include storage devices for storing logic and data associated with the programs and services used by the apparatus 100.

In an embodiment, the processor 111 may be a central processing unit (CPU). In some embodiments, the processor 111 may be a programmable device that can execute a program, such as the system driver 114.
In an embodiment, the processor 111 may be a microcontroller, a 16-bit processor, a 32-bit processor, a 64-bit processor, a single-core processor, a multi-core processor, a digital signal processor, an embedded processor, or any other processor.

In an embodiment, the operating system 113 may be any system software that manages hardware or software resources for the apparatus 100 and may provide services to applications, such as the system driver 114. The operating system 113 may be an Android operating system, iOS, Linux, a real-time operating system (RTOS), an automotive infotainment operating system, among others. For example, the operating system 113 may be a real-time operating system such as VxWorks, PikeOS, eCos, QNX, MontaVista Linux, RTLinux, Windows CE, or another operating system.

In an embodiment, the computer bus 105 may be an external computer bus, an internal computer bus, a serial computer bus, or a parallel computer bus. For example, the computer bus 105 may be a PCI bus, a PCI Extended (PCI-X) bus, a PCI Express bus, a universal serial bus (USB), a parallel advanced technology attachment (PATA) bus, a serial ATA (SATA) bus, an inter-integrated circuit (I2C) bus, an IEEE 1394 (FireWire) interface, a small computer system interface (SCSI) bus, a scalable coherent interface (SCI) bus, or another computer bus.

In an embodiment, the device 103 may be any computer hardware. For example, the device 103 may be a network interface card, a sound card, a video controller, an Ethernet controller, a webcam, a mouse, a Bluetooth controller, a PCI-to-ISA bridge, a GUI accelerator, an ATM controller, a multimedia card, a SCSI controller, a multimedia device, an MPEG-2 video decoder, or any input/output device. In an embodiment, the device 103 may be a PCI device that can be plugged directly into a PCI slot on a computer motherboard. In some other embodiments, the device 103 may be coupled to the processor 111 by a different computer bus.

FIG. 1(b) shows the apparatus 100, the device 101, the device 103, and the computer bus 105 in more detail. The device 101 includes a device 110, and the device 103 includes another device 120, where the device 110 and the device 120 are coupled via the computer bus 105. The computer bus 105 may be a communication bus and may include a plurality of links, for example, a link 151 and a link 153. The device 110 or the device 120 may be a transmitter or a receiver communicating via the computer bus 105. In some embodiments, the device 120 may have a structure substantially similar to that of the device 110. In certain other embodiments, the device 120 may have a structure different from that of the device 110.

In an embodiment, the device 110 includes a queue 142 and a controller 158 for managing operation of the queue 142. Other components, such as a counter 159, may be included in the controller 158. In addition to the queue 142, the device 110 may optionally include a queue 144 and a queue 146. For example, one or more of the queue 142, the queue 144, and the queue 146 form a hierarchical queue 140, where the queue 142, the queue 144, and the queue 146 may have different lengths and may be used in different ways. The queues each include one or more storage spaces. For example, the queue 142 includes a first storage space 147 and a second storage space 149. The queue 144 includes a storage space 154, and the queue 146 includes a storage space 156.

In an embodiment, the device 110 and the device 120 may couple a first communication entity 116 and a second communication entity 133 through the communication bus 105.
A first virtual channel 155 for communication runs between the first communication entity 116 and the second communication entity 133, and the first communication entity 116 and the second communication entity 133 communicate via the device 110 and the device 120, which communicate through the communication bus 105. Traffic 121 from the first communication entity 116 reaches the second communication entity 133 through the first virtual channel 155, where it is stored as traffic 135. For example, the traffic 121 or the traffic 135 includes a collection of smaller information units (e.g., data packets 124) that travel through a virtual channel. In some embodiments, the communication between the first communication entity 116 and the second communication entity 133 may include multiple virtual channels. In some embodiments, the first communication entity 116 and the second communication entity 133 are in the same computer.

In addition, the device 110 and the device 120 may couple a third communication entity 118 and a fourth communication entity 134 via the communication bus 105. A second virtual channel 157 for communication runs between the third communication entity 118 and the fourth communication entity 134, and the third communication entity 118 and the fourth communication entity 134 communicate via the device 110 and the device 120, which communicate via the communication bus 105. Traffic 122 from the third communication entity 118 reaches the fourth communication entity 134 through the second virtual channel 157, where it is stored as traffic 136.

In an embodiment, the first virtual channel 155 or the second virtual channel 157 may include multiple links of the communication bus 105, for example, the link 151 and the link 153. The first virtual channel 155 and the second virtual channel 157 may share some common entities. For example, the first communication entity 116 may be the same as the third communication entity 118, or the second communication entity 133 may be the same as the fourth communication entity 134.

The traffic, which may be referred to as communication traffic, such as the traffic 121, the traffic 122, the traffic 135, or the traffic 136, includes a collection of smaller information units, or simply information. The information or information units may include messages, data packets, or bits. In addition, the information of the traffic 121 may be used at a protocol layer such as the physical layer, link layer, transaction layer, routing layer, transport layer, or application layer. The information of the traffic 121 may be information of a traffic type having a flow control class and belonging to a virtual channel for communication between the first communication entity 116 and the second communication entity 133. For example, the information of the traffic 121 includes a data packet 124, where the data packet 124 includes a header 127, a payload 125, and a flow control class 126. The data packet 124 may be of a first traffic type for data traffic, or of a second traffic type for header (control) traffic. The flow control class 126 may be a posted class, a non-posted class, a completion class, a quality-of-service class, or some other flow control class. In some other embodiments, the information may be a message at the application layer, or bits at the physical layer.
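The per-packet bookkeeping just described can be pictured with a short C sketch. This is purely illustrative: the enum values, field names, and widths below are assumptions made for exposition, not a layout defined by this disclosure or by the PCIe specification.

    #include <stdint.h>

    /* Possible flow control classes for flow control class 126:
     * posted, non-posted, completion, and (optionally) quality of service. */
    enum fc_class { FC_POSTED, FC_NON_POSTED, FC_COMPLETION, FC_QOS };

    /* Traffic types: header (control) traffic versus data traffic. */
    enum traffic_type { TRAFFIC_HDR, TRAFFIC_DATA };

    /* An information unit such as data packet 124: a header 127, a
     * payload 125, and a flow control class 126, tagged with the virtual
     * channel it travels on. */
    struct packet {
        enum fc_class     fc;          /* flow control class 126 */
        enum traffic_type type;        /* Hdr or Data */
        uint8_t           vc;          /* virtual channel, e.g., VC0..VCn */
        uint16_t          hdr_len;     /* length of header 127 in bytes */
        uint16_t          payload_len; /* length of payload 125 in bytes */
        const uint8_t    *payload;     /* payload 125 */
    };

The three tag fields (flow control class, traffic type, virtual channel) are exactly the parameters by which the queues of FIG. 2(a) are partitioned, and by which the shared queues described next are pooled.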
In an embodiment, the queue 142 in the device 110 can be shared by multiple communication traffic flows. For example, the first storage space 147 stores first information of the traffic 121, having a first traffic type and a first flow control class and belonging to the first virtual channel 155, while the second storage space 149 stores second information of the traffic 122, having a second traffic type and a second flow control class and belonging to the second virtual channel 157. Here, the first traffic type is different from the second traffic type, the first flow control class is different from the second flow control class, or the first virtual channel is different from the second virtual channel. For example, the traffic 121 is data traffic, and the traffic 122 is control traffic. The first storage space 147 and the second storage space 149 are within the queue 142 and are managed by the same controller 158. There may be an ordering, such as a sequential order, between the first storage space 147 and the second storage space 149 in the queue 142. For example, the first storage space 147 may be accessed earlier than the second storage space 149, or data or information may be stored into the first storage space 147 before being stored into the second storage space 149.

In an embodiment, the queue 144 or the queue 146 may be used in a different manner than the queue 142. Instead of being shared among multiple communication traffic flows, the queue 144 or the queue 146 may be reserved for a specific kind of traffic, for example, for third information of a third traffic type, a third flow control class, or a third virtual channel. For example, the queue 144 may be reserved for data traffic, while the queue 146 may be reserved for control traffic. More examples of different organizations of the queue 142, the queue 144, and the queue 146 are shown in FIGS. 2(a)-2(d).

In an embodiment, a controller 158 (also referred to as a queue manager) is coupled to a queue, for example, the queue 142, the queue 144, or the queue 146, to manage operation of the queue. In some embodiments, the controller 158 is configured to monitor unused capacity of the queue 142 and to set aside a plurality of reserved spaces in the queue 142, each of which is releasable to store information of a traffic type having a flow control class and belonging to a virtual channel for communication. In an embodiment, the reserved and unreleased spaces are unused spaces. In addition, the controller 158 is configured to release two of the plurality of spaces for use as the first storage space 147 and the second storage space 149. Specifically, the controller 158 is configured to use a counter 159 to perform the operations of monitoring the unused spaces based on information in one or more counters, setting aside a certain number of reserved spaces, and releasing ones of the plurality of spaces (a minimal sketch of this release operation appears below).

In addition, the controller 158 is configured to synchronize the state of the queue 142 with the state of a corresponding queue in the device 120. For example, the controller 158 sends a request to the device 120 for setting aside a certain number of reserved and unreleased spaces in the queue in the device 120. In addition, the controller 158 receives, in response to the sent request, an indication of the certain number of reserved and unreleased spaces set aside in the queue in the device 120. More details of the controller 158 synchronizing queue states are shown in FIGS. 3(a)-3(b).

In addition, the controller 158 is configured to receive a plurality of pieces of information of a traffic type having a flow control class and belonging to a virtual channel for communication, and to maintain the order of the plurality of pieces of information.
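A minimal C sketch of such a shared queue and its release operation follows. The depth, field names, and helper function are illustrative assumptions, not structures mandated by the disclosure:

    #include <stdbool.h>
    #include <stddef.h>

    #define QUEUE_DEPTH 64   /* illustrative depth, not from the disclosure */

    /* One storage space in queue 142; any traffic type / flow control
     * class / virtual channel may occupy it. */
    struct space {
        bool  in_use;
        void *info;   /* stored information unit, e.g., a packet */
    };

    struct shared_queue {
        struct space spaces[QUEUE_DEPTH];
        size_t       unused;   /* unused capacity, tracked via counter 159 */
    };

    /* The controller 158 releases one reserved space for use by incoming
     * traffic; returns NULL when no unused space remains. */
    static struct space *release_space(struct shared_queue *q)
    {
        if (q->unused == 0)
            return NULL;
        for (size_t i = 0; i < QUEUE_DEPTH; i++) {
            if (!q->spaces[i].in_use) {
                q->spaces[i].in_use = true;
                q->unused--;
                return &q->spaces[i];
            }
        }
        return NULL; /* not reached if 'unused' is kept consistent */
    }

In this sketch a single unused-space counter plays the role of the counter 159; a real controller would additionally keep per-FC/VC reservation floors, as discussed with reference to FIGS. 2(b)-2(d) below.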
More details of this ordering operation are shown in FIGS. 4(a)-4(b).

In some embodiments, the first communication entity 116 or the second communication entity 133 may include a central processing unit (CPU) or a processor core (or an application/function operating thereon), a mouse, a disk, a keyboard, a storage device, or an input/output controller. In addition, the first communication entity 116 and the second communication entity 133 may be in the same computer. The communication bus 105 may be a PCI bus, a PCI Extended (PCI-X) bus, a PCI Express bus, a universal serial bus (USB), a parallel advanced technology attachment (PATA) bus, a serial ATA (SATA) bus, an inter-integrated circuit (I2C) bus, an IEEE 1394 (FireWire) interface, a small computer system interface (SCSI) bus, or a scalable coherent interface (SCI) bus.

In some embodiments, the first communication entity 116 or the second communication entity 133 may be a bus agent or link partner disposed on a PCB, and the communication bus 105 may be a selected one of a peripheral component interconnect (PCI) bus, a PCI Extended (PCI-X) bus, or a PCI Express bus. In addition, the device 110 may be part of a bus agent. In other words, both the first communication entity 116 and the device 110 may be parts of a bus agent.

FIGS. 2(a)-2(d) show example resources shared among communication traffic of different flow control classes and virtual channels in a computer bus, according to various embodiments. The mechanisms, with the various alternatives shown in FIGS. 2(a)-2(d), can be applied to the traffic 121, the traffic 122, and the device 110 coupled to the computer bus 105 shown in FIG. 1(b).

An example of a currently existing approach is shown in FIG. 2(a), where a separate queue is used for the information of each combination of traffic type, flow control class, and virtual channel for communication between a first communication entity and a second communication entity. For example, for a first virtual channel, for the control traffic indicated by header (Hdr), a queue 201 is reserved for the posted (P) flow control class, a queue 203 is reserved for the non-posted (NP) flow control class, and a queue 205 is reserved for the completion (Cpl) flow control class. Similarly, for the first virtual channel, for the data traffic indicated by Data, a queue 202 is reserved for the flow control class P, a queue 204 is reserved for the flow control class NP, and a queue 206 is reserved for the flow control class Cpl. In addition, for a second virtual channel, for the control traffic indicated by Hdr, a queue 207 is reserved for the flow control class P, a queue 208 is reserved for the flow control class NP, and a queue 209 is reserved for the flow control class Cpl. Similarly, for the second virtual channel, for the data traffic indicated by Data, a queue 211 is reserved for the flow control class P, a queue 212 is reserved for the flow control class NP, and a queue 213 is reserved for the flow control class Cpl.
Therefore, for first traffic having a first traffic type and a first flow control class and belonging to a first virtual channel, and second traffic having a second traffic type and a second flow control class and belonging to a second virtual channel, if the first traffic type is different from the second traffic type, the first flow control class is different from the second flow control class, or the first virtual channel is different from the second virtual channel, there is a first queue for the first traffic and a second queue for the second traffic.

As shown in FIG. 2(a), each combination of traffic type, flow control class, and virtual channel has its own separate queue. In other words, queues are reserved based on at least three different parameters: traffic type, flow control class, and virtual channel. As PCIe data rates continue to increase, queue sizes grow at a non-linear pace. The growth in queue size can be attributed to the significant increase in latency as data rates scale, which is mainly due to the adoption of channel extension devices such as retimers (each retimer adds about 100 ns to the round trip) together with the increased bandwidth. Other factors that contribute to the growth in queue size include the higher partitioning requirements of applications, which manifest as multiple virtual channels (e.g., for applications such as storage, a x16 Gen 5 link is expected to be partitioned into up to eight x2 links, while a x16 Gen 4 link is expected to be partitioned into up to four x4 links; storage device bandwidth consumption can be met by x4 Gen 4 or x2 Gen 5), higher maximum payload sizes, and increased quality of service (QoS) requirements. As bandwidth doubles with each generation, transaction layer storage requirements also double. The increased latency has a further multiplicative effect on queue size, and the growing number of VCs has yet another multiplicative effect. The transaction layer queue storage can amount to about half of the total PCIe controller (physical layer, link layer, and transaction layer) area. Yet the queues can have very low utilization. For example, if a link carries a high percentage of VC1 posted writes, the link will use up a large portion of the VC1 P (posted) queue, while the Cpl (completion) and NP (non-posted) queues in VC1/VC0 and the posted queue in VC0 are rarely used. Nevertheless, each of these queues must be sized assuming 100% of the traffic targets it, and assuming the worst-case traffic mix (e.g., header queues sized for headers carrying very little data, if any, and data queues sized for the maximum payload size). It is reasonable to expect the overall queue size to double when the bandwidth doubles at the same latency; however, duplicating that storage across every flow control class and VC can increase the storage size by another order of magnitude. The increased queue size not only impacts silicon area, but also makes the back end challenging due to the larger partition size. There are also power impacts in terms of leakage and active power.
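As a rough, illustrative sizing argument (the link rate and round-trip figures below are assumptions chosen for arithmetic convenience, not values taken from this disclosure; 64 GB/s corresponds roughly to one direction of a x16 Gen 5 link): the buffering needed to keep a link busy is approximately the bandwidth-delay product, and dedicating that much storage to every FC class, traffic type, and VC multiplies it many times over.

    \[
    \text{buffering per fully provisioned queue} \approx BW \times t_{RT},
    \qquad \text{e.g.,}\ 64~\mathrm{GB/s} \times 1~\mu\mathrm{s} = 64~\mathrm{KB}.
    \]
    \[
    \text{dedicated sizing: } 3~\text{FC classes} \times 2~\text{traffic types}
    \times 4~\text{VCs} \times 64~\mathrm{KB} = 1536~\mathrm{KB},
    \]
    \[
    \text{pooled sizing: } \approx 64~\mathrm{KB} + \text{small per-FC/VC dedicated floors}.
    \]

Under these assumed numbers, fully dedicated sizing costs roughly 24 times the storage of a shared pool, which is consistent with the order-of-magnitude growth described above.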
Embodiments of the present disclosure may share resources, such as queue space, across multiple flow control classes and VCs to reduce the total queue size requirement. In other words, the queues in embodiments of the present disclosure are shared by multiple traffic flows that may have at least one parameter in common, for example, two traffic flows with the same traffic type, the same flow control class, or the same VC, while differing in the other parameters. In addition, in the PCIe context, corrupted data link layer packets (DLLPs) are discarded; therefore, the mechanisms for credit negotiation and updates may need to be resilient to this.

Accordingly, embodiments of the present disclosure may define robust mechanisms that work across DLLP corruption/discard to synchronize communication between a transmitter (Tx, the entity initiating transactions) and a receiver (Rx, the entity receiving transactions from the Tx and returning credits). Embodiments of the present disclosure may thereby enable smaller area, lower chip cost, a better feature set, and lower power, while delivering full bandwidth. In an embodiment, a credit may refer to a certain amount of storage space in a queue used to store information. Embodiments of the present disclosure may rely on a common credit pool, such as an amount of storage space in a shared queue, that may be shared across the various flow control (FC) classes (e.g., posted, non-posted, and completion (P/NP/Cpl)), across traffic types (e.g., Hdr and/or Data), and across different VCs. Even though these flow control classes share a common pool or shared queue, PCIe ordering rules may still be enforced across the FC classes of each VC in the same or a similar manner as in conventional implementations.

In an embodiment, as shown in FIG. 2(b), a hierarchical queue including multiple queues may be used to provide explicit but minimal P/NP/Cpl Hdr and Data credits, plus pooled credits (shared queues) for the headers and the data of each VC, respectively. Specifically, multiple hierarchical queues may be used, with two hierarchical queues for each virtual channel. For VC0, there is a hierarchical queue 214 for control traffic and a hierarchical queue 219 for data traffic. Each hierarchical queue (e.g., the hierarchical queue 214) includes a shared queue 215 and separately reserved queues for the different flow control classes (e.g., a queue 216, a queue 217, and a queue 218). Similarly, for VCn, there is a hierarchical queue 221 for control traffic and a hierarchical queue 222 for data traffic. Each hierarchical queue (e.g., the hierarchical queue 214, the hierarchical queue 219, the hierarchical queue 221, or the hierarchical queue 222) may be similar to the hierarchical queue 140 shown in FIG. 1(b).

In this approach, credits are negotiated (e.g., advertised and released) separately for each FC/VC Hdr/Data and for the pooled credits of each VC's Hdr and Data. The data link layer packet (DLLP) is enhanced with additional encodings for the pooled credits. In a variation, the shared pool credits are not explicitly advertised but are implicitly managed by the receiver. Once a transaction for an FC/VC is received (e.g., P Hdr VC0), if credits are available from the shared pool or shared queue (e.g., from the plurality of storage spaces in the queue), the receiver can return the corresponding credits to the transmitter (even if no transaction has been popped from its internal queue). In an embodiment, the Rx can maintain a set of counters that monitor queue utilization across the requests received from the transmitter. When a transaction is popped from the Rx queue, the Rx decides whether the credits are returned to the pooled credit pool or to the dedicated credit pool. Since the Rx tracks the transmitter's view of the credits in its own counters, it can readily detect whether the Tx view of the credits is below the minimum threshold for the FC/VC, in which case the credits can be returned to the queue's dedicated pool. The DLLP mechanism for returning credits to the Tx is the same as currently exists.
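The dedicated-versus-pooled bookkeeping described above can be sketched in a few lines of C. The structure and function names are assumptions made for illustration; this is not the PCIe credit format, and a real implementation would track header and data credits in DLLP-defined units:

    #include <stdbool.h>
    #include <stdint.h>

    /* Per-FC/VC/type credit state kept at the transmitter: a dedicated
     * reserve plus a reference to a pool shared across FC classes. */
    struct credit_state {
        uint32_t  dedicated;  /* credits reserved for this FC/VC/type */
        uint32_t *pool;       /* pooled credits shared across FC/VC/type */
    };

    /* Consume one credit before transmitting: prefer the shared pool and
     * fall back to the dedicated reserve, so forward progress is kept
     * even when the pool is exhausted. Returns false if no credit. */
    static bool consume_credit(struct credit_state *cs, bool *used_pool)
    {
        if (*cs->pool > 0) {
            (*cs->pool)--;
            *used_pool = true;
            return true;
        }
        if (cs->dedicated > 0) {
            cs->dedicated--;
            *used_pool = false;
            return true;
        }
        return false;   /* stall until a credit-update DLLP arrives */
    }

    /* On a credit return, the 'used_pool' bit carried with the
     * transaction tells us which pool to replenish. */
    static void return_credit(struct credit_state *cs, bool used_pool)
    {
        if (used_pool)
            (*cs->pool)++;
        else
            cs->dedicated++;
    }

The used_pool flag corresponds to the single bit, discussed later with reference to FIG. 4(b), recording whether a transaction consumed a pooled or a dedicated credit, so the credit can be returned to the correct pool when the transaction is popped.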
In an embodiment, as shown in FIG. 2(c), a further optimization is possible by keeping explicit P/NP/Cpl Hdr and Data credits for each VC but providing a common credit pool for Hdr and a common credit pool for Data, each shared across all VCs and FC classes. Specifically, there may be a hierarchical queue 223 and a hierarchical queue 224 for a device such as the device 110. The hierarchical queue 223 includes a shared queue for control traffic that is shared among all virtual channels and among all flow control classes. In addition, the hierarchical queue 223 includes a separately reserved queue for each combination of virtual channel and flow control class. The hierarchical queue 224 has a structure similar to that of the hierarchical queue 223. Each hierarchical queue, for example, the hierarchical queue 223 or the hierarchical queue 224, may be similar to the hierarchical queue 140 shown in FIG. 1(b). The credit mechanism of FIG. 2(c) is similar to the approach of FIG. 2(b), except that all the VCs share two sets of pooled credits (one for Hdr and one for Data). The same extension of implicitly managing a shared credit pool at the Rx also applies.

In an embodiment, as shown in FIG. 2(d), the pooling approach may be optimized further so that only pooled credits, or shared queues, that can be shared across FC classes, VCs, and the Hdr and Data traffic types are provided. For example, there may be a queue 231 shared by all control traffic of the different flow control classes and virtual channels, a queue 232 shared by all data traffic of the different flow control classes and virtual channels, or a queue 241 shared by all data traffic and control traffic of the different flow control classes and virtual channels. Each queue (e.g., the queue 231, the queue 232, or the queue 241) may be similar to the queue 142 shown in FIG. 1(b). In this approach, to ensure forward progress, each transmitter may ensure that it reserves the minimum credits required to support the largest possible transaction size for each FC/VC, without using these credits for any other FC/VC. More detailed operations of the transmitter and receiver are shown in FIGS. 3(a)-3(b) below.

FIGS. 3(a)-3(b) illustrate example protocols, such as a protocol 310 and a protocol 320, for sharing resources among communication traffic of different flow control classes and different virtual channels in a computer bus, between a transmitter (Tx) 302 and a receiver (Rx) 304, according to various embodiments. The transmitter 302 and the receiver 304 shown in FIGS. 3(a)-3(b) may be examples of the device 110 or the device 120 coupled by the computer bus 105, as shown in FIG. 1(b). The transmitter 302 and the receiver 304 may also be referred to as a requester or initiator, a link partner, or a bus agent. There may be many different protocols, or pooling mechanisms, for sharing resources among communication traffic of different flow control classes and different virtual channels in a computer bus, of which the protocol 310 and the protocol 320 are just two examples.

In an embodiment, as shown in FIG. 3(a), the protocol 310 begins at interaction 301, where the transmitter 302 sends to the receiver 304 a request for a certain amount of reserved storage space in a queue within the receiver 304. The request sent at interaction 301 may include information identifying the traffic, such as a flow control class, a traffic type (Hdr or Data), and a virtual channel.
The receiver 304 may have a queue similar to the queue 142 shown in FIG. 1(b), the queue having a plurality of storage spaces, each storage space being used to store information of a traffic type having a flow control class and belonging to a virtual channel for communication between the transmitter 302 and the receiver 304.

In addition, at interaction 303, the protocol 310 includes: in response to the sent request, receiving, by the transmitter 302 from the receiver 304, a reply including an indication of the certain amount of space reserved in the queue of the receiver 304. The reply received at interaction 303 may include the same or similar information identifying the traffic, such as the flow control class, the traffic type (Hdr or Data), and the virtual channel. Optionally, the protocol 310 includes interaction 305, at which the certain amount of space is reserved in the queue of the receiver 304.

In an embodiment, as shown in FIG. 3(b), the protocol 320 illustrates a mechanism for guaranteeing forward progress even if an intermediate DLLP is lost. The explicit handshake between the Tx 302 and the Rx 304 ensures that the reserved storage space is usable at both the Tx 302 and the Rx 304. Therefore, when a data packet sent by the Tx 302 is received by the Rx 304, the Rx 304 has space to receive the sent data packet; both the Tx 302 and the Rx 304 account for corresponding space for the sent data packet. For example, the Tx 302 maintains a dedicated credit counter for each FC/VC, for Hdr and Data respectively, as well as a common pool counter (providing credits beyond those reserved as dedicated).

In an embodiment, the DLLP is extended to include one bit for each FC/VC/Hdr and Data in both the Tx and Rx directions.

After initialization, the Tx 302 will have reserved a certain number of credits, for example, storage spaces in a shared queue, dedicated to each FC/VC/Hdr and Data.

In an embodiment, at interaction 311, a request is sent from the Tx 302 to the Rx 304, where the request includes an Update_FC DLLP with the bits set for all FC/VC/Hdr and Data. Similarly, in any request sent from the Rx 304 to the Tx 302, the Update_FC DLLP in the request has the same bits set. In an embodiment, if at any point the Tx 302 uses the dedicated credits of any FC/VC/Hdr/Data to send information for the corresponding FC/VC/Hdr/Data, then the Tx 302 starts sending Update_FC DLLPs with the corresponding bit deasserted.

When the Rx 304 receives such a DLLP (without any errors) at interaction 318, the Rx 304 flips the corresponding bit when sending an acknowledging Update_FC DLLP to the Tx 302. Thus, at interaction 318, the Rx 304 indicates that it has recorded the fact that the Tx 302 has exhausted the dedicated credits for this FC/VC/Hdr/Data.

From this point on, the Rx monitors two events. The first event is ensuring that the Rx has enough free space to account for the reserved credits of this FC/VC/Hdr/Data; in an embodiment, the Rx can accumulate pool credits, such as storage spaces in a shared queue, and not return them to the Tx 302. The second event is the Rx 304 receiving from the Tx 302 an Update_FC DLLP with the corresponding bit set, which can be sent at interaction 319. The second event serves as a confirmation that the Tx observed the Rx's Update_FC DLLP deassertion of the corresponding FC/VC/Hdr/Data. When the second event occurs, at interaction 313, the Rx 304 begins sending Update_FC DLLPs with the bit set for this FC/VC/Hdr/Data, which indicates to the Tx that it can now replenish its dedicated credits for this FC/VC/Hdr/Data.
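The bit-flip handshake can be paraphrased as a small state machine. A possible C sketch follows; the state names and the function are assumptions made for illustration, and the single tx_bit models the per-FC/VC/Hdr/Data bit of the Update_FC DLLP described above, not a normative DLLP encoding:

    /* Per-FC/VC/Hdr-or-Data handshake state, one instance per tracked
     * credit class, as seen from the receiver. */
    enum hs_state {
        HS_IDLE,        /* dedicated credits intact; both sides send bit = 1    */
        HS_TX_DIPPED,   /* Tx used dedicated credits; Tx now sends bit = 0      */
        HS_RX_ACKED,    /* Rx saw bit = 0 and flips its own bit to 0 in its ack */
        HS_REPLENISH    /* Rx freed reserved space and saw Tx bit = 1 again;    */
                        /* Rx sends bit = 1: Tx may refill dedicated credits    */
    };

    /* Advance the receiver's view of the handshake for one credit class. */
    static enum hs_state rx_step(enum hs_state s, int tx_bit, int space_freed)
    {
        switch (s) {
        case HS_IDLE:
            return tx_bit ? HS_IDLE : HS_TX_DIPPED;   /* interactions 311/318 */
        case HS_TX_DIPPED:
            return HS_RX_ACKED;                       /* Rx flips its bit     */
        case HS_RX_ACKED:
            /* Wait for both events: reserved space freed AND the Tx bit set
             * again (interaction 319), before signaling replenish
             * (interaction 313). */
            return (space_freed && tx_bit) ? HS_REPLENISH : HS_RX_ACKED;
        case HS_REPLENISH:
            return HS_IDLE;
        }
        return s;
    }

Because each step is confirmed by the link partner before the next is taken, a lost or corrupted (and therefore discarded) Update_FC DLLP merely delays a transition; it cannot leave the two sides with inconsistent views of the dedicated credits.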
The Rx 304 can ensure that, if there are multiple FC/VC/Hdr/Data combinations waiting for dedicated credits, enough reserved space has been released for all of those FC/VC/Hdr/Data combinations before returning Update_FC DLLPs with the corresponding bits set. If the mechanism is implemented symmetrically for bidirectional flows, one bit in each direction can be used for each FC/VC/Hdr/Data participating in this flow.

The pooling mechanism can be negotiated between the link partners or bus agents (which may be the transmitter 302 or the receiver 304) as part of the initial flow control credit (Init_FC) negotiation immediately after link establishment. If a link partner does not support pooled credits, a default credit allocation similar to existing approaches must be followed. For example, an Init_FC handshake can be performed to advertise the default P/NP/Cpl credits, and a new Init_FC1 encoding supporting pooling can be advertised along with the pooled credits and the number of each of the P/NP/Cpl credits that will be contributed to the common pool if the link partner supports credit pooling.

Additionally or alternatively, a requester (and a root port assuming the requester role for peer-to-peer traffic in the outbound direction) may advertise limited completion credits, as long as the initiator of a non-posted (NP) transaction can absorb the responses to its NP requests and there are no pre-2.1 PCI devices in the hierarchy. Embodiments of the present disclosure thereby further reduce the burden of completion queue sizing to considering only the round-trip latency of the credit cycle with the link partner (rather than the system-level round-trip latency).

FIGS. 4(a)-4(b) illustrate an example process 400 and an example data structure 410 for sharing resources among communication traffic of different flow control classes and virtual channels in a computer bus, according to various embodiments. As shown in FIGS. 1(a)-1(b), the process 400 and the data structure 410 may be used for resource sharing across multiple flow control classes and multiple virtual channels in the computer bus 105.

The process 400 may start with interaction 401. During interaction 401, an operation may be performed by a transmitter to send to a receiver a request for a certain amount of reserved storage space of a queue in the receiver, the queue having multiple storage spaces. The transmitter and the receiver are coupled to each other via a bus. Each storage space of the queue is used to store information of a traffic type having a flow control class and belonging to a virtual channel for communication between the transmitter and the receiver. For example, at interaction 401, an operation may be performed by the transmitter 302 to send to the receiver 304 a request for a certain amount of reserved storage space of a queue in the receiver 304 having multiple storage spaces, as shown by interaction 301 in FIG. 3(a).

During interaction 403, in response to the sent request, an operation may be performed by the transmitter to receive an indication of the certain amount of reserved space from the receiver. For example, as shown by interaction 303 in FIG. 3(a), at interaction 403, an operation may be performed by the transmitter 302 to receive an indication of the amount of reserved space from the receiver 304 in response to the sent request.

During interaction 405, operations may be performed by the transmitter to receive a plurality of pieces of information of a traffic type having a flow control class and belonging to a virtual channel for communication.
During interaction 407, operations may be performed by the transmitter to maintain the order of the plurality of pieces of information of the traffic type having the flow control class and belonging to the virtual channel. Details of the operations of interaction 405 and interaction 407 are further illustrated in FIG. 4(b).

In an embodiment, as shown in FIG. 4(b), the data structure 410 includes a linked-list structure for enforcing transaction ordering (e.g., information ordering) within each FC class. The data structure 410 includes entries P1, P2, P3, and P4 stored in a shared queue 411, and various pointers and control bits implemented in control logic, which can reside in a controller 413. The four pieces of information (e.g., posted transactions) arriving for the posted traffic of a given VC are P1, followed by P2, P3, and P4. In some embodiments, P1, P2, P3, and P4 are among the plurality of pieces of information, of a traffic type having a flow control class and belonging to a virtual channel, received at interaction 405 of FIG. 4(a). Because storage is allocated in slices from a common pool, these transactions are likely to occupy non-contiguous entries in an arbitrary pattern. During interaction 407 of FIG. 4(a), various links can be established to maintain the order of the pieces of information P1, P2, P3, and P4. For example, the head pointer ("Head ptr") points to P1. As shown by a link 421, the "next ptr" associated with the P1 entry points to the location of P2. As shown by a link 422, the "next ptr" associated with P2 points to the location of P3. As shown by a link 423, the "next ptr" associated with P3 points to P4. P4 is the last entry in the P FC class, and the tail pointer ("Tail ptr") also points to P4. This ensures that P1, P2, P3, and P4 are processed in that order. A single bit indicates whether a transaction has consumed pooled credits or dedicated FC-class credits of a hierarchical queue that includes a dedicated FC-class queue and a shared queue with pooled credits. Therefore, when P1 is processed and removed from the pool, the "Head ptr" will point to P2, and the credit corresponding to P1 is released either to the common pool or to the P credits, depending on whether the transmitter used a pooled credit or a dedicated credit when sending P1. A single bit in a transaction layer packet (TLP) can be used to indicate whether the transaction uses a pooled credit. In an embodiment, the data structure enhancement shown in FIG. 4(b) can also use a single bit to indicate whether a common pool entry is occupied by a transaction or is free for use by an incoming transaction. The hardware consults this list to make room for incoming transactions. Whenever an entry is freed, the availability bit of that pool entry is set appropriately and, if appropriate, the corresponding credit is added back to the pool being freed.
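A compact C sketch of this linked-list bookkeeping follows. The entry layout, the sentinel value -1, and the helper are illustrative assumptions; a hardware implementation would realize the same idea with head/tail registers and per-entry next pointers and control bits:

    #include <stdbool.h>

    /* One entry in the shared queue 411. Entries for one FC class of one
     * VC are chained through 'next' so that ordering survives
     * non-contiguous allocation from the common pool. */
    struct entry {
        bool occupied;   /* pool entry currently holds a transaction   */
        bool used_pool;  /* TLP bit: pooled vs. dedicated credit        */
        int  next;       /* index of next entry in order, -1 at tail    */
        /* ... transaction header/payload would live here ...           */
    };

    /* Per-FC/VC ordering state: head/tail indices into the pool. */
    struct fc_list {
        int head;        /* oldest transaction, e.g., P1 */
        int tail;        /* newest transaction, e.g., P4 */
    };

    /* Pop the oldest transaction (e.g., P1); the head pointer then moves
     * to P2, and the freed credit is returned to the pooled or dedicated
     * count according to 'used_pool'. Returns the popped index or -1. */
    static int pop_head(struct fc_list *l, struct entry pool[],
                        unsigned *pooled_credits, unsigned *dedicated_credits)
    {
        int idx = l->head;
        if (idx < 0)
            return -1;                 /* list empty */
        l->head = pool[idx].next;
        if (l->head < 0)
            l->tail = -1;              /* list is now empty */
        pool[idx].occupied = false;    /* entry available again */
        if (pool[idx].used_pool)
            (*pooled_credits)++;
        else
            (*dedicated_credits)++;
        return idx;
    }

Popping P1 advances the head pointer to P2 and returns P1's credit to whichever pool the used_pool bit names, mirroring the behavior described above.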
FIG. 5 shows an example device suitable for practicing various aspects of the present disclosure, according to various embodiments. A device 500 may incorporate the shared-resources-for-communication technology disclosed herein. As shown, the device 500 may include one or more processors 502, each having one or more processor cores, and, optionally, a hardware accelerator 503 (which may be an ASIC or an FPGA). In alternative embodiments, the hardware accelerator 503 may be part of the processor 502, or the two may be integrated together on an SoC. In addition, the device 500 may include a memory 504, which may be any of a number of known persistent storage media, and a data storage circuit 508 including a module 509. In addition, the device 500 may include an I/O interface 518 coupled to one or more sensors 514 and a display screen 513.

The I/O interface 518 may include a transmitter 523 and a receiver 517. In addition, the device 500 may include a communication circuit 505 that includes a transceiver (Tx) 511 and a network interface controller (NIC) 512. These elements may be coupled to one another via a system bus 506, which may represent one or more buses; in the case of multiple buses, they may be bridged by one or more bus bridges (not shown). A device 531 may be coupled to the system bus 506, and a device 535 may be coupled to a computer bus 539. The device 531 may include an interface 533, and the device 535 may include an interface 537. In an embodiment, the computer bus 506 or the computer bus 539 may be an example of the computer bus 105 shown in FIGS. 1(a)-1(b), and the various devices coupled to the computer bus 506 or the computer bus 539 are examples of the device 101 and the device 103. For example, the processor 502, the accelerator 503, the memory 504, the storage 508, the device 531, the I/O interface 518, the communication circuit 505, and/or the device 535 may include a queue similar to the queue 142 shown in FIG. 1(b), along with the associated logic/elements previously described for sharing space in the queue 142 for communication.

In an embodiment, the processor 502 (also referred to as "processor circuit 502") may be one or more processing elements configured to perform basic arithmetic, logical, and input/output operations by executing instructions. The processor circuit 502 may be implemented as a standalone system/device/package or as part of an existing system/device/package. The processor circuit 502 may be one or more microprocessors, one or more single-core processors, one or more multi-core processors, one or more multi-threaded processors, one or more GPUs, one or more ultra-low-voltage processors, one or more embedded processors, one or more DSPs, one or more FPDs (hardware accelerators) (e.g., FPGAs), structured ASICs, programmable SoCs (PSoCs), etc., and/or other processors or processing/control circuits. The processor circuit 502 may be part of an SoC in which the processor circuit 502 and the other components discussed in the present disclosure are formed into a single IC or a single package. As examples, the processor circuit 502 may include one or more Intel processors; Advanced Micro Devices (AMD) accelerated processing units (APUs); Apple A-series, S-series, or W-series processors; Qualcomm processors; Samsung processors; and so forth.

In an embodiment, the processor circuit 502 may include a sensor hub that may act as a co-processor by processing data obtained from the one or more sensors 514. The sensor hub may include circuitry configured to integrate the data obtained from each of the sensors by performing arithmetic, logical, and input/output operations. In an embodiment, the sensor hub may be capable of time-stamping obtained sensor data, providing sensor data to the processor circuit 502 in response to queries for such data, buffering sensor data, continuously streaming sensor data to the processor circuit 502 (including separate streams for each of the one or more sensors 514), reporting sensor data based on predefined thresholds or conditions/triggers, and/or other similar data processing functions.

In an embodiment, the memory 504 (also referred to as "storage circuit 504," etc.) may be a circuit configured to store data or logic for operating the computer device 500.
The storage circuit 504 can include multiple memory devices to provide a given amount of system memory. As examples, the storage circuit 504 can be any suitable type and number of volatile memory devices (e.g., random access memory (RAM), dynamic RAM (DRAM), static RAM (SRAM), etc.) and non-volatile memory devices (e.g., read-only memory (ROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), flash memory, anti-fuse, etc.), configured in any suitable known implementation manner, and/or combinations thereof. In various embodiments, each memory device can be formed of any number of different package types, such as a single die package (SDP), a dual die package (DDP) or a quad die package, a dual in-line memory module (DIMM) (e.g., microDIMM or MiniDIMM), and/or any other similar memory device. In order to provide persistent storage of information such as data, applications, operating systems, etc., the storage circuit 504 may include one or more mass storage devices, such as a solid-state disk drive (SSDD); a flash memory card (such as an SD card, a microSD card, an xD picture card, etc.) or a USB flash drive; on-chip memory or registers associated with the processor circuit 502 (for example, in a low-power implementation); a micro hard disk drive (HDD); and three-dimensional cross-point (3D XPoint) memory. Where FPDs are used, the processor circuit 502 and the storage circuit 504 (and/or the data storage circuit 508) may include logic blocks or logic structures, memory cells, input/output (I/O) blocks, and other interconnect resources that can be programmed to perform the various functions of the example embodiments discussed in the present disclosure. The memory cells may be used to store data in lookup tables (LUTs) that are used by the processor circuit 502 to implement various logic functions. The memory cells may include any combination of various levels of storage/memory, including, but not limited to, EPROM, EEPROM, flash memory, SRAM, anti-fuse, etc. In an embodiment, the data storage circuitry 508 (also referred to as "storage circuitry 508", etc.), with a shared or dedicated controller, may provide persistent storage of information such as the module 509, an operating system, etc. The data storage circuitry 508 may be implemented as a solid-state drive (SSD); a solid-state disk drive (SSDD); a serial AT attachment (SATA) storage device (e.g., a SATA SSD); a flash drive; a flash memory card (e.g., an SD card, a microSD card, an xD picture card, etc.) or a USB flash drive; a three-dimensional cross-point (3D XPoint) storage device; on-chip memory or registers associated with the processor circuitry 502; a hard disk drive (HDD); a micro HDD; a resistance change memory; a phase change memory; a holographic memory; or a chemical memory, etc. As shown, the data storage circuitry 508 is included in the computer device 500. However, in other embodiments, the data storage circuitry 508 may be implemented as one or more devices separate from the other elements of the computer device 500. In some embodiments, the data storage circuitry 508 may include an operating system (OS) (not shown), which may be a general-purpose operating system or an operating system written and customized specifically for the computer device 500.
The OS may include one or more drivers, libraries, and/or application programming interfaces (APIs) that provide program code and/or software components for the module 509 and/or control system configurations to control the one or more sensors 514 and/or to obtain or process data therefrom. Module 509 may be a software module/component for performing various functions of the computer device 500. In embodiments where the processor circuit 502 and the storage circuit 504 include a hardware accelerator (e.g., an FPGA unit, the hardware accelerator 503) as well as processor cores, the hardware accelerator (e.g., the FPGA unit) may be pre-configured with logic (e.g., using appropriate bitstreams, logic blocks/structures, etc.) to perform certain functions of embodiments of the present disclosure (instead of using programming instructions executed by the processor cores). For example, module 509 may include logic for the corresponding entities discussed with respect to the display screen 513, the transmitter 523, and the receiver 517. The components of the computer device 500 can communicate with each other via the bus 506. The bus 506 can include any of a variety of technologies, such as a local interconnect network (LIN); industry standard architecture (ISA); extended ISA (EISA); PCI; PCI extended (PCIx); PCIe; an inter-integrated circuit (I2C) bus; a serial peripheral interface (SPI) bus; a common application programming interface (CAPI); a point-to-point interface; a power bus; a proprietary bus (e.g., Ultra Path Interface (UPI), Accelerator Link (IAL), or another proprietary bus used in SoC-based interfaces); or any other technology. In some embodiments, the bus 506 can be a controller area network (CAN) bus system, a time-triggered protocol (TTP) system, or a FlexRay system, which can allow various devices (e.g., the one or more sensors 514, etc.) to communicate with each other using messages or frames. The communication circuit 505 may include circuits for communicating with a wireless network or a wired network. For example, the communication circuit 505 may include the transceiver (Tx) 511 and the network interface controller (NIC) 512. The communication circuit 505 may include one or more processors (e.g., baseband processors, modems, etc.) dedicated to a specific wireless communication protocol. The NIC 512 may be included to provide a wired communication link to a network and/or other devices. The wired communication may provide an Ethernet connection, Ethernet over USB, etc., or may be based on other types of networks, such as DeviceNet, ControlNet, Data Highway+, PROFIBUS, or PROFINET, etc. An additional NIC 512 may be included to allow connection to a second network (not shown) or to other devices, for example, a first NIC 512 providing communication to a network via Ethernet, and a second NIC 512 providing communication to other devices via another type of network, such as a personal area network (PAN) including a personal computer (PC) device. In some embodiments, various components of the device 500, such as the one or more sensors 514, etc., may be connected to the processor 502 via the NIC 512 discussed above rather than via the I/O circuit 518 discussed below. The Tx 511 may include one or more radios to communicate wirelessly with a network and/or other devices. The Tx 511 may include hardware devices that can communicate with a wireless network and/or other devices using modulated electromagnetic radiation through a solid or non-solid medium.
Such hardware devices may include switches, filters, amplifiers, antenna elements, and the like, to facilitate over-the-air (OTA) communications by generating or otherwise producing radio waves to send data to one or more other devices, and converting received signals into usable information, such as digital data, that can be provided to one or more other components of the computer device 500. In some embodiments, various components of the device 500 (e.g., the one or more sensors 514, etc.) may be connected to the device 500 via the Tx 511 discussed above rather than via the I/O circuit 518 discussed below. In an example, the one or more sensors 514 may be coupled to the device 500 via a short-range communication protocol. The Tx 511 may include one or more radios compatible with any number of 3GPP (3rd Generation Partnership Project) specifications, particularly Long Term Evolution (LTE), LTE-Advanced (LTE-A), LTE-Advanced Pro (LTE-A Pro), and Fifth Generation (5G) New Radio (NR). It should be noted that radios compatible with many other fixed, mobile, or satellite communication technologies and standards may be selected. These may include, for example, any cellular wide-area wireless communication technology, which may generally include, for example, 5G communication systems, Global System for Mobile Communications (GSM) radio communication technology, General Packet Radio Service (GPRS) radio communication technology, or Enhanced Data Rates for GSM Evolution (EDGE) radio communication technology. Other 3rd Generation Partnership Project (3GPP) radio communication technologies that may be used include UMTS (Universal Mobile Telecommunications System), FOMA (Freedom of Mobile Multimedia Access), 3GPP LTE (Long Term Evolution), 3GPP LTE Advanced (Long Term Evolution Advanced), 3GPP LTE Advanced Pro (Long Term Evolution Advanced Pro), CDMA2000 (Code Division Multiple Access 2000), CDPD (Cellular Digital Packet Data), Mobitex, 3G (3rd Generation), CSD (Circuit Switched Data), HSCSD (High Speed Circuit Switched Data), UMTS (3G) (Universal Mobile Telecommunications System (3rd Generation)), W-CDMA (UMTS) (Wideband Code Division Multiple Access (Universal Mobile Telecommunications System)), HSPA (High Speed Packet Access), HSDPA (High Speed Downlink Packet Access), HSUPA (High Speed Uplink Packet Access), HSPA+ (High Speed Packet Access Plus), UMTS-TDD (Universal Mobile Telecommunications System - Time Division Duplex), TD-CDMA (Time Division - Code Division Multiple Access), TD-SCDMA (Time Division - Synchronous Code Division Multiple Access), 3GPP Rel.8 (Pre-4G) (3rd Generation Partnership Project Release 8 (Pre-4th Generation)), 3GPP Rel.9 (3rd Generation Partnership Project Release 9), 3GPP Rel.10 (3rd Generation Partnership Project Release 10), 3GPP Rel.11 (3rd Generation Partnership Project Release 11), 3GPP Rel.12 (3rd Generation Partnership Project Release 12), 3GPP Rel.13 (3rd Generation Partnership Project Release 13), 3GPP Rel.14 (3rd Generation Partnership Project Release 14), 3GPP LTE Extra, LTE License Assisted Access (LAA), UTRA (UMTS Terrestrial Radio Access), E-UTRA (Evolved UMTS Terrestrial Radio Access), LTE Advanced (4G) (Long Term Evolution Advanced (4th Generation)), cdmaOne (2G), CDMA2000 (3G) (Code Division Multiple Access 2000 (3rd Generation)), EV-DO (Evolution-Data Optimized or Evolution-Data Only), AMPS (1G) (Advanced Mobile Phone System (1st Generation)), TACS/ETACS (Total Access Communication System/Extended Total Access Communication System), D-AMPS (2G)
(Digital AMPS (2nd Generation)), PTT (Push to Talk), MTS (Mobile Telephone System), IMTS (Improved Mobile Telephone System), AMTS (Advanced Mobile Telephone System), OLT (Norwegian for Offentlig Landmobil Telefoni, Public Land Mobile Telephony), MTD (Swedish abbreviation for Mobiltelefonisystem D, or Mobile Telephone System D), Autotel/PALM (Public Automated Land Mobile), ARP (Finnish for Autoradiopuhelin, "car radio phone"), NMT (Nordic Mobile Telephony), Hicap (a high-capacity version of NTT (Nippon Telegraph and Telephone)), CDPD (Cellular Digital Packet Data), Mobitex, DataTAC, iDEN (Integrated Digital Enhanced Network), PDC (Personal Digital Cellular), CSD (Circuit Switched Data), PHS (Personal Handy-phone System), WiDEN (Wideband Integrated Digital Enhanced Network), iBurst, Unlicensed Mobile Access (UMA, also referred to as the 3GPP Generic Access Network, or GAN standard), Wireless Gigabit Alliance (WiGig) standards, and mmWave standards (wireless systems operating at 10-90 GHz and above, such as WiGig, IEEE 802.11ad, IEEE 802.11ay, etc.). In addition to the standards listed above, any number of satellite uplink technologies may be used for the uplink transceiver, including, for example, radios compliant with standards published by the ITU (International Telecommunication Union) or the ETSI (European Telecommunications Standards Institute), among others. The examples provided in the present disclosure are therefore understood to be applicable to various other communication technologies, both existing and yet to be developed. The implementations, components, and details of the aforementioned protocols may be known in the art and are omitted here for the sake of brevity. The input/output (I/O) interface 518 may include circuitry, such as an external expansion bus (e.g., Universal Serial Bus (USB), FireWire, Thunderbolt, PCI/PCIe/PCIx, etc.), for connecting the computer device 500 to external components/devices, such as the one or more sensors 514. The I/O interface circuitry 518 may include any suitable interface controllers and connectors to interconnect one or more of the processor circuitry 502, the memory circuitry 504, the data storage circuitry 508, the communication circuitry 505, and the other components of the computer device 500. The interface controllers may include, but are not limited to, memory controllers, storage controllers (e.g., redundant array of independent disks (RAID) controllers, baseboard management controllers (BMCs)), input/output controllers, host controllers, etc. The connectors may include, for example, buses (e.g., the bus 506), ports, slots, jumpers, interconnect modules, sockets, modular connectors, etc. The I/O circuitry 518 may couple the device 500 to the one or more sensors 514, etc., via a wired connection (e.g., using USB, FireWire, Thunderbolt, RCA, video graphics array (VGA), digital visual interface (DVI) and/or mini-DVI, high-definition multimedia interface (HDMI), S-Video, etc.). The one or more sensors 514 may be any device configured to detect an event or environmental change, convert the detected event into an electrical signal and/or digital data, and transmit/send the signal/data to the computer device 500. The one or more sensors 514 may be sensors for providing computer-generated sensory input. Some of the one or more sensors 514 may be sensors for motion and/or object detection.
Examples of such one or more sensors 514 may include, among others, a charge-coupled device (CCD), a complementary metal-oxide-semiconductor (CMOS) active pixel sensor (APS), a lensless image capture device/camera, a thermal imaging (infrared) camera, a light imaging detection and ranging (LIDAR) system, and the like. In some embodiments, the one or more sensors 514 may include a lensless image capture mechanism having an array of aperture elements, wherein light passing through the array of aperture elements defines the pixels of an image. In an embodiment, the one or more motion-detection sensors 514 may be coupled to or associated with a light-generating device, for example, one or more infrared projectors for projecting a grid of infrared light onto a scene, where an infrared camera may record the reflected infrared light to compute depth information. Some of the one or more sensors 514 may be used for position and/or orientation detection, ambient/environmental condition detection, and the like. Examples of such one or more sensors 514 may include, among other things, microelectromechanical systems (MEMS) having piezoelectric, piezoresistive, and/or capacitive components, which may be used to determine environmental conditions or position information related to the computer device 500. In an embodiment, the MEMS may include a 3-axis accelerometer, a 3-axis gyroscope, and/or a magnetometer. In some embodiments, the one or more sensors 514 may also include one or more gravimeters, altimeters, barometers, proximity sensors (e.g., infrared radiation detectors, etc.), depth sensors, ambient light sensors, thermal sensors (thermometers), ultrasonic transceivers, and the like. Each of these elements, such as the one or more processors 502, the hardware accelerator 503, the memory 504, the data storage circuit 508 including the module 509, the input/output interface 518, the one or more sensors 514, the communication circuit 505 including the Tx 511 and the NIC 512, the system bus 506, the computer bus 539, the device 531, and the device 535, can perform its conventional functions known in the art. In addition, as described in conjunction with FIGS. 1-4, they can be used to store and host the execution of programming instructions that implement the operations associated with resource sharing across multiple flow control categories and virtual channels in computer buses, and/or to provide the other capabilities of the embodiments described in the present disclosure. The various elements can be implemented by assembler instructions supported by the processor 502 or by high-level languages (e.g., the C language) that can be compiled into such instructions. Operations associated with the device 500 that are not implemented in software can be implemented in hardware, for example, via the hardware accelerator 503. The number, capabilities, and/or capacity of these elements 502-539 may vary depending on the number of other devices that the device 500 is configured to support. Otherwise, the construction of the elements 502-539 is known and will not be further described. As will be appreciated by those skilled in the art, the present disclosure may be embodied as methods or computer program products. Thus, in addition to being implemented in hardware as previously described, the present disclosure may take the form of an entirely software embodiment (including firmware, resident software, microcode, etc.)
or an embodiment combining software and hardware aspects, all generally referred to as a "circuit," "module," or "system." Furthermore, the present disclosure may take the form of a computer program product embodied in any tangible or non-transitory medium of expression having computer-usable program code embodied therein. FIG. 6 illustrates an example computer-readable non-transitory storage medium that may be suitable for storing instructions that, in response to execution of the instructions by an apparatus, cause the apparatus to practice selected aspects of the present disclosure. As shown, the non-transitory computer-readable storage medium 602 may include a number of programming instructions 604. The programming instructions 604 may be configured to enable a device, such as device 600, in response to execution of the programming instructions, to perform, for example, the various operations associated with sharing resources across multiple flow control categories and virtual channels in a computer bus, as shown in FIGS. 1-5. In alternative embodiments, the programming instructions 604 may instead be disposed on multiple computer-readable non-transitory storage media 602. In alternative embodiments, the programming instructions 604 may be disposed on a computer-readable transitory storage medium 602, such as a signal. Any combination of one or more computer-usable or computer-readable media may be utilized. The computer-usable or computer-readable medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. More specific examples (a non-exhaustive list) of computer-readable media would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a transmission medium (e.g., a medium supporting the Internet or a local area network), or a magnetic storage device. It should be noted that the computer-usable or computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, for example, via optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and then stored in a computer memory. In the context of the present disclosure, a computer-usable or computer-readable medium may be any medium that can contain, store, communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. A computer-usable medium may include a propagated data signal, either in baseband or as part of a carrier wave, with the computer-usable program code embodied in the data signal. The computer-usable program code may be transmitted using any suitable medium, including but not limited to wireless, wireline, optical fiber cable, RF, and the like. Computer program code for carrying out the operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages.
The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the present disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer program instructions may also be stored in a computer-readable medium that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable medium produce an article of manufacture including instruction means which implement the functions/acts specified in the flowchart and/or block diagram block or blocks. The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process, such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. The flowcharts and block diagrams in the accompanying drawings illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It should also be noted that each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
As used in the present disclosure, a "computer-implemented method" may refer to any method executed by one or more processors, a computer system having one or more processors, or a mobile device such as a smartphone, a tablet computer, a laptop computer, a set-top box, a game console, and so forth (which may include one or more processors). Embodiments may be implemented as a computer process, a computing system, or an article of manufacture such as a computer program product of computer-readable media. The computer program product may be a computer storage medium readable by a computer system and encoding computer program instructions for executing a computer process. The corresponding structures, materials, acts, and equivalents of all means or steps plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the present disclosure has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the disclosure in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the disclosure. The embodiments were chosen and described in order to best explain the principles and practical applications of the disclosure, and to enable others of ordinary skill in the art to understand the embodiments of the disclosure with the various modifications suited to the particular use contemplated. Thus, various example embodiments of the present disclosure have been described, including but not limited to the following. Example 1 may include a device for communication, the device comprising: a queue having multiple storage spaces, wherein the queue comprises a first space and a second space, the first space being used to store first information for a first traffic type having a first flow category and a first virtual channel for communication between a first communication entity and a second communication entity, the first communication entity communicating with the second communication entity via the device and another device coupled to the device by a communication bus, and the second space being used to store second information for a second traffic type having a second flow category and a second virtual channel for communication between a third communication entity and a fourth communication entity, the third communication entity communicating with the fourth communication entity via the two devices, and wherein the first traffic type is different from the second traffic type, the first flow category is different from the second flow category, or the first virtual channel is different from the second virtual channel; and a controller coupled to the queue to manage operation of the queue. Example 2 may include the apparatus of Example 1 and/or some other examples of the present disclosure, wherein the first traffic type or the second traffic type includes data traffic or control traffic; and wherein the first flow category or the second flow category includes a posted category, a non-posted category, a completion category, or a quality-of-service category. Example 3 may include the apparatus of Example 1 and/or some other examples of the present disclosure, wherein one of the first communication entity or the second communication entity and one of the third communication entity or the fourth
communication entity are the same communication entity. Example 4 may include the apparatus of Example 1 and/or some other examples of the present disclosure, wherein the first virtual channel or the second virtual channel includes a plurality of links of the communication bus. Example 5 may include the apparatus of Example 1 and/or some other examples of the present disclosure, wherein the communication between the first communication entity and the second communication entity includes multiple virtual channels. Example 6 may include the apparatus of Example 1 and/or some other examples of the present disclosure, wherein the queue is a first queue, and wherein the apparatus further comprises a second queue, and wherein the second queue is reserved to store third information for a third traffic type, a third flow category, or a third virtual channel for communication between a fifth communication entity and a sixth communication entity via the communication bus. Example 7 may include the apparatus of Example 6 and/or some other examples of the present disclosure, wherein the first queue is used for data traffic and the second queue is used for control traffic. Example 8 may include the apparatus of Example 1 and/or some other examples of the present disclosure, wherein, to manage the operation of the queue, the controller is configured to: monitor unused capacity of the queue; reserve multiple spaces, each of which is releasable, to store information for a traffic type having a flow category and a virtual channel for communication; release two of the multiple spaces to be used as the first space and the second space; or synchronize the state of the queue with the state of a corresponding queue set in the other apparatus. Example 9 may include the apparatus of Example 8 and/or some other examples of the present disclosure, wherein the reserved and unreleased spaces are unused spaces, and the apparatus further includes: one or more counters, wherein, based on the information in the one or more counters, the controller is used to perform the operations to monitor the unused capacity, set a number of reserved spaces, release two of the multiple spaces, or synchronize the state of the queue. Example 10 may include the apparatus of Example 8 and/or some other examples of the present disclosure, wherein the controller is further configured to: send a request to the other apparatus for setting a number of reserved and unreleased spaces of a queue in the other apparatus. Example 11 may include the apparatus of Example 10 and/or some other examples of the present disclosure, wherein the controller is further configured to: receive, in response to the sent request, an indication of the number of reserved and unreleased spaces set in the queue in the other apparatus. Example 12 may include the apparatus of Example 10 and/or some other examples of the present disclosure, wherein the controller is further configured to: receive a plurality of information for a traffic type having a flow category and a virtual channel for communication; and maintain an order of the plurality of information for the traffic type having the flow category and the virtual channel. Example 13 may include the apparatus of Example 1 and/or some other examples of the present disclosure, wherein the apparatus includes a transmitter or a receiver; and wherein the first communication entity or the second communication entity includes a central processing unit (CPU), a processor core, a mouse, a disk, a keyboard, a storage device, or an input/output
controller, and wherein the first communication entity and the second communication entity are on the same computer. Example 14 may include the apparatus of Example 1 and/or some other examples of the present disclosure, wherein the first information or the second information includes a message, a data packet, or bit information for a protocol layer selected from a physical layer, a link layer, a transaction layer, a routing layer, a transport layer, or an application layer. Example 15 may include the apparatus of Example 1 and/or some other examples of the present disclosure, wherein the communication bus is a PCI bus, a PCI extended bus (PCI-X), a PCI Express bus, a universal serial bus (USB), a parallel advanced technology attachment (PATA) bus, a serial ATA (SATA) bus, an inter-integrated circuit (I2C) bus, an IEEE 1394 interface (FireWire) bus, a small computer system interface (SCSI) bus, or a scalable coherent interface (SCI) bus. Example 16 may include a communication method, comprising: sending, by a transmitter to a receiver, a request for a number of reserved storage spaces of a queue in the receiver having multiple storage spaces, each storage space being used to store information for a traffic type having a flow category and a virtual channel for communication between the transmitter and the receiver, the transmitter and the receiver being coupled to each other via a bus; and receiving, by the transmitter from the receiver, an indication of the number of reserved spaces in response to the sent request (a sketch of this exchange appears after the examples). Example 17 may include the method of Example 16 and/or some other examples of the present disclosure, the method further comprising: sending, by the transmitter to the receiver, an indication that a number of spaces has been reserved in a queue set in the transmitter to store information for a traffic type having a flow category and a virtual channel for communication between the transmitter and the receiver. Example 18 may include the method of Example 17 and/or some other examples of the present disclosure, the method further comprising: monitoring, by the transmitter, unused capacity of the queue set in the transmitter; or synchronizing, by the transmitter, a state of the queue set in the transmitter with a state of the queue set in the receiver. Example 19 may include the method of Example 16 and/or some other examples of the present disclosure, the method further comprising: receiving, by the transmitter, a plurality of information for a traffic type having a flow category and a virtual channel for communication; and maintaining, by the transmitter, an order of the plurality of information for the traffic type having the flow category and the virtual channel. Example 20 may include the method of Example 16 and/or some other examples of the present disclosure, wherein the transmitter or the receiver includes a central processing unit (CPU), a processor core, a mouse, a disk, a keyboard, a storage device, or an input/output controller, and wherein the bus is a PCI bus, a PCI extended bus (PCI-X), a PCI Express bus, a universal serial bus (USB), a parallel advanced technology attachment (PATA) bus, a serial ATA (SATA) bus, an inter-integrated circuit (I2C) bus, an IEEE 1394 interface (FireWire) bus, a small computer system interface (SCSI) bus, or a scalable coherent interface (SCI) bus. Example 21 may include an apparatus for performing computing, the apparatus comprising: a printed circuit board (PCB) having a bus selected from a peripheral component interconnect (PCI) bus, a PCI extended bus (PCI-X), or
a PCI Express bus; a first bus agent disposed on the PCB and coupled to the bus; and a second bus agent disposed on the PCB and coupled to the bus, wherein at least one bus agent selected from the first bus agent or the second bus agent comprises: a queue having a plurality of storage spaces, the queue comprising a first space and a second space, the first space being used to store first information for a first traffic type having a first flow category and a first virtual channel for communication between the first bus agent and the second bus agent, the second space being used to store second information for a second traffic type having a second flow category and a second virtual channel for communication between the first bus agent and the second bus agent, wherein the first traffic type is different from the second traffic type, the first flow category is different from the second flow category, or the first virtual channel is different from the second virtual channel; and a controller coupled to the queue to manage operation of the queue. Example 22 may include the apparatus of Example 21 and/or some other examples of the present disclosure, wherein the queue is a first queue, and wherein the selected one of the first bus agent or the second bus agent also includes a second queue, and wherein the second queue is reserved to store third information for a third traffic type having a third flow category or a third virtual channel for communication between the first bus agent and the second bus agent via the computer bus. Example 23 may include the apparatus of Example 21 and/or some other examples of the present disclosure, wherein, to manage the operation of the queue, the controller is configured to: monitor unused capacity of the queue; or reserve multiple spaces, each of which is releasable, to store third information for a third traffic type having a third flow category and a third virtual channel for communication. Example 24 may include the apparatus of Example 21 and/or some other examples of the present disclosure, wherein the controller is further configured to: receive a plurality of information for the first traffic type or the second traffic type having the first flow category or the second flow category and the first virtual channel or the second virtual channel for communication; and maintain an order of the plurality of information for the first traffic type or the second traffic type having the first flow category or the second flow category and the first virtual channel or the second virtual channel. Example 25 may include the apparatus of Example 21 and/or some other examples of the present disclosure, wherein the first traffic type or the second traffic type includes data traffic or control traffic; and wherein the first flow category or the second flow category includes a posted category, a non-posted category, a completion category, or a quality-of-service category. Example 26 may include an apparatus comprising: means for managing resource sharing across multiple flow control categories and virtual channels in a computer bus. Example 27 may include the apparatus of Example 26 and/or some other examples of the present disclosure, wherein the resource sharing includes sharing queue space and VCs across multiple flow control categories. Example 28 may include an apparatus comprising means for performing a method described in or related to any example of the present disclosure, or any other method or process described in the present disclosure. Example 29 may
include one or more non-transitory computer-readable media comprising instructions that, upon execution of the instructions by one or more processors of an electronic device, cause the electronic device to perform one or more elements of a method described in or related to any example of the present disclosure, or of any other method or process described in the present disclosure. Example 30 may include an apparatus including logic, modules, or circuitry to perform one or more elements of a method described in or related to any example of the present disclosure, or any other method or process described in the present disclosure. Example 31 may include a method, technique, or process, or portions thereof, as described in or related to any example of the present disclosure. Example 32 may include a device comprising: one or more processors and one or more computer-readable media comprising instructions that, when executed by the one or more processors, cause the one or more processors to perform a method, technique, or process, or portion thereof, described in or related to any example of the present disclosure. Example 33 may include a signal as described in or related to any example of the present disclosure, or portions thereof. The foregoing description of one or more implementations provides illustration and description, but is not intended to be exhaustive or to limit the scope of the embodiments to the precise forms disclosed. Modifications and variations are possible in light of the above teachings or may be acquired from practice of the various embodiments.
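As a concrete illustration of the request/indication exchange recited in Examples 16-18, the following is a minimal sketch; the message layouts, field names, and the simple grant policy are assumptions for illustration, not the claimed wire format.

```c
#include <stdint.h>

/* Hypothetical messages for the reservation handshake: the transmitter asks
 * the receiver to set aside reserved queue spaces for a given flow category
 * and virtual channel, and the receiver answers with an indication of how
 * many spaces it actually reserved. */
struct reserve_request    { uint8_t fc; uint8_t vc; uint8_t spaces_wanted; };
struct reserve_indication { uint8_t fc; uint8_t vc; uint8_t spaces_granted; };

struct rx_queue {
    uint8_t capacity;   /* total storage spaces in the queue */
    uint8_t used;       /* spaces currently holding information */
    uint8_t reserved;   /* spaces set aside and not yet released */
};

/* Receiver side: grant as much of the request as unused capacity allows
 * (one possible policy; the examples do not mandate a specific one). */
static struct reserve_indication
rx_handle_request(struct rx_queue *q, struct reserve_request req)
{
    uint8_t unused = q->capacity - q->used - q->reserved;
    uint8_t grant  = req.spaces_wanted < unused ? req.spaces_wanted : unused;
    q->reserved += grant;
    return (struct reserve_indication){ req.fc, req.vc, grant };
}

/* Transmitter side: record the indication, synchronizing the local mirror
 * of the receiver's queue state (cf. Example 18) before consuming the
 * reserved (dedicated) credits. */
static uint8_t tx_apply_indication(struct rx_queue *local_mirror,
                                   struct reserve_indication ind)
{
    local_mirror->reserved = ind.spaces_granted;
    return ind.spaces_granted;
}
```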
An improved Silicon-On-Insulator (SOI) device structure with a thin SOI silicon layer maintains excellent Ioff DC characteristics without degrading device AC speed and characteristics. The device structure comprises double gate sidewall spacers including an inner polysilicon spacer and an outer dielectric (nitride or oxide) sidewall spacer. |
I claim: 1. In a Silicon-On-Insulator (SOI) device including a bulk substrate (18), a buried oxide layer (16), an ultra thin SOI silicon layer (20), field oxide regions (7), a gate dielectric layer (22), a conducting gate (24) having gate sidewalls, gate sidewall spacers (11), doped source and drain implanted regions (12) including extension implanted regions (32), and a channel region (14), the improvement comprising: said gate sidewall spacers comprising an inner pair of spacers, said inner pair of spacers being doped polysilicon spacers (28) contiguously on said gate sidewalls, said gate sidewall spacers further comprising an outer pair of spacers (30), said outer pair of spacers being dielectric spacers, said dielectric spacers being contiguously on said doped polysilicon spacers on the side opposite said gate sidewalls; said extension implanted regions (32) not being vertically directly beneath said doped polysilicon spacers (28). 2. The SOI device of claim 1, wherein said dielectric spacers are comprised of silicon nitride or silicon dioxide. 3. The SOI device of claim 2, wherein: said buried oxide layer has a thickness between 50 and 60 nm; said ultra thin SOI silicon layer has a thickness between 5 and 20 nm; said gate dielectric layer is comprised of one of the group consisting of silicon dioxide, silicon nitride, aluminum oxide, tantalum pentoxide, and hafnium oxide; said gate dielectric layer has an equivalent silicon dioxide thickness between 0.8 and 1.4 nm; said conducting gate is comprised of one of the group consisting of TiN, TaN, TaW, W, Al, Ni, Ta, Mo, and Cr; said conducting gate has a thickness between 2.5 and 25 nm; said conducting gate has a length between 30 and 60 nm; said conducting gate having a top surface with a polysilicon encapsulation layer thereon, said polysilicon encapsulation layer having a thickness between 50 and 100 nm; said doped polysilicon spacers having a width between 10 and 15 nm; said dielectric spacers having a width between 10 and 110 nm.
CROSS-REFERENCE TO RELATED APPLICATIONSThis application claims the benefit of U.S. Provisional Application No. 60/260,484, filed on Jan. 9, 2001.FIELD OF THE INVENTIONThis invention relates to semiconductor device structures, and in particular to ultra-thin Silicon-On-Insulator (SOI) device structures.BACKGROUND OF THE INVENTIONAs integrated circuit dimensions decrease and circuit speed increases, new transistor structures have been developed in order to yield good performance at the smaller dimensions. In particular, Silicon-On-Insulator (SOI) devices are known and generally comprise undoped or very lightly doped silicon on a low-K dielectric. SOI devices are characterized by having the active device region insulated from the bulk substrate, generally by a buried oxide layer. The active device region is thereby said to be floating. SOI devices have been developed which consume less power than do bulk CMOS devices, and which operate at higher speeds than do bulk CMOS devices. FIG. 1 shows a prior art SOI device, including bulk substrate 2, buried oxide layer 4, SOI silicon layer 6, field oxide regions 7, gate dielectric layer 8, conducting gate 10, gate sidewall spacers 11, doped source and drain regions 12, and channel region 14. Source and drain regions may overlap the gate region, or gate sidewall spacer technology may be used to provide separation, or underlap, between the gate and the source-drain regions.For SOI devices having channel lengths below about 50 nm, it is very difficult to achieve good short-channel control, i.e., to effectively shut off the transistors in the off state, without significantly thinning down the thickness of the buried layer and the thickness of the SOI silicon layer, which is technically very challenging. For a device with an SOI silicon thickness of less than 20 nm, an underlap of the source/drain regions with the gate is needed in order to be able to turn off the device. Accordingly, the details of the gate sidewall spacer technology used in the fabrication of such devices are critical to their performance. By way of example, it is known that if doped polysilicon spacers are used in place of nitride spacers (termed a "straddled gate device"), the device DC characteristics improve significantly. The polysilicon spacer, which is also doped during the source/drain implant, serves as a side gate with a lower work function. It behaves like a longer gate when there is no bias applied on the gate (i.e., in the Ioff condition). The polysilicon side gate causes the surface beneath it to invert at a much lower applied voltage than the voltage necessary to invert the main channel region, due to the lower work function of the polysilicon. This causes the device to behave like a very short channel device during Ion conditions. The result is a much improved Ion and Ioff. However, when single-layer polysilicon gate sidewall spacers are used, the source/drain extension regions reach under the poly spacers and cause an increased overlap capacitance which slows down the AC device performance.BRIEF SUMMARY OF THE INVENTIONIt is therefore an object of this invention to provide a Silicon-On-Insulator device structure with a thin SOI silicon layer which maintains excellent Ioff DC characteristics without degrading device AC speed and characteristics.These objects are met by providing double gate sidewall spacers including an inner polysilicon spacer and an outer dielectric (nitride or oxide) sidewall spacer.BRIEF DESCRIPTION OF THE FIGURESFIG. 1 shows a prior art SOI device structure.FIG.
2a shows the intermediate inventive device structure having polysilicon gate sidewall spacers.FIG. 2b shows the intermediate inventive device structure having inner dielectric spacers and extension implanted regions.FIG. 2c shows the final inventive device structure having outer dielectric spacers and source/drain implanted regions.DETAILED DESCRIPTION OF THE INVENTIONFIGS. 2a-2c show the inventive device structures. FIG. 2a shows buried oxide layer 16, usually undoped SiO2 with a thickness of approximately 50-60 nm, atop bulk substrate 18. Thin undoped silicon layer 20 of the SOI has a thickness of approximately 5-20 nm. Gate dielectric 22 is comprised of silicon dioxide, or alternately silicon nitride, aluminum oxide, tantalum pentoxide, or hafnium oxide, with an equivalent oxide thickness of 0.8-1.4 nm. Conducting metal gate 24 is deposited and patterned from TiN, TaN, TaW, W, Al, Ni, Ta, Mo, or Cr, and has a thickness of approximately 2.5-25 nm, with a gate length of 30-60 nm. Polysilicon encapsulation layer 26 with a thickness of 50-100 nm is deposited on and patterned with gate 24. (Encapsulation layers in general are necessary to prevent cross-contamination of the fab line during post-gate-formation processing.) 10-15 nm of polysilicon is deposited and anisotropically etched to form poly spacers 28, which when doped function as side gates.FIG. 2b shows first dielectric spacers 30, formed by depositing 10-20 nm of silicon nitride or silicon dioxide followed by an anisotropic etch. The dielectric spacer etch removes the exposed portions of gate dielectric 22. Extension implanted regions 32 are formed following formation of first dielectric spacers 30. The extension implants may be comprised of 1-2e14/cm^2 BF2 at 10-15 keV for p-channel, or 0.5-2e14/cm^2 As at 3-5 keV for n-channel, by way of example. An RTA anneal for 5-10 seconds at 900-950° C. follows.FIG. 2c shows second dielectric spacers 34, formed by depositing 50-90 nm of silicon nitride or silicon dioxide followed by an anisotropic etch. Sidewalls 36 of spacers 34 may be vertical or sloped. Source/drain implanted regions 38 are formed following formation of second spacers 34. Polysilicon spacers 28 are also doped during the source/drain implantation. The source/drain implants may be comprised of 1-2e15/cm^2 of B at 2-3 keV for p-channel, or 1-2e15/cm^2 of P at 7.5-12.5 keV for n-channel, by way of example. An RTA anneal for 5-10 seconds at 950-1025° C. follows. Formation of approximately 5 nm of nickel silicide may follow.My inventive structure, which comprises double spacers, one set of polysilicon spacers, and at least one set of dielectric spacers, enables the source/drain extension implanted regions to be moved away from under the doped polysilicon side gate, thereby reducing overlap capacitance. Device simulations of the inventive device structure using the Medici simulation program by Avanti show that the DC Ioff characteristics are comparable to those achieved with only polysilicon spacers; the Ion characteristics are within 2-3% of those achieved with only polysilicon spacers. The AC characteristics, specifically the inverter ring oscillator stage delay, are significantly improved compared to a device having only polysilicon spacers. The AC characteristics are within 10% of the values achieved with devices having only silicon nitride spacers, but with much improved short-channel control.It is not intended that the invention be restricted to the exact embodiments described herein.
For example, the processing details, including temperatures, times, implant energies and doses, and the exact metal and dielectric materials used, may be altered without departing from the inventive concept. Additionally, the dielectric spacers may be comprised of a single first set of oxide or nitride spacers rather than the first and second dielectric spacers disclosed herein. The scope of the invention should be construed in view of the claims.
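As background for the 0.8-1.4 nm "equivalent oxide thickness" figures quoted above, the standard textbook relation between a high-K dielectric's physical thickness and its silicon dioxide equivalent is (this relation and the κ value below are general-physics assumptions, not taken from this disclosure):

$$\mathrm{EOT} \;=\; t_{\text{high-}K}\,\frac{\kappa_{\mathrm{SiO_2}}}{\kappa_{\text{high-}K}}, \qquad \kappa_{\mathrm{SiO_2}} \approx 3.9 .$$

For instance, assuming a hafnium oxide film with κ ≈ 25, a physical thickness of about 6.4 nm would give EOT ≈ 6.4 × 3.9/25 ≈ 1.0 nm, within the quoted 0.8-1.4 nm range.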
Apparatus and method for specifying quantum operation parallelism. For example, one embodiment of an apparatus comprises: instruction fetch circuitry to fetch a plurality of quantum instructions from a memory or a cache; slice-based instruction processing circuitry to identify quantum circuit slices comprising sets of one or more of the plurality of quantum instructions; one or more instruction decoders to decode the quantum instructions to generate quantum microoperations; and quantum execution circuitry to execute sets of the quantum microoperations in parallel based on the quantum circuit slices.
1. A device, comprising: an instruction fetch circuit to fetch a plurality of quantum instructions from a memory or a cache; a slice-based instruction processing circuit to identify quantum circuit slices comprising sets of one or more of the plurality of quantum instructions; one or more instruction decoders to decode the quantum instructions to generate quantum micro-operations; and a quantum execution circuit configured to execute sets of the quantum micro-operations in parallel based on the quantum circuit slices. 2. The device of claim 1, wherein the quantum execution circuit is to transmit a first set of control signals to a quantum controller in response to executing a first set of quantum micro-operations associated with a first quantum slice, the first set of control signals to cause the quantum controller to generate analog waveforms to modify multiple qubits of a quantum processor in parallel. 3. The device of claim 2, wherein the slice-based instruction processing circuit is to identify the slices based on a slice flag field in each quantum instruction. 4. The device of claim 3, wherein a first value in the slice flag field identifies its associated instruction as the beginning of a slice, a second value in the slice flag field identifies its associated instruction as the end of the slice, and a third value in the slice flag field identifies its associated instruction as being within the slice. 5. The device of claim 4, wherein a fourth value in the slice flag field identifies its associated instruction as a single-instruction slice. 6. The device of claim 3, wherein each slice flag field comprises a 2-bit instruction field, and each quantum instruction comprises a 32-bit instruction. 7. The device of any one of claims 1 to 6, wherein the first set of quantum micro-operations includes a first quantum rotation operation and a second quantum rotation operation. 8. The device of claim 7, wherein the first set of control signals is to cause the quantum controller to generate a first analog waveform to perform the first quantum rotation operation on a first qubit and, in parallel with the first analog waveform, to generate a second analog waveform to perform the second quantum rotation operation on a second qubit. 9. The device of any one of claims 1 to 8, wherein the first set of quantum micro-operations includes a first two-qubit controlled NOT gate and a second two-qubit controlled NOT gate. 10.
The device of claim 9, wherein the first set of control signals causes the quantum controller to generate a first set of analog waveforms to implement the first and second two-qubit controlled NOT gates. 11. A method, comprising: fetching a plurality of quantum instructions from a memory or a cache; identifying quantum circuit slices comprising sets of one or more of the plurality of quantum instructions; decoding the quantum instructions to generate quantum micro-operations; and executing sets of the quantum micro-operations in parallel based on the quantum circuit slices. 12. The method of claim 11, further comprising: in response to executing a first set of quantum micro-operations associated with a first quantum slice, transmitting a first set of control signals to a quantum controller, the first set of control signals causing the quantum controller to generate analog waveforms to modify multiple qubits of a quantum processor in parallel. 13. The method of claim 12, wherein each slice is to be identified based on a slice flag field in each quantum instruction. 14. The method of claim 13, wherein a first value in the slice flag field identifies its associated instruction as the beginning of a slice, a second value in the slice flag field identifies its associated instruction as the end of the slice, and a third value in the slice flag field identifies its associated instruction as being within the slice. 15. The method of claim 14, wherein a fourth value in the slice flag field identifies its associated instruction as a single-instruction slice. 16. The method of any one of claims 13 to 15, wherein each slice flag field comprises a 2-bit instruction field, and each quantum instruction comprises a 32-bit instruction. 17. The method of any one of claims 11 to 16, wherein the first set of quantum micro-operations includes a first quantum rotation operation and a second quantum rotation operation. 18. The method of claim 17, wherein the first set of control signals is to cause the quantum controller to generate a first analog waveform to perform the first quantum rotation operation on a first qubit and, in parallel with the first analog waveform, to generate a second analog waveform to perform the second quantum rotation operation on a second qubit. 19. The method of any one of claims 11 to 18, wherein the first set of quantum micro-operations includes a first two-qubit controlled NOT gate and a second two-qubit controlled NOT gate. 20.
The method of claim 19, wherein the first set of control signals is to cause the quantum controller to generate a first set of analog waveforms to implement the first and second two-qubit controlled NOT gates. 21. A machine-readable medium having program code stored thereon which, when executed by a machine, causes the machine to perform the following operations: fetching a plurality of quantum instructions from a memory or a cache; identifying quantum circuit slices comprising sets of one or more of the plurality of quantum instructions; decoding the quantum instructions to generate quantum micro-operations; and executing sets of the quantum micro-operations in parallel based on the quantum circuit slices. 22. The machine-readable medium of claim 21, further comprising program code that causes the machine to perform the following operations: in response to executing a first set of quantum micro-operations associated with a first quantum slice, transmitting a first set of control signals to a quantum controller, the first set of control signals causing the quantum controller to generate analog waveforms to modify multiple qubits of a quantum processor in parallel. 23. The machine-readable medium of claim 22, wherein each slice is to be identified based on a slice flag field in each quantum instruction. 24. The machine-readable medium of claim 23, wherein a first value in the slice flag field identifies its associated instruction as the beginning of a slice, a second value in the slice flag field identifies its associated instruction as the end of the slice, and a third value in the slice flag field identifies its associated instruction as being within the slice. 25. The machine-readable medium of claim 24, wherein a fourth value in the slice flag field identifies its associated instruction as a single-instruction slice.
Apparatus and method for specifying quantum operation parallelism for a quantum control processor

Technical field

The embodiments of the invention relate generally to the field of quantum computing. More particularly, these embodiments relate to an apparatus and method for specifying quantum operation parallelism for a quantum control processor.

Background

Quantum computing refers to the field of research related to computation systems that use quantum mechanical phenomena to manipulate data. These quantum mechanical phenomena, such as superposition (in which a quantum variable can exist in multiple different states at the same time) and entanglement (in which multiple quantum variables have related states irrespective of the distance between them in space or time), do not have analogues in the world of classical computing, and thus cannot be implemented with classical computing devices.

Description of the drawings

A better understanding of the present invention can be obtained from the following detailed description in conjunction with the following drawings, in which:

Figures 1A-1F illustrate various views of an example quantum dot device, in accordance with one embodiment;
Figure 2 illustrates one embodiment of a processor pipeline for processing quantum and non-quantum instructions;
Figure 3 illustrates one embodiment of front-end circuitry of a processor for processing quantum and non-quantum instructions;
Figures 4A-4B illustrate embodiments of a quantum-classical processor interface;
Figures 5A-5B illustrate an example quantum circuit and program code to implement the quantum circuit;
Figures 6A-6B illustrate an example in which quantum instructions are generated by a compiler, decoded into micro-operations (uops), and executed within a quantum execution engine;
Figure 7 illustrates a method in accordance with one embodiment of the invention;
Figure 8 illustrates one embodiment of a qubit index generator for addressing qubits within a quantum processor;
Figure 9 illustrates a method for determining a qubit index value for identifying a qubit;
Figure 10 illustrates one embodiment which uses corrective micro-operation sequences;
Figure 11 illustrates a method for managing and using corrective micro-operation sequences;
Figure 12 illustrates one embodiment in which a quantum control stack is integrated on a single IC chip;
Figures 13A-13B illustrate different embodiments for executing rotation instructions which specify arbitrary rotation values;
Figure 14 illustrates a method for performing arbitrary qubit rotations using approximation;
Figure 15 illustrates an example code sequence for a qubit rotation;
Figure 16 illustrates an example quantum circuit slice including parallel and sequential operations on qubits;
Figure 17 illustrates the processing of quantum instructions using slice tags;
Figure 18 illustrates an example of a quantum circuit slice which includes multiple sets of rotation operations;
Figures 19A-19B illustrate multiple sets of slices of different types of quantum circuits;
Figures 20A-20B illustrate additional examples of slices on different types of quantum circuits;
Figures 21-22 illustrate code examples comparing a quantum timeline and quantum slices;
Figure 23 illustrates one embodiment of a parallel quantum architecture;
Figure 24 illustrates a method in accordance with one embodiment of the invention; and
Figure 25 illustrates one embodiment of a parallel decoder apparatus.

Detailed description

In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the invention described below. It will be apparent, however, to one skilled in the art that the embodiments of the invention may be practiced without some of these specific details. In other instances, well-known structures and devices are shown in block diagram form to avoid obscuring the underlying principles of the embodiments of the invention.

Introduction

A quantum computer uses quantum mechanical phenomena such as superposition and entanglement to perform computations. In contrast to digital computers, which store data in one of two definite states (0 or 1), quantum computing uses quantum bits (qubits), which can exist in a superposition of states. Qubits may be implemented using the physically distinguishable quantum states of elementary particles such as electrons and photons. For example, the polarization of a photon may be used, where the two states are vertical polarization and horizontal polarization. Similarly, the spin of an electron may have distinguishable states such as "spin up" and "spin down."

A qubit state is typically represented by the bra-ket notation |0> and |1>. In a traditional computer system, a bit is exclusively in one state or the other, i.e., a "0" or a "1." However, a qubit in a quantum mechanical system can be in a superposition of both states at the same time, a unique and fundamental characteristic of quantum computing.

A quantum computing system executes algorithms which contain quantum logic operations performed on qubits. The sequence of operations is statically compiled into a schedule, and the qubits are addressed using an indexing scheme. The algorithm is then executed a sufficiently large number of times until the confidence interval of the computed answer is above a threshold (e.g., ~95+%). Reaching the threshold means that the desired algorithmic result has been achieved.

Qubits have been implemented using a variety of different technologies which are capable of manipulating and reading quantum states. These include, but are not limited to, quantum dot devices (spin-based and spatial-based), trapped-ion devices, superconducting quantum computers, optical lattices, nuclear magnetic resonance computers, solid-state NMR Kane quantum devices, electrons-on-helium quantum computers, cavity quantum electrodynamics (CQED) devices, molecular magnet computers, and fullerene-based ESR quantum computers, to name a few. Thus, while quantum dot devices are described below with respect to certain embodiments of the invention, the underlying principles of the invention may be employed in combination with any type of quantum computer, including but not limited to those listed above.
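As a concrete illustration of superposition and of the repeated-execution procedure described above, the following minimal simulation (Python with NumPy; not part of any claimed embodiment) represents a single-qubit state as a two-component amplitude vector and estimates a measurement probability by averaging many shots:

import numpy as np

# An equal superposition of |0> and |1>: measurement yields each
# outcome with probability 0.5.
state = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2.0)

def measure(state, shots):
    # Sample computational-basis measurements; True means |1> was observed.
    p1 = abs(state[1]) ** 2
    return np.random.random(shots) < p1

# Repeat until the estimate is trustworthy, as described above (e.g., ~95+%
# confidence); here we simply average a fixed number of shots.
print("estimated Pr(|1>):", measure(state, 10_000).mean())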
The particular physical implementation used for the qubits is orthogonal to the embodiments of the invention described herein.

Quantum dot devices

Quantum dots are small semiconductor particles, typically a few nanometers in size. Because of this small size, quantum dots operate according to the rules of quantum mechanics and have optical and electronic properties which differ from those of macroscopic entities. Quantum dots are sometimes referred to as "artificial atoms" to connote the fact that a quantum dot is a single object with discrete, bound electronic states, as is the case with atoms or molecules.

Figures 1A-1F are various views of a quantum dot device 100 which may be used with embodiments of the invention described below. Figure 1A is a top view of a portion of the quantum dot device 100 with some of the materials removed so that the first gate lines 102, the second gate lines 104, and the third gate lines 106 are visible. Although many of the drawings and description herein may refer to a particular set of lines or gates as "barrier" or "quantum dot" lines or gates, respectively, this is simply for ease of discussion, and in other embodiments the roles of the "barrier" and "quantum dot" lines and gates may be switched (e.g., barrier gates may instead act as quantum dot gates, and vice versa). Figures 1B-1F are side cross-sectional views of the quantum dot device 100 of Figure 1A; in particular, Figure 1B is a view through the section B-B of Figure 1A, Figure 1C is a view through the section C-C of Figure 1A, Figure 1D is a view through the section D-D of Figure 1A, Figure 1E is a view through the section E-E of Figure 1A, and Figure 1F is a view through the section F-F of Figure 1A.

The quantum dot device 100 of Figure 1 may be operated in any of a number of ways. For example, in some embodiments, electrical signals such as voltages, currents, radio frequency (RF) signals, and/or microwave signals may be provided to one or more of the first gate lines 102, the second gate lines 104, and/or the third gate lines 106 to cause quantum dots (e.g., electron-spin-based quantum dots or hole-spin-based quantum dots) to form in a quantum well stack 146 under the third gates 166 of the third gate lines 106. Electrical signals provided to the third gate lines 106 may control the electrical potential of the quantum wells under the third gates 166 of the third gate lines 106, while electrical signals provided to the first gate lines 102 (and/or the second gate lines 104) may control the potential energy barriers between adjacent quantum wells under the first gates 162 of the first gate lines 102 (and/or the second gates 164 of the second gate lines 104). Quantum interactions between quantum dots in different quantum wells in the quantum well stack 146 (e.g., under different quantum dot gates) may be controlled in part by the potential energy barriers provided between them by the intervening barrier gates.

Generally, the quantum dot devices 100 disclosed herein may further include a source of magnetic fields (not shown), which may be used to create an energy difference between the states of a quantum dot that are normally degenerate (e.g., the spin states of an electron-spin-based quantum dot), and the states of the quantum dot (e.g., the spin states) may be manipulated by applying electromagnetic energy to the gate lines to create a qubit capable of computation.
The source of magnetic fields may be one or more magnet lines, as discussed below. Thus, the quantum dot devices 100 disclosed herein may, through controlled application of electromagnetic energy, be able to manipulate the position, number, and quantum state (e.g., spin) of the quantum dots in the quantum well stack 146.

In the quantum dot device 100 of Figure 1, a gate dielectric 114 may be disposed on the quantum well stack 146. The quantum well stack 146 may include at least one quantum well layer 152 (not shown in Figure 1) in which quantum dots may be localized during operation of the quantum dot device 100. The gate dielectric 114 may be any suitable material, such as a high-k material. Multiple parallel first gate lines 102 may be disposed on the gate dielectric 114, and spacer material 118 may be disposed on the side faces of the first gate lines 102. In some embodiments, a patterned hard mask 110 may be disposed on the first gate lines 102 (with the pattern corresponding to the pattern of the first gate lines 102), and the spacer material 118 may extend up the sides of the hard mask 110, as shown. Each of the first gate lines 102 may be a first gate 162. Different ones of the first gate lines 102 may be electrically controlled in any desired combination (e.g., each first gate line 102 may be separately electrically controlled, or some or all of the first gate lines 102 may be shorted together in one or more groups, as desired).

Multiple parallel second gate lines 104 may be disposed over and between the first gate lines 102. As illustrated in Figure 1, the second gate lines 104 may be arranged perpendicular to the first gate lines 102. The second gate lines 104 may extend over the hard mask 110, and may include second gates 164 which extend down toward the quantum well stack 146 and contact the gate dielectric 114 between adjacent ones of the first gate lines 102, as illustrated in Figure 1D. In some embodiments, the second gates 164 may fill the area between adjacent ones of the first gate line 102/spacer material 118 structures; in other embodiments, an insulating material (not shown) may be present between the first gate line 102/spacer material 118 structures and the adjacent second gates 164. In some embodiments, spacer material 118 may be disposed on the side faces of the second gate lines 104; in other embodiments, no spacer material 118 may be disposed on the side faces of the second gate lines 104. In some embodiments, a hard mask 115 may be disposed on the second gate lines 104. Multiple ones of the second gates 164 of a second gate line 104 are electrically continuous (due to the shared conductive material of the second gate line 104 over the hard mask 110). Different ones of the second gate lines 104 may be electrically controlled in any desired combination (e.g., each second gate line 104 may be separately electrically controlled, or some or all of the second gate lines 104 may be shorted together in one or more groups, as desired). Together, the first gate lines 102 and the second gate lines 104 may form a grid, as depicted in Figure 1.

Multiple parallel third gate lines 106 may be disposed over and between the first gate lines 102 and the second gate lines 104. As illustrated in Figure 1, the third gate lines 106 may be arranged diagonal to the first gate lines 102 and diagonal to the second gate lines 104.
In particular, the third gate lines 106 may be arranged diagonally over the openings in the grid formed by the first gate lines 102 and the second gate lines 104. The third gate lines 106 may include third gates 166 which extend down to the gate dielectric 114 in the openings in the grid formed by the first gate lines 102 and the second gate lines 104; thus, each third gate 166 may be bounded by two different first gate lines 102 and two different second gate lines 104. In some embodiments, the third gates 166 may be bounded by insulating material 128; in other embodiments, the third gates 166 may fill the openings in the grid (e.g., contacting the spacer material 118 disposed on the side faces of the adjacent first gate lines 102 and second gate lines 104, not shown). Additional insulating material 117 may be disposed on and/or around the third gate lines 106. Multiple ones of the third gates 166 of a third gate line 106 are electrically continuous (due to the shared conductive material of the third gate line 106 over the first gate lines 102 and the second gate lines 104). Different ones of the third gate lines 106 may be electrically controlled in any desired combination (e.g., each third gate line 106 may be separately electrically controlled, or some or all of the third gate lines 106 may be shorted together in one or more groups, as desired).

Although Figures 1A-1F illustrate a particular number of first gate lines 102, second gate lines 104, and third gate lines 106, this is simply for illustrative purposes, and any number of first gate lines 102, second gate lines 104, and third gate lines 106 may be included in the quantum dot device 100. Other examples of arrangements of the first gate lines 102, the second gate lines 104, and the third gate lines 106 are possible. Electrical interconnects (e.g., vias and conductive lines) may contact the first gate lines 102, the second gate lines 104, and the third gate lines 106 in any desired manner.

Not shown in Figure 1 are accumulation regions which may be electrically coupled to the quantum well layer of the quantum well stack 146 (e.g., laterally proximate to the quantum well layer). The accumulation regions may be spaced apart from the gate lines by a thin layer of an intervening dielectric material. The accumulation regions may be regions in which carriers accumulate (e.g., due to doping, or due to the presence of large electrodes which pull carriers into the quantum well layer), and may serve as reservoirs of carriers which can be selectively drawn into the areas of the quantum well layer under the third gates 166 (e.g., by controlling the voltages on the quantum dot gates, the first gates 162, and the second gates 164) to form carrier-based quantum dots (e.g., electron or hole quantum dots, including a single charge carrier, multiple charge carriers, or no charge carriers). In other embodiments, the quantum dot device 100 may not include lateral accumulation regions, but may instead include doped layers within the quantum well stack 146. These doped layers may provide the carriers to the quantum well layer.
Any combination of accumulation regions (e.g., doped or non-doped) or doped layers in a quantum well stack 146 may be used in any of the embodiments of the quantum dot devices 100 disclosed herein.

Apparatus and method for a hybrid classical-quantum computer

Ever since Richard Feynman asked, in 1982, whether quantum physics could be simulated efficiently using a quantum computer, much research effort on quantum computers has focused on their universality and their efficiency over classical computation. One such example is David Deutsch's quantum Turing machine of 1985, which can be programmed to perform any computational task that can be performed by any physical object.

In contrast to theories and algorithms, quantum physical machines remain in their infancy. Efforts to build quantum information processing systems have so far achieved only modest success. Small quantum computers, capable of performing a small set of quantum operations on a few qubits, represent the current state of the art in quantum computing. In addition, quantum states are fragile in the sense that they remain coherent only for a limited duration. This gap between algorithms and physical machines has driven efforts to invent hybrid classical-quantum algorithms. Some recent quantum algorithm developments have focused on short-depth quantum circuits which carry out quantum computations, formed as subroutines embedded within a larger classical optimization loop, such as the variational eigensolver (P. J. J. O'Malley, 2016). Quantum languages, tools, and flows have been developed to provide a software layer/stack which translates and optimizes applications down to the quantum physical layer, in order to cope with the strict resource constraints in quantum computing (Frederic T. Chong, 2017, September 14).

On the hardware side, classical computers have been used to perform error correction for quantum computations. The "quantum co-processor" model is the most popular mainstream execution model, in which a classical CPU controls a quantum processing unit in a manner similar to how the CPU interacts with a GPU in modern computer systems. As described in (X. Fu, May 2016) and (X. Fu, 2018), the microarchitecture for an experimental superconducting quantum co-processor includes features such as an arbiter on the code-fetch data path to steer classical instructions to the host CPU and quantum instructions to the quantum co-processor, an exchange register file to synchronize the register files between the host CPU and the quantum co-processor, and a quantum instruction cache.

However, the microarchitectures for these mechanisms are not well defined, and explicit support for hybrid classical-quantum programs is lacking. Consequently, it is unclear how a quantum co-processor would be implemented within a quantum computer, particularly one which is required to run a diverse set of quantum programs. A flexible and programmable model for implementing hybrid classical-quantum algorithms has yet to be developed.

One embodiment of the invention adds a set of quantum instructions to the instruction set architecture (ISA) of a processor such as a CPU. By way of example, these instructions may be included in an extension to the ISA (e.g., such as the AVX-512 extensions for the x86 platform). In addition, in one embodiment, a quantum engine is added to the execution unit of the processor, and the new quantum instructions are fetched, decoded, scheduled, and executed on the functional units of the quantum engine.
In one embodiment, the quantum engine interacts with the classical execution engines using a shared register file and/or system memory. Upon executing the quantum instructions (or quantum micro-operations in certain embodiments described herein), the quantum execution engine generates control signals to manipulate the state of the qubits within the quantum processor. The quantum engine also executes instructions to take measurements of specified sets of qubits and store the results. In these embodiments, a quantum/classical interface provides connectivity between the quantum engine of the classical processor and the quantum processor.

Figure 2 illustrates one embodiment of a processor or core 210 which fetches, decodes, and executes quantum instructions 201A and non-quantum instructions 201B, utilizing the same pipeline resources as the non-quantum instructions 201B. The processor/core 210 of this embodiment supports quantum extensions to an existing ISA of the processor/core 210 (e.g., extending the ISA to include the quantum instructions 201A). Program code 205C comprising the quantum and non-quantum instructions is generated by a compiler 205B from source code 205A written by a programmer (e.g., utilizing the extended ISA). Various source code/program code examples are provided below.

Quantum and non-quantum instructions 201A-B are fetched from memory 205 at the front end of the instruction pipeline and stored in a Level 1 (L1) instruction cache 201. Instructions and data may also be stored within a Level 2 or Level 3 cache within a cache/memory subsystem 215, which manages memory requests and cache coherency.

A decoder 202 decodes the instructions 201A-B into micro-operations or uops 203A, which are scheduled for execution by a scheduler 203 and executed by execution circuitry 204. In one embodiment, certain stages of the pipeline are enhanced to include hardware support for processing the quantum instructions 201A, while other stages are unaltered. For example, quantum decode circuitry 202A may be added to the decoder 202 for decoding the quantum instructions 201A, just as non-quantum decode circuitry 202B decodes the non-quantum instructions 201B. Although illustrated as separate components in Figure 2 for the purpose of explanation, the quantum decode circuitry 202A and the non-quantum decode circuitry 202B may comprise a common or overlapping set of circuitry and/or microcode. For example, in one embodiment, an existing decoder may be extended to include microcode support for quantum instructions (e.g., in a microcode ROM) to generate new sets of quantum micro-operations. Depending on the processor architecture, the decoder 202 includes other decode circuitry, such as a set of decode table structures (see, e.g., Figure 3 and the associated text).

In one embodiment, the decoder 202 generates a sequence of micro-operations 203A in response to decoding the instructions 201A-B. In an implementation with quantum and non-quantum instructions, the micro-operations may include a mixture of quantum micro-operations and non-quantum micro-operations, which are then scheduled for execution by the instruction scheduler 203.

The quantum and non-quantum micro-operations 203A generated by the decoder 202 may initially be queued for execution within one or more micro-operation queues of the scheduler 203, which dispatches the micro-operations from the micro-operation queue(s) in accordance with dependencies and/or execution resource availability.
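The decode flow just described can be summarized with a small functional model (Python; the opcode set and the micro-operation tuples are invented placeholders; the hardware flow is described in the text above):

from collections import deque

# Hypothetical quantum opcodes; QIROTX and QCNOTUP appear as examples
# later in this document.
QUANTUM_OPCODES = {"QIROTX", "QCNOTUP"}

def decode(instruction):
    # Decode one macro-instruction into a list of micro-operations.
    opcode, _, operands = instruction.partition(" ")
    if opcode in QUANTUM_OPCODES:
        return [("quantum_uop", opcode, operands)]   # quantum decode path 202A
    return [("classical_uop", opcode, operands)]     # non-quantum decode path 202B

def decode_stream(instructions):
    # Fill a shared micro-operation queue with interleaved quantum and
    # classical uops, to be dispatched by the scheduler.
    queue = deque()
    for instruction in instructions:
        queue.extend(decode(instruction))
    return queue

print(decode_stream(["MOV RBX, 2", "QCNOTUP [RBX+3]", "ADD RAX, 1"]))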
Embodiments of the invention may be implemented on various different types of processors with different types of schedulers. For example, in one embodiment, a set of execution "ports" couples the scheduler 203 to the execution circuitry 204, where each execution port is capable of issuing micro-operations to a particular set of the functional units 204C-E. In the example architecture shown in Figure 2, for instance, SIMD and floating point (FP) micro-operations may be issued by the scheduler 203 over an FP/SIMD execution port coupled to a set of FP/SIMD functional units 204C, and integer micro-operations may be issued over an integer port coupled to a set of integer functional units 204D. While only two types of non-quantum functional units are shown for simplicity, the processor/core 210 may include various other/additional non-quantum functional units (e.g., such as load/store address generation units, branch units, additional SIMD and integer units, etc.).

In the particular embodiment shown in Figure 2, the quantum engine functional units 204E share the same set of register files 204A-B used by the legacy processor functional units 204C-D. In this particular example, the register files 204A-B include an FP/SIMD register file 204A, which stores the floating point and SIMD operands used by the FP/SIMD functional units 204C, and an integer register file 204B, which stores the integer operands used by the integer functional units 204D. In one implementation, the FP/SIMD register file 204A comprises 512-bit vector registers and the integer register file 204B comprises 64-bit scalar registers. Of course, different processor architectures will use different types of registers shared by the quantum engine functional units 204E. Various other types of registers may also be used, such as sets of control/status registers and mask registers.

In an embodiment in which quantum micro-operations are mixed with non-quantum micro-operations, the quantum micro-operations are issued over one or more quantum ports to a set of quantum engine functional units 204E, which execute the quantum micro-operations to perform the underlying quantum operations. For example, the quantum engine functional units 204E, in response to the quantum micro-operations, may generate control signals over the quantum-classical interface 206 to manipulate and take measurements of the qubits of a quantum processor 207.

The quantum-classical interface 206 includes digital-to-analog (D-A) circuitry to convert the digital quantum control signals generated by the quantum engine functional units 204E into the analog signals required to control the quantum processor 207 (e.g., such as the codeword-triggered pulse generation (CTPG) units and arbitrary waveform generators (AWGs) described below), and also includes analog-to-digital (A-D) circuitry to convert the physical qubit measurements into digital result data.

In one embodiment, the quantum-classical interface 206 is integrated on the same semiconductor chip as the other components of the instruction processing pipeline (e.g., the execution circuitry 204, the scheduler 203, the decoder 202, etc.).
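As a rough functional picture of the digital-to-analog path, the following sketch (Python with NumPy; the pulse shape, duration, and sample rate are invented for illustration and are not specified by the embodiments) renders a digital amplitude command as discrete samples of an analog drive envelope, as an AWG or CTPG unit might:

import numpy as np

SAMPLES_PER_NS = 1.0    # assumed converter sample rate
PULSE_LENGTH_NS = 20.0  # assumed pulse duration

def render_drive_envelope(amplitude, sigma_ns=4.0):
    # Render a Gaussian drive envelope from a digital amplitude command.
    t = np.arange(0.0, PULSE_LENGTH_NS, 1.0 / SAMPLES_PER_NS)
    center = PULSE_LENGTH_NS / 2.0
    return amplitude * np.exp(-((t - center) ** 2) / (2.0 * sigma_ns ** 2))

samples = render_drive_envelope(amplitude=0.5)  # fed to the analog front end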
As discussed in detail below, different types of circuit/logic components may be used, depending on the particular physical implementation of the quantum processor 207.

Figure 3 illustrates one embodiment in which quantum instruction processing support is added to a low-power processing pipeline which includes a pre-decode buffer 301B, a 2-way decoder 302 with dual sets of quantum/non-quantum decoder circuitry 202A-B, 302A-B, dual lookup tables (XLAT) for instruction translation, and a microcode ROM 304. In one embodiment, the XLAT components 303, 305 and the microcode ROM 304 are extended to support the quantum instructions, as indicated by logic blocks 303Q-305Q. The pre-decode buffer 301B detects and marks macro-instruction boundaries prior to full decoding into micro-operations by the 2-way decoder 302.

The operands for the quantum and non-quantum micro-operations are stored in a set of shared registers 321 (as described above) and accessed by the quantum functional units 320 when executing the micro-operations. In response to the quantum micro-operations, the Q-C interface 320 controls the operation of the quantum processor 207.

Different examples of a quantum-classical interface 206 are illustrated in Figures 4A-B. The Q-C interface 206 in Figure 4A includes a plurality of micro-operation units 401A-C which, in response to the micro-operations executed by the quantum engine functional units 204E, generate codewords to control the operation of a plurality of codeword-triggered pulse generation (CTPG) units 402A-C. In response, the CTPG units 402A-C generate sequences of pulses to control the qubits of the quantum processor 207. Once the quantum processor 207 has reached a specified execution state, one or more measurement discrimination units (MDUs) 403A-B take quantum measurements.

The Q-C interface 206 shown in Figure 4B includes a set of components to perform microwave complex signal generation, including an RF microwave unit 451, a multi-channel arbitrary waveform generator (AWG) 452, one or more digital-to-analog converters (DACs) 453, and one or more measurement units 454. In one embodiment, the inputs to each of these components comprise sets of codewords generated by the quantum engine functional units 204E, and the outputs are analog waveforms which manipulate the state of the qubits of the quantum processor 207. The measurement units 454 measure a current state associated with one or more qubits at a designated point in the execution.

To further guide the analysis and discussion, a concrete example is illustrated in Figure 5A, which shows a quantum circuit for the time evolution of a many-body disordered Hamiltonian. Note that the angles through which Rx and Ry rotate are derived from several parameters. In particular, certain parameters indexed by k (where k ∈ {0, 1, ..., 5, 6}) are randomly generated and are used to emulate a large many-body system which requires more qubits than the underlying quantum chip supports.

An example of a quantum program which uses this circuit for a portion of its computation is illustrated in Figure 5B, which includes a mixture of quantum instructions and non-quantum instructions (as indicated by the comments to the right of the source code).
In this example, NR is the number of disorder realizations (i.e., multiple small random realizations which emulate a large many-body system), NQ is the number of qubits, NP is the number of iterations performed to achieve the desired accuracy for the probability (Pr), NT is the number of Trotter steps, and a[i] accumulates the qubit measurements. The probability that a qubit is in the state |0> or |1> is obtained through repeated measurements (NP) and averaging.

This program structure shows how classical operations and quantum operations may be tightly intertwined and executed on the classical-quantum processing architectures described herein. The most efficient way to execute this program is to process all of the instructions in a pipeline such as those described above, with the quantum engine functional units 204E, which control the qubits, configured as execution engine peers alongside the other classical execution engines 204A-B (e.g., integer, floating point, etc.).

Figures 6A-B provide an example demonstrating the operation of one embodiment of the invention. Figure 6A illustrates a portion of quantum assembly language (QASM) code 601 which implements the highlighted portion 501 of the quantum circuit in Figure 5A. The QASM code 601 is compiled into hybrid processor program code 602 in the memory 205. In this particular example, the registers RBX and RBX+1 from the shared register file 321 or 204B are used to hold qubit indices which address logical qubits #2 and #3, respectively. The arrows indicate the mapping of the relevant portions of the QASM code 601 to the hybrid processor program code 602.

Figure 6B illustrates how a quantum macro-instruction QCNOTUP (which implements a CNOT gate) is decoded by the decoder 202 into a series of micro-operations 605. The micro-operations 605 are executed by the quantum engine functional units 204E to generate codewords with a specified codeword or command packet format 606. In this particular format, a first data field indicates the qubit on which the operation is to be performed (qubit 3 in this example), a second data field indicates the channel over which the operation is to be transmitted (channel 4), a third field indicates the command state (e.g., a single command state), and a fourth data field indicates the type of the qubit (a transmon qubit). Of course, the underlying principles of the invention are not limited to any particular encoding format.

A method in accordance with one embodiment of the invention is illustrated in Figure 7. The method may be implemented within the context of the processor architectures described above, but is not limited to any particular processor or system architecture.

At 701, source code containing quantum instructions is compiled to generate runtime program code containing quantum and non-quantum instructions. At 702, the quantum/non-quantum instructions are fetched from memory and stored in a local cache (e.g., the L1 instruction cache) or an instruction buffer. As mentioned, quantum instructions may be freely mixed with non-quantum instructions within the pipeline.

At 703, the quantum and non-quantum instructions are decoded into sets of quantum and non-quantum micro-operations, respectively, and stored in a queue prior to execution. At 704, the quantum/non-quantum micro-operations are scheduled for execution based on micro-operation and/or resource dependencies.
For example, if a first micro-operation is dependent on the result of a second micro-operation, the first micro-operation may be scheduled for execution only when the data generated by the second micro-operation is available in one of the registers. Similarly, if a particular functional unit is busy, the scheduler may wait for an indication that the functional unit is available before scheduling a micro-operation which requires that functional unit. Various other/additional scheduling techniques may be implemented (e.g., scheduling based on priority, register load, etc.).

At 705, the quantum micro-operations and non-quantum micro-operations are executed on their respective functional units within the execution circuitry. As mentioned, the shared register set may be used to store the source and destination operands required by these micro-operations.

At 706, the results generated by the execution of the quantum micro-operations may be used as input to an interface unit to control the quantum state of the qubits in the quantum processor. In one embodiment, a series of codewords or command packets may be generated which identify a quantum channel, one or more qubits within the quantum processor, a qubit type, and/or a command state. The specific physical operations performed in response to the codewords or command packets depend on the underlying type of quantum processor used.

The embodiments described herein integrate quantum instructions within an existing processor pipeline. Because of this tight integration, these embodiments significantly reduce the various overheads/bottlenecks associated with current co-processor designs. These overheads/bottlenecks include, for example, the communication between the classical computation layers/modules and the quantum computation layers/modules within the software stack, and the communication between the classical CPU and the quantum chip via message queues. Given the relatively small size of quantum subroutines, current GPU-like co-processor implementations are inefficient.

Hybrid co-processor models reduce some of this overhead through the addition of classical processing capabilities. In a particular implementation which supports the hybrid co-processor model, many new microarchitectural mechanisms were introduced. However, these microarchitectural mechanisms were ambiguously defined, as was the boundary between the classical CPU and the quantum co-processor.

In contrast, in the hybrid architecture described herein, the classical computation pipeline is equipped to fully support a defined set of quantum instructions, which may be freely mixed with non-quantum instructions both at the front end of the pipeline (i.e., at the macro-instruction level) and at the back end of the pipeline (e.g., where quantum micro-operations are mixed with non-quantum micro-operations) and executed on the functional units within the execution circuitry of the processor.

Scalable qubit addressing mode for quantum execution engines and/or co-processors

In quantum computing, a qubit is a unit of quantum information, the quantum analogue of the classical binary bit. Computation is achieved by applying quantum gates, which represent quantum logic operations, directly to the qubits. Mathematically, this computing process is described as the qubits undergoing unitary transformations.
Upon completion of the computation, the qubits are measured to gain information about their states.

Therefore, to describe a quantum operation, it is necessary to identify the qubit or set of qubits to which the operation is applied. In a quantum program, each quantum instruction needs to encode both the operation to be performed and the one or more qubits on which the operation is to be performed. In existing quantum instruction set architectures (e.g., QASM, Open QASM, QIS, etc.), register operands are normally encoded in the opcode of the instruction. This scheme works for classical computing because the number of registers is very limited (e.g., 16, 32, 64, etc.). However, this scheme does not scale for quantum computing, as quantum instructions will ultimately need to address a very large number of qubits. Consequently, encoding the qubit addresses in the opcode field of the quantum instructions would explode the instruction width.

As described above, in one embodiment, quantum instructions and non-quantum instructions are processed together within a shared processor pipeline. As such, the quantum instructions may rely on the same addressing modes as those available to the non-quantum instructions. The qubits in this embodiment are therefore addressed in a manner similar to non-quantum instructions which access system memory, providing a sufficiently large address space to accommodate a large number of qubits.

As illustrated in Figure 8, in this embodiment, the quantum engine functional units 204E include a qubit index generation unit (QIG) 802, which determines a qubit index value or qubit ID in response to one or more micro-operations 805. One or more quantum operation units 801 process the operations specified by the micro-operations. The qubit index value (e.g., 011 for qubit 3 in this example) is then incorporated within a codeword/command packet 606, potentially along with one or more commands generated by the quantum operation unit 801 in response to processing the micro-operations 805.

The QIG 802 may operate in accordance with different addressing modes supported by the processor. In one embodiment, the instruction identifies one of the shared registers 321 which contains the qubit index value (sometimes also referred to as a qubit ID). It may then use the qubit index value to identify the qubit within the codeword/command packet 606, and/or perform an operation using the qubit index value to generate one or more additional qubit index values. For example, it may add the qubit ID value to an integer specified by the micro-operation to generate a second qubit ID.

The following examples demonstrate one way in which the QIG 802 generates qubit IDs in response to micro-operations, using an x86 assembly syntax. These operations may be performed within an x86 pipeline extended to support quantum instructions. However, the same general principles may be implemented on any processor architecture.

The single-qubit instruction "QIROTX [RDI], 1" applies an X gate to the qubit number stored in RDI. Thus, if RDI contains 5, the X gate is applied to qubit number 5. In this example, the QIG 802 determines the qubit ID simply by reading the value stored in RDI (which, in this example, is one of the shared registers 321). In this embodiment, the RDI value was previously stored by another micro-operation.
As another example, if the architectural register RBX contains the value 2, then the two-qubit instruction "QCNOTUP [RBX+3]" applies a CNOT operation with qubit 2 (q[2]) as the control qubit and qubit 5 (q[5]) as the target qubit. The QIG interprets the [RBX+3] notation as follows: the ID of the control qubit is stored in RBX, and the ID of the control qubit plus 3 is the ID of the target qubit. Thus, the addressing scheme is extended so that two different qubits can be addressed with a single instruction (i.e., CNOT). In contrast, in classical computing, each instruction addresses only one memory location.

Figure 8 also illustrates a codeword-triggered pulse generator (CTPG) 402A, which includes control logic and a digital-to-analog converter for interpreting the codeword/command packet 606 to identify one or more qubits (Q3 in this example) and generating a sequence of pulses to implement the specified quantum operations. When all of the quantum operations have been executed, as specified by the program code 205C, the quantum operation circuitry 801 and the QIG 802 generate a codeword/command packet 606 which instructs one or more of the MDUs 403A-B to take one or more qubit measurements (as specified by the QIG 802, which generates the qubit indices). As mentioned, the MDUs include analog-to-digital circuitry to convert the analog measurements into digital values, which are then processed by a quantum error correction unit 808 to detect and potentially correct errors. If valid result data has been received, it may be stored within one or more of the shared registers 321 and/or accumulated with prior measurement data. In addition to error correction, the measurements may also be used for program flow control based on measurement feedback.

The quantum error correction unit 808 may implement various techniques for detecting and correcting quantum errors. For example, in one embodiment, an error decoder (within the QEC unit 808) decodes a multi-qubit measurement from the quantum processor 207 to determine whether an error has occurred and, if so, implements corrective measures (if possible). The error measurements may be taken from multiple qubits in a manner which does not disturb the quantum information in the encoded state of the qubits (e.g., using ancilla qubits). In response, the QEC unit 808 generates error syndrome data from which it may identify the errors that have occurred and implement corrective operations. In one embodiment, the error syndrome data comprises a stabilizer code, such as a surface code. In some cases, the response may simply be to reinitialize the qubits and start over. In other cases, however, modifications to the quantum algorithm implemented in the quantum program code 205C may be made to stabilize the region of the quantum processor responsible for the error (e.g., where the compiler 205B includes a just-in-time (JIT) compiler). In either case, the CTPGs 402A perform the underlying physical operations under the control of the codewords/command packets 606 generated by the QEFU 204E. For example, the CTPG 402A may generate electromagnetic pulses to adjust the phase of one or more qubits in accordance with a detected phase error, or to reset the phase/spin of all of the qubits if reinitialization is required.
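The two addressing examples above can be restated as a short model (Python; the 4-bit field widths and the type/state codes are assumptions, as the text fixes only the order of the four fields in the packet format 606):

# Stand-in for the shared register file, with the values used in the
# examples above.
regs = {"RDI": 5, "RBX": 2}

def qubit_index(base_reg, displacement=0):
    # Register-indirect qubit addressing: qubit ID = [reg] + displacement.
    return regs[base_reg] + displacement

def pack_codeword(qubit, channel, cmd_state, qubit_type):
    # Pack the four fields of the command packet in the order described
    # for Figure 6B; the 4-bit widths are illustrative assumptions.
    return (qubit & 0xF) | ((channel & 0xF) << 4) | \
           ((cmd_state & 0xF) << 8) | ((qubit_type & 0xF) << 12)

x_target = qubit_index("RDI")                                # QIROTX [RDI], 1 -> qubit 5
control, target = qubit_index("RBX"), qubit_index("RBX", 3)  # QCNOTUP [RBX+3] -> qubits 2 and 5
word = pack_codeword(qubit=3, channel=4, cmd_state=1, qubit_type=0)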
In particular, the above embodiments provide a qubit index, which is seamlessly integrated in the existing processor ISA and scaled to a large number of qubit systems. These embodiments also remove pressure from the quantum instruction opcode space through quantum extensions to x86 or other architectures to address the qubit space and integrate quantum operations into existing processor pipelines.The method according to an embodiment of the present invention is illustrated in FIG. 9. The method can be implemented on the above architecture but not limited to any specific processor or system architecture.At 901, quantum and non-quantum instructions from the runtime program code are extracted and decoded to generate quantum and non-quantum micro-operations. At 902, the index generation unit evaluates a quantum micro-operation including a register identifier and one or more values optionally included with the micro-operation to determine a qubit index value. As mentioned above, various techniques can be used to generate the index, including reading the qubit index value from the register identified by the micro-operation, and using the integer value included with the micro-operation to generate an additional qubit index value .At 902, the quantum execution circuit generates a codeword that specifies the quantum operation to be performed on the qubit identified by the calculated qubit index value. At 905, in response to another codeword generated based on additional micro-operations, a qubit measurement is performed. At 906, analog measurements made on one or more of the qubits are converted into digital values. Then, error correction and/or flow control can be performed based on the resulting digital result value stored in the register file of the processor.Apparatus and method for injecting corrective quantum operation in processor pipelineDuring the operation of two qubits in a quantum computing system, an exchange or interaction mechanism is usually used, which adds a drift term to the phase of the interacting qubits. This drift term tends to exponentially decrease the coherence of the qubits on the sequence of two qubit operations, resulting in a lower T2 (dephasing) time. This limits the amount of time available for quantum operations and reduces the robustness and usefulness of the quantum computing system.The resilience of a quantum computing system can be improved using corrective pulse sequences delivered with quantum operations. These correction pulse sequences are statically generated by the compiler for later replay on the quantum experimental hardware. The manually generated pulse sequence that compensates for decoherence in the quantum circuit can also be directly programmed into the system.However, before hardware-level playback, long train pulse sequences require exponential memory resources to store waveforms. In addition, due to the overhead of sending corrective pulse sequences between each quantum gate operation, the bandwidth of feeding pulse sequences into the system hardware limits the scalability of low circuit depth algorithms. The manually generated pulse sequence is lengthy and cannot be extended to a large number of qubits or long circuit depth algorithms.In order to construct more flexible quantum microcodes for general-purpose quantum computing systems, it is necessary to solve the problems of decoherence and incorrectly shaped control pulses. 
Decoherence refers to the fact that qubits decohere through the loss of the phase information encoded within them simply by sitting idle. Imperfectly shaped control pulses may cause a qubit to lose phase alignment, drifting the qubit off resonance; the next quantum operation on that qubit will then be only partially effective, which introduces a degree of computational error.

To address the problems described above, one embodiment of the invention uses a lookup table or other indexed data structure (hereinafter simply a "lookup table") to store corrective operation sequences associated with different quantum operations. When a quantum instruction is received by the decode unit, the lookup table is accessed to determine whether a corrective sequence is available for this quantum operation. The unique opcode of the macro-instruction, or the combination of micro-operations generated from the macro-instruction, may be used as an index into the lookup table to identify any corrective operations. If a corrective pulse sequence is found, then a corresponding set of corrective micro-operations specifying this pulse sequence is injected into the instruction stream to replace (and/or combine with) the micro-operations of the qubit operation.

The corrective micro-operations are forwarded to the quantum execution unit, which executes the corrective micro-operations to generate the corrective pulses. In one embodiment, the corrective micro-operations are uniquely tailored to each individual qubit and to different combinations of qubits (e.g., for two-qubit operations between pairs of qubits). In one embodiment, the set of corrective micro-operations used to generate the corrective pulses may be compiled over time based on observations made with respect to specific qubits, sets of qubits, and/or specific operations. For example, if a particular qubit or set of qubits is exhibiting decoherence problems, one or more micro-operations may be automatically added to the lookup table to correct the problem.

The decoherence problems may be identified by a quantum error correction unit. In one embodiment, the quantum error correction unit includes a machine-learning engine to identify the decoherence problems based on an analysis of the quantum computations over a period of time. It may then identify the specific sets of micro-operations and operand values needed to correct the problems. Thus, one embodiment of the invention comprises a quantum processor, an instruction decoder, a micro-operation sequencer, and a quantum microcode execution engine, together with a lookup table containing a number of preconfigured pulse sequences for each type of quantum gate supported by the instruction set.

Figure 10 illustrates one embodiment comprising a quantum decoder 202A with correction sequence management logic/circuitry 1000 for managing and performing lookups in a spin-echo sequence table 1005, which stores the specific set of micro-operations and operand values required by each instruction to correct qubit errors. When a quantum instruction is read into the quantum instruction decoder 202A, the correction sequence management logic/circuitry 1000 performs a lookup to determine whether a corrective pulse sequence exists for the qubit (or set of qubits) and/or the quantum operation identified by the instruction. If one is found, then the regular set of micro-operations for the instruction is replaced by a corrective sequence of micro-operations.
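The lookup-and-inject step can be sketched as follows (Python; the table key, the micro-operation names, and the echo contents are placeholders; the embodiments specify only that the table maps an operation and its qubit(s) to a corrective micro-operation sequence):

# Hypothetical spin-echo sequence table; real entries are produced by
# the calibration and machine-learning flow described below.
SPIN_ECHO_TABLE = {
    ("QCNOTUP", (2, 5)): ["echo_x q2", "cnot q2, q5", "echo_x q2"],
}

def decode_with_correction(opcode, qubits, default_uops):
    # Replace the instruction's regular micro-operation sequence with a
    # corrective sequence when the table holds an entry for this
    # operation/qubit combination.
    return SPIN_ECHO_TABLE.get((opcode, tuple(qubits)), default_uops)

uops = decode_with_correction("QCNOTUP", (2, 5), default_uops=["cnot q2, q5"])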
In the particular embodiment of Figure 10, a correction-enabled microcode sequencer 1002 generates the corrective micro-operation sequence, which may either replace the original micro-operation sequence or supplement it (e.g., with the corrective micro-operations integrated into the original micro-operation sequence). In an embodiment in which the original micro-operation sequence is replaced, the spin-echo sequence table 1005 contains both the micro-operations which implement the echo sequence (for correction) and the micro-operations which perform the operation specified by the quantum instruction.

Regardless of how it is generated, the corrective micro-operation sequence is scheduled for execution on the quantum engine functional units 204E, which execute the new composite pulse sequence via the Q-C interface 206.

In one embodiment, the spin-echo sequence table 1005 is statically generated based on calibration tests run on the quantum processor 207. Following this initial static update, the correction sequence management circuitry/logic 1000 dynamically updates the spin-echo sequence table 1005 over time as new errors are associated with the various qubits of the quantum processor 207. In one embodiment, error detection and machine-learning logic/circuitry 1008 continually analyzes the results generated by the quantum processor 207 during runtime and specifies corrective actions to be taken by the correction sequence management circuitry/logic 1000, which then updates the spin-echo sequence table 1005 with the new corrective micro-operation sequences and/or new operand values needed to implement the corrections. For example, decoherence may be identified by repeated errors associated with a particular qubit or a particular combination of qubits.

In one embodiment, when the error detection and machine-learning logic/circuitry 1008 detects an error syndrome which it has not seen before, it attempts to identify any correlations between the new error syndrome and previously learned models. Based on these correlations, it may generate a new entry of corrective micro-operations in the spin-echo sequence table 1005. If the corrective recommendation does not resolve the error, the error detection and machine-learning logic/circuitry 1008 makes additional attempts until the desired result is achieved, at which point it retains the corrective micro-operation entry in the spin-echo sequence table 1005.

Thus, in one embodiment, the machine-learning logic/circuitry 1008 performs unsupervised learning as new errors occur. Unsupervised learning is particularly beneficial when working with a quantum processor 207, because the physical responses of the individual qubits may change over time and may also vary from one quantum processor to another. In one implementation, the error detection and machine-learning logic/circuitry 1008 is initially equipped with a set of basic models which are commonly used to detect and correct certain types of errors. Starting with this set of basic models, the error detection and machine-learning logic/circuitry 1008 continually trains itself in response to the detection of new errors, updating the models and the spin-echo sequence table 1005 accordingly.
As a result, the error detection and machine-learning logic/circuitry 1008 becomes familiar with the particular characteristics of the quantum processor 207 with which it is associated, and learns to correct different types of errors, some of which may be unique to that quantum processor 207.

A method in accordance with one embodiment of the invention is illustrated in Figure 11. The method may be implemented within the context of the architectures described above, but is not limited to any particular processor or system architecture.

At 1101, a corrective training sequence may be executed, in which the qubits of the quantum processor are evaluated through a series of operations and measurements to determine the corrective operations. Based on the results, at 1102, a correction sequence table (e.g., the spin-echo sequence table described above) is updated with entries specifying the corrective operations to be performed on this particular quantum processor in response to certain instructions. As described above, the corrective entries may be stored in a microcode ROM, and may identify sequences of micro-operations to be executed in place of, or in addition to, the uncorrected quantum micro-operations.

At 1103, in response to a quantum macro-instruction, the correction sequence table is queried to identify the corrective micro-operations associated with the quantum operation and/or the particular qubits to be used. At 1104, the specified quantum operations are performed on the specified qubits via the classical-quantum interface. At 1105, qubit measurements are taken in response to codewords specifying the measurement(s). At 1106, the analog measurements are converted into digital values, which are subjected to error detection/correction and, in one embodiment, to machine learning. For example, the machine learning may identify changes to the correction sequence table to improve the corrective micro-operation sequences. The measured values may also be stored in the shared register file, where they may be further processed.

Apparatus and method for a quantum control stack integrated on a chip

Small-scale quantum information processors have been realized with a variety of physical architectures. These processors include physical quantum chips housed inside a dilution refrigerator, together with racks of classical control electronics.

As quantum devices continue to mature, there is an emerging need to efficiently organize and orchestrate all of the elements of the control electronics stack so that the quantum physical chip can be manipulated (with electrical controls, microwaves, and flux) and measured with acceptable precision, allowing quantum experiments and programs to be executed in a reliable and reproducible manner.

Research efforts have begun to move toward more compact form factors for the control electronics stack and the classical computing components. However, in all current proposals, the quantum computer is built from physically separate and independently designed components (including the classical CPU, the quantum co-processor, and the control electronics). Because these components are designed with more flexible, general-purpose interfaces, the communication between them incurs significant energy overhead, which negatively impacts the control and operating efficiency of the quantum processor.
When integrated on the same chip, the communication between the different chip components can be highly optimized. In the specific embodiment shown in FIG. 12, the integrated quantum control stack chip 1210 includes an instruction decoder 202, a scheduler 310, and an execution circuit 1204 for executing quantum and non-quantum instructions (as in the above embodiments).

The quantum-classical interface 206 is also integrated on the quantum control stack chip 1210, which includes a quantum operation analog signal generator 1201 with an analog/RF component 1201B that generates analog signals, based on digital waveforms received from the digital component 1201A of the interface, to control the qubits of the quantum processor 207. In addition, the qubit measurement circuit 1202 includes an analog/RF measurement component 1202B for performing qubit measurements in response to signals received from the digital measurement component 1202A (e.g., in response to the execution of one or more measurement micro-operations).

In one embodiment, the integrated quantum control stack chip 1210 has power/performance characteristics that allow it to be included in the room-temperature stage 1250 of the quantum system while tightly coupled to the quantum processor 207, which is maintained in the millikelvin stage 1251. In an alternative embodiment, a low-temperature stage 1250 (e.g., a 4 K stage) may be used.

This embodiment therefore eliminates inter-module interface and communication overhead at the architecture level, and directly couples the quantum control stack chip 1210 to the quantum processor 207. Separately designed chips, by contrast, require standard interface protocols. For example, current implementations have a control and measurement IC that uses a low-bandwidth bus, such as a serial peripheral interface (SPI) bus, to communicate with the main controller chip. When the main controller chip and the control/measurement IC are integrated, the interface between these components can be removed. Integration enables efficient pipelines and data paths to be designed to pass control and data between functional units.

In addition, inter-module communication can be optimized at the architectural level to transfer operations and receive data between the commanding and responding units. An example of architecture-level protocol optimization is the queue-based signal crossing between the non-deterministic timing domain of the digital quantum control stack chip 1210 and the deterministic timing domain of the quantum processor 207. Optimizations between clock domains may also be used.

Generally speaking, the embodiment shown in FIG. 12 removes the IC system design and operation overheads that naturally exist when separately designed VLSI modules are coupled. This embodiment also improves inter-module communication efficiency through direct communication in the metal layers of the VLSI process node.

Although one embodiment integrates the digital processor 1210 with the control electronics 206, which drive the analog control signals to the quantum physics chip 207 to manipulate the qubits, all such control electronics functionality need not be integrated at the same time. For example, integration can be staged to pull in certain integrated circuits that have been fully tested first, and then pull in other components as they mature.
By way of example and not limitation, the integration of the DC electronics and the flux AWG within the quantum-classical interface 206 can be performed at a later time.

Method and device for arbitrary qubit rotation on a quantum processor

In recent years, small-scale quantum information processors have been implemented in different physical architectures. As quantum devices continue to mature, there is an emerging need to support arbitrary rotations of a single qubit. Moving the qubit state from an arbitrary point on the Bloch sphere to another arbitrary point on the Bloch sphere can be regarded as a qubit rotation around an arbitrary axis, which can be decomposed into rotations around the y axis and the z axis. If rotation around the y axis is not available on the physical quantum device itself, it can be composed of rotations around the x axis and the z axis. Therefore, it is sufficient to support arbitrary rotations of a single qubit around the x axis and the z axis.

Embodiments of the present invention provide arbitrary rotation of a single qubit around a Bloch sphere axis. Although the embodiments described below focus on rotation around the x axis, the basic principles of the present invention can be applied to rotation around the y axis and the z axis. Moreover, although implementations on quantum dot systems or superconducting quantum systems are described below, the basic principles of the present invention are not limited to these specific implementations.

On quantum dot or superconducting quantum systems, a precise arbitrary rotation requires a very specific RF waveform to be pulsed to the target qubit. There are two problems with designing a system to meet these requirements. First, "arbitrary" strictly means that the waveform must be infinitely precise, which is impractical for qubit control electronics. In addition, to be sufficiently "accurate," a huge number of waveforms must be generated.

An embodiment of the present invention solves these problems by using a limited number of waveforms to approximate a rotation of any angle around the x axis to the required accuracy. In this embodiment, the control electronics only support a basic set of angles of rotation around the x axis. An arbitrary rotation is translated into a sequence of rotations (gates) drawn from that basic set.

The choice of the basic angle set and the decomposition design allows the accuracy of a quantum program to scale as the control electronics are scaled to support more rotations, with higher accuracy, in the basic rotation set.

In one embodiment, a two-level decomposition is used. First, the compiler decomposes the rotation unitary into a sequence of π/n rotations, where n is an integer ∈ {±1, ..., ±n_max} and π/n_max approximates the hardware accuracy limit. In one embodiment, the processor can perform a second stage to decompose π/n into a sequence of π/2^m rotations, where m is an integer ∈ {1, ..., m_max} and π/2^m_max is at the hardware accuracy limit. Normally, n_max = 2^m_max. Note that if the processor exposes the π/2^m rotations in the instruction set architecture, the second-level decomposition may also be completed by the compiler together with the first-level decomposition.

One embodiment is implemented in a hybrid classical-quantum x86-based architecture that operates with one or both of the following macroinstructions:

QIROTX qoffs32, r32/imm32
QIROTX qoffs64, r64/imm64

for the 32-bit and 64-bit versions, respectively.
The first operand (qoffs32, qoffs64) specifies the destination qubit, and the second operand (r32/imm32, r64/imm64) specifies the angle to be rotated. In one embodiment, the first operand is stored in a first source register, and the second operand is stored either in a second source register or in the immediate value of the instruction. The qubit indexing technique described above can be used to identify the qubit on which the rotation is to be performed (see, for example, Figure 8 and associated text).

One embodiment of an architecture for processing QIROTX instructions is illustrated in FIG. 13A. In this embodiment, the decoder 202 includes circuitry/microcode 1321 for decoding QIROTX instructions, and the execution circuit 1304 includes a QIROTX execution circuit 1322 (for example, one or more functional units) for executing the micro-operations generated by the QIROTX decode circuitry/microcode 1321. As described above, the execution circuit 1304 of this embodiment includes functional units for executing both quantum instructions and non-quantum instructions. In this embodiment, the first source register SRC1 1301 stores the QOFFS value identifying the physical qubit on which the rotation is performed, and the second source register SRC2 1302 stores the rotation angle to be applied to the qubit.

FIG. 13B illustrates an embodiment in which the rotation angle is encoded in the immediate data of the QIROTX instruction. In this embodiment, the immediate value is passed through to the execution circuit along with the micro-operation, and only a single source register SRC1 1301 is used, to store the QOFFS value identifying the physical qubit on which the rotation is performed.

The following description of how the rotation angle is encoded in the source operand of the QIROTX instruction assumes a 64-bit implementation (for example, QIROTX qoffs64, r64/imm64). However, the basic principles described here can easily be ported to 32-bit implementations. In one embodiment, if r64[63] == 0, the angle to be rotated is π/r64[63:0], and if r64[63] == 1, the angle is -π/twos_complement(r64[63:0]).

In the instruction QIROTX [RDI] R10, if the register RDI contains 5 and the register R10 contains 1, then the X gate is applied to qubit 5. If register RDI contains 5 and register R10 contains 2^64-2, then Rx(-π/2) is applied to qubit 5.

The difficulty lies in supporting all the different waveforms/pulse shapes for the different amounts of rotation. It is not reasonable to store 2^32 or 2^64 waveforms/pulse shapes on the chip.

To address this problem, one embodiment stores 2^16 waveforms/pulse shapes on the chip. Although this is very large, it is still manageable on current architectures. In one embodiment, the precision limit arises in the RF/analog circuitry of the quantum control circuit 1350; on currently envisioned RF/analog architectures, the combined integrated RF and analog circuitry has a precision of approximately 16 bits. Thus, the value of 2^16 is appropriate and provides a sufficient level of accuracy for these architectures. The accuracy of integrated RF/analog circuits may improve over time, but that will likely take years.

In another embodiment, only 2N+1 waveform shapes/pulses are stored, according to the following sequence: π, ±π/2, ±π/4, ±π/8, ±π/16, ... ±π/2^N. For example, 33 waveform shapes/pulses can be stored according to the sequence π, ±π/2, ±π/4, ±π/8, ±π/16, ... ±π/2^16. An arbitrary rotation is then approximated by performing a binary search for the closest match, as sketched below.
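To make the operand encoding and basis-set approximation concrete, a minimal sketch follows. This is an illustrative reconstruction, not the patent's implementation: the function names are hypothetical, a greedy accumulation is used in place of the hardware binary search over combinations, and the target angle is assumed to lie in [-π, π].

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Decode the 64-bit QIROTX rotation operand described above:
// if bit 63 is 0, the angle is pi / r64; if bit 63 is 1, the angle
// is -pi / twos_complement(r64). r64 == 0 is treated as undefined.
double decode_qirotx_angle(uint64_t r64) {
    const double pi = 3.14159265358979323846;
    if ((r64 >> 63) == 0) {
        return pi / static_cast<double>(r64);     // e.g. r64 == 1 -> pi (X gate)
    }
    uint64_t magnitude = ~r64 + 1;                // two's complement
    return -pi / static_cast<double>(magnitude);  // e.g. r64 == 2^64-2 -> -pi/2
}

// Approximate an arbitrary angle in [-pi, pi] as a sum of the stored
// basis rotations pi, ±pi/2, ±pi/4, ... ±pi/2^N. A greedy accumulation
// is used here for clarity; a binary search over combinations of the
// stored values, as described above, is equally possible.
std::vector<double> approximate_rotation(double target, int n_max) {
    const double pi = 3.14159265358979323846;
    std::vector<double> pulses;
    double remaining = target;
    for (int n = 0; n <= n_max; ++n) {
        double step = pi / std::pow(2.0, n);
        // Emit a ±step pulse whenever it reduces the residual error.
        if (std::fabs(remaining) >= step / 2.0) {
            double pulse = (remaining > 0) ? step : -step;
            pulses.push_back(pulse);
            remaining -= pulse;
        }
    }
    return pulses;  // residual |remaining| <= pi / 2^(n_max + 1)
}
```

For example, decode_qirotx_angle(1) yields π (the X gate) and decode_qirotx_angle(0xFFFFFFFFFFFFFFFE), i.e. 2^64-2, yields -π/2, matching the QIROTX examples above; approximate_rotation(pi / 3.0, 16) returns a pulse sequence whose sum is within π/2^17 of π/3.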
In particular, different combinations of these values can be combined via binary search to identify the specific combination closest to the desired rotation value. For example, a rotation of π/3 can be approximated by the combination π/4 + π/16 + π/64 + π/256, which results in an angular error of π/(3*256) = π/768, which may be an acceptable accuracy level.

For many quantum algorithms, it is sufficient to rotate any single qubit to within a distance of approximately 10^-6. With this architecture, quantum coprocessors or integrated quantum processors can be designed with low-precision rotation support to meet the requirements of these algorithms. As the control electronics become more capable, the rotation accuracy of quantum coprocessors or integrated quantum processors can increase accordingly.

A computer-implemented method according to an embodiment of the present invention is illustrated in FIG. 14. Although the method can be implemented on the processor and system architectures described above, it is not limited to any specific system architecture.

At 1401, a quantum rotation instruction specifying an arbitrary rotation value is fetched and decoded. At 1402, the qubit is identified using a first source value, and the arbitrary rotation value is identified from a second source value. For example, the first source value may be included as the first operand of the quantum rotation instruction, and the second source value may be included as the second operand and/or an immediate value of the quantum rotation instruction. At 1403, a binary search is performed using different combinations of waveform shape/pulse values to approximate the arbitrary rotation, and at 1404, the on-chip waveform shape/pulse values are accessed. As mentioned above, the binary search can be executed by a functional unit in the execution circuit. At 1405, the approximate rotation value is used to rotate the qubit, and at 1406, a measurement is taken (potentially after additional qubit operations) to measure the current state of one or more qubits, and the resulting values are stored in a register of the register file.

Apparatus and method including a scalable representation of arbitrary quantum computing rotation

Quantum computing is performed with a sequence of quantum gates, some of which can be performed in parallel. For example, in Figure 15, the code region 1500A includes two gates, on q0 and q1, which can be executed in parallel. Similarly, the gates on q0 and q1 from code region 1500B can be executed in parallel.

On quantum computers, quantum circuits are likely to be stored as program code in a classical memory subsystem. However, there is currently no defined mechanism for the quantum control processor to identify those gates that can be executed in parallel. Instead, current systems can concatenate all the gates (pulses) used for a qubit into a single waveform, synchronize it with those of the other qubits, load it into an arbitrary wave generator (AWG), and replay it. Although some systems provide features such as flowcharts and timeline user interfaces for waveform creation, these focus on the synchronization of physical signals between boxes and modules.

An embodiment of the present invention expresses qubit gate-level parallelism and timing requirements in the classical instruction set architecture. In a particular implementation, each gate is assigned to a quantum circuit "slice".
Gates assigned to a particular slice can be executed in parallel if they are applied to different qubits, while gates applied to the same qubit within the slice are executed in order.

FIG. 16 illustrates an example of a quantum circuit slice 1600 applied to four qubits q0-q3. In this example, slice 1600 comprises a 40 ns time block with two 20 ns sub-blocks. In the first 20 ns sub-block, rotations are performed in parallel on qubits q0 and q1 (i.e., the rotation Rxy(0.5π, π) is performed on q0 in parallel with the rotation Rxy(0, 0.5π) on q1). At the beginning of the second 20 ns sub-block, a two-qubit gate operation is performed on qubits q2 and q3 in parallel with a second rotation on qubit q0 (Rxy(0.5(β-π), π)).

In implementations in which quantum instructions are specified according to the ISA, an n-bit field can be used in each quantum instruction to encode the slice. Although a 2-bit field is used in one embodiment, the basic principles of the present invention are applicable to various other field sizes.

FIG. 17 illustrates an example in which a 2-bit slice field in bits [31:30] of the 32-bit quantum instructions 1701-1706 indicates how each instruction participates in a slice. As illustrated, each instruction may include an opcode field specifying the quantum operation to be performed, a qubit field identifying the qubit on which the quantum operation will be performed, and (optionally) operands specifying the parameters of the instruction.

In one embodiment, the compiler 1700 generates 2-bit slice flags 1710 to group instructions into slices and identify the beginning and end of each slice. For example, in one implementation, the slice marker {0,1} indicates the beginning of a slice, and {1,0} indicates the end of a slice. Other instructions contained in the slice (not at the beginning or end) are marked with {0,0}. In one implementation, the slice tag {1,1} is reserved for a single-instruction slice. Thus, in the example in FIG. 17, the slice mark of instruction 1703 indicates the beginning of a slice, which includes instruction 1704 and ends with instruction 1705. Instructions 1701, 1702, and 1706 are single-instruction slices that are not executed in parallel with any other instructions. Instructions tagged with a slice field may be quantum macroinstructions or microinstructions (quantum micro-operations as described herein).

The slice-based quantum execution, scheduling, and control circuit 1750 interprets the slice flags to determine the extent of each slice, and executes the parallel gates 1740 within a slice on multiple qubits. In one embodiment, a set of wave generators 1752 generates sequences of parallel pulses that are applied to different qubits in the quantum processor 1755. Various types of slice-based execution, scheduling, and control circuits 1750 can be used, some examples of which are described herein (see, for example, Figure 23 and associated text).

Figure 18 illustrates another example using a pseudo-code sequence that includes a sequence of quantum instructions, each of which has been assigned a slice flag. Here, both the first quantum circuit slice 1801 and the second quantum circuit slice 1802 include rotation operations on the qubits q0 and q1.
One of the rotations in each slice is assigned the start slice mark {0,1} and the other is assigned the end slice mark {1,0}, notifying the timing controller 1750 that the two rotation operations in each slice 1801-1802 can be executed in parallel. The remaining instructions in the pseudo-code sequence are assigned the single-instruction slice tag {1,1}, indicating that these instructions cannot be executed in parallel with any other instructions.

Figures 19A-B illustrate examples of different types of quantum circuits subdivided into slices using the implementations described herein. In particular, FIG. 19A includes a quantum circuit for a disorder-induced phase transition with seven qubits. Based on the ability of the qubits to be processed in parallel, five different slices 1801-1805 are identified. In particular, the first slice 1801 includes three two-qubit controlled-NOT gates executed in parallel; the second slice 1802 includes seven different controlled phase rotations executed in parallel on the seven different qubits; the third slice 1803 includes three two-qubit controlled rotation gates executed in parallel on six qubits; and slices 1804-1805 each include three controlled-NOT gates operating on different groups of qubits.

Figure 19B includes a long-range interaction quantum circuit in which operations are performed between non-adjacent qubits. This implementation is arranged into slices 1901-1906, each of which includes multiple sets of long-range two-qubit interactions. Slice 1901 includes three simultaneous long-range interactions, and slices 1902-1903 each include three simultaneous long-range interactions, followed by a single long-range operation. Slice 1904 includes a single long-range operation, followed by two simultaneous long-range interactions, followed by another single long-range operation. Slice 1905 successively includes one long-range interaction, two simultaneous qubit rotations, two longer-range interactions, and two simultaneous qubit rotations. Finally, slice 1906 includes two consecutive long-range interactions, followed by a single qubit rotation.

Error correction operations are usually implemented with a set of relatively small, repetitive circuits that are well suited to scheduling into parallel slices. Figure 20A illustrates an example of how the quantum slice architecture described herein can be applied to error correction operations. In particular, a surface code operation schedule 2005 using an auxiliary qubit 2010 is illustrated as a sequence of eight quantum operations (numbered 1-8).

The lower left portion of FIG. 20A illustrates a corresponding surface code syndrome measurement circuit 2030 including the auxiliary qubit 2010 and four data qubits. In this embodiment, the quantum operations are subdivided into slices 2001, as highlighted by the dotted pattern. As illustrated, each slice may include multiple parallel qubit operations, including error correction operations involving the auxiliary qubit 2010.

Similarly, FIG. 20B illustrates a surface code lattice surgery merge operation in which two separate surface code lattices 2051-2052 are merged into a single merged surface code lattice 2055. This operation highlights the way in which slices can be specified to enable logical operations.

Figures 21-22 provide code examples that compare a quantum timeline with quantum slices as described herein. In particular, the program code sequence illustrated in FIG. 21 includes a combination of non-quantum instructions and quantum instructions.
The first instruction, LDI, loads a value into register r0, followed by the two quantum instructions Q_OP0 and Q_OP1, and then a first wait instruction based on the value in r0 (i.e., waiting for a bit to change in response to an event). After the wait condition has been met, another quantum operation, Q_OP2, is performed. A second wait instruction is executed on line 6 (QWAIT 0), followed by the final quantum operation Q_OP3.

Figure 22 illustrates the quantum portion of the program code implemented with the slice-based technique described herein. In particular, a two-bit slice label is applied to each quantum operation to specify the quantum operations that can be executed in parallel within a slice. Here, Q_OP0 and Q_OP3 are assigned the slice flag 11 to indicate that each comprises a single-instruction slice. However, the slice flags assigned to Q_OP1 and Q_OP2 indicate that these operations can be performed in parallel within the same slice. In particular, slice mark 01 indicates that Q_OP1 is the beginning of the slice, and slice mark 10 indicates that Q_OP2 is the end of the slice. Thus, using the quantum slicing technique described herein, two of the four quantum operations are performed in parallel, rather than sequentially.

FIG. 23 illustrates one embodiment of an architecture for performing parallel quantum operations in response to quantum micro-operations tagged with slice markers 2300. The timing control unit 2301 includes a plurality of queues 2305-2308 for storing quantum operations and timing data for performing slice-based quantum processing. In particular, quantum operations to be executed in parallel based on the slice markers are stored concurrently in a plurality of analog wave generator (AWG) queues 2306, and timing tags that specify the timing details of quantum operations and measurement operations are stored in the timing queue 2305.

The measurement pulse generator (MPG) queue 2307 stores operations for generating measurement pulses for qubit measurement, and the measurement discrimination (MD) queue 2308 stores operations for discriminating measurements using one or more measurement discrimination techniques.

In one embodiment, the timing controller 2310 operates based on the timing markers from the timing queue 2305, and generates quantum control signals and measurement signals through an analog/digital (A/D) interface 2302, which performs the prescribed operations on the qubits of the quantum processor 2340.

In one embodiment, the slice-based control circuit 2315 assigns the quantum micro-operations (uops) in each slice to the set of analog wave generators 2321. Quantum micro-operations in a slice that target different qubits can be executed in parallel. Each quantum micro-operation is processed by the micro-operation unit 2322 of an analog wave generator 2321. In the illustrated embodiment, when performing the micro-operations associated with quantum operations, the micro-operation unit 2322 generates one or more codewords for the codeword-triggered pulse generator (CTPG) 2324 of the corresponding AWG 2321.
For quantum micro-operations in a slice that can be executed in parallel (that is, micro-operations that target different qubits), the set of CTPGs 2324 generates parallel quantum control pulses to control the qubits in the quantum processor 2340.

The measurement pulse trigger/generator 2326 in the A/D interface 2302 processes the micro-operations in the MPG queue 2307 to generate the microwave measurement pulses modulated by the qubits. The measurement discrimination unit 2327 converts the analog output from the quantum processor 2340 into binary results in response to the operations specified in the MD queue 2308. The binary results are then stored in a storage device from which they can be analyzed.

An embodiment of a method for implementing a slice-based quantum architecture is illustrated in FIG. 24. The method can be implemented in the context of the various quantum processing architectures described herein, but is not limited to any specific architecture.

At 2401, source code that specifies quantum operations is evaluated, and based on the evaluation, at 2402, slice tags are attached to the quantum instructions to form quantum slices. In one embodiment, each quantum slice includes a set of one or more instructions. For multi-instruction quantum slices, the quantum operations can be executed in parallel if they target different qubits. In one embodiment, quantum slices are formed based on the dependencies between quantum instructions and on the quantum processing architecture. For example, a first quantum instruction that depends on the result of a second quantum instruction must be executed after the second quantum instruction. These two instructions should not both be included in the same slice; or, if they are included in the same slice, the second instruction must be processed before the first instruction. Moreover, if the quantum processing architecture on which the quantum operations are performed can only process a limited number of quantum operations in parallel, this architectural limitation can be used to determine the most effective number of quantum operations to include in each quantum slice.

As described above, in one embodiment, the operations of evaluating the source code and attaching the slice markers are performed by the compiler before and/or during runtime. Alternatively or additionally, a slice generation circuit in the front end of the instruction processing pipeline generates quantum slices based on its evaluation of the quantum instructions entering the pipeline, attaching slice tags to define the start and end of each slice.

At 2403, the slice marks are evaluated to identify the slices. As mentioned above, in one embodiment, a 2-bit field is appended to each instruction to mark the start of a slice (01), the end of a slice (10), the body of a slice (00), and a single-instruction slice (11). The 2-bit value is used to identify those instructions that can be executed in parallel and those that cannot (for example, operations that must be executed serially).

At 2404, the quantum instructions are decoded and executed according to the slice flags. For example, the order in which quantum instructions are decoded/executed can be determined based on the quantum slice grouping of the instructions.
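A minimal sketch of how a front end might evaluate these slice marks follows. The 2-bit encoding (01 = begin, 10 = end, 00 = body, 11 = single-instruction slice) in bits [31:30] of each 32-bit instruction comes from the description above; the type names and the grouping container are assumptions made for illustration.

```cpp
#include <cstdint>
#include <vector>

// 2-bit slice flag carried in bits [31:30] of each 32-bit quantum
// instruction, per the encoding described above.
enum class SliceFlag : uint32_t {
    Body   = 0b00,  // inside a slice
    Begin  = 0b01,  // first instruction of a slice
    End    = 0b10,  // last instruction of a slice
    Single = 0b11,  // single-instruction slice
};

SliceFlag slice_flag(uint32_t insn) {
    return static_cast<SliceFlag>(insn >> 30);
}

// Group an instruction stream into slices. Instructions within one
// slice may be dispatched in parallel if they target different qubits.
std::vector<std::vector<uint32_t>> group_into_slices(
        const std::vector<uint32_t>& stream) {
    std::vector<std::vector<uint32_t>> slices;
    std::vector<uint32_t> current;
    for (uint32_t insn : stream) {
        switch (slice_flag(insn)) {
        case SliceFlag::Single:
            slices.push_back({insn});       // a slice of one instruction
            break;
        case SliceFlag::Begin:
            current.clear();                // open a new multi-instruction slice
            current.push_back(insn);
            break;
        case SliceFlag::Body:
            current.push_back(insn);        // extend the open slice
            break;
        case SliceFlag::End:
            current.push_back(insn);        // close and record the slice
            slices.push_back(current);
            current.clear();
            break;
        }
    }
    return slices;
}
```

Applied to the Figure 22 sequence, this grouping would yield {Q_OP0}, {Q_OP1, Q_OP2}, {Q_OP3}, with only the middle slice eligible for parallel dispatch.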
In one embodiment, the instructions are decoded and executed in an attempt to maximize the parallel processing performed by the quantum controller (e.g., to ensure that operations that can be executed in parallel are made available to the quantum controller together).

At 2405, the quantum operations are dispatched to the quantum controller and/or quantum-classical interface according to the quantum slice specification. The quantum controller/interface includes analog wave generators (AWGs), which generate the analog pulse sequences that control the qubits of the quantum processor, generating parallel pulses for parallel qubit operations whenever possible.

Apparatus and method including scalable representation of arbitrary quantum computing rotation

Although the quantum architectures described herein exploit quantum gate-level parallelism and operate on many qubits in parallel to improve performance, the quantum program instructions are stored sequentially in memory. Thus, when controlling a large number of qubits, a high-bandwidth decoder is implemented in one embodiment to convert the instructions into parallel quantum operations.

An embodiment of the present invention increases the instruction decoding bandwidth by increasing the clock frequency of the decoder and by adopting a parallel multi-decoder design. Using these techniques, hundreds or potentially thousands of physical qubits can be controlled in parallel.

Figure 25 illustrates an embodiment of the present invention in which multiple decoders 2510-2512 decode quantum instructions from one or more quantum applications 2505 in parallel to generate parallel sets of quantum micro-operations. The decode load balancer 2507 uses a prescribed load balancing strategy to distribute quantum instruction decoding operations across the decoders 2510-2512. For example, the decode load balancer 2507 may distribute the decoding workload so as to complete the decoding operations as quickly as possible. Alternatively, the decode load balancer 2507 may distribute work to the decoders 2510-2512 so as to meet the throughput requirements of the active qubits 2540 (e.g., consuming only as much power as needed to meet the throughput requirements).

After decoding, the M-to-N interconnect fabric 2530 dispatches the resulting quantum micro-operations across multiple control channels 2520A-G to perform parallel quantum operations on the qubits 2540. Here, M is the number of decoders 2510-2512, and N is the number of channels 2520A-G. In this embodiment, any one of the M decoders 2510-2512 can be coupled to any one or more of the N channels 2520A-G via the M-to-N interconnect 2530 (i.e., so that the micro-operations generated by the decoders 2510-2512 can be sent to any appropriate channel 2520A-G for qubit processing).

In one embodiment, each channel 2520A-G includes one or more of the components described above with respect to FIG. 23, including an analog wave generator (AWG) 2321, a slice-based control circuit 2315, and an associated AWG queue 2306 for performing control operations on the qubits 2540.

The values of M and N can be adjusted based on the speed of the channels 2520A-G and qubits 2540 (i.e., the "slow" clock domain) relative to the speed of the decoders 2510-2512 (i.e., the "fast" clock domain). By way of example and not limitation, the fast clock domain may be 500 MHz to 3 GHz, while the slower clock rate may be 50 MHz to 300 MHz.
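A hypothetical sketch of the decode load balancing and M-to-N dispatch described above may be useful here. Everything in it, including the type names, the least-loaded selection policy, and the qubit-to-channel mapping, is an illustrative assumption rather than the patent's design; it also simplifies by assuming each instruction yields one micro-operation.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// A micro-operation tagged with the qubit it targets.
struct MicroOp { uint32_t qubit; uint32_t payload; };

class DecodeLoadBalancer {
public:
    DecodeLoadBalancer(std::size_t m_decoders, std::size_t n_channels)
        : pending_(m_decoders, 0), channels_(n_channels) {}

    // Pick the decoder with the fewest pending instructions
    // (one possible "prescribed load balancing strategy").
    std::size_t select_decoder() {
        std::size_t best = 0;
        for (std::size_t d = 1; d < pending_.size(); ++d)
            if (pending_[d] < pending_[best]) best = d;
        ++pending_[best];
        return best;
    }

    // Any decoder may feed any channel through the M-to-N fabric;
    // here a decoded micro-op is routed by its target qubit.
    void dispatch(std::size_t decoder, const MicroOp& uop) {
        --pending_[decoder];  // assumes one uop per pending instruction
        channels_[uop.qubit % channels_.size()].push_back(uop);
    }

private:
    std::vector<std::size_t> pending_;            // per-decoder load
    std::vector<std::vector<MicroOp>> channels_;  // per-channel uop queues
};
```

A power-oriented policy, as mentioned above, could instead cap the number of decoders selected per cycle to match the active qubits' throughput requirement rather than always choosing the least-loaded decoder.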
One embodiment uses 64-byte cache lines and 32-bit instructions, although the basic principles of the invention are not limited to this implementation.

Embodiments of the present invention also allow the parameterization of quantum instructions, preventing code-size explosion by turning a quantum circuit into a subroutine that accepts input parameters. The use of these techniques avoids the limitations of statically compiled quantum programs, in which the instruction operands are all calculated at compile time. Moreover, the above-described embodiments provide programmers with a familiar ISA-based mechanism to closely mix classical computation and quantum gates, and to transfer data between these two worlds.

In the above detailed description, reference is made to the accompanying drawings that form a part hereof, and in which are shown, by way of illustration, embodiments that may be practiced. It is to be understood that other embodiments may be utilized, and structural or logical changes may be made, without departing from the scope of the present disclosure. Therefore, the detailed description is not to be taken in a restrictive sense.

Various operations may be described as multiple discrete actions or operations in turn, in a manner that is most helpful in understanding the claimed subject matter. However, the order of description should not be construed as implying that these operations are necessarily order-dependent. In particular, these operations may not be performed in the order of presentation. Described operations may be performed in a different order from the described embodiment. In additional embodiments, various additional operations may be performed, and/or described operations may be omitted. Unless otherwise stated, terms like "first," "second," "third," etc. do not imply a particular order.

For the purposes of the present disclosure, the phrase "A and/or B" means (A), (B), or (A and B). For the purposes of the present disclosure, the phrase "A, B, and/or C" means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B, and C). The term "between," when used with reference to a measurement range, includes the ends of the measurement range. As used herein, the notation "A/B/C" means (A), (B), and/or (C).

The description uses the phrases "in one embodiment" or "in an embodiment," which may each refer to one or more of the same or different embodiments. Furthermore, the terms "comprising," "including," "having," and the like, as used with respect to embodiments of the present disclosure, are synonymous.

Examples

The following are example implementations of different embodiments of the invention.

Example 1. A device includes: an instruction fetch circuit, which is used to fetch multiple quantum instructions from a memory or a cache; a slice-based instruction processing circuit, which is used to identify quantum circuit slices comprising multiple groups of one or more quantum instructions of the plurality of quantum instructions; one or more instruction decoders, which are used to decode the quantum instructions to generate quantum micro-operations; and a quantum execution circuit for executing multiple sets of the quantum micro-operations in parallel based on the quantum circuit slices.

Example 2.
The device of example 1, wherein the quantum execution circuit is to transmit a first set of control signals to a quantum controller in response to executing a first set of quantum micro-operations associated with a first quantum slice, the first set of control signals to cause the quantum controller to generate analog waveforms to modify multiple qubits of a quantum processor in parallel.

Example 3. The device of example 2, wherein the slice-based instruction processing circuit is to identify the slices based on a slice flag field in each quantum instruction.

Example 4. The device of example 3, wherein a first value in the slice flag field identifies its associated instruction as the beginning of a slice, a second value in the slice flag field identifies its associated instruction as the end of the slice, and a third value in the slice flag field identifies its associated instruction as being within the slice.

Example 5. The device of example 4, wherein a fourth value in the slice flag field identifies its associated instruction as a single-instruction slice.

Example 6. The device of example 3, wherein each slice flag field comprises a 2-bit instruction field, and each quantum instruction comprises a 32-bit instruction.

Example 7. The device of example 1, wherein the first set of quantum micro-operations includes a first quantum rotation operation and a second quantum rotation operation.

Example 8. The device of example 7, wherein the first set of control signals is to cause the quantum controller to generate a first analog waveform to perform the first quantum rotation operation on a first qubit, and to generate, in parallel with the first analog waveform, a second analog waveform to perform the second quantum rotation operation on a second qubit.

Example 9. The device of example 1, wherein the first set of quantum micro-operations includes a first two-qubit controlled NOT gate and a second two-qubit controlled NOT gate.

Example 10. The device of example 9, wherein the first set of control signals is to cause the quantum controller to generate a first set of analog waveforms to implement the first and second two-qubit controlled NOT gates.

Example 11. A method comprising: fetching a plurality of quantum instructions from a memory or a cache; identifying quantum circuit slices comprising a plurality of groups of one or more quantum instructions of the plurality of quantum instructions; decoding the quantum instructions to generate quantum micro-operations; and executing multiple sets of the quantum micro-operations in parallel based on the quantum circuit slices.

Example 12. The method of example 11, further comprising: transmitting a first set of control signals to a quantum controller in response to performing a first set of quantum micro-operations associated with a first quantum slice, the first set of control signals to cause the quantum controller to generate analog waveforms to modify multiple qubits of a quantum processor in parallel.

Example 13. The method of example 12, wherein each slice is to be identified based on a slice flag field in each quantum instruction.

Example 14.
The method of example 13, wherein a first value in the slice flag field identifies its associated instruction as the beginning of a slice, a second value in the slice flag field identifies its associated instruction as the end of the slice, and a third value in the slice flag field identifies its associated instruction as being within the slice.

Example 15. The method of example 14, wherein a fourth value in the slice flag field identifies its associated instruction as a single-instruction slice.

Example 16. The method of example 13, wherein each slice flag field comprises a 2-bit instruction field, and each quantum instruction comprises a 32-bit instruction.

Example 17. The method of example 11, wherein the first set of quantum micro-operations includes a first quantum rotation operation and a second quantum rotation operation.

Example 18. The method of example 17, wherein the first set of control signals is to cause the quantum controller to generate a first analog waveform to perform the first quantum rotation operation on a first qubit, and to generate, in parallel with the first analog waveform, a second analog waveform to perform the second quantum rotation operation on a second qubit.

Example 19. The method of example 11, wherein the first set of quantum micro-operations includes a first two-qubit controlled NOT gate and a second two-qubit controlled NOT gate.

Example 20. The method of example 19, wherein the first set of control signals is to cause the quantum controller to generate a first set of analog waveforms to implement the first and second two-qubit controlled NOT gates.

Example 21. A machine-readable medium having program code stored thereon which, when executed by a machine, causes the machine to perform the operations of: fetching a plurality of quantum instructions from a memory or a cache; identifying quantum circuit slices comprising a plurality of groups of one or more quantum instructions of the plurality of quantum instructions; decoding the quantum instructions to generate quantum micro-operations; and executing multiple sets of the quantum micro-operations in parallel based on the quantum circuit slices.

Example 22. The machine-readable medium of example 21, further comprising program code that causes the machine to perform the operations of: transmitting a first set of control signals to a quantum controller in response to performing a first set of quantum micro-operations associated with a first quantum slice, the first set of control signals to cause the quantum controller to generate analog waveforms to modify multiple qubits of a quantum processor in parallel.

Example 23. The machine-readable medium of example 22, wherein each slice is to be identified based on a slice flag field in each quantum instruction.

Example 24. The machine-readable medium of example 23, wherein a first value in the slice flag field identifies its associated instruction as the beginning of a slice, a second value in the slice flag field identifies its associated instruction as the end of the slice, and a third value in the slice flag field identifies its associated instruction as being within the slice.

Example 25. The machine-readable medium of example 24, wherein a fourth value in the slice flag field identifies its associated instruction as a single-instruction slice.

Example 26.
The machine-readable medium of example 23, wherein each slice flag field comprises a 2-bit instruction field, and each quantum instruction comprises a 32-bit instruction.

Example 27. The machine-readable medium of example 21, wherein the first set of quantum micro-operations includes a first quantum rotation operation and a second quantum rotation operation.

Example 28. The machine-readable medium of example 27, wherein the first set of control signals is to cause the quantum controller to generate a first analog waveform to perform the first quantum rotation operation on a first qubit, and to generate, in parallel with the first analog waveform, a second analog waveform to perform the second quantum rotation operation on a second qubit.

Example 29. The machine-readable medium of example 21, wherein the first set of quantum micro-operations includes a first two-qubit controlled NOT gate and a second two-qubit controlled NOT gate.

Example 30. The machine-readable medium of example 29, wherein the first set of control signals is to cause the quantum controller to generate a first set of analog waveforms to implement the first and second two-qubit controlled NOT gates.

Embodiments of the present invention may include the various steps described above. The steps may be embodied in machine-executable instructions that can be used to cause a general-purpose or special-purpose processor to perform the steps. Alternatively, these steps may be performed by specific hardware components that contain hardwired logic for performing the steps, or by any combination of programmed computer components and custom hardware components.

As described herein, instructions may refer to specific configurations of hardware, such as application-specific integrated circuits (ASICs) configured to perform certain operations or having predetermined functionality, or to software instructions stored in a memory embodied in a non-transitory computer-readable medium. Thus, the techniques shown in the figures can be implemented using code and data stored and executed on one or more electronic devices (for example, end stations, network elements, etc.). Such electronic devices store and communicate code and data (internally and/or with other electronic devices over a network) using computer machine-readable media, such as non-transitory computer machine-readable storage media (for example, magnetic disks; optical disks; random access memory; read-only memory; flash memory devices; phase-change memory) and transitory computer machine-readable communication media (for example, electrical, optical, acoustical, or other forms of propagated signals, such as carrier waves, infrared signals, digital signals, etc.).

In addition, such electronic devices typically include a set of one or more processors coupled to one or more other components, such as one or more storage devices (non-transitory machine-readable storage media), user input/output devices (for example, a keyboard, a touch screen, and/or a display), and network connections. The coupling of the set of processors and other components is typically through one or more buses and bridges (also called bus controllers). The storage devices and the signals carrying the network traffic respectively represent one or more machine-readable storage media and machine-readable communication media. Thus, the storage device of a given electronic device typically stores code and/or data for execution on the set of one or more processors of that electronic device.
Of course, one or more parts of an embodiment of the present invention may be implemented using different combinations of software, firmware, and/or hardware. Throughout this detailed description, for purposes of explanation, numerous specific details were set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced without some of these specific details. In certain instances, well-known structures and functions were not described in detail so as to avoid obscuring the subject matter of the present invention. Accordingly, the scope and spirit of the present invention should be judged in terms of the claims that follow. |
FinFET transistors (102), (104), P-N junctions (150) and methods (400) for forming the same are described herein. In one example, a FinFET transistor (102, 104) is described that includes a channel region (214) wrapped by a metal gate (208), the channel region (214) connecting source and drain regions (210), (212). A first oxide isolation layer (112) is disposed on a first side of the fin (202) and a second oxide isolation layer (114) is disposed on a second side of the fin (202), where the second side is opposite of the first side. The second oxide isolation layer (114) has a thickness (284) greater than a thickness (280) of the first oxide isolation layer (112). |
CLAIMS

What is claimed is:

1. A P-N junction comprising: a first P-type FinFET transistor; a first N-type FinFET transistor disposed adjacent the first P-type FinFET transistor; and a first oxide isolation layer laterally separating the first N-type FinFET transistor from the adjacent first P-type FinFET transistor, the first oxide isolation layer having a thickness of greater than 150nm.

2. The P-N junction of claim 1, wherein the first P-type FinFET transistor comprises: a second oxide isolation layer disposed on a side of the first P-type FinFET transistor opposite the first oxide isolation layer, the second oxide isolation layer having a thickness of less than half of a thickness of the first oxide isolation layer.

3. The P-N junction of claim 2, wherein the thickness of the first oxide isolation layer is at least three times the thickness of the second oxide isolation layer.

4. The P-N junction of claims 1-3, wherein the βNPN·βPNP product gain of the P-N junction is less than 1.

5. The P-N junction of claim 1 further comprising: a second P-type FinFET transistor disposed adjacent the first P-type FinFET transistor; and a second oxide isolation layer laterally separating the first P-type FinFET transistor from the adjacent second P-type FinFET transistor, the second oxide isolation layer having a thickness of less than half a thickness of the first oxide isolation layer.

6. The P-N junction of claim 5, wherein the thickness of the second oxide isolation layer is less than 80nm and the thickness of the first oxide isolation layer is greater than 200nm.

7. The P-N junction of claim 5, wherein a width of the first oxide isolation layer defined between the first P-type FinFET transistor and the adjacent first N-type FinFET transistor is greater than a width of the second oxide isolation layer defined between the first P-type FinFET transistor and the adjacent second P-type FinFET transistor.

8. The P-N junction of claim 1 further comprising: a second N-type FinFET transistor disposed adjacent the first N-type FinFET transistor; and a second oxide isolation layer laterally separating the first N-type FinFET transistor from the adjacent second N-type FinFET transistor, the second oxide isolation layer having a thickness less than the thickness of the first oxide isolation layer.

9. The P-N junction of claim 8, wherein the thickness of the second oxide isolation layer is less than 80nm and the thickness of the first oxide isolation layer is greater than 200nm.

10. The P-N junction of claim 8, wherein a width of the first oxide isolation layer is greater than a width of the second oxide isolation layer.

11. The P-N junction of claim 1 further comprising: a second FinFET transistor disposed adjacent one of the first P-type FinFET transistor and the first N-type FinFET transistor, the second FinFET transistor being of the same type as the closer of the first P-type FinFET transistor and the first N-type FinFET transistor; and a second oxide isolation layer laterally separating the second FinFET transistor from the adjacent one of the first P-type FinFET transistor and the first N-type FinFET transistor, the second oxide isolation layer having a thickness substantially equal to the thickness of the first oxide isolation layer.

12.
A method for forming a P-N junction, the method comprising: etching a semiconductor substrate to form a plurality of high aspect ratio fins, the plurality of high aspect ratio fins including a first high aspect ratio fin and a second high aspect ratio fin separated by a first high aspect ratio trench; filling the first high aspect ratio trench with an oxide material; removing a portion of the oxide material filling the first high aspect ratio trench; and stopping the removal of the oxide material filling the first high aspect ratio trench to form a first oxide isolation layer having a thickness of at least 150nm.

13. The method of claim 12, wherein etching the semiconductor substrate to form the plurality of high aspect ratio fins further comprises: forming the first high aspect ratio fin in a p-doped region of the semiconductor substrate; and forming the second high aspect ratio fin in an n-doped region of the semiconductor substrate, the first and second high aspect ratio fins separated by the first high aspect ratio trench.

14. The method of claim 13 further comprising: forming a third high aspect ratio fin of the plurality of high aspect ratio fins in a p-doped region of the semiconductor substrate adjacent the first high aspect ratio fin; and forming a second oxide isolation layer having a thickness less than 100nm between the first and third high aspect ratio fins.

15. The method of claim 13 further comprising: filling a second high aspect ratio trench etched in the semiconductor substrate with an oxide material; removing a portion of the oxide material filling the second high aspect ratio trench; and stopping the removal of the oxide material filling the second high aspect ratio trench to form a second oxide isolation layer having a thickness of less than half of the thickness of the first oxide isolation layer. |
FINFET TECHNOLOGY USING DEEP ISOLATION

TECHNICAL FIELD

Embodiments of the present invention generally relate to FinFET transistors, P-N junctions and methods for forming the same. More particularly, embodiments of the present invention relate to FinFET transistors and P-N junctions having deep oxide isolation layers.

BACKGROUND

FinFET transistors have begun to replace traditional planar transistors in next generation electronic devices due to their ability to enhance the control of current flowing between the source and drain regions of the transistors at smaller nanometer nodes. Devices, such as memory structures, also benefit from the use of FinFET transistors because FinFET transistors have lower power and provide increased transistor density while enabling improved device performance.

Memory structures that use FinFET transistors remain susceptible to single event latch-ups (SEL), just like planar transistors. Latch-up in CMOS technologies is caused by the triggering of a parasitic p-n-p-n SCR (silicon controlled rectifier) structure. SEL is caused by transient currents originating from charges generated along the track of an incident charged particle. Neutrons are the primary cause of SEL in terrestrial applications. Conventional SEL mitigation techniques for planar transistors aim to decouple or weaken elements of the parasitic SCR structure. Such techniques are typically associated with an area penalty that can be tolerated for a given application. Until recently, both CMOS and the underlying SEL device physics have scaled together in planar transistors, thus allowing predictable SEL results for a given design flow. However, this has changed with the recent introduction of FinFET technology, as it has been observed that the failure rate associated with SEL events in FinFET transistors is generally higher than that of planar transistors.

Thus, there is a need for an improved FinFET transistor.

SUMMARY

FinFET transistors, P-N junctions and methods for forming the same are described herein. In one example, a FinFET transistor is described that includes a fin having a channel region wrapped by a metal gate, the channel region connecting a source region and a drain region of the fin. A first oxide isolation layer is disposed on a first side of the fin and a second oxide isolation layer is disposed on a second side of the fin, where the second side is opposite of the first side. The second oxide isolation layer has a thickness greater than a thickness of the first oxide isolation layer.

In another example, a P-N junction is described. The P-N junction includes a first P-type FinFET transistor, a first N-type FinFET transistor and a first oxide isolation layer. The first N-type FinFET transistor is disposed adjacent the first P-type FinFET transistor. The first oxide isolation layer laterally separates the first N-type FinFET transistor from the adjacent first P-type FinFET transistor. The first oxide isolation layer has a thickness of at least 150 nm.

In still another example, a P-N junction is described that includes a first P-type FinFET transistor, a first N-type FinFET transistor and a first oxide isolation layer. The first N-type FinFET transistor is disposed adjacent the first P-type FinFET transistor. The first oxide isolation layer laterally separates the first N-type FinFET transistor from the adjacent first P-type FinFET transistor.
The P-N junction has a βNPN·βPNP product gain of less than 1.

In yet another example, a method for forming a P-N junction is described that includes etching a semiconductor substrate to form a plurality of high aspect ratio fins, the plurality of high aspect ratio fins including a first high aspect ratio fin and a second high aspect ratio fin separated by a first high aspect ratio trench, filling the first high aspect ratio trench with an oxide material, removing a portion of the oxide material filling the first high aspect ratio trench, and stopping the removal of the oxide material filling the first high aspect ratio trench to form an oxide isolation layer having a thickness of at least 150 nm.

BRIEF DESCRIPTION OF THE DRAWINGS

So that the manner in which the above recited features of the present invention can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of this invention and are therefore not to be considered limiting of its scope, for the invention may admit to other equally effective embodiments.

Figure 1 is a schematic sectional view of an electronic device having a P-N junction that includes FinFET transistors.

Figure 2 is an isometric view of a portion of the electronic device of Figure 1 illustrating a P-type FinFET transistor disposed adjacent an N-type FinFET transistor.

Figures 3A-3H are sectional views of a film stack during different stages of a sequence for forming the electronic device of Figure 1 having adjacent P-type and N-type FinFET transistors.

Figure 4 is a block diagram of a method for forming an electronic device having adjacent P-type and N-type FinFET transistors.

To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures. It is contemplated that elements of one embodiment may be beneficially incorporated in other embodiments.

DETAILED DESCRIPTION

FinFET technology has significantly improved CMOS performance and has enabled Moore's Law scaling down to advanced nodes of 7 nm and beyond. The manufacturing of FinFET transistors required a significant change in the geometry of the shallow trench isolation (STI). The purpose of the STI is to electrically isolate adjacent transistors. Advanced planar CMOS transistors have STI depths in the range of about 200 to about 250 nm. For FinFET technologies, the exposed silicon fin is formed by etching back the STI, which results in an STI depth of between about 70 to about 80 nm. FinFET designs can expect even further STI depth reduction with continued CMOS scaling. From planar to FinFET designs, the approximately 3 times reduction in the STI depth has significantly reduced the minimum substrate path between the source/drain regions of adjacent pMOS and nMOS transistors. This does not deteriorate p/nMOS isolation during normal CMOS operation, when all p-n junctions are under reverse bias. However, the reduced substrate path between adjacent junctions has been found to allow triggering of parasitic SCR latch-up, when the junctions of both pMOS and nMOS transistors can be forward biased during an SEL transient.
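For context, a first-order condition from standard latch-up theory (consistent with the product-gain language in the summary above, though the exact criterion used by the inventors is not stated here) is that the parasitic SCR cannot sustain regenerative feedback when the loop gain of the coupled bipolar structures is below unity:

$$ \beta_{NPN} \cdot \beta_{PNP} < 1 $$

where \beta_{NPN} and \beta_{PNP} are the common-emitter current gains of the parasitic NPN and PNP bipolar transistors formed between the wells and the source/drain regions. Lengthening the substrate path between adjacent junctions, for example with a deeper isolation layer, tends to degrade these gains and therefore their product.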
Conventional FinFET transistors are particularly more susceptible to SEL events caused by high energy particle strikes than conventional planar transistors. High energy particles include neutrons, thermal neutrons, alpha particles and the like. In particular, the inventors have observed that 10 times less energy is needed to cause an SEL event due to high energy particle strikes on conventional FinFET transistors as compared to conventional planar transistors. The inventors have discovered a strong dependence between the oxide isolation thickness between N-type and P-type FinFET transistors and the probability of high energy particle strike SEL events. Thus, the disclosure herein describes techniques for improving the resistance of electronic devices employing FinFET transistors to SEL events by selectively increasing the oxide isolation thickness to almost 2-3 times that of conventional FinFET transistors. Moreover, while the oxide isolation thicknesses between N-type and P-type FinFET transistors are increased, shallower oxide isolation thicknesses between FinFET transistors of the same type may be maintained. Thus, electronic devices with robust resistance to SEL events may be realized with a minimal increase in fabrication costs. Moreover, the novel FinFET transistors described herein are up to 10 times less susceptible to SEL events than traditional FinFET transistors, desirably approaching and even equaling that of planar transistors.

Figure 1 is a schematic diagram of one example of an electronic device 100 having a P-N junction 150 defined between an N-type FinFET transistor 102 and an adjacent P-type FinFET transistor 104. In the example of Figure 1, the electronic device 100 is configured as a CMOS device. However, the FinFET transistors 102, 104 may be configured for use in other types of devices that include both N-type and P-type FinFET transistors 102, 104.

The N-type and P-type FinFET transistors 102, 104 are formed on a semiconductor substrate 106. The FinFET transistors 102, 104 may be formed by additive or subtractive techniques, including techniques currently known or developed in the future.

The substrate 106 may be a silicon substrate or a substrate comprised of another suitable material. The substrate 106 includes a P-well 152 and an N-well 154. In the example depicted in Figure 1, the N-well 154 is illustrated as formed on the P-well 152. However, the P-well 152 may alternatively be formed on the N-well 154, or the P-well 152 may be laterally spaced from the N-well 154, for example in a twin-tub configuration. The P-well 152 and the N-well 154 may be formed using ion implantation, diffusion or another suitable technique. In one example, the P-well 152 is doped with boron, while the N-well 154 is doped with phosphorus.

In the example depicted in Figure 1, there are at least two N-type FinFET transistors 102 formed on the P-well 152. There are also at least two P-type FinFET transistors 104 formed on the N-well 154. One of the N-type FinFET transistors 102 is disposed adjacent to one of the P-type FinFET transistors 104. An oxide isolation layer 112 is disposed in a trench 108 formed between each pair of adjacent FinFET transistors of the same type. For example, the oxide isolation layer 112 is disposed between each pair of adjacent N-type FinFET transistors 102. The oxide isolation layer 112 is also disposed between each adjacent pair of P-type FinFET transistors 104. An oxide isolation layer 114 is disposed in a trench 110 formed between adjacent FinFET transistors of different types.
For example, the oxide isolation layer 114 is disposed between the N-type FinFET transistor 102 and the adjacent P-type FinFET transistor 104. The depth of a portion of the trench 110 containing oxide material is at least double the depth of a portion of the trench 108 containing oxide material, thus making the thickness of the oxide isolation layer 114 at least double the thickness of the oxide isolation layer 112. The deeper trench 110 and thicker oxide isolation layer 114 provide excellent resistance against SEL events across the P-N junction 150, as further discussed below. Additional details of the P-N junction 150 are illustrated in the isometric view of a portion of the electronic device 100 of Figure 1 depicted in Figure 2. As shown in Figure 2, the N-type FinFET transistor 102 includes a high aspect ratio fin 202 and a metal gate 208, both of which extend upwards from the substrate 106. The fin 202 may be formed by additive or subtractive techniques. In one example, the fin 202 may be formed from silicon, silicon germanium, germanium or III-V material. The fin 202 may optionally be covered with a thin oxide capping layer 206.

The oxide isolation layer 112 is formed on the substrate 106 between the fins 202 of the N-type FinFET transistors 102. In one example, the oxide isolation layer 112 is formed in the trench 108 defined between the fins 202. The oxide isolation layer 112 is formed from one or more of silicon oxide, silicon nitride, silicon oxynitride, fluoride-doped silicate glass (FSG), low-k dielectric, or other suitable material. Similarly, the oxide isolation layer 114 is formed on the substrate 106, such as in the trench 110 defined between the fin 202 of the N-type FinFET transistor 102 and a high aspect ratio fin 252 of the P-type FinFET transistor 104. The oxide isolation layer 114 may be comprised of the same materials suitable for use as the oxide isolation layer 112.

The metal gate 208 generally has a fin-shape that is perpendicular to a plane of the substrate 106 and is also perpendicular to a plane of the fin 202. The metal gate 208 surrounds a portion of the fin 202 and separates a source region 212 of the fin 202 from a drain region 210 of the fin 202. The source and drain regions 212, 210 are generally aligned in a common plane extending perpendicular to a plane of the substrate 106. The source and drain regions 212, 210 also are oriented perpendicular to a plane of the metal gate 208.

The metal gate 208 wraps around a channel region 214 defined between the source and drain regions 212, 210. The channel region 214 is formed from the same material as the regions 212, 210, as the channel region 214 is an integral part of the fin 202. When the metal gate 208 is energized, current flows through the channel region 214 from the source region 212 to the drain region 210.

The metal gate 208 is formed from a gate electrode disposed over a gate dielectric material. The gate dielectric material separates the gate electrode from the channel region 214. The gate electrode may be polysilicon, Ta, TiN, TiAlN, TiSiN, TaN, TaAlN, TaSiN, W, WN, Re, Ir, Ru, Mo, Al, Cu, Co, Ni, WN/RuO2, ZrSi2, MoSi2, TaSi2, NiSi2, or other suitable material.

The gate dielectric material may be a high-K oxide, such as a hafnium based material. Examples of hafnium based materials that are suitable for use as the gate dielectric material include HfOx, HfSiOx, HfSiON, HfZrO, HfLaO, HfTaO, HfTiO and the like.
Alternatively, the gate dielectric material may be LaO, AlO, ZrO, ZrO2, ZrSiO2, LaSiO, AlSiO, TiO, Ta2O5, Ta2O3, Y2O3, STO, BTO, BaZrO, or other suitable material. In one example, the metal gate 208 is formed from a polysilicon gate electrode disposed over an HfOx gate dielectric material.

The metal gate 208 may also include additional layers, such as capping layers and interfacial layers. For example, a capping layer may be disposed between the gate dielectric material and the metal gate material. The capping layer may be lanthanum oxide, LaSiO, manganese oxide, aluminum oxide, or other suitable material. The capping layer may have a thickness ranging from about 3 to about 10 angstroms. In another example, an interfacial layer may be disposed between the gate dielectric material and the channel region 214. The interfacial layer may have a thickness ranging from about 3 to about 10 angstroms. The interfacial layer may be an oxide, such as silicon oxide or silicon oxynitride. Alternatively, the interfacial layer may be silicon nitride or other suitable material.

The P-type FinFET transistor 104 includes the fin 252 and a metal gate 258, both of which extend upwards from the substrate 106. As with the fin 202, the fin 252 may be formed by additive or subtractive techniques. In one example, the fin 252 may be formed from silicon, silicon germanium, germanium or III-V material. The fin 252 may optionally be covered with a thin oxide capping layer 256.

The metal gate 258 generally has a fin-shape that is perpendicular to a plane of the substrate 106 and is also perpendicular to a plane of the fin 252. The metal gate 258 surrounds a portion of the fin 252 and separates a source region 262 of the fin 252 from a drain region 260 of the fin 252. The source and drain regions 262, 260 are generally aligned in a common plane extending perpendicular to a plane of the substrate 106. The source and drain regions 262, 260 also are oriented perpendicular to a plane of the metal gate 258. The metal gate 258 wraps around a channel region 264 defined between the source and drain regions 262, 260. The channel region 264 is formed from the same material as the regions 262, 260, as the channel region 264 is an integral part of the fin 252. When the metal gate 258 is energized, current flows through the channel region 264 from the source region 262 to the drain region 260.

The metal gate 258 is formed from a gate electrode disposed over a gate dielectric material. The gate dielectric material separates the gate electrode from the channel region 264. The metal gate 258 is constructed similarly to the metal gate 208 described above, and may also include additional layers, such as capping layers and interfacial layers, as described above with reference to the metal gate 208.

The N-type FinFET transistors 102 are separated by a pitch or distance 282. In one example, the distance 282 is about 42 nm. The N-type FinFET transistor 102 is separated from the P-type FinFET transistor 104 by a distance 286. The distance 286 is generally larger than the distance 282 to accommodate fabrication of the deeper oxide isolation layer 114. For example, the oxide isolation layer 114 has a thickness 284 that is greater than a thickness 280 of the oxide isolation layer 112. In one example, the thickness 284 is at least about twice the thickness 280 of the oxide isolation layer 112. In another example, the thickness 284 is at least three times the thickness 280 of the oxide isolation layer 112.
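As a worked numeric check (not part of the original disclosure; it simply combines the representative values given in this description):

```latex
% Same-type isolation thickness from the examples: t_{280} \approx 70\text{--}80~\mathrm{nm}.
\[
t_{280} = 75~\mathrm{nm} \;\Rightarrow\;
t_{284} \ge 2 \times 75~\mathrm{nm} = 150~\mathrm{nm},
\qquad
t_{284} \ge 3 \times 75~\mathrm{nm} = 225~\mathrm{nm},
\]
```

which is consistent with the at least 150 nm recited above and with the 200-250 nm range given below for the oxide isolation layer 114.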
It is contemplated that the distance 286 defining the width of the trench 110 and oxide isolation layer 114 may be tapered or stepped such that a width at a bottom of the trench 110 is much less than the width at the portion of the trench 110 at which the oxide isolation layer 114 is exposed opposite the substrate 106. For example, the width at the bottom of the trench 110 may be about the same as the distance 282.

In the example depicted in Figure 2, the thickness 280 of the oxide isolation layer 112 is less than about 100 nm, such as between 70-80 nm. In contrast, the thickness 284 of the oxide isolation layer 114 is greater than 150 nm, such as between 200-250 nm. Stated in another manner, the thickness 284 of the oxide isolation layer 114 is at least twice the thickness 280 of the oxide isolation layer 112. In one example, the thickness 284 of the oxide isolation layer 114 is at least 2.5 times the thickness 280 of the oxide isolation layer 112. In yet another example, the thickness 284 of the oxide isolation layer 114 is at least 3 times the thickness 280 of the oxide isolation layer 112. The deep thickness 284 of the oxide isolation layer 114 assists in preventing charged particles from traveling between the wells 152, 154, thus increasing the resistance to SEL events. In one example, the SEL resistance due to the thickness 284 of the oxide isolation layer 114 across the P-N junction 150 is about 10 times greater than that of conventional FinFET designs.

It should be appreciated that the thickness of the oxide isolation layer 114 selected to improve the resistance to SEL events may be different depending on the technology node and critical dimensions of the FinFETs comprising the P-N junction 150, and the expected energy levels of the particles present in the environment for which the device was designed for use. For example, terrestrial applications encounter particles having much lower energy levels than applications that are designed to be utilized in hardened or non-terrestrial applications. The thickness 284 of the oxide isolation layer 114 described above has proven suitable for terrestrial applications for FinFETs manufactured utilizing the 16 nm technology node. Non-terrestrial applications, such as aerospace or other applications requiring hardening against higher energy particles (relative to normally encountered terrestrial particles), at the same technology node would generally have a thicker oxide isolation layer 114.

The improved resistance to SEL events achieved utilizing the techniques disclosed herein may also be characterized as reducing the βnpn·βpnp product gain of the parasitic SCR compared to conventional designs using FinFET technology. Generally, βnpn and βpnp are the gains of the two transistors in the feedback loop of the parasitic SCR. Maintaining the βnpn·βpnp product gain at less than 1 will prevent latch-up. The beta gains for the parasitic bipolars are strong functions of the distance in the SCR current path. Since deeper STI increases this distance, it reduces the βnpn·βpnp product gain. The bipolar transistor beta gains also depend on the currents in the bipolar transistors of the parasitic SCR. The higher the currents, the higher the βnpn·βpnp product gain.
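Stated formally (a restatement of the relationships described above; the functional dependencies shown are schematic, not formulas from the original):

```latex
\[
\beta_{npn}\,\beta_{pnp} < 1 \quad\Longrightarrow\quad \text{no latch-up},
\]
\[
\beta_{npn}\,\beta_{pnp} = f\!\left(d_{\mathrm{SCR}},\, I\right),
\qquad
\frac{\partial\!\left(\beta_{npn}\beta_{pnp}\right)}{\partial d_{\mathrm{SCR}}} < 0,
\qquad
\frac{\partial\!\left(\beta_{npn}\beta_{pnp}\right)}{\partial I} > 0,
\]
```

where d_SCR is the substrate current-path length set by the isolation depth, and I is the current in the parasitic bipolars, which is proportional to the charge deposited by an ion strike.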
Since such currents are proportional to the deposited charge from an ion strike, the thickness 284 of the oxide isolation layer 114 may be selected such that the βnpn·βpnp product gain is less than a predefined design and radiation environment threshold, such as less than 1 for common terrestrial radiation environments. Higher energy ion strikes encountered in space radiation environments will deposit significantly more charge and cause higher currents in the parasitic bipolar transistors. This in turn will raise the βnpn·βpnp product gain above 1, and the same thickness 284 may not be sufficient to prevent SEL in such space radiation environments. A greater thickness 284 may be required to prevent SEL in such high energy radiation environments.

Figures 3A-3H are sectional views of a film stack during different stages of a sequence for forming the electronic device 100 of Figure 1 having adjacent N-type and P-type FinFET transistors 102, 104. Figure 4 is a block diagram of a method 400 for forming an electronic device, such as the electronic device 100 having adjacent N-type and P-type FinFET transistors 102, 104, such as by the sequence illustrated in Figures 3A-3H. It is contemplated that the method 400 may be utilized to form other electronic devices having P-N junctions 150.

The method 400 begins at operation 402 by patterning a first mask layer 300 on a substrate, such as the substrate 106, as illustrated in Figure 3A. The N-well and P-well are not illustrated in Figures 3A-3H to avoid cluttering the figures. The first mask layer 300 includes a plurality of openings 302 through which regions 304 of the substrate 106 are exposed for etching and trench formation. The first mask layer 300 may be a photoresist mask, a hard mask or a combination thereof.

At operation 404, the exposed regions 304 of the substrate 106 are etched to form trenches 108, as illustrated in Figure 3B. The trenches 108 formed in the substrate 106 are fabricated by dry (e.g., plasma) etching. Suitable etchants include halogens and halogen containing compounds such as Cl2, CF4, SF6, NF3, and CCl2F2, among others. Wet etching or another suitable technique may alternatively be utilized. Suitable wet etchants include nitric acid (HNO3) and hydrofluoric acid (HF), potassium hydroxide (KOH), ethylenediamine pyrocatechol (EDP) and tetramethylammonium hydroxide (TMAH), among others.

The material of the substrate 106 remaining between the trenches 108 forms the fins 202, 252. The distance 282 between fins 202 is less than the distance 286 between a pair of adjacent fins 202, 252. The distance 282 may be no more than half the distance 286, such as no more than a quarter of the distance 286. The larger distance 286 between the pair of adjacent fins 202, 252 allows the trench 110 to be much deeper than the trenches 108, thereby facilitating a thicker oxide isolation layer 114 to be disposed in the trench 110 relative to the oxide isolation layer 112 disposed in the trench 108, as further illustrated in later operations of the method 400 described below.

At operation 406, the first mask layer 300 is removed, as illustrated in Figure 3C. In one example, the first mask layer 300 is removed by an ashing process, such as by exposure to an oxygen containing plasma, or another suitable method.

At operation 408, a second mask layer 320 is disposed on the fins 202, 252 and trenches 108. The second mask layer 320 is patterned to form an opening 322 through which the substrate 106 may be etched, as illustrated in Figure 3D.
The second mask layer 320 may be fabricated and patterned from materials and by techniques such as those described with reference to the first mask layer 300.

At operation 410, the substrate 106 is etched through the opening 322 in the second mask layer 320 to form the trench 110. As illustrated in Figure 3E, the trench 110 is deeper than the trench 108. Although not to scale, the trench 110 is at least two times deeper than the trench 108, and even as much as 2.5 or more times deeper than the trench 108. Additionally, the trench 110 is at least about two times wider than the trench 108, such as at least 3 to 4 times wider than the trench 108. The wider trench 110 facilitates forming a deeper trench 110, such that more oxide isolation material may be utilized for improved upset resistance from high energy particle strikes. After etching, the second mask layer 320 is removed, for example, by ashing in the presence of an oxygen containing plasma or another suitable method.

At operation 412, the trenches 108, 110 are filled with oxide material to form the oxide isolation layers 112 and the oxide isolation layer 114, as illustrated in Figure 3F. The oxide isolation layers 112, 114 may be deposited utilizing spin-on, chemical vapor deposition, atomic layer deposition or another suitable technique. A top surface of the oxide isolation layers 112, 114 may be made coplanar with the top surface of the fins 202, 252, for example, using an etch back, chemical mechanical polishing or another suitable planarization technique.

Once the trenches 108, 110 are filled with oxide material, a third mask layer 330 is deposited and patterned on the oxide material to form openings 332. The third mask layer 330 may be fabricated and patterned from materials and by techniques such as those described with reference to the first mask layer 300. At operation 414, a portion of the oxide material forming the oxide isolation layers 112 and the oxide isolation layer 114 is etched through the openings 332 in the third mask layer 330 to set the thickness 280 of the oxide material filling the trenches 108 and the thickness 284 of the oxide material filling the trench 110, as illustrated in Figure 3G.

At operation 416, the third mask layer 330 is removed. The third mask layer 330 may be removed by ashing in the presence of an oxygen containing plasma, or another suitable method. After operation 416, the metal gates 208, 258 are formed over the fins 202, 252 to form the transistors 102, 104 as illustrated in Figures 1 and 2.

Thus, the FinFET transistors 102, 104 and particularly the P-N junction 150 described herein have greater SEL resistance as compared to conventional FinFET transistors and conventional P-N junctions. As the FinFET transistors 102, 104 comprising the P-N junction 150 have a reduced probability of SEL events due to high energy particle strikes as compared to conventional FinFET transistors, the electronic device 100, such as a CMOS or other electronic device, is more robust compared to conventional electronic devices.
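For clarity, the geometric relationships recited for method 400 can be summarized as a small validation routine (an illustrative sketch only; the names and the specific ratios are taken from the examples above, and the function itself is not part of the disclosure):

```c
#include <stdbool.h>

/* Representative geometry from the examples above (dimensions in nm). */
struct isolation_geometry {
    double depth_108, width_108;   /* same-type trench 108               */
    double depth_110, width_110;   /* cross-type (P-N junction) trench 110 */
    double t_280, t_284;           /* oxide thicknesses of layers 112/114 */
};

/* Checks the design rules described above: trench 110 at least ~2x
 * deeper and ~2x wider than trench 108, and the oxide isolation layer
 * 114 at least 2x thicker than layer 112 with a minimum of 150 nm. */
bool meets_deep_isolation_rules(const struct isolation_geometry *g)
{
    return g->depth_110 >= 2.0 * g->depth_108 &&
           g->width_110 >= 2.0 * g->width_108 &&
           g->t_284     >= 2.0 * g->t_280     &&
           g->t_284     >= 150.0;
}
```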
The increased thickness of the oxide isolation layer 114 disposed between the N-type FinFET transistor 102 and the P-type FinFET transistor 104 allows most of the charge from impacting particles to be dissipated in the substrate before diffusing over a large area, due to the relatively thicker material comprising the oxide isolation layer 114 disposed in the deeper trench 110 (as compared to the trenches 108), thus adding extra protection against multi-bit upsets and minimizing the occurrence of uncorrectable events in electronic devices 100 in which the P-N junction 150 is utilized. Advantageously, the FinFET transistors 102, 104 comprising the P-N junction 150 are up to 10 times less susceptible to SEL events than traditional FinFET transistors, desirably approaching and even equaling that of planar transistors.

In one example, FinFET transistors, P-N junctions and methods for forming the same are described herein. Such a FinFET transistor may include: a metal gate; a fin comprising: a source region; a drain region; and a channel region wrapped by the metal gate, the channel region connecting the source and drain regions; a first oxide isolation layer disposed on a first side of the fin; and a second oxide isolation layer disposed on a second side of the fin, the second side opposite of the first side, the second oxide isolation layer having a thickness greater than a thickness of the first oxide isolation layer.

In some such FinFET transistors, the thickness of the second oxide isolation layer may be at least twice the thickness of the first oxide isolation layer.

In some such FinFET transistors, the thickness of the second oxide isolation layer may be between 200 nm and 250 nm.

In another example, a P-N junction is described. Such a P-N junction may include: a first P-type FinFET transistor; a first N-type FinFET transistor disposed adjacent the first P-type FinFET transistor; and a first oxide isolation layer laterally separating the first N-type FinFET transistor from the adjacent first P-type FinFET transistor, the first oxide isolation layer having a thickness of greater than 150 nm.

In some such P-N junctions, the first P-type FinFET transistor may include: a second oxide isolation layer disposed on a side of the first P-type FinFET transistor opposite the first oxide isolation layer, the second oxide isolation layer having a thickness of less than half of a thickness of the first oxide isolation layer.

In some such P-N junctions, the thickness of the first oxide isolation layer may be at least three times the thickness of the second oxide isolation layer.

In some such P-N junctions, the βnpn·βpnp product gain of the P-N junction may be less than 1.

Some such P-N junctions may further include: a second P-type FinFET transistor disposed adjacent the first P-type FinFET transistor; and a second oxide isolation layer laterally separating the first P-type FinFET transistor from the adjacent second P-type FinFET transistor, the second oxide isolation layer having a thickness of less than half a thickness of the first oxide isolation layer.

In some such P-N junctions, the thickness of the second oxide isolation layer may be less than 80 nm and the thickness of the first oxide isolation layer may be greater than 200 nm.
In some such P-N junctions, a width of the first oxide isolation layer defined between the first P-type FinFET transistor and the adjacent first N-type FinFET transistor may be greater than a width of the second oxide isolation layer defined between the first P-type FinFET transistor and the adjacent second P-type FinFET transistor.

Some such P-N junctions may further include: a second N-type FinFET transistor disposed adjacent the first N-type FinFET transistor; and a second oxide isolation layer laterally separating the first N-type FinFET transistor from the adjacent second N-type FinFET transistor, the second oxide isolation layer having a thickness less than the thickness of the first oxide isolation layer.

In some such P-N junctions, the thickness of the second oxide isolation layer may be less than 80 nm and the thickness of the first oxide isolation layer may be greater than 200 nm.

In some such P-N junctions, a width of the first oxide isolation layer may be greater than a width of the second oxide isolation layer.

Some such P-N junctions may further include: a second FinFET transistor disposed adjacent one of the first P-type FinFET transistor and the first N-type FinFET transistor, the second FinFET transistor being of the same type as the closer of the first P-type FinFET transistor and the first N-type FinFET transistor; and a second oxide isolation layer laterally separating the second FinFET transistor from the adjacent one of the first P-type FinFET transistor and the first N-type FinFET transistor, the second oxide isolation layer having a thickness substantially equal to the thickness of the first oxide isolation layer.

In yet another example, a method for forming a P-N junction is described. Such a method for forming a P-N junction may include: etching a semiconductor substrate to form a plurality of high aspect ratio fins, the plurality of high aspect ratio fins including a first high aspect ratio fin and a second high aspect ratio fin separated by a first high aspect ratio trench; filling the first high aspect ratio trench with an oxide material; removing a portion of the oxide material filling the first high aspect ratio trench; and stopping the removal of the oxide material filling the first high aspect ratio trench to form a first oxide isolation layer having a thickness of at least 150 nm.

In some such methods, etching the semiconductor substrate to form the plurality of high aspect ratio fins may further include: forming the first high aspect ratio fin in a p-doped region of the semiconductor substrate; and forming the second high aspect ratio fin in an n-doped region of the semiconductor substrate, the first and second high aspect ratio fins separated by the first high aspect ratio trench.

Some such methods may further include: forming a third high aspect ratio fin of the plurality of high aspect ratio fins in a p-doped region of the semiconductor substrate adjacent the first high aspect ratio fin; and forming a second oxide isolation layer having a thickness less than 100 nm between the first and third high aspect ratio fins.

In some such methods, filling the first high aspect ratio trench with the oxide material may include: filling the first high aspect ratio trench with at least one material selected from the group consisting of silicon oxide, silicon nitride, silicon oxynitride, fluoride-doped silicate glass (FSG), and a low-k dielectric.

In some such methods, the first high aspect ratio fin may be formed from silicon, silicon germanium, germanium or III-V material.

Some such methods may further
include: filling a second high aspect ratio trench etched in the semiconductor substrate with an oxide material; removing a portion of the oxide material filling the second high aspect ratio trench; and stopping the removal of the oxide material filling the second high aspect ratio trench to form a second oxide isolation layer having a thickness less than half of the thickness of the first oxide isolation layer.

While the foregoing is directed to embodiments of the present invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow. |
A memory controller hub has a data stream controller adapted to use a system memory to store graphics data and to control functions of the system memory, a processor interface, a system memory interface, a graphics subsystem coupled to the data stream controller and adapted to perform graphics operations on graphics data, and a graphics port adapted to couple the memory controller hub to an external graphics device. |
1. A memory controller hub, comprising: a data flow controller adapted to use a system memory to store graphics data and to control functions of the system memory; a processor interface; a system memory interface; a graphics subsystem, coupled to the data flow controller, adapted to perform graphics operations on graphics data; and a dedicated graphics port adapted to couple the memory controller hub to an external graphics device, wherein the graphics port is adapted to couple the memory controller hub to the external graphics device through a dedicated bus interface, the external graphics device includes an AGP in-line memory module (AIMM), the graphics port is adapted to transmit graphics data between the system memory and the AIMM through the data flow controller and through the dedicated bus interface, and the dedicated graphics port transmits graphics data between the external graphics device and the system memory interface in AGP mode, and transmits graphics data between the graphics subsystem and a local memory coupled to the external graphics device in Gfx mode.

2. The memory controller hub of claim 1, further comprising a video output port, coupled to the graphics subsystem, adapted to output a video signal from the memory controller hub.

3. The memory controller hub of claim 2, wherein the video output port is adapted to provide video signals directly to a display device.

4. The memory controller hub of claim 2, wherein the video output port includes an analog video output port.

5. The memory controller hub of claim 2, wherein the video output port includes a digital video output port.

6. The memory controller hub of claim 2, wherein the video output port includes an analog video output port and a digital video output port.

7. The memory controller hub of claim 1, wherein the external graphics device includes a graphics controller.

8. The memory controller hub of claim 7, wherein the graphics port is adapted to transmit graphics data between the system memory and the graphics controller through the data flow controller and through the dedicated bus interface.

9. The memory controller hub of claim 7, wherein the graphics controller is adapted to provide video signals to a display device.

10. The memory controller hub of claim 1, further comprising a test module that determines whether an external graphics device is present, and sets the dedicated graphics port to AGP or Gfx mode in response to the test.

11. A computer system, comprising: a CPU; a display device; a system memory adapted to store video data and non-video data; and a memory controller hub, coupled to the CPU and the system memory, adapted to perform memory control and graphics functions.
The memory controller hub includes: a video output port to provide video signals to the display device; and a dedicated graphics port coupling the memory controller hub to an external graphics device, wherein the external graphics device includes an AGP in-line memory module (AIMM), the graphics port is adapted to transfer graphics data between the system memory and the AIMM through the memory controller hub and through a dedicated bus interface, and the dedicated graphics port transfers graphics data between the external graphics device and the system memory interface in AGP mode, and transfers graphics data between the graphics subsystem and a local memory coupled to the external graphics device in Gfx mode.

12. The computer system of claim 11, wherein the video output port is adapted to provide a video signal directly to the display device.

13. The computer system of claim 11, wherein the video output port includes an analog output port.

14. The computer system of claim 11, wherein the video output port includes a digital output port.

15. The computer system of claim 11, wherein the video output port includes an analog output port and a digital output port.

16. The computer system of claim 11, wherein the external graphics device includes a graphics coprocessor.

17. The computer system of claim 16, wherein the graphics port is adapted to transfer graphics data between the system memory and the graphics coprocessor through the memory controller hub and through a dedicated bus interface.

18. The computer system of claim 16, wherein the graphics coprocessor is adapted to provide the video signal to the display device.

19. The computer system of claim 11, further comprising an engine for determining whether the graphics port is coupled to an external graphics device.

20. The computer system of claim 19, wherein the engine is adapted to generate a signal to activate the external graphics device to control the processing of graphics data if an external graphics device is coupled to the graphics port at boot, and the engine is adapted to generate a signal to activate the memory controller hub to control the processing of graphics data if an external graphics device is not coupled to the graphics port at boot.

21. The computer system of claim 11, further comprising a test module that determines whether an external graphics device is present, and sets the dedicated graphics port to AGP or Gfx mode in response to the test. |
Memory controller hub

Technical field

The invention relates to a memory controller hub.

Background technique

Microcomputer systems usually include one or more controller hubs that control and coordinate data transmission between the computer system memory, central processing unit (CPU), and peripheral devices. Graphics applications can be supported by peripheral devices called graphics controllers, which require the memory controller hub to transfer data between the device, system memory, and CPU.

One design consideration related to microcomputer systems is the quality of processing of two-dimensional (2D), three-dimensional (3D), and video images (commonly referred to collectively as "graphics" below). High-performance graphics processing requires deep computing power from the processor and fast manipulation of large amounts of data. Several designs have been implemented to achieve high-performance graphics processing while simultaneously reducing the cost of the entire system and enabling the upgrade of the computer system's capabilities.

The computer system may include a graphics controller coupled to a local memory for storing graphics data, so that the amount of data that must be transferred between the graphics controller and the system memory and/or CPU is reduced. Increasing the amount of local memory provided to the graphics controller improves graphics performance, but also increases the cost of the computer system because local graphics memory is relatively expensive. However, if a dedicated bus such as an accelerated graphics port (AGP) is used to couple the controller to the memory controller, less local memory is required to achieve the same graphics performance. AGP allows the controller to treat part of the system memory as dedicated local graphics memory, which reduces the amount of local memory required and reduces the cost of the entire system.

Computer system costs can also be reduced by omitting the peripheral graphics controller and integrating its functions into the memory controller hub. In such a configuration, the memory controller hub is better described as a graphics/memory controller hub (GMCH) because it performs graphics processing functions in addition to memory control and transmission functions. In addition, it includes one or more output ports to send graphics signals to external devices, such as cathode ray tube (CRT) or flat panel monitors. The local graphics memory can then be omitted.

Summary of the invention

According to a first aspect of the present invention, there is provided a memory controller hub, including: a data flow controller adapted to use a system memory to store graphics data and control functions of the system memory; a processor interface; a system memory interface; a graphics subsystem, coupled to the data flow controller, adapted to perform graphics operations on graphics data; and a dedicated graphics port adapted to couple the memory controller hub to an external graphics device, where the graphics port is adapted to couple the memory controller hub to the external graphics device through a dedicated bus interface. The external graphics device includes an AGP in-line memory module (AIMM).
The graphics port is adapted to transfer graphics data between the system memory and the AIMM through the data flow controller and through the dedicated bus interface. The dedicated graphics port transmits graphics data between the external graphics device and the system memory interface in AGP mode, and between the graphics subsystem and a local memory coupled to the external graphics device in Gfx mode.

According to a second aspect of the present invention, there is provided a computer system, including: a CPU; a display device; a system memory adapted to store video data and non-video data; and a memory controller hub, coupled to the CPU and the system memory, adapted to perform memory control and graphics functions. The memory controller hub includes: a video output port to provide video signals to the display device; and a dedicated graphics port to couple the memory controller hub to an external graphics device, where the external graphics device includes an AGP in-line memory module (AIMM). The graphics port is adapted to transfer graphics data between the system memory and the AIMM through the memory controller hub and through a dedicated bus interface. The dedicated graphics port transfers graphics data between the external graphics device and the system memory interface in AGP mode, and between the graphics subsystem and a local memory coupled to the external graphics device in Gfx mode.

BRIEF DESCRIPTION

FIG. 1 is a schematic block diagram of a computer system.

FIG. 2 is a schematic block diagram of a graphics memory controller hub.

FIG. 3 is a schematic block diagram of an accelerated graphics port (AGP) interface of a graphics memory controller hub.

FIG. 4 is a schematic block diagram of a graphics memory controller hub coupled to an AGP in-line memory module (AIMM).

FIG. 5 is a schematic block diagram of a local memory interface of a graphics memory controller hub.

FIGS. 6A and 6B are signal tables for communication through the AGP interface and through the local memory interface.

FIG. 7 is a schematic block diagram of internal graphics components of the graphics memory controller hub.

FIG. 8 is a flowchart of a method for selecting the AGP mode or the graphics mode used by a graphics memory controller hub.

Detailed description

1. Overview

In some implementations of the invention, the memory controller hub is integrated with an internal graphics controller and can interface with an external graphics device via AGP. Since the memory controller hub controls both graphics and memory functions, it is called a graphics/memory controller hub (GMCH). The GMCH provides internal graphics processing and scalable graphics performance through the AGP interface.

The GMCH can be used in one of two mutually exclusive modes: AGP mode, in which case the GMCH uses its ability to interface with an external graphics controller and its internal graphics function is disabled; or Gfx mode, in which case the GMCH uses its internal graphics capabilities and disables its ability to interface with an external graphics controller. In Gfx mode, the GMCH can still interface with a local memory module via AGP to provide additional graphics memory for internal graphics functions. Whether the GMCH works in AGP mode or Gfx mode can be determined and set automatically during startup of the computer.

Figure 1 shows an exemplary computer system 1 in which the GMCH can be implemented.
Computer system 1 includes a microprocessor (e.g., a central processing unit, or "CPU") 2 coupled to GMCH 3, which serves as a system memory controller hub. GMCH 3 may also be called a "chip set" or "core logic". GMCH 3 provides an interface between the CPU 2 and the system memory 4, and between the CPU 2 and a bus (such as a peripheral component interconnect (PCI) or Hublink™ bus 5). Various input/output (I/O) devices 6 are coupled to the PCI bus 5, and the PCI bus is coupled to the GMCH 3 via an input/output controller hub (ICH) 11. The computer system 1 may also include a graphics device 7, which may be a graphics controller coupled to a local memory 8, or an AGP in-line memory module (AIMM) that provides external local memory for the GMCH 3 internal graphics functions. A shared AGP/local memory interface 9 provides a dedicated interface bus between the GMCH 3 and the graphics device 7. If a graphics device 7 is provided in the computer system, graphics and video signals can be sent from the graphics device 7 to the display device 10; if the graphics device 7 is absent, the signals can be sent from the GMCH 3 to the display device 10.

Figure 2 shows further details of the GMCH 3, including the CPU interface 20 coupled to the AGP interface 21, the local memory interface 22, the input/output (I/O) hub interface 23, and the system memory interface 24. Graphics functions can be performed by the internal graphics component 25, which includes a data flow and distribution controller 26 to manage the data flow through the system memory interface 24, CPU interface 20, I/O hub interface 23, AGP interface 21, and local memory interface 22.

The AGP interface 21 and the local memory interface 22 enable the GMCH 3 to be coupled to the external graphics device 7 via a dedicated bus interface. The AGP interface 21 couples the GMCH 3 to an external graphics controller (not shown), and the local memory interface 22 couples the GMCH 3 to an AIMM card (not shown) for use by the internal graphics controller. The AGP interface 21 and the local memory interface 22 share a physical interface, but the communication protocols and signals through the interface depend on whether the interface is used to couple the data flow and distribution controller 26 to an AGP graphics adapter or an AIMM card.

2. AGP interface

The AGP interface 21 of the GMCH 3 provides a dedicated bus to transmit data and memory access requests between the graphics device 7 and the system memory 4. The AGP bus provides sufficient bandwidth for a graphics controller in a computer system to run complex 3D graphics and full-motion video applications, such as games and structural and engineering simulations. AGP is described in detail in the Accelerated Graphics Port Interface Specification, Revision 2.0 (hereinafter the "AGP specification"), published by Intel Corporation of Santa Clara, California. In addition to AGP compatible devices, PCI compatible devices can also communicate through the AGP interface 21.

Figure 3 is a block diagram showing the AGP function of the GMCH 3. AGP transactions are split transactions, in which the request for a data transfer is decoupled in time from the data transfer itself. The AGP compatible graphics controller (bus master) 7a initiates a transaction with an access request.
The AGP interface 21 responds to the request by performing the corresponding data transfer at a later time, allowing the AGP graphics controller 7a to pipeline several access requests while waiting for data transfers to occur. As a result of pipelining, several read and/or write access requests in the request queue 100 may be outstanding at the same time. Access requests can be pipelined through the AGP address/data bus (AD bus) 105, or transmitted through the AGP 9 sideband address lines 107, and are received by the request queue 100.

The scheduler 102 processes access requests in the request queue 100. Read data is obtained from the system memory 4 and returned via the AD bus 105 of the AGP 9 through the read data return queue 104 under the control of the scheduler 102. When space in the write data queue 108 is available, write data is provided by the AGP compatible graphics controller 7a under the direction of the scheduler 102. Therefore, AGP traffic usually consists of interleaved access requests and data transfers.

The GMCH 3 uses a distributed arbitration model to integrate the functions of the AGP compatible graphics controller 7a with other components connected to the GMCH. Independent buses and interfaces (i.e., the CPU interface 20, AGP interface 21, local memory interface 22, hub interface 23, and system memory interface 24) and distributed arbitration make it possible to initiate multiple transactions simultaneously. As long as the transactions on the independently arbitrated buses do not compete for common resources, they can proceed in parallel. Arbitration algorithms and policies meet specific agent requirements and can benefit different aspects of system performance, such as lower bus/resource acquisition latency, optimized instantaneous peak bandwidth, or optimized sustained bandwidth.

The AGP interface arbiter 106 monitors the external request signal 109, the internal request signal 111 from the CPU interface 20, and the data queue status signals 113 from the scheduler 102. In addition to determining whether the AGP master 7a or the GMCH 3 owns the physical interface, the arbiter 106 indicates to the external graphics device 7a (the AGP master) the type of transaction that can be performed while it owns the interface. The arbitration handshake and the functions of the AGP signals are described in detail in the AGP specification. When space in the write buffer 108 allows, a write access request causes a write data status input to be sent from the scheduler 102 to the arbiter 106. Read data fetched from memory and provided to the read queue 104 for return via the AD bus 105 causes a read data status input to be sent from the scheduler 102 to the arbiter 106.

Since the decisions of the arbiter 106 depend on the states of the read buffer 104 and the write buffer 108, the arbiter works in conjunction with the scheduler 102. The scheduler 102 internally dispatches AGP non-snooped requests to the system memory interface 24, and identifies to the AGP interface arbiter 106 the priority it should use when servicing pending requests and receiving new requests. The scheduler 102 enforces compliance with AGP ordering rules and, together with system memory arbitration logic (not shown), enables high priority requests to be processed as the highest priority events in the system.
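As an illustrative sketch only (not from the patent; the structure, names and priority ordering are assumptions), the queue-status-driven grant decision described above might be modeled in C as follows:

```c
#include <stdbool.h>

/* Hypothetical queue-status inputs, mirroring the signals 109, 111 and
 * 113 described above; the real arbiter 106 is hardware, not software. */
struct agp_status {
    bool ext_request;       /* external request signal (109)              */
    bool cpu_request;       /* internal request from CPU interface (111)  */
    bool read_data_ready;   /* read data waiting in return queue 104      */
    bool write_pending;     /* a write command was previously enqueued    */
    bool write_space_avail; /* space available in write data queue 108    */
};

enum grant { GRANT_NONE, GRANT_READ_RETURN, GRANT_WRITE_DATA, GRANT_BUS };

/* Sketch of one arbitration decision: return queued read data, accept
 * write data when a write is pending and buffer space allows, otherwise
 * grant the bus to a requester. The priority order is an assumption. */
enum grant agp_arbitrate(const struct agp_status *s)
{
    if (s->read_data_ready)
        return GRANT_READ_RETURN;  /* previously requested data returns   */
    if (s->write_pending && s->write_space_avail)
        return GRANT_WRITE_DATA;   /* master may provide enqueued writes  */
    if (s->ext_request || s->cpu_request)
        return GRANT_BUS;          /* start a new AGP/PCI transaction     */
    return GRANT_NONE;             /* GNT# remains deasserted             */
}
```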
3. Local memory interface

Referring to FIG. 4, the local memory interface 22 of the GMCH 3 provides a dedicated 32-bit wide SDRAM channel to transfer graphics data between the internal graphics component 25 of the GMCH 3 and the local graphics memory 202. The local memory interface 22 also manages the control and timing of such transfers. The local memory interface 22 is decoupled from the internal graphics core 25 and can run at frequencies such as 100 megahertz (MHz) and 133 MHz independent of the graphics core frequency.

As noted earlier, the AGP interface 21 and the local memory interface 22 are physically shared, using the same component pins for both interfaces, although only one interface can be supported at any given time. The shared interface reduces the number of pins on the GMCH 3 that would be required to support two independent interfaces, and facilitates routing a motherboard on which the GMCH 3 and the card-based local graphics memory 202 are placed using four board layers. This reduces GMCH cost and board cost. As a result of the shared interface, almost all local memory interface signals can be mapped onto the AGP interface 21. When the GMCH 3 is configured in AGP mode, the shared interface supports the AGP interface 21. When the GMCH 3 is configured in Gfx mode, the interface becomes the local memory interface 22, but the local memory is optional, and an SDRAM device need not be connected to the interface 22.

The local memory can reside on an add-in AIMM card 7b that complies with the AGP form factor. The user can install an AGP graphics card 7a in the AGP slot of a GMCH system, so that the AGP mode system can use the graphics function on the AGP card, or install an AIMM card to enable the highest possible internal graphics performance in Gfx mode. Alternatively, the AGP slot can be left empty to obtain the lowest cost Gfx mode solution. The AIMM card 7b plugs into a standard AGP connector on the computer system motherboard, but instead of providing AGP/PCI functionality, the card contains graphics memory, for example, a 2M × 32 SDRAM device or two 1M × 16 SDRAM devices 202.

Since the local memory interface supports the two frequencies of 100 MHz and 133 MHz, a jumper can be used to determine which frequency is selected. When the AIMM card 7b is inserted into the AGP slot on the motherboard, it notifies the GMCH 3 of its proper operating frequency on one of the pins of the local memory interface 22. The GMCH 3 samples this pin during reset, but the value on this pin can also be overridden by software via a GMCH configuration register.

Because current SDRAM technology uses 3.3 volt (V) logic instead of the 1.5V option supported by AGP, the AIMM card 7b sets a signal on a pin of the local memory interface 22 to indicate that it requires a 3.3V power supply. In addition, the AIMM card should be keyed for 3.3V only, not for 1.5V, in case it is inserted into a connector that supports only 1.5V.

Referring to FIG. 5, the read queue 304 and the write queue 308 in the local memory interface 22 function similarly to the read/write queues of the AGP interface 21. However, the queues 304, 308 have been slightly modified to handle the additional local memory data path. Data is read from the AIMM card 7b into the read data queue 304, and written from the write data queue 308 to the AIMM card 7b, through the local memory interface 22.
The scheduler 302 and the local memory arbiter 306 work together to control the flow of data through the local memory interface 22.

In Gfx mode, the signal on the specific pin of the AGP interface 21 that indicates whether the GMCH is operating in AGP mode should continue to be valid as the reference voltage for sampling the 3.3V local memory data (LMD) inputs, and is the same level as that used in AGP mode.

4. AGP and local memory signals

Pin mapping assignments can be made with the main goal of optimizing the layout of the AIMM card. The AGP signals present on the standard AGP connector serve as the basis for the pin mapping, but special types of AGP signals, such as strobes and any open-drain signals, can be omitted. Similarly, some signals present on the standard AGP connector do not exist on the GMCH AGP interface, so these are not used for LM signals. The pin mapping assignments for the AGP signals and LM signals are listed in the tables shown in FIGS. 6A and 6B.

AGP addressing signals include the PIPE# and SBA signals. The PIPE# signal is a sustained tri-state signal from the master (that is, the graphics controller) to the GMCH 3 that triggers pipelined reads. The current master asserts PIPE# to indicate that a full-width address is to be enqueued by the target. While PIPE# is asserted, the master enqueues one request on every rising edge of the clock. When PIPE# is deasserted, no new requests are enqueued on the AD bus. The SBA signals are sideband address signals sent through the sideband bus 107 and are used to transfer address and command signals from the AGP master to the GMCH 3.

Pipelined reads and sideband addressing are two independent mechanisms used to enqueue requests from AGP. When PIPE# is used to enqueue addresses, the master is not allowed to use the sideband bus 107 to enqueue addresses. During configuration, if the master indicates that it can use either mechanism, the configuration software indicates which mechanism the master will use. The master continues to use the selected mechanism until it is reset and reprogrammed to use the other mode. Mode changes do not happen dynamically, but only when the device is configured for the first time after a reset.

AGP flow control signals include the RBF#, WBF# and ST signals. RBF# (read buffer full) indicates whether the master is ready to receive previously requested low priority read data. RBF# is sampled only at the beginning of a cycle, and when it is asserted, the GMCH is not allowed to return low priority read data to the AGP master on the first block. WBF# (write buffer full) indicates whether the master is ready to receive fast write data from the GMCH 3. WBF# is sampled only at the beginning of a cycle, and when it is asserted, the GMCH 3 is not allowed to drive fast write data to the AGP master. The ST signals provide status information from the arbiter 106 to the AGP master. The ST signals can be used to indicate that previously requested low or high priority read data is being returned to the master, that the master may provide low or high priority write data for write commands that were previously enqueued, or that the master has been granted permission to start a bus transaction.
The ST signals are always outputs from the GMCH 3 and inputs to the AGP master.

AGP FRAME# (PCI) signals include the FRAME#, IRDY#, TRDY#, STOP#, DEVSEL#, REQ#, GNT#, AD, C/BE, and PAR signals, which are based on the PCI signals defined in the PCI Specification, Revision 2.1, but can be redefined when used for AGP transactions.

The GMCH asserts FRAME# during a fast write to indicate the start and duration of the data transfer transaction. REQ# is used to request access to the bus to initiate a PCI or AGP request. For fast write transactions, the GMCH 3 drives IRDY# to indicate that all write data for the current transaction is ready to be provided. Once IRDY# is asserted for the write operation, no wait states may be inserted. For reads, asserting IRDY# indicates that the master is ready to transfer read data. The master can insert wait states between transfers of 32-byte data packets, but not during a transfer. The GMCH 3 deasserts IRDY# to insert wait states between data packets. The AGP master uses TRDY# during fast write transactions to indicate whether and when it can accept the next data packet. If multiple packets are sent, the target is allowed to insert wait states between 32-byte data packet transfers. STOP# is used to signal a disconnect or a target abort. DEVSEL# is used to indicate that the transaction cannot be completed during packet transmission. REQ# is an input to the AGP interface arbiter 106, requesting access to the AGP bus to initiate an AGP or PCI transaction. GNT# is asserted if read data is waiting in the read data return queue 104, or if a write command has been received and space in the write data queue 108 allows. When there is no active input to the AGP interface arbiter 106, GNT# is deasserted. The AD signals are the address and data signals sent over the AD bus 105 and are used to transfer addresses and data between the AGP master and the GMCH 3. When requests are enqueued during pipelined transfers, the C/BE (command/byte enable) signals provide command information, and they provide byte enable information during AGP write transactions. The C/BE signals are not used during the return of read data. PAR is a parity signal used for PCI transactions over the AGP bus; it is not used for AGP transactions.

AGP clock and other signals include the AD_STB, SB_STB, TYPEDET#, RST#, PME# and USB signals. AD_STB (AD bus strobe) provides timing for 2x and 4x data transfers of the AD and C/BE signals on the AD bus 105. SB_STB (sideband strobe) provides timing for 2x and 4x data transfers of the SBA signals on the sideband bus 107. TYPEDET# indicates what type of signaling logic should be used if an AIMM card is inserted in the AGP interface. Since current SDRAM technology is always 3.3V, not the 1.5V option also supported by AGP, the AIMM card should set the TYPEDET# signal correctly (open indicates 3.3V; grounded indicates 1.5V) to indicate that it requires a 3.3V power supply (not grounded). In addition, the AIMM card should have only a 3.3V key, and no 1.5V key, in case it is inserted into a connector that supports only 1.5V. RST# is received from the ICH 11 and is used to reset the AGP interface logic in the GMCH. PME# (power management event) is used to wake the device from the suspended state.
The USB signal is a universal serial bus signal.

Local memory signals include the MA, MD, DQM, CS#, RAS#, CAS#, WE#, FREQ_SEL, and TCLK signals. The MA (memory address) signals provide the multiplexed row and column addresses from GMCH 3 to the local memory 200. The MD (memory data) signals interface with the local memory data bus. The DQM signal controls the memory array and functions as a synchronous output enable during read cycles and a byte enable during write cycles. The CS# (chip select) signal selects the local memory SDRAM components when asserted, and indicates when a valid SDRAM command is present. RAS# and CAS# are the row address strobe and column address strobe, respectively. The WE# (write enable) signal is asserted during writes to the local memory 200. FREQ_SEL indicates whether the local memory 200 operates at 100 MHz or 133 MHz. TCLK is a clock signal sent to the local memory 200.

5. Internal graphics subsystem

Referring to Figure 7, further details of the internal graphics function 25 of GMCH 3 are shown. GMCH 3 and internal graphics 25 retrieve geometry, texture, and frame buffer data from CPU 2, system memory 4, and AIMM card 200 via the memory interface 24 or the local memory interface 22. The internal graphics function 25 also includes a cache 34 to avoid frequent memory reads of recently used texture data.

The overlay stream controller (OSC) 402, display stream controller (DSC) 404, and command stream controller (CSC) 406 manage the data flow and requests to and from agents that communicate with the system memory interface. The stream controllers maintain request coherence, perform limited data caching, and translate addresses to absolute memory addresses as appropriate to the data format.

The 3D pipeline subsystem 30 performs 3D rendering acceleration. Texture maps are loaded into system memory 4 and then read into the 3D pipeline subsystem 30 via the system memory interface 24 and the data flow and distribution controller 26. The 3D pipeline subsystem 30 then converts the per-vertex data into gradients for interpolating data at any pixel within a polygon, for example, color, alpha, z-depth, fog, and texture coordinates. The 3D pipeline subsystem 30 identifies the pixels covered by the polygon, and then calculates the texture address for each pixel.

The bit block transfer engine 31 provides hardware acceleration for bit block transfers (blitting). The bit block transfer engine 31 provides the ability to copy source data blocks from system memory 4 and to perform operations (e.g., raster operations) on data internal to the internal graphics 25. The bit block transfer engine 31 accelerates the display of moving objects on the display device, for example, animation, scrolling, and moving windows in a graphical user interface (GUI). For example, text scrolls faster when the bit block transfer engine 31 copies it as a contiguous block to the next part of the display window instead of processing each character on each line.

A filtering block 410 is shared among several functional engines, for example, the motion compensation, texture filtering, overlay, and stretch bit block engines. The stretch bit block transfer engine is used to move source data to the destination, applying transparency if the source is transparent.
The destination need not be the same size as the source.

The video output sub-portion of GMCH 3 includes a main display engine 27, an overlay engine 28, and a cursor engine 29, all of which feed a display digital-to-analog converter (DAC) 32. The DAC 32 provides analog output to the display device. Video synchronization and timing are programmable and are generated internally by GMCH 3. The overlay engine 28 provides the ability to combine full-motion video streams with frame buffer data. Before being combined with the frame buffer data, the motion video data can be scaled in the horizontal and vertical directions. The display cursor is also integrated with the display stream in this sub-portion. The video display section supports display resolutions from 320 × 200 to 1600 × 1200 pixels, and can perform gamma correction on the video data. Graphics data can also be output through the digital video output port 33 and then processed to drive a flat panel or television display device.

The video capture sub-portion 420 of the graphics function 25 provides capture of digital video. The capture engine can capture data in YUV format and place the data in local memory; the video display controller can use the data directly for its overlay output, or an application can receive a pointer to the data to generate a texture map from it.

6. Choosing between AGP mode and Gfx mode at startup

Referring to FIG. 8, when the computer system is restarted, it automatically detects whether an external graphics controller or an AIMM card is inserted into the shared AGP/local memory interface, and initializes the computer in the appropriate graphics mode.

The computer system 1 is restarted 500 when it is powered on, restarted by the user, or automatically restarted by the computer system. During the early power-on self-test (POST) 502, the system's basic input-output system (BIOS) performs various computer system hardware and software tests, including detecting system memory and performing basic initialization of hardware and software. During POST, the BIOS tests 504 whether an AGP graphics controller is inserted into the AGP slot by performing configuration reads on the PCI bus.

If an AGP-compatible controller is present, it is detected by the system BIOS and used as the graphics controller of the computer system. The computer system is initialized in AGP mode 510, and the AGP/Gfx selection bit in the configuration register is set to 0 to reflect this system configuration. No further initialization of the internal graphics is required.

If no AGP-compatible controller is found, the computer system is initialized to operate in Gfx mode 520 using the internal graphics, and the AGP/Gfx selection bit is set to 1. After selecting Gfx mode, the BIOS tests 522 whether an AIMM card is present. Without an AIMM card, the computer system is initialized to use the internal graphics functions and system memory 526. If an AIMM card is detected, the operating frequency of the local memory interface (100 MHz or 133 MHz) is determined by sampling a signal on the pins of the local memory interface. The system BIOS then empirically determines memory timing options such as the column address strobe latency and the row address strobe precharge cycle. The BIOS starts by programming slower timings and then functionally testing the memory. The BIOS then sets progressively faster timings until a data mismatch occurs during the functional memory test. The fastest setting that remained functional is then selected to optimize performance.
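As a rough illustration of the boot flow just described, the following C sketch mirrors the decision sequence of FIG. 8. It is not firmware from the document: every routine name (probe_agp_slot, probe_aimm_card, program_timing, memory_functional_test) is a hypothetical stand-in.

```c
/* Hypothetical sketch of the POST-time graphics-mode selection and the
 * empirical timing search described above. No real BIOS API is implied. */
#include <stdbool.h>

extern bool probe_agp_slot(void);           /* PCI configuration read (step 504) */
extern bool probe_aimm_card(void);          /* AIMM detection (step 522) */
extern void set_agp_gfx_select_bit(int v);  /* configuration register bit */
extern void program_timing(int cas_latency, int ras_precharge);
extern bool memory_functional_test(void);   /* write/read-back check */

static void post_graphics_init(void)
{
    if (probe_agp_slot()) {
        set_agp_gfx_select_bit(0);          /* AGP mode (step 510) */
        return;                             /* external controller is used */
    }
    set_agp_gfx_select_bit(1);              /* Gfx mode (step 520) */
    if (!probe_aimm_card())
        return;                             /* internal graphics + system memory */

    /* Start with conservative timings, tighten until the functional test
     * fails, and keep the fastest setting that still passed. */
    int cas = 3, best_cas = 3;
    while (cas >= 2) {
        program_timing(cas, 3);
        if (!memory_functional_test())
            break;
        best_cas = cas;
        cas--;
    }
    program_timing(best_cas, 3);
}
```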
Other implementations are also within the scope of the claims. |
Techniques and structures are disclosed in which memory training for DDR or other memory can be performed more rapidly. A memory controller is configured so that one or more memory parameters (e.g., timing delay) can be determined for one or more hardware elements such as delay locked loops (DLLs). Training may be performed without intermediation by (or reporting of results to) a system BIOS. Thus, training may be performed fully in hardware. Voltage training techniques are also disclosed. |
1. A memory controller, comprising:
a control circuit configured to perform a test of a memory element using one or more memory training parameters; and
a parameter adjustment circuit configured to receive intermediate results of the test and to adjust at least one of the one or more memory training parameters according to the intermediate results.

2. The memory controller of claim 1, wherein the at least one of the one or more memory training parameters is a timing parameter; and
wherein the parameter adjustment circuit is configured to determine one or more operating values for the timing parameter according to a plurality of intermediate results.

3. The memory controller of claim 1, wherein the parameter adjustment circuit is configured to adjust the at least one of the one or more memory training parameters to a value other than a given value according to one or more indications that a plurality of read/write tests of the memory element have been completed.

4. The memory controller of claim 3, wherein the memory element includes a plurality of groups of memory bytes; and
wherein the control circuit is configured to write data to the memory element by providing, for each of the plurality of groups of memory bytes, a delay locked loop dedicated to that group of memory bytes.

5. The memory controller of claim 1, wherein the parameter adjustment circuit is configured to perform the test by performing a plurality of read/write tests on the memory element using a plurality of values for at least one of the one or more memory training parameters; and
wherein the memory controller is further configured to determine an operating value for the at least one of the one or more memory training parameters by performing a calculation on results of the plurality of read/write tests.

6. The memory controller of claim 1, wherein the control circuit is configured to perform the test of the memory element by varying a voltage parameter and a timing parameter.

7. The memory controller of claim 6, wherein the parameter adjustment circuit is configured, for each of a plurality of voltage parameter values, to perform a corresponding plurality of read/write tests on the memory element using that voltage parameter value, each of the corresponding plurality of read/write tests using a different timing parameter value.

8. The memory controller of claim 7, wherein the memory controller is configured to determine a plurality of functional timing parameter values for the memory element, each of the plurality of functional timing parameter values corresponding to at least one respective value of the plurality of voltage parameter values.

9. The memory controller of claim 5, wherein the memory controller is configured to determine the operating value by averaging a left edge value and a right edge value.

10.
A method, comprising:
a memory controller performing a plurality of tests of a memory element, wherein an initial test of the plurality of tests uses a first value for a timing parameter, wherein each subsequent test of the plurality of tests uses a respective different value for the timing parameter, and wherein each different value is determined by the memory controller according to results of one or more previously performed tests of the plurality of tests of the memory element; and
the memory controller determining an operating value for the timing parameter according to results of the plurality of tests.

11. The method of claim 10, wherein performing the plurality of tests of the memory element is performed by the memory controller without intermediation by a BIOS device.

12. The method of claim 10, wherein the memory element includes a plurality of groups of memory bytes; and
wherein performing the plurality of tests of the memory element includes:
performing writes to different ones of the plurality of groups of memory bytes via different ones of a plurality of delay locked loops; and
performing reads of the different ones of the plurality of groups of memory bytes.

13. The method of claim 12, further comprising the memory controller determining a plurality of operating values for the timing parameter, wherein each of the plurality of determined operating values corresponds to at least one of the plurality of delay locked loops.

14. The method of claim 10, wherein performing the plurality of tests of the memory element is responsive to an indication of a changed environmental condition.

15. The method of claim 10, further comprising the memory controller determining a range of operating values for the timing parameter according to the results of the plurality of tests, wherein the determined operating value is within the range.

16. An apparatus, comprising:
means for performing a test of a memory element using one or more memory training parameters; and
means for receiving intermediate results of the test and adjusting at least one of the one or more memory training parameters according to the intermediate results.

17. The apparatus of claim 16, wherein the at least one of the one or more memory training parameters is a timing parameter; and
wherein the apparatus further includes means for determining one or more operating values for the one or more memory training parameters according to a plurality of intermediate results.

18.
The apparatus of claim 16, wherein the intermediate results of the test include one or more indications that a plurality of read/write tests of the memory element have been completed using a given value for the one or more memory training parameters; and
wherein the apparatus further includes means for adjusting the at least one of the one or more memory training parameters to a value other than the given value according to the one or more indications that the plurality of read/write tests of the memory element have been completed.

19. A computer readable storage medium comprising a data structure operated on by a program executable on a computer system, the program operating on the data structure to perform a portion of a process to fabricate an integrated circuit including circuitry described by the data structure, the circuitry described by the data structure comprising:
a control circuit configured to perform a test of a memory element using one or more memory training parameters; and
a parameter adjustment circuit configured to receive intermediate results of the test and to adjust at least one of the one or more memory training parameters according to the intermediate results.

20. The computer readable storage medium of claim 19, wherein the storage medium stores hardware description language (HDL) data, Verilog data, or Graphic Database System II (GDSII) data. |
Control circuit and method for testing memory elements

Background

Technical field

The disclosure relates to memory for computing devices, and more particularly to testing and/or determining operating parameters for computing device memory.

Description of Related Art

In many computer architectures, a computer processor is connected to computer memory by a bus. To accurately perform memory reads or writes, a memory data signal may need to be delayed to synchronize it with a memory control signal. The control signal can be, for example, a signal indicating when to access a bit stream. Because the control signal and the data signal may arrive out of phase, using a delay value to bring the two signals back into synchronization can reduce errors. (Synchronization can prevent the bit stream from being mistakenly sampled in the middle of a high-to-low or low-to-high bit transition.)

In addition, due to variations in the physical characteristics of the memory, of the data lines (or bus) connected to the memory, and/or of the overall operating environment, different portions of the memory may behave differently. Memory access can therefore be provided through several delay locked loops (DLLs). Each DLL can support memory access for a corresponding portion of the memory, and a specific delay setting can be adjusted so that memory data is synchronized with memory control. Calibrating the various DLL timing delay parameters to their correct values can ensure that data is accurately written to, and read from, all parts of the computer memory. Yet determining the delay settings can be time consuming, especially at higher memory operating frequencies.

Summary

In one embodiment, a memory controller including a control circuit and a parameter adjustment circuit is disclosed. The control circuit is configured to perform a test of a memory element using one or more memory training parameters, and the parameter adjustment circuit is configured to receive intermediate results of the test and to adjust at least one of the one or more memory training parameters according to the intermediate results.

In another embodiment, a method is disclosed that includes a memory controller performing a plurality of tests of a memory element, wherein an initial test of the plurality of tests uses a first value for a timing parameter, wherein each subsequent test of the plurality of tests uses a respective different value for the timing parameter, and wherein each different value is determined by the memory controller according to results of one or more previously performed tests of the plurality of tests of the memory element.
The method also includes the memory controller determining an operating value for the timing parameter according to the results of the plurality of tests.

In another embodiment, an apparatus is disclosed that includes means for performing a test of a memory element using one or more memory training parameters, and means for receiving intermediate results of the test and adjusting at least one of the one or more memory training parameters according to the intermediate results.

In another embodiment, a computer readable storage medium is disclosed that includes a data structure operated on by a program executable on a computer system, the program operating on the data structure to perform a portion of a process to fabricate an integrated circuit including circuitry described by the data structure. The circuitry described by the data structure includes a control circuit configured to perform a test of a memory element using one or more memory training parameters, and a parameter adjustment circuit configured to receive intermediate results of the test and to adjust at least one of the one or more memory training parameters according to the intermediate results.

The teachings of the present disclosure and the appended claims are expressly not limited to the features and embodiments discussed above in this summary.

Brief description of the drawings

Figure 1A is a block diagram illustrating a memory controller connected to a computer memory element through input/output ("I/O") circuitry.

Figure 1B is a block diagram illustrating an embodiment of I/O circuitry.

Fig. 2 is a block diagram illustrating an embodiment of a parameter adjustment circuit and a control circuit.

Fig. 3 is a flow diagram illustrating a method for determining a memory parameter operating value.

Fig. 4 is a block diagram illustrating an embodiment of an exemplary computer system.

Detailed description

This specification includes references to "one embodiment" or "an embodiment." The appearances of the phrases "in one embodiment" or "in an embodiment" do not necessarily refer to the same embodiment. Particular features, structures, or characteristics may be combined in any suitable manner consistent with this disclosure.

Terminology. The following paragraphs provide definitions and/or context for terms found in this disclosure (including the appended claims):

"Comprising." This term is open-ended. As used in the appended claims, this term does not foreclose additional structure or steps. Consider a claim that recites: "An apparatus comprising one or more processor units ...." Such a claim does not foreclose the apparatus from including additional components (for example, a network interface unit, graphics circuitry, etc.).

"Configured to." Various units, circuits, or other components may be described or claimed as "configured to" perform a task or tasks. In such contexts, "configured to" is used to connote structure by indicating that the units/circuits/components include structure (for example, circuitry) that performs the task or tasks during operation. As such, the unit/circuit/component can be said to be configured to perform the task even when the specified unit/circuit/component is not currently operational (for example, is not on).
The units/circuits/components used with the "configured to" language include hardware, for example, circuits, memory storing program instructions executable to implement the operation, etc. Reciting that a unit/circuit/component is "configured to" perform one or more tasks is expressly intended not to invoke 35 U.S.C. § 112, sixth paragraph, for that unit/circuit/component. Additionally, "configured to" can include generic structure (for example, generic circuitry) that is manipulated by software and/or firmware (for example, an FPGA or a general-purpose processor executing software) to operate in a manner that is capable of performing the task at issue. "Configured to" can also include adapting a manufacturing process (for example, a semiconductor fabrication facility) to fabricate devices (for example, integrated circuits) that are adapted to implement or perform one or more tasks.

"First," "second," etc. As used herein, these terms are used as labels for the nouns that they precede, and do not imply any type of ordering (for example, spatial, temporal, logical, etc.). For example, a "first" memory parameter value and a "second" memory parameter value can be used to refer to any two values, and do not imply that one value is higher than the other or that one value was determined prior to the other. In other words, "first" and "second" are descriptors.

"Based on." As used herein, this term is used to describe one or more factors that affect a determination. This term does not foreclose additional factors that may affect a determination. That is, a determination may be based solely on those factors or based, at least in part, on those factors. Consider the phrase "determine A based on B." While B may be a factor that affects the determination of A, such a phrase does not foreclose the determination of A from also being based on C. In other instances, A may be determined based solely on B.

"Processor." This term has its ordinary and accepted meaning in the art, and includes a device that is capable of executing instructions. A processor may refer, without limitation, to a central processing unit (CPU), a co-processor, an arithmetic processing unit, a graphics processing unit, a digital signal processor (DSP), etc. A processor may be a superscalar processor with a single or multiple pipelines. A processor may include a single or multiple cores that are each configured to execute instructions.

"BIOS" or "BIOS device." This term has its ordinary and accepted meaning in the art, and includes a memory or storage device (such as an EPROM or EEPROM) storing computer instructions that are independent of an operating system, are executable by a processor of the computer system, and are run in order to change hardware system settings, power settings, start devices, etc.

"Memory training parameter" or "memory parameter." As used herein, these terms refer to any parameter that affects the operation of memory reads and/or memory writes.

A computer system can include a memory controller having one or more memory controller channels (MCCs) connected to a memory bus interface. (For example, an x86 processor can have a memory controller, located in its northbridge, that is connected to DRAM controller channels.)
An MCC can include circuitry to delay the transmitters and receivers in a calibrated manner to ensure that writes from the controller to the memory, and reads from the memory, work correctly. Some delay values work better than others, and can allow the memory to operate at higher frequencies. The delay values can be determined through a process implemented by the BIOS, in which the memory controller channel reads data from and writes data to the memory while the transmitters and receivers are configured via PCI accesses (for example, through the southbridge). This is an example of a dynamic process known as "memory training." During memory training, the memory controller can write data to the memory and then read the data back, comparing it with the previously written data to determine whether correct delay settings were used for the write or read. After a comparison fails, a new delay setting can be applied to the channel controller, and the process can be repeated until the comparison succeeds. Yet when a large amount of memory is attached to the memory controller, the training time can increase significantly, especially when many PCI accesses are performed by the BIOS (because the BIOS may be required to poll on a completion bit to determine that an access for a particular delay setting has completed before moving on to test another, different delay setting). For additional information, see U.S. Patent Publication No. 2009/0244997 (corresponding to U.S. Application No. 12/059,653) and U.S. Patent Publication No. 2010/0325372 (corresponding to U.S. Application No. 12/486,488), which are incorporated herein by reference in their entirety.

The present disclosure includes structures and techniques that can allow memory training to be performed more quickly. In one embodiment, memory training is performed without intermediation by the BIOS.

Referring now to Figure 1A, a block diagram 100 is shown in which memory controller 105 is connected to computer memory element 180 through input/output ("I/O") circuitry 150. Memory element 180 includes one or more memory storage elements that can be located in a computer system (such as system 400, described below with respect to Fig. 4). In one embodiment, memory element 180 is one or more modules of dynamic random access memory (DRAM), but in other embodiments it can be any other type of memory configured to store data. In one embodiment, memory element 180 is DDR2 or DDR3 DRAM. Memory element 180 thus includes a plurality of groups of memory bytes. Access to memory element 180 (and to these groups of memory bytes) is provided through I/O circuitry 150.

In Fig. 1B, I/O circuitry 150B is configured to be coupled to control circuit 120 and memory element 180, and includes transmit buffer 154, transmitter 156, receiver 158, and receive buffer 160. In some embodiments, I/O circuitry 150 includes a plurality of any or all of transmit buffer 154, transmitter 156, receiver 158, and receive buffer 160. In some embodiments, transmitter 156 and receiver 158 are combined into a single transceiver configuration, such as described in U.S. Publication 2009/0244997. Many configurations of I/O circuitry 150 are therefore possible. Information to be written to memory element 180 can be stored in transmit buffer 154 before being sent by transmitter 156.
Transmitter 156 can include (or be connected to) one or more delay locked loops, each of which can be used to synchronize writes to one or more portions (groups of memory bytes) of memory element 180. Each DLL in transmitter 156 (or each DLL across multiple transmitters 156) can be set with a different timing parameter value. For example, one DLL can align the memory data signal (DQ) and the memory data strobe signal (DQS) according to a first timing value, while another DLL can use a second, different timing value to align DQ and DQS. Similarly, receiver 158 can include one or more DLLs that also align DQ and DQS according to one or more timing delay values. In some embodiments, the DLLs in transmitter 156 and receiver 158 can be shared. More generally, transmitter 156, receiver 158, and/or the DLLs included in them (or to which transmitter 156 and receiver 158 are configured to be connected) can have any or all of the features of the transmitters, receivers, transceivers, and DLLs described in the '997 publication and/or the '372 publication.

Returning to Fig. 1A, as shown, memory controller 105 includes parameter adjustment circuit 110 and control circuit 120. In the embodiment of Fig. 1A, parameter adjustment circuit 110 is configured to initiate testing of one or more memory elements. Parameter adjustment circuit 110 can receive an indication to begin a memory test from another component (such as a BIOS device or a processor). In other embodiments, parameter adjustment circuit 110 can be configured to begin a memory test automatically (for example, in response to power-up of the computer system in which circuit 110 is included). In other embodiments, parameter adjustment circuit 110 can be configured to begin (or restart) a test of one or more memory elements in response to a trigger event, such as a detected change in an environmental condition (for example, rising or falling temperature; rising or falling voltage), a command from software (for example, an operating system or BIOS), or a hardware- or software-based timer using any combination of fixed-length and/or variable-length timing. In some embodiments, control circuit 120, rather than parameter adjustment circuit 110, is configured to begin a memory test.

In general, any or all of the structures and functions described herein with respect to parameter adjustment circuit 110 and control circuit 120 can be located in other circuitry. Thus, in some embodiments, all or part of parameter adjustment circuit 110 and control circuit 120 (and the functions they include) can be located outside memory controller 105. In some embodiments, all or part of parameter adjustment circuit 110 (and its functions) can be located within control circuit 120, and vice versa. In addition, all or part of I/O circuitry 150 (and its functions) can be located within memory controller 105, within memory element 180, and/or within other structures not explicitly described herein.

In the embodiment of Fig. 1A, control circuit 120 is configured to perform a test of one or more memory elements using one or more memory training parameters. In one embodiment, the one or more memory training parameters include one or more timing parameters.
Timing parameters can be used to govern the behavior of one or more DLLs used in reading from and/or writing to memory element 180. For example, a given DLL can delay the DQ signal by a certain fraction of a clock period (or by multiple clock periods) so that the DQ signal is better aligned with the DQS signal (or, in some embodiments, the DQS signal can be delayed relative to the DQ signal). In another embodiment, the one or more memory training parameters used for the test of one or more memory elements include one or more voltage parameters, such as the operating voltage (or rated peak voltage) of a memory channel.

Referring now to Fig. 2, a block diagram of parameter adjustment circuit 210 and control circuit 260 is shown. These circuits can have any of the features, structures, or functions of parameter adjustment circuit 110 and control circuit 120 described above, and vice versa. In this figure, parameter adjustment circuit 210 includes parameter determination logic 220, result storage 230, and interface logic 240, while control circuit 260 includes test data generator 262, comparator 264, and interface logic 266. As described above with respect to circuits 110 and 120, all or part of circuit 210 (and its functions) can be located within circuit 260, and vice versa. In one embodiment, a common circuit includes all of the structures and functions described with respect to circuits 210 and 260.

Parameter adjustment circuit 210 is configured to determine one or more operating values for one or more memory training parameters. As used herein, the term "operating value" refers to a value used as part of normal computing operations (as distinguished from a value used only during testing). Intermediate results of testing can be stored in result storage 230. In one embodiment, parameter adjustment circuit 210 uses a plurality of intermediate results of the test of a memory element to determine one or more parameter operating values. In one embodiment, the intermediate results of a memory test are delivered to result storage 230 by control circuit 260 via interface logic 240. In the embodiment of Fig. 2, interface logic 240 includes a portion 242 for communicating with control circuit 260 and a portion 244 for communicating with other components (for example, receiving an indication to begin a test).

In the embodiment of Fig. 2, test data generator 262 is configured to generate test data for the test of a memory element using the one or more memory training parameters. In one embodiment, generator 262 is a pattern generator and can generate large amounts of data (for example, hundreds of megabytes or more) according to one or more pre-configured patterns or sequences. In some embodiments, all or part of the generated test data can be generated randomly or pseudo-randomly. Some data patterns include portions designed to test difficult edge cases (for example, a single one that follows several adjacent zeros and is in turn followed by many more adjacent zeros can make the "one" digit more difficult to detect, and vice versa).

In the embodiment of Fig. 2, interface logic 266 is used to write data to memory element 180 (for example, through I/O circuitry 150) using one or more current (test) values for the one or more memory training parameters. After the test data is written to memory element 180, the data is read back from the memory (for example, through receiver 158 and receive buffer 160).
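The lone-bit edge case mentioned above is easy to picture with a small sketch. The two helpers below are purely illustrative of the kind of stress pattern a generator such as 262 might produce; they are not part of the disclosed hardware.

```c
/* Illustrative stress patterns: a single '1' bit surrounded by long runs
 * of zeros, and the complementary single '0' among ones. Hypothetical code,
 * not the patent's pattern generator. */
#include <stdint.h>
#include <stddef.h>
#include <string.h>

static void fill_lone_one(uint8_t *buf, size_t len)
{
    memset(buf, 0x00, len);       /* adjacent zeros before and after */
    if (len > 0)
        buf[len / 2] = 0x10;      /* one isolated '1' bit, hard to detect */
}

static void fill_lone_zero(uint8_t *buf, size_t len)
{
    memset(buf, 0xFF, len);       /* the reverse case stresses the
                                     opposite transition */
    if (len > 0)
        buf[len / 2] = 0xEF;      /* one isolated '0' bit */
}
```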
In some embodiments, the process of reading test data can overlap with the process of writing test data (that is, in these embodiments, not all of the test data needs to be written to memory element 180 before reading can begin). Comparator 264 includes circuit logic to determine whether the test data read from the memory (input data) is identical to the test data written to the memory (output data), and to generate intermediate results accordingly. These intermediate results can then be reported to result storage 230 and stored by result storage 230. The process of writing test data to memory and reading it back is referred to herein as a "read/write test." In some embodiments, a read/write test also includes additional operations, such as generating one or more intermediate results.

The type and richness of the intermediate result data generated by comparator 264 vary by embodiment. In some embodiments, comparator 264 is configured to simply generate a pass/fail indication of whether the input test data is identical to the output test data for a given memory training parameter. In other embodiments, comparator 264 can generate a passing result if a critical mass or percentage of the data is correct (for example, less than 1 bit error or byte error per 1 GB of test data). In other embodiments, comparator 264 can generate quantitative data indicating the number of bit errors or byte errors that occurred during the read/write test and/or the locations of the errors within the test data pattern (for example, indicating which particular cases led to failure).

In one embodiment, a test of memory element 180 is started using a given timing parameter value for a given DLL (such as a zero-offset delay value between DQ and DQS for that DLL). After an initial read/write test is completed using the given value, the parameter adjustment circuit can increase the value by a fixed quantity (for example, increasing the offset between DQ and DQS for the given DLL by 1/32 of a clock period). A subsequent read/write test can then be performed using the new value of the memory training parameter, generating further intermediate results (after which further adjustments can be made to the memory training parameter value for the given DLL). In various embodiments, a memory controller that includes parameter adjustment circuit 210 can train a plurality of DLLs in parallel. In some embodiments, parallel training can occur across a plurality of memory controller channels. In addition, a system that has a plurality of memory controllers can also train those controllers synchronously or in parallel.

Parameter adjustment circuit 210 is also configured to determine operating values for one or more memory training parameters. Thus, in one embodiment, parameter determination logic 220 is configured to calculate an operating value from the intermediate results of a plurality of read/write tests. Such calculations can include determining a "left edge" and/or a "right edge" for the delay value of a given DLL. For example, if the intermediate results consist of pass/fail indications for read/write tests at different timing parameter values, a "left edge" of 1/8 (the first passing value) and a "right edge" of 4/8 (the last passing value) can be determined. From this information, an operating value of 5/16 can be obtained by averaging the left edge value and the right edge value.
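To make the edge arithmetic concrete, here is a minimal sketch, assuming pass/fail results indexed in 1/8-clock delay steps as in the example above; it is illustrative only, since the actual calculation is performed by parameter determination logic 220 in hardware.

```c
/* Illustrative edge-finding: with passes at 1/8 through 4/8, the left edge
 * is 1/8, the right edge is 4/8, and their average is 5/16. */
#include <stdio.h>

int main(void)
{
    /* pass[i] == 1 means the read/write test passed at a delay of i/8 clock */
    const int pass[9] = {0, 1, 1, 1, 1, 0, 0, 0, 0};
    int left = -1, right = -1;

    for (int i = 0; i < 9; i++) {
        if (pass[i]) {
            if (left < 0)
                left = i;     /* first passing value: left edge */
            right = i;        /* last passing value seen: right edge */
        }
    }
    if (left < 0)
        return 1;             /* no delay setting passed */

    double operating = (left + right) / 2.0 / 8.0;   /* (1/8 + 4/8) / 2 = 5/16 */
    printf("left=%d/8 right=%d/8 operating=%g\n", left, right, operating);
    return 0;
}
```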
If quantitative data (such as the number of bit or byte errors) is available for each test, other methods of determining the operating value can also be used, such as a weighted average. Iterative testing can also be used to determine the operating value: for example, additional read/write tests can be run between the left edge of 1/8 and the right edge of 4/8, and the additional data generated can then be used to calculate the operating value. Once determined, one or more operating values can be stored in result storage 230, in special registers (for example, in parameter adjustment circuit 210 or in registers in the DLLs themselves), or in any other appropriate location that those skilled in the art would recognize. In some embodiments, the parameter operating values can be stored in the BIOS.

Voltage memory parameters can also be trained by parameter adjustment circuit 210 and control circuit 260. Thus, in one embodiment, control circuit 260 is configured to perform the test of memory element 180 by varying both a voltage parameter and a timing parameter. For a given DLL, this can be done, for example, by determining an operating value for the timing parameter of that DLL at one voltage level, then raising or lowering the voltage level and performing additional read/write tests to determine one or more other operating values for the timing parameter at the other voltage levels. The results of such tests can take the form of a table of functional timing values for each voltage level. After the various timing settings for the different voltage levels are determined, the memory channel controller can select suitable timing parameter operating values accordingly (for example, using different timing values for different memory channel voltages, or selecting different timing values in response to the system voltage level during operation). Thus, in one embodiment, for each of a plurality of voltage parameter values, a corresponding plurality of read/write tests can be performed on the memory element using that voltage parameter value according to the methods described above, to determine a timing parameter operating value for that voltage level.

Referring now to Fig. 3, a flow diagram of a method 300 for determining a parameter operating value is shown. In various embodiments, the steps of method 300 are performed in whole or in part by parameter adjustment circuit 210 and control circuit 260.

In step 310, an indication to begin performing a plurality of read/write tests on a memory element is received. As noted above, such an indication can be generated automatically in response to system power-up, a change in an environmental condition (voltage, temperature, etc.), or a hardware or software timer. In some embodiments, such an indication can be received from a BIOS device. The indication to begin testing can also include additional information, such as a particular address range to be tested.

In step 320, an initial read/write test is performed using a first value for a memory training parameter. In various embodiments, this initial value can be preset, dynamically determined, or specified in the indication to begin testing. For example, an initial read/write test for a particular DLL can use a DQ/DQS timing delay value of zero. Data is then written to, and read from, the memory to determine whether this timing delay value produces correct results.

In step 330, a different value for the memory training parameter is determined according to the results of the initial test.
In some embodiments, this step can include increasing the DQ/DQS timing delay value by a fixed quantity (for example, a certain fraction of a clock period). In other embodiments, the timing delay value can be increased by a dynamically determined quantity (for example, in response to quantitative data about the number of bit or byte errors from a previous test). It should be noted that although the various examples herein involve "increasing" a memory training parameter value during testing, it is equally possible for other mathematical operations to change this value, such as decrementing (subtracting), multiplication, or division. Different parameter values can thus be determined according to one or more previous results.

In step 340, an additional read/write test is performed using the newly determined parameter value from step 330. Step 340 can include any or all of the elements described above with respect to step 320. In step 350, a determination is made whether to continue testing the particular memory parameter at issue. For example, if a left edge has been detected, testing can stop when a right edge is also detected (for example, stopping testing after one or more subsequent failures are detected following one or more previous successes). Alternatively, in some embodiments, testing can continue until the full range of possible values has been evaluated. If it is determined that testing of the particular parameter should continue, the method returns to step 330 and proceeds as described above. If, however, it is determined that no further testing is needed, then in step 360 an operating value for the particular memory parameter is determined. This determination can be made in any manner described above or that those skilled in the art would recognize (for example, a left edge/right edge average, a weighted average, etc.).

In some embodiments, steps 320-360 are executed without reporting test results to the BIOS device. Thus, in these embodiments, operational parameter values can be determined without any intermediation or decision-making by the BIOS of the computer system (such as system 400). This can greatly speed up the memory parameter training process, as in computer systems in which one or more memory controllers are part of (or connected to) the northbridge while the BIOS is attached to the significantly slower southbridge. As previously discussed, in some embodiments the BIOS can begin memory training by transmitting an indication to either parameter adjustment circuit 210 or control circuit 260, without taking any further action until training is complete. In addition, in embodiments in which the BIOS plays no role in memory training after the initial start-up phase, the BIOS is free to perform the other operations required to start the computer system (thus speeding up the overall start-up time considerably).

As follows from the above, in one embodiment control circuit 260 is a means for performing a test of a memory element using one or more memory training parameters, and parameter adjustment circuit 210 is a means for receiving intermediate results of the test and adjusting at least one of the one or more memory training parameters according to the intermediate results.
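As a software rendering of steps 310-360, with the outer voltage sweep discussed earlier, consider the sketch below. run_rw_test() and set_voltage() are hypothetical placeholders for the hardware read/write test and voltage control; the disclosed embodiments perform this flow in circuitry, not software.

```c
/* Hypothetical sketch of method 300 plus a voltage sweep. Not the patent's
 * circuitry; the helpers stand in for hardware operations. */
#include <stdbool.h>

extern bool run_rw_test(int delay_step);   /* one read/write test (steps 320/340) */
extern void set_voltage(int millivolts);   /* voltage parameter control */

static int train_delay(int initial, int max_step)
{
    int left = -1, right = -1;

    for (int d = initial; d <= max_step; d++) {
        bool pass = run_rw_test(d);
        if (pass && left < 0)
            left = d;                      /* first success: left edge */
        if (!pass && left >= 0) {
            right = d - 1;                 /* failure after success: right edge */
            break;                         /* step 350: stop testing */
        }
    }
    if (left < 0)
        return -1;                         /* no setting passed */
    if (right < 0)
        right = max_step;                  /* passed through the full range */
    return (left + right) / 2;             /* step 360: midpoint operating value */
}

static void train_all_voltages(const int *mv, int n, int *delay_out)
{
    for (int i = 0; i < n; i++) {          /* repeat the search per voltage */
        set_voltage(mv[i]);
        delay_out[i] = train_delay(0, 31); /* e.g., 32 steps of 1/32 clock */
    }
}
```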
Exemplary computer system

Turning now to Fig. 4, an embodiment of an exemplary computer system 400 that includes memory controller 105 is depicted. Computer system 400 includes a processor subsystem 480 that is coupled to system memory 420 and I/O interface 440 via an interconnect 460 (for example, a system bus). I/O interface 440 is coupled to one or more I/O devices 450. Computer system 400 can be any of various types of devices, including but not limited to a server system, personal computer system, desktop computer, laptop or notebook computer, mainframe computer system, portable computer, workstation, network computer, or a consumer device such as a mobile phone, pager, or personal digital assistant (PDA). Computer system 400 can also be any type of networked peripheral device such as a storage device, switch, modem, router, etc. Although a single computer system 400 is shown for simplicity, system 400 can also be implemented as two or more computer systems operating together.

Processor subsystem 480 can include one or more processors or processing units. For example, processor subsystem 480 can include one or more processing units (each of which can have multiple processing elements or cores) that are coupled to one or more resource control processing elements. In various embodiments of computer system 400, multiple instances of processor subsystem 480 can be coupled to interconnect 460. In various embodiments, processor subsystem 480 (or each processor unit or processing element within 480) can contain a cache or other form of on-board memory. In one embodiment, processor subsystem 480 can include memory controller 105 as described above.

In various embodiments, system memory 420 is usable by processor subsystem 480 and includes one or more memory elements such as element 180. System memory 420 can be implemented using different physical storage media, such as hard disk storage, floppy disk storage, removable disk storage, flash memory, random access memory (RAM; e.g., static RAM (SRAM), extended data out (EDO) RAM, synchronous dynamic RAM (SDRAM), double data rate (DDR) SDRAM, RAMBUS RAM, etc.), read only memory (ROM; e.g., programmable ROM (PROM), electrically erasable programmable ROM (EEPROM), etc.), and so on. Memory in computer system 400 is not limited to primary storage such as memory 420. Rather, computer system 400 can also include other forms of storage, such as cache memory in processor subsystem 480 and secondary storage on I/O devices 450 (for example, a hard drive, storage array, etc.). In some embodiments, these other forms of storage can also store program instructions executable by processor subsystem 480.

According to various embodiments, I/O interface 440 can be any of various types of interfaces configured to couple to and communicate with other devices. In one embodiment, I/O interface 440 is a bridge chip (for example, a southbridge) from a front-side bus to one or more back-side buses. I/O interface 440 can be coupled to one or more I/O devices 450 via one or more corresponding buses or other interfaces. Examples of I/O devices include storage devices (a CD-ROM drive, removable flash drive, storage array, SAN, or their associated controllers), network interface devices (for example, to a local area network or wide area network), or other devices (for example, graphics interface devices, user interface devices, etc.).
In one embodiment, computer system 400 is coupled to a network via a network interface device.

Program instructions executed by a computer system (for example, computer system 400) can be stored on various forms of computer readable storage media. In general, a computer readable storage medium can include any non-transitory/tangible storage medium readable by a computer to provide instructions and/or data to the computer. For example, a computer readable storage medium can include storage media such as magnetic or optical media, e.g., disk (fixed or removable), tape, CD-ROM or DVD-ROM, CD-R, CD-RW, DVD-R, DVD-RW, or Blu-ray. Storage media can further include volatile or non-volatile memory media accessible via a peripheral interface such as a universal serial bus (USB) interface, such as RAM (for example, synchronous dynamic RAM (SDRAM), double data rate (DDR, DDR2, DDR3, etc.) SDRAM, low-power DDR (LPDDR2, etc.) SDRAM, Rambus DRAM (RDRAM), static RAM (SRAM), etc.), ROM, and non-volatile memory (for example, flash memory). Storage media can include microelectromechanical systems (MEMS), as well as storage media accessible via a communication medium such as a network and/or a wireless link.

In some embodiments, a computer readable storage medium can be used to store instructions read by a program and used, directly or indirectly, to fabricate hardware for parameter adjustment circuit 110 and/or 210 and control circuit 120 and/or 260 as described above. For example, the instructions can outline one or more data structures describing a behavioral-level or register-transfer level (RTL) description of the hardware functionality in a high-level design language (HDL) such as Verilog or VHDL. The description can be read by a synthesis tool, which can synthesize the description to produce a netlist. The netlist can include a set of logic gates (for example, as defined in a synthesis library) that represent the functionality of parameter adjustment circuit 110 and/or 210 and control circuit 120 and/or 260. The netlist can then be placed and routed to produce a data set describing the geometric shapes to be applied to masks.
The masks can then be used in various semiconductor fabrication steps to produce one or more semiconductor circuits corresponding to parameter adjustment circuit 110 and/or 210 and control circuit 120 and/or 260. Although specific embodiments have been described above, these embodiments are not intended to limit the scope of the present disclosure, even where only a single embodiment is described with respect to a particular feature. Unless stated otherwise, examples of features provided in the disclosure are intended to be illustrative rather than restrictive. The above description is intended to cover such alternatives, modifications, and equivalents as would be apparent to those skilled in the art having the benefit of this disclosure.

The scope of the present disclosure includes any feature or combination of features disclosed herein (either explicitly or implicitly), or any generalization thereof, whether or not it mitigates any or all of the problems addressed herein. Accordingly, new claims may be formulated during prosecution of this application (or of an application claiming priority thereto) to any such combination of features. In particular, with reference to the appended claims, features from dependent claims may be combined with those of the independent claims, and features from respective independent claims may be combined in any appropriate manner and not merely in the specific combinations enumerated in the appended claims. |
The present disclosure includes apparatuses, methods, and systems for error identification on executed code. An embodiment includes memory and circuitry configured to read data stored in a secure array of the memory, identify a different memory having an error correcting code (ECC) corresponding to the read data of the memory, execute an integrity check to compare the ECC to the read data of the memory; and take an action in response to the comparison of the read data of the memory and the ECC, wherein the comparison indicates that the ECC identified an error in the read data of the memory. |
What is Claimed is:1. An apparatus, comprising:a memory; andcircuitry associated with the memory, the circuitry configured to:read data stored in a secure array of the memory; identify a different memory having an error correcting code (ECC) corresponding to the read data of the memory;execute an integrity check to compare the ECC to the read data of the memory; andtake an action in response to the comparison of the read data of the memory and the ECC, wherein the comparison indicates that the ECC identified an error in the read data of the memory.2. The apparatus of claim 1, wherein the circuitry is further configured to determine that a correction applied to the error identified by the ECC introduced an additional error to the ECC.3. The apparatus of claim 1, wherein the ECC is read by the different memory in parallel to the read data of the memory.4. The apparatus of any one of claims 1-3, wherein the integrity check is executed by the memory in response to a start-up process of the memory.5. The apparatus of any one of claims 1-3, wherein the action taken is to refrain from correcting the error in the read data of the memory in response to the error identified by the ECC.6. The apparatus of any one of claims 1-3, wherein the circuitry is further configured to determine whether the error identified by the ECC affects an operation of a host device associated with the read data of the memory.7. The apparatus of any one of claims 1-3, wherein the ECC of the different memory and the read data of the memory are associated with a powertrain operation of a host device associated with the memory and the different memory; andwherein the circuitry is further configured to:generate an alert in response to the indication that the ECC identified the error in the read data of the memory; andtransmit the alert to the host device associated with the memory and the different memory.8. The apparatus of any one of claims 1-3, wherein the memory refrains from applying a correction to the read data of the memory, based on the error identified by the ECC of the different memory.9. An apparatus, comprising:a memory; andcircuitry associated with the memory, the circuitry configured to:read error correcting code (ECC) stored in a secure array of the memory;identify a different memory having read data corresponding to the ECC of the memory;execute an integrity check to compare the ECC to the read data of the different memory; andtake an action in response to the comparison of the read data of the different memory and the ECC of the memory, wherein the comparison indicates that the ECC identified an error in the read data of the different memory.10. The apparatus of claim 9, wherein the memory includes the ECC to monitor error introduced to the read data of the different memory when the data is read by the different memory.11. The apparatus of claim 9, wherein the memory and the different memory are associated with a host, and wherein the read data is written to the different memory and the ECC is written to the memory when the host is manufactured.12. The apparatus of any one of claims 9-11, wherein the read data of the different memory and the ECC of the memory are read at the start-up of the memory and the different memory.13. 
A system, comprising:a controller;a first memory device in communication with the controller;a second memory device in communication with the controller, the controller configured to:receive read data from the first memory device; receive an error correcting code (ECC) corresponding to the read data of the first memory device;execute an integrity check by comparing the read data of the first memory device and the ECC provided by the second memory device; andtake an action in response to the comparison of the read data of the first memory device and the ECC, wherein the comparison indicates that the ECC identified an error in the read data of the first memory device.14. The system of claim 13, wherein the controller is associated with a vehicle, and the read data of the first memory device is data associated with a powertrain operation of the vehicle.15. The system of claim 14, wherein operations of the vehicle are halted in response to the error identified by the ECC.16. The system of any one of claims 13-15, wherein comparing the read data of the first memory device to the ECC provided by the second memory device includes comparing a hash function corresponding to the data and a digest corresponding to the ECC.17. The system of any one of claims 13-15, wherein the error identified in the read data of the first memory device is corrected by the controller.18. A method, comprising:receiving read data of a first memory device;receiving an error correcting code (ECC) from a second memory device, wherein the ECC corresponds to the read data of the first memory device;comparing the ECC from the second memory device to the read data of the first memory device to determine error detected by the ECC;determining an integrity of the read data of the first memory based on the comparison of the ECC to the read data of the first memory; andtaking an action in response to the determined integrity of the read data of the first memory.19. The method of claim 18, wherein taking an action further comprises: generating, by the first memory device, an alert indicating an error in the read data of the first memory device and identified by the integrity check; and transmitting to a controller in communication with the first memory device, the alert indicating the error identified.20. The method of any one of claims 18-19, wherein receiving the read data of the first memory device includes receiving instructions to execute a routine to operate a powertrain of a host vehicle; andwherein taking an action further comprises:receiving, by a controller included in the vehicle, an alert in response to an error identified by the determined integrity; andaborting, by the controller, the powertrain operations of the host vehicle. |
ERROR IDENTIFICATION IN EXECUTED CODE

Technical Field

[0001] The present disclosure relates generally to semiconductor memory and methods, and more particularly, to error identification in executed code.

Background

[0002] Memory devices are typically provided as internal, semiconductor, integrated circuits and/or external removable devices in computers or other electronic devices. There are many different types of memory including volatile and non-volatile memory. Volatile memory can require power to maintain its data and can include random-access memory (RAM), dynamic random access memory (DRAM), and synchronous dynamic random access memory (SDRAM), among others. Non-volatile memory can provide persistent data by retaining stored data when not powered and can include NAND flash memory, NOR flash memory, read only memory (ROM), and resistance variable memory such as phase change random access memory (PCRAM), resistive random access memory (RRAM), and magnetic random access memory (MRAM), among others.[0003] Memory devices can be combined together to form a solid state drive (SSD), an embedded MultiMediaCard (e.MMC), and/or a universal flash storage (UFS) device. An SSD, e.MMC, and/or UFS device can include non-volatile memory (e.g., NAND flash memory and/or NOR flash memory), and/or can include volatile memory (e.g., DRAM and/or SDRAM), among various other types of non-volatile and volatile memory. Non-volatile memory may be used in a wide range of electronic applications such as personal computers, portable memory sticks, digital cameras, cellular telephones, portable music players such as MP3 players, and movie players, among others.[0004] Flash memory devices can include memory cells storing data in a charge storage structure such as a floating gate, for instance. Flash memory devices typically use a one-transistor memory cell that allows for high memory densities, high reliability, and low power consumption. Resistance variable memory devices can include resistive memory cells that can store data based on
the resistance state of a storage element (e.g., a resistive memory element having a variable resistance).[0005] Memory cells can be arranged into arrays, and memory cells in an array architecture can be programmed to a target (e.g., desired) state. For instance, electric charge can be placed on or removed from the charge storage structure (e.g., floating gate) of a flash memory cell to program the cell to a particular data state. The stored charge on the charge storage structure of the cell can indicate a threshold voltage (Vt) of the cell. A state of a flash memory cell can be determined by sensing the stored charge on the charge storage structure (e.g., the Vt) of the cell.[0006] Errors introduced into code and threats imposed on stored code can affect the operation of a memory device and/or the data stored in the memory cells of the memory device. Errors may be introduced by noise and/or during transmission. Threats can include, for example, threats from hackers or other malicious users, including intentional error introduction, man-in-the-middle (MITM) attacks, among others. Such threats can cause significant financial loss, and/or can present significant safety and/or security issues.

Brief Description of the Drawings

[0007] Figure 1 illustrates a diagram of a portion of a memory array having a number of physical blocks in accordance with an embodiment of the present disclosure.[0008] Figure 2A illustrates an example of a pair of registers used to define a secure memory array in accordance with an embodiment of the present disclosure.[0009] Figure 2B illustrates a diagram of a portion of a memory array that includes a secure memory array defined in accordance with an embodiment of the present disclosure.[0010] Figure 3 is a block diagram of a computing system including a host and an apparatus in the form of a memory device in accordance with an embodiment of the present disclosure.[0011] Figure 4 illustrates an example block diagram of an example system including a host controller and an apparatus in accordance with an embodiment of the present disclosure.
[0012] Figure 5 illustrates an example flow diagram for error identification in executed code in accordance with embodiments of the present disclosure.[0013] Figure 6 is a block diagram of an example system including a host and a memory device in accordance with an embodiment of the present disclosure.[0014] Figure 7 is a block diagram of an example process to determine a number of parameters in accordance with an embodiment of the present disclosure.[0015] Figure 8 is a block diagram of an example process to determine a number of parameters in accordance with an embodiment of the present disclosure.[0016] Figure 9 is a block diagram of an example process to verify a certificate in accordance with an embodiment of the present disclosure.[0017] Figure 10 is a block diagram of an example process to verify a signature in accordance with an embodiment of the present disclosure.[0018] Figure 11 is a block diagram of an example memory device in accordance with an embodiment of the present disclosure.

Detailed Description

[0019] The present disclosure includes apparatuses, methods, and systems for error identification in executed code. Error correction operations can be performed on a host computing system and/or on a memory device. An embodiment includes a memory, and circuitry configured to identify error in executed code (e.g., read data) by comparing data read by the memory device to error correcting code (ECC) read by a different memory device. The read data of the memory device and the ECC of the different memory device are compared to determine whether an error exists in the read data.[0020] Memory devices may be used to store data in a computing system and can transfer such data to and from a host associated with the computing system. The data stored in a memory device may be code for routines important to the operation of the host. For example, the host device may be a vehicle and the routine may be an operation of the powertrain of the vehicle. Memory
devices can be utilized as non-volatile memory for a wide range of electronic applications.[0021] A host can be communicatively coupled to multiple memory devices. In one example, a memory device may include data stored in a secure array of the memory device. The memory device may include circuitry to identify a different memory device having error correcting code (ECC) which corresponds to the data read by the memory device. The circuitry can be configured to execute an integrity check. An integrity check refers to a comparison of error corrected data to read data. For example, the circuitry can be configured to execute an integrity check to compare the ECC to the read data of the memory device and take an action in response to the comparison of the read data and the ECC. When the ECC indicates a correction, the data read by the memory device may include a similar error, and corrective actions may be taken to rectify the error.[0022] Errors (e.g., faults) may be introduced into data (e.g., the data stored in the memory cells) stored by a memory device in multiple ways. Errors can be unintentionally introduced into code by noise and/or impairments during transmission. In some instances, errors can be inadvertently introduced to the data stored in the memory, causing changes to the operation of the memory. Errors may also be introduced intentionally, through threats, to data stored by memory. For example, a hacker or other malicious user may introduce errors in an attempt to perform activities (e.g., attacks), such as, for instance, a man-in-the-middle (MITM) attack, to make unauthorized changes to the operation of the memory, and/or to the data stored therein, for malicious purposes. As another example of a threat and/or a consequence of error introduced to data stored by the memory, a hacker or other malicious user can attempt to skip a portion of a command (e.g., a portion of executable code), referred to herein as a routine, that is written as a check and/or as a security protocol to authenticate the command.[0023] During such an attack and/or error, the routine is skipped and/or altered, but the host may receive an indication that the routine was executed. Said differently, a hacker and/or an error may falsify the command and cause an indication to be received by the host that the routine was executed. Important routines written to check the authenticity of a command (authenticate a component, authenticate a software version and/or update, user identity, etc.)
may be designed to execute during the start-up (e.g., boot) of the memory device. A hacker and/or an introduced error may change (e.g., mask) an external input to trigger conditions which may skip the routine written to validate the authenticity of the command. One example of such a routine may be a portion of executable code written to check the authenticity of payment prior to execution of a software service (e.g., issuance of currency from an automatic teller machine and/or transfer of data, execution of software, etc.). Other examples may include routines to validate a software license to authenticate that the software is genuine prior to execution (computer systems updates, installation of software, etc.), important operational routines for the host device (e.g., start-up operations, powertrain operations, etc.), and/or a routine to check the genuineness of a system component and the configuration of the system component (e.g., process plant control, automotive components).[0024] The detection and correction of error can be challenging because the correction of detected error can produce additional (e.g., new) errors. This may cause an unreliability in the resulting architecture of the code (e.g., the routine) and can affect the operation of the memory and the code stored in the memory. Many memory devices employ error checking techniques such as ECC, which detect bit errors in data. The ECC can be associated with groups of cells, e.g., memory blocks, memory segments, or memory sectors, and can rescue read failures by detecting and possibly correcting bit errors. Examples of ECC codes include Hamming codes, Reed-Solomon (RS) codes, Bose-Chaudhuri-Hocquenghem (BCH) codes, cyclic redundancy check (CRC) codes, Golay codes, Reed-Muller codes, Goppa codes, and Denniston codes, among others. In some approaches, these and other error checking techniques are performed on the memory device by a controller including circuitry that is coupled to the memory device. As mentioned, the ECC may inadvertently introduce new errors to important routines (e.g., commands) when the errors are identified and corrected.[0025] As such, in order to ensure that errors indicated by ECC are identified but that new errors are not introduced when the identified errors are corrected, an alert may be generated when such errors are detected. A host may be associated with multiple memory devices to detect these errors. For example, a host device may include a host controller which is communicatively coupled to
multiple memory devices (e.g., an ECC memory device and a memory device absent ECC). The multiple memory devices may be respectively provisioned with data (e.g., commands and/or routines) and/or ECC corresponding to the data.[0026] In some examples, the ECC and corresponding routines may be securely provisioned onto the memory devices during a manufacturing step and/or securely validated using a public/private key exchange (further discussed herein). The ECC of the ECC memory device and the corresponding data (e.g., the routine) of the other memory device may be read in parallel by the respective memory devices. The ECC and the data executed may be compared by the host device and/or a controller associated with the host device. When an error is identified by the ECC running on a memory device having ECC, the data executed by the data memory device can be identified as including a potential error. Said differently, because the data in the memory device corresponds to the ECC in another memory device, an error identified by the ECC indicates an error in the data of the memory device. In this instance, to avoid inadvertent error introduced by implementing an automatic correction, an action can be taken to alert the host and/or the controller that an error in the data of the memory device has been identified. At that time, multiple decisions can be made as to how to correct the error without altering the architecture of important routines.[0027] Embodiments of the present disclosure can utilize cryptographic primitive solutions (e.g., the ECC and/or a calculated digest) for error detection in important routines by incorporating a comparison of ECC and data executed in parallel by different memory devices communicatively coupled to the host device. Such solutions can identify error inadvertently and/or intentionally introduced to the code. This can prevent poor operation of important routines written to avoid financial loss, to enforce security protocols, and/or to provide safety checks for operations of the host device. Further, when an error is identified, the introduction of new errors can be avoided by refraining from an automatic correction. Instead, an action (e.g., an alert, an alarm, and/or aborting the routine) can be determined by the host and/or a memory device associated with the host.[0028] As used herein, "a", "an", or "a number of" can refer to one or more of something, and "a plurality of" can refer to two or more such things. For
example, a memory device can refer to one or more memory devices, and a plurality of memory devices can refer to two or more memory devices. Additionally, the designators "M", "P", "R", "B", "S", and "N", as used herein, particularly with respect to reference numerals in the drawings, indicate that a number of the particular feature so designated can be included with a number of embodiments of the present disclosure. The number may be the same or different between designations.[0029] The figures herein follow a numbering convention in which the first digit or digits correspond to the drawing figure number and the remaining digits identify an element or component in the drawing. Similar elements or components between different figures may be identified by the use of similar digits. For example, 101 may reference element "01" in Figure 1, and a similar element may be referenced as 201 in Figure 2.[0030] Figure 1 illustrates a diagram of a portion of a memory array 101 having a number of physical blocks in accordance with an embodiment of the present disclosure. Memory array 101 can be, for example, a flash memory array such as a NAND flash memory array. As an additional example, memory array 101 can be a resistance variable memory array such as a PCRAM, RRAM, MRAM, or spin torque transfer (STT) array, among others. However, embodiments of the present disclosure are not limited to a particular type of memory array. Further, memory array 101 (e.g., a subset of array 101, or the whole array 101) can be a secure memory array, as will be further described herein. Further, although not shown in Figure 1, memory array 101 can be located on a particular semiconductor die along with various peripheral circuitry associated with the operation thereof.[0031] As shown in Figure 1, memory array 101 has a number of physical blocks 107-0 (BLOCK 0), 107-1 (BLOCK 1), . . ., 107-B (BLOCK B) of memory cells. The memory cells can be single level cells and/or multilevel cells such as, for instance, two level cells, triple level cells (TLCs), or quadruple level cells (QLCs). As an example, the number of physical blocks in memory array 101 may be 128 blocks, 512 blocks, or 1,024 blocks, but embodiments are not limited to a particular power of two or to any particular number of physical blocks in memory array 101.
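The organization described in this paragraph and the ones that follow (an array of physical blocks, each block containing physical rows, with one physical page of cells per row) can be modeled as a simple nested structure. The following is a minimal, illustrative Python sketch; the class names, field names, and example sizes are hypothetical and not part of the disclosure:

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative model of the array organization: an array of physical blocks,
# each block containing a number of physical rows (one page per row here).
# Names and example sizes are hypothetical.

@dataclass
class PhysicalRow:
    page: bytes = b""  # one physical page of memory cells per row

@dataclass
class PhysicalBlock:
    rows: List[PhysicalRow] = field(default_factory=list)

def make_array(num_blocks: int = 512, rows_per_block: int = 32) -> List[PhysicalBlock]:
    """Build, e.g., the 512-block example above with 32 rows per block."""
    return [PhysicalBlock(rows=[PhysicalRow() for _ in range(rows_per_block)])
            for _ in range(num_blocks)]

array = make_array()
assert len(array) == 512 and len(array[0].rows) == 32
```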
[0032] A number of physical blocks of memory cells (e.g., blocks 107-0, 107-1, . . ., 107-B) can be included in a plane of memory cells, and a number of planes of memory cells can be included on a die. For instance, in the example shown in Figure 1, each physical block 107-0, 107-1, . . ., 107-B can be part of a single die. That is, the portion of memory array 101 illustrated in Figure 1 can be a die of memory cells.[0033] As shown in Figure 1, each physical block 107-0, 107-1, . . ., 107-B includes a number of physical rows (e.g., 103-0, 103-1, . . ., 103-R) of memory cells coupled to access lines (e.g., word lines). The number of rows (e.g., word lines) in each physical block can be 32, but embodiments are not limited to a particular number of rows 103-0, 103-1, . . ., 103-R per physical block. Further, although not shown in Figure 1, the memory cells can be coupled to columns of sense lines (e.g., data lines and/or digit lines).[0034] As one of ordinary skill in the art will appreciate, each row 103-0, 103-1, . . ., 103-R can include a number of pages of memory cells (e.g., physical pages). A physical page refers to a unit of programming and/or sensing (e.g., a number of memory cells that are programmed and/or sensed together as a functional group). In the embodiment shown in Figure 1, each row 103-0, 103-1, . . ., 103-R comprises one physical page of memory cells. However, embodiments of the present disclosure are not so limited. For instance, in an embodiment, each row can comprise multiple physical pages of memory cells (e.g., one or more even pages of memory cells coupled to even-numbered data lines, and one or more odd pages of memory cells coupled to odd-numbered data lines). Additionally, for embodiments including multilevel cells, a physical page of memory cells can store multiple pages (e.g., logical pages) of data (e.g., an upper page of data and a lower page of data, with each cell in a physical page storing one or more bits towards an upper page of data and one or more bits towards a lower page of data).[0035] As shown in Figure 1, a page of memory cells can comprise a number of physical sectors 105-0, 105-1, . . ., 105-S (e.g., subsets of memory cells). Each physical sector 105-0, 105-1, . . ., 105-S of cells can store a number of logical sectors of data. Additionally, each logical sector of data can correspond to a portion of a particular page of data. As an example, a first logical sector of data stored in a particular physical sector can correspond to a
logical sector corresponding to a first page of data, and a second logical sector of data stored in the particular physical sector can correspond to a second page of data. Each physical sector 105-0, 105-1, . . ., 105-S can store system and/or user data, and/or can include overhead data, such as error correction code (ECC) data, logical block address (LBA) data, and metadata.[0036] Logical block addressing is a scheme that can be used by a host for identifying a logical sector of data. For example, each logical sector can correspond to a unique logical block address (LBA). Additionally, an LBA may also correspond (e.g., dynamically map) to a physical address, such as a physical block address (PBA), that may indicate the physical location of that logical sector of data in the memory. A logical sector of data can be a number of bytes of data (e.g., 256 bytes, 512 bytes, 1,024 bytes, or 4,096 bytes). However, embodiments are not limited to these examples.[0037] It is noted that other configurations for the physical blocks 107-0, 107-1, . . ., 107-B, rows 103-0, 103-1, . . ., 103-R, sectors 105-0, 105-1, . . ., 105-S, and pages are possible. For example, rows 103-0, 103-1, . . ., 103-R of physical blocks 107-0, 107-1, . . ., 107-B can each store data corresponding to a single logical sector, which can include, for example, more or less than 512 bytes of data.[0038] Figure 2A illustrates an example of a pair of registers 214-1 and 214-2 used to define a secure memory array in accordance with an embodiment of the present disclosure, and Figure 2B illustrates a diagram of a portion of a memory array 201 that includes a secure memory array defined using registers 214-1 and 214-2 in accordance with an embodiment of the present disclosure. However, embodiments are not so limited; one or more registers and/or one or more pairs of registers could be used. As shown in Figure 2B, secure memory array 201 can include a number of physical blocks 207-0, 207-1, . . ., 207-B of memory cells, each including a number of physical rows 203-0, 203-1, . . ., 203-R having a number of sectors of memory cells, in a manner analogous to memory array 101 described in connection with Figure 1.[0039] As shown in Figure 2A, register 214-1 can define addresses of the secure array (e.g., the addresses of different portions of the secure array), and register 214-2 can define sizes of the secure array (e.g., the sizes of the different portions of the secure array). The addresses of the secure array defined by
register 214-1 can correspond to, for instance, starting points (e.g., starting LBAs) of the secure array (e.g., the starting points of the different portions of the secure array), and the sizes of the secure array defined by register 214-2 can correspond to, for instance, ending points (e.g., ending LBAs) of the secure array (e.g., the ending points of the different portions of the secure array).[0040] For example, as shown in Figure 2A, registers 214-1 and 214-2 can define N pairs of values, with each respective pair comprising an address value (e.g., addr) defined by register 214-1 and a size value (e.g., size) defined by register 214-2. For instance, in the example illustrated in Figure 2A, Pair0 comprises address value addr0 and size value size0 (e.g., Pair0 = [addr0, size0]), Pair1 comprises address value addr1 and size value size1 (e.g., Pair1 = [addr1, size1]), and so on, with PairN comprising address value addrN and size value sizeN (e.g., PairN = [addrN, sizeN]). The address value of a pair can correspond to a starting point (e.g., starting LBA) of a portion of the secure array, and the sum of the address value and the size value of that pair can correspond to the ending point (e.g., ending LBA) of that portion of the secure array. As such, the entire secure array (e.g., the portions that comprise the entire secure array) can be given by: [addr0, addr0 + size0] ∪ [addr1, addr1 + size1] ∪ . . . ∪ [addrN, addrN + sizeN].[0041] The first pair whose size value defined by register 214-2 is zero can stop the definition of the secure array. For instance, in the example illustrated in Figure 2A, if the size value of Pair2 is zero, then the secure array would be given by: [addr0, addr0 + size0] ∪ [addr1, addr1 + size1].[0042] An example of a secure array defined by registers 214-1 and 214-2 (e.g., with all size values defined by register 214-2 as non-zero) is illustrated in Figure 2B. For instance, as shown in Figure 2B, the address (e.g., LBA) associated with sector 205-0 of memory array 201 is addr0, the address associated with sector 205-1 of memory array 201 is addr0 + size0, the address associated with sector 205-2 of memory array 201 is addr1, the address associated with sector 205-3 of memory array 201 is addr1 + size1, the address associated with sector 205-4 of memory array 201 is addrN, and the address associated with sector 205-5 of memory array 201 is addrN + sizeN. As such, the secure array comprises sectors (e.g., the data stored in sectors) 205-0 through 205-1, sectors 205-2 through 205-3, and 205-4 through 205-5. However, the
sectors of memory array 201 that are before sector 205-0, and sectors 205-1 through 205-2 of memory array 201, are not part of the secure array (e.g., the secure array comprises a subset of array 201).[0043] Figure 3 is a block diagram of a computing system 300 including a host 302 and an apparatus in the form of a memory device 306 in accordance with an embodiment of the present disclosure. As used herein, an "apparatus" can refer to, but is not limited to, any of a variety of structures or combinations of structures, such as a circuit or circuitry, a die or dice, a module or modules, a device or devices, or a system or systems, for example. Further, in an embodiment, computing system 300 can include a number of memory devices analogous to memory device 306.[0044] In the embodiment illustrated in Figure 3, memory device 306 can include a memory 316 having a memory array 301. Memory array 301 can be analogous to memory array 101 described in connection with Figure 1 and memory array 201 described in connection with Figure 2B. Further, in an embodiment, memory array 301 (e.g., a subset of array 301, or the whole array 301) can be a secure array (e.g., an area of memory 316 to be kept under control).[0045] Figure 3 illustrates a pair of registers 314-1 and 314-2, although embodiments are not so limited, and one or more registers and/or one or more pairs of registers could be used. Registers 314-1 and 314-2 can be, for instance, registers 214-1 and 214-2, described in connection with Figure 2A, and secure memory array 301 can be, for instance, memory array 201 described in connection with Figure 2B. Data (e.g., the data 333) stored in memory array 301 can include sensitive (e.g., non-user) data, such as device firmware and/or code to be executed for sensitive applications (e.g., the routine). In some examples, the memory device 306 can include ECC corresponding to data 333, where the ECC and/or a digest of the data calculated by the memory device 306 are stored by the memory 316 in the same manner as the data 333 illustrated by Figure 3 (this embodiment is discussed in connection with Figure 4).[0046] In such embodiments, the pair of non-volatile registers 314-1 and 314-2 can be used to define the secure array to store the data 333 (and/or the ECC, corresponding data, and/or a digest). For example, in the embodiment illustrated in Figure 3, circuitry 310 includes registers 314-1 and 314-2 that can
be used to define the secure array. For instance, register 314-1 can define the address (e.g., the starting LBA of the data) of the secure array, and register 314-2 can define the size (e.g., the ending LBA of the data) of the secure array. Using this method, the data 333 can be stored and protected by the memory device 306.[0047] As illustrated in Figure 3, host 302 can be coupled to the memory device 306 via interface 304. Host 302 and memory device 306 can communicate (e.g., send commands and/or data) on interface 304. Host 302 and/or memory device 306 can be, or be part of, a computing device, a laptop computer, personal computer, digital camera, digital recording and playback device, mobile telephone, PDA, memory card reader, interface hub, or Internet of Things (IoT) enabled device, such as, for instance, an automotive (e.g., vehicular and/or transportation infrastructure) IoT enabled device or a medical (e.g., implantable and/or health monitoring) IoT enabled device, an automatic teller machine (ATM), among other host systems, and can include a memory access device (e.g., a processor). One of ordinary skill in the art will appreciate that "a processor" can intend one or more processors, such as a parallel processing system, a number of coprocessors, etc.[0048] Interface 304 can be in the form of a standardized physical interface. For example, when memory device 306 is used for information storage in computing system 300, interface 304 can be a serial advanced technology attachment (SATA) physical interface, a peripheral component interconnect express (PCIe) physical interface, a universal serial bus (USB) physical interface, or a small computer system interface (SCSI), among other physical connectors and/or interfaces. In general, however, interface 304 can provide an interface for passing control, address, information (e.g., data), and other signals between memory device 306 and a host (e.g., host 302) having compatible receptors for interface 304.[0049] Memory device 306 includes controller 308 to communicate with host 302 and with memory 316 (e.g., memory array 301). For instance, controller 308 can send commands to perform operations on memory array 301, including operations to sense (e.g., read), program (e.g., write), move, and/or erase data, among other operations.[0050] Controller 308 can be included on the same physical device (e.g., the same die) as memory 316. Alternatively, controller 308 can be included on a
separate physical device that is communicatively coupled to the physical device that includes memory 316. In an embodiment, components of controller 308 can be spread across multiple physical devices (e.g., some components on the same die as the memory, and some components on a different die, module, or board) as a distributed controller.[0051] Host 302 can include a host controller 321 to communicate with memory device 306. The host controller 321 can be included on the same physical host device 302. Alternatively, the host controller 321 can be a separate physical device that is communicatively coupled to the memory device 306 and/or multiple memory devices (discussed further in connection with Figure 4). The host controller 321 can send commands to memory device 306 via interface 304. The host controller 321 can communicate with memory device 306 and/or the controller 308 on the memory device 306 to read, write, and/or erase data, among other operations. Further, in an embodiment, host 302 can be an IoT enabled device, as described herein, having IoT communication capabilities.[0052] Controller 308 on memory device 306 and/or the host controller 321 on host 302 can include control circuitry and/or logic (e.g., hardware and firmware). In an embodiment, controller 308 on memory device 306 and/or the host controller 321 can be an application specific integrated circuit (ASIC) coupled to a printed circuit board including a physical interface. Also, memory device 306, host controller 321, and/or host 302 can include a buffer of volatile and/or non-volatile memory and a number of registers (e.g., the registers 314-1 and 314-2).[0053] For example, as shown in Figure 3, memory device 306 can include circuitry 310. In the embodiment illustrated in Figure 3, circuitry 310 is included in controller 308. However, embodiments of the present disclosure are not so limited. For instance, in an embodiment, circuitry 310 may be included in (e.g., on the same die as) memory 316 (e.g., instead of in controller 308). Circuitry 310 can comprise, for instance, hardware, firmware, and/or software.[0054] Computing system 300 (e.g., host 302 and memory device 306) can utilize error identification in executed code to determine if an error has been identified in data 333. For example, the circuitry 310 may read data 333 stored in the array 301 of the memory 316. The circuitry 310 can identify a different memory, which can include an ECC corresponding to the data 333 read by the
circuitry 310. As mentioned, the automatic correction of error introduced to the data 333 may introduce additional error. The memory device 306 may execute an integrity check to compare the ECC to the data 333 read by the memory device 306. The memory device 306 may take an action in response to the comparison of the read data of the memory 316 and the ECC, where the comparison indicates that the ECC identified an error in the data 333 read by the memory 316. In this way, the memory device 306, the host controller 321, and/or the host 302 can make a determination of how to correct the error identified by the ECC.[0055] For example, the circuitry 310 can be configured to determine whether the error identified by the ECC affects an operation of the host device 302 associated with the data 333 read by the memory 316. For example, the data 333 can be code for a routine relating to a powertrain operation for a host 302 in the form of a vehicle. The powertrain routine can be provisioned to the memory 316 as data 333 and to a different memory as ECC during manufacture and/or another secure instance. The memory device 306 may be a boot memory device that executes the powertrain routine (e.g., the data 333).[0056] The host controller 321, the circuitry 310 and/or the memory 316 can execute an integrity check in response to a start-up process of the memory 316 and/or other indication to execute the data 333. The integrity check can include a comparison by the host controller 321, the circuitry 310 and/or the memory 316, of ECC read by a different memory included on a different memory device in parallel to the data 333 read by the memory 316. The integrity check can include a determination by the circuitry 310 and/or the host 302 that a correction applied to an error identified by the ECC introduced an additional error to the ECC. Said differently, the automatic correction of identified error by the ECC may have introduced new error to the ECC, and applying a similar correction to the data 333 may introduce additional error to the data 333. Introduction of other error may cause the routine to be skipped or altered, and/or cause other operational problems. Based on the integrity check, the host 302, host controller 321, and/or the circuitry 310 can take an action to refrain from correcting the error corresponding to the read data 333 of the memory in response to the error identified by the ECC, and/or may determine an alternative method of correction.
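As a concrete illustration of the sequence just described (read the data, read the corresponding ECC-protected copy from a different memory in parallel, compare, and refrain from automatic correction on a mismatch), consider the following minimal Python sketch; the function names and return values are hypothetical, and the comparison is simplified to byte equality:

```python
# Minimal sketch of the integrity check described above; names are hypothetical.

def integrity_check(read_data: bytes, ecc_copy: bytes) -> bool:
    """Compare the data read by one memory device (e.g., data 333) to the
    ECC-protected copy read in parallel by a different memory device."""
    return read_data == ecc_copy

def on_startup(read_data: bytes, ecc_copy: bytes) -> str:
    """Sketch of the action taken in response to the comparison."""
    if integrity_check(read_data, ecc_copy):
        return "proceed with the routine"
    # The ECC identified an error. Correcting it automatically could introduce
    # additional error, so refrain from correction and alert the host instead.
    return "alert the host; refrain from automatic correction"
```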
[0057] Figure 4 illustrates a block diagram of an example system 409 including a host controller 421 and example memory devices 406-1 and 406-2 in accordance with an embodiment of the present disclosure. A host (e.g., the host 302 of Figure 3) may include a host controller 421, where the host controller 421 can be communicatively coupled to at least one memory device (e.g., memory device 406-1) and at least one other memory device (e.g., the memory device 406-2). For example, the system 409 illustrated in Figure 4 includes a host controller 421 communicatively coupled via interface 404-1 to a memory device 406-1 having a memory 416-1 and an array 401-1. The host controller 421 is illustrated as communicatively coupled via interface 404-2 to another memory device 406-2 having a memory 416-2 and an array 401-2.[0058] The memory device 406-1 can be provisioned with data 433-1, 433-2, and 433-N (e.g., data 333 of Figure 3). The data 433-1, 433-2, and 433-N can be a code stream corresponding to a routine. The data 433-1, 433-2, and 433-N coding for the routine may be securely provisioned onto the memory device 406-1 using a public/private key exchange between the host associated with the host controller 421 and the memory device 406-1. The ECC 432-1, 432-2, and 432-M may correspond to the data 433-1, 433-2, and 433-N and be securely provisioned onto the memory device 406-2 using a public/private key exchange between the host associated with the host controller 421 and the memory device 406-2. The generation and validation of the public and private keys are discussed further in connection with Figures 6-11.[0059] In some examples, the data 433-1, 433-2, and 433-N making up the code stream for the routine can be fixed units of data (e.g., 5-8 double words, but examples are not so limited). The routine can be run-time executable code which may be important to the operation of the host corresponding to the host controller 421. To detect error in the data 433-1, 433-2, 433-N and determine an action to correct the error, the host controller 421 can be associated with a different memory device 406-2, where the different memory device 406-2 is provisioned with the routine (e.g., a code stream for the routine coded in data 433-1, 433-2, and 433-N) and error correction/detection capabilities.[0060] For example, the memory device 406-2 can include ECC 432-1, 432-2, 432-M corresponding to the routine, and/or digests 435-1, 435-2, 435-P corresponding to the routine. The ECC 432-1, 432-2, 432-M may include code
corresponding to data 433-1, 433-2, and 433-N of the memory device 406-1, where the ECC is a bit parity concatenated therewith. For example, ECC 432-1 may include the code of data 433-1 concatenated with an error correcting portion, ECC 432-2 may include the code of data 433-2 concatenated with an error correcting portion, and ECC 432-M may include the code of data 433-N concatenated with an error correcting portion.[0061] The digests 435-1, 435-2, and 435-P are products of a hash function applied by the circuitry (e.g., the circuitry 310 of Figure 3) to the code (e.g., the data 433-1, 433-2, 433-N) for the routine. For example, the digests 435-1, 435-2, and 435-P can be cryptographic primitives (e.g., a hash) produced from corresponding data (e.g., the data 433-1, 433-2, 433-N), where a change to the data can produce a different digest. Said differently, a digest calculated by the circuitry from data 433-1 will change when an error is present in the data 433-1.[0062] For example, the digest 435-1 can be a hash for the data 433-1, the digest 435-2 can be a hash for the data 433-2, and the digest 435-P can be a hash for the data 433-N. Used individually or together, the ECC (432-1, 432-2, 432-M) and the digests (435-1, 435-2, 435-P) can be used by the host (e.g., the host 302 of Figure 3), the host controller 421, and/or circuitry (e.g., the circuitry 310 of Figure 3) to determine the integrity of the code of the routine (e.g., the data 433-1, 433-2, and 433-N) as it is executed by the memory device 406-1.[0063] The data 433-1, 433-2, 433-N may be executed in parallel with the ECC 432-1, 432-2, 432-M and/or the digest 435-1, 435-2, 435-P to identify error in the executed code (e.g., the data 433-1, 433-2, 433-N). Specifically, the memory device 406-2 can include circuitry (e.g., the circuitry 310 of Figure 3) configured to read ECC (e.g., 432-1, 432-2, and 432-M) stored in an array 401-2 of the memory 416-2, and identify a different memory device 406-1 having read data 433-1, 433-2, 433-N corresponding to the ECC (e.g., 432-1, 432-2, and 432-M) of the memory 416-2. The circuitry of the memory device 406-2, and/or the host controller 421, can execute an integrity check to compare the ECC (e.g., 432-1, 432-2, and 432-M) to the read data 433-1, 433-2, and 433-N of the different memory device 406-1. The integrity check can determine and/or monitor the read data 433-1, 433-2, and 433-N for error, based on the comparison to the ECC 432-1, 432-2, and 432-M.
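The two primitives just described — an ECC word carrying the code concatenated with a parity portion, and a digest produced by hashing the code — might be sketched as follows. SHA-256 is used for the digest and a single even-parity byte stands in for the error correcting portion, both purely as illustrative assumptions:

```python
import hashlib

def make_ecc_word(data: bytes) -> bytes:
    """Concatenate a code unit with an (illustrative) parity portion, analogous
    to ECC 432-1 carrying the code of data 433-1 plus bit parity."""
    parity = bytes([sum(bin(b).count("1") for b in data) % 2])  # 1-byte even parity
    return data + parity

def make_digest(data: bytes) -> bytes:
    """Hash of a code unit, analogous to digest 435-1 for data 433-1."""
    return hashlib.sha256(data).digest()

routine_unit = b"example code unit of the routine"
ecc_word = make_ecc_word(routine_unit)
digest = make_digest(routine_unit)

# Any change to the data produces a different digest:
assert make_digest(b"Example code unit of the routine") != digest
```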
[0064] In response to the comparison of the read data 433-1, 433-2, 433-N and the ECC 432-1, 432-2, and 432-M, the host controller 421 and/or the circuitry can take an action, where the comparison indicates that the ECC identified an error in the read data 433-1, 433-2, and/or 433-N of the memory device 406-1. Because the correction of error can introduce new error into the data to be executed, the host controller 421 and/or the circuitry associated with the memory device 406-1 may take an action to refrain from correcting the error identified by the ECC 432-1, 432-2, and 432-M. Alternatively and/or additionally, the host controller 421 and/or the circuitry associated with the memory device 406-1 can determine how the error and/or the correction of the error may affect the routine coded by the data 433-1, 433-2, and 433-N and determine to correct the error identified by the comparison of the ECC 432-1, 432-2, and 432-M and the data 433-1, 433-2, 433-N. In this way, inadvertent error introduced by a corrective action can be monitored and/or identified.[0065] Figure 5 illustrates an example flow diagram for error identification in executed code in accordance with embodiments of the present disclosure. At 522, a host device (e.g., the host 302 of Figure 3) can set up at least one memory device to execute a routine for an operation of the host device. For example, at 541, the host device can define a routine. The host device may securely communicate with one or more memory devices (e.g., the memory devices 406-1 and 406-2 of Figure 4) by exchanging public/private keys in order to exchange encrypted data (e.g., the data 433-1, 433-2, 433-N) coding for the routine. The host device and/or circuitry (e.g., the circuitry 310 of Figure 3) associated with the memory devices can provision at least one memory device with ECC (e.g., the ECC 432-1, 432-2, 432-M) and/or may calculate a digest (e.g., 435-1, 435-2, and 435-P of Figure 4) based on the ECC and/or data.[0066] As mentioned, the host device may be a vehicle and the host controller may be included in the vehicle or external to the vehicle. The host controller may be in communication with multiple memory devices; the memory devices may store data strings that code for important routines (e.g., powertrain operation) of the vehicle (e.g., the host device). The memory devices may be provisioned with the data and corresponding ECC, and/or the digests may be calculated, at a secure location and/or a secure time. For example, the memory
devices may be provisioned with the data, ECC, and digest during the manufacture of the host and/or host controller.[0067] The host device may include a host controller (e.g., the host controller 421 of Figure 4) communicatively coupled to the memory devices to identify error in executed code. At 542, the host controller and/or the circuitry of the respective memory devices can execute the data stream and corresponding ECC in parallel. For example, a first memory device may be a boot memory device having memory and circuitry and in communication with the host controller. A second memory device may be an error correcting memory device having memory and circuitry, and also in communication with the host controller. A system (e.g., the system 409 of Figure 4) may start up, and the first memory device may read the data stream of the routine in parallel with the second memory device reading the ECC corresponding to the routine. The first memory device, via circuitry (e.g., the circuitry 310 of Figure 3), may transmit the read data to the host controller. The second memory device, via circuitry (e.g., the circuitry 310 of Figure 3), may transmit the read ECC to the host controller.[0068] At 543, the host controller may receive the executed (e.g., read) data transmitted from circuitry of the first memory device and the ECC corresponding to the data from the circuitry of the second memory device. In some examples, the circuitry of the second memory device may be configured to transmit a calculated digest (e.g., a calculated hash) corresponding to the data stream transmitted by the first memory device. The digest may be transmitted by the circuitry of the second memory device individually or together with the ECC data such that the host device may execute an integrity check.[0069] For example, at 544, the host controller may execute an integrity check by comparing the received ECC and/or the digest corresponding to the data (and calculated by the circuitry of the second memory device) to the read data from the first memory device. If the ECC of the second memory device and the read data of the first memory device do not match ("NO" at 545), there may be an error in the read data. Put another way, the ECC may automatically correct one or more errors, thereby no longer corresponding to the data of the first memory device. In this example, at 547, the host controller and/or the circuitry associated with the first memory device may take an action in response to the
comparison that the ECC identified an error in the read data of the first memory device.[0070] In some examples, comparing the read data of the first memory device to the ECC provided by the second memory device includes comparing a hash function corresponding to the data and a digest corresponding to the ECC. The digest calculated may not match the expected read data (e.g., or a hash of the expected read data) of the first memory device. Each digest (e.g., 435-1, 435-2, 435-P of Figure 4) may be calculated based on the data (e.g., 433-1, 433-2, 433-N of Figure 4), where any change to the data may change the value of the digest. When an error has occurred in the data, the digest outputted by the second memory device may change, indicating an error. The host controller and/or the circuitry of the first memory device may compare, at 544, the received digest to the read data. If the digest and the read data do not match (e.g., "NO" at 546), an error has occurred, and the host controller and/or the circuitry of the memory devices may take an action at 547.[0071] For example, the circuitry of the first and/or second memory device can determine where the error occurred in the routine and determine what effect the error may have on the routine. The circuitry of the first and/or second memory device can abort (e.g., halt) the operation of the host device based on the identification of the error by the ECC and/or the digest. Alternatively, and/or additionally, the host controller may correct the error in the read data based on the identification by the ECC and/or the digest. In another example, the action taken may be an alert indicating the error, where the alert is created by the circuitry of the first memory device and/or the second memory device and communicated to the host device.[0072] During the integrity check comparison at 544, the host controller and/or the circuitry of the first memory device and/or the second memory device may determine that the received read data from the first memory device and the ECC and/or the digest of the second memory device match ("YES" at 548). A match between the read data and the ECC and/or the digest indicates that there is no error present in the read data from the first memory device. In this example, at 549, the circuitry of the first and/or the second memory device may proceed with the operation of the routine coded by the data.
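The decision flow of Figure 5 might be sketched in Python as follows; the step numbers from the figure appear as comments, the ECC word is assumed to carry the code concatenated with a parity portion as described in connection with Figure 4, and all names are hypothetical:

```python
import hashlib

def host_integrity_check(read_data: bytes, ecc_word: bytes, digest: bytes) -> str:
    """Sketch of steps 544-549 of Figure 5 (names hypothetical)."""
    # 544: compare the ECC provided by the second memory device to the read
    # data transmitted by the first memory device.
    if ecc_word[:len(read_data)] != read_data:           # "NO" at 545
        return take_action()                             # 547
    # 546: compare the digest calculated by the second memory device to a
    # hash of the read data.
    if hashlib.sha256(read_data).digest() != digest:     # "NO" at 546
        return take_action()                             # 547
    return "proceed with the routine"                    # 549 ("YES" at 548)

def take_action() -> str:
    # 547: e.g., alert the host, abort (halt) the operation, or determine an
    # alternative method of correction -- but do not correct automatically.
    return "alert the host"
```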
[0073] Figure 6 is a block diagram of an example system including a memory device 606 and a host 602 in accordance with an embodiment of the present disclosure. Memory device 606 and host 602 can be, for example, memory device 306 and host 302, respectively, described in connection with Figure 3.[0074] A computing device can boot in stages using layers, with each layer authenticating and loading a subsequent layer and providing increasingly sophisticated runtime services at each layer. A layer can be served by a prior layer and serve a subsequent layer, thereby creating an interconnected web of the layers that builds upon lower layers and serves higher order layers. As is illustrated in Figure 6, Layer 0 ("L0") 651 and Layer 1 ("L1") 653 are within the memory device 606. Layer 0 651 can provide a Firmware Derivative Secret (FDS) key 652 to Layer 1 653. The FDS key 652 can describe the identity of code of Layer 1 653 and other security relevant data. In an example, a particular protocol (such as robust internet of things (RIOT) core protocol) can use the FDS 652 to validate code of Layer 1 653 that it loads. In an example, the particular protocol can include a device identification composition engine (DICE) and/or the RIOT core protocol. As an example, an FDS can include the Layer 1 firmware image itself, a manifest that cryptographically identifies authorized Layer 1 firmware, a firmware version number of signed firmware in the context of a secure boot implementation, and/or security-critical configuration settings for the device. A device secret 658 can be used to create the FDS 652 and be stored in memory associated with the memory device 606.[0075] The memory device can transmit data, as illustrated by arrow 654, to the host 602. The transmitted data can include an external identification that is public, a certificate (e.g., an external identification certificate), and/or an external public key. Layer 2 ("L2") 655 of the host 602 can receive the transmitted data, and execute the data in operations of the operating system ("OS") 657 and on a first application 659-1 and a second application 659-2.[0076] In an example operation, the memory device 606 can read the device secret 658, hash an identity of Layer 1 653, and perform a calculation including:

KL1 = KDF [ Fs(s), Hash("immutable information") ]
where KL1 is an external public key, KDF (e.g., KDF defined in the National Institute of Standards and Technology (NIST) Special Publication 800-108) is a key derivation function (e.g., HMAC-SHA256), and Fs(s) is the device secret 658. FDS 652 can be determined by performing:

FDS = HMAC-SHA256 [ Fs(s), SHA256("immutable information") ]

Likewise, the host 602 can transmit data, as illustrated by arrow 656, to the memory device 606.[0077] Figure 7 is a block diagram of an example process to determine a number of parameters in accordance with an embodiment of the present disclosure. Figure 7 is an example of a determination of the parameters including the external public identification, the external certificate, and the external public key that are then sent, indicated by arrow 754, to Layer 2 (e.g., Layer 2 655) of a host device (e.g., 602 in Figure 6). Layer 0 ("L0") 751 in Figure 7 corresponds to Layer 0 651 in Figure 6 and likewise FDS 752 corresponds to FDS 652, Layer 1 753 corresponds to Layer 1 653, and arrows 754 and 756 correspond to arrows 654 and 656, respectively.[0078] The FDS 752 from Layer 0 751 is sent to Layer 1 753 and used by an asymmetric ID generator 761 to generate a public identification ("IDlk public") 765 and a private identification 767. In the abbreviation "IDlk public," the "lk" indicates Layer k (in this example Layer 1), and the "public" indicates that the identification is openly shared. The public identification 765 is illustrated as shared by the arrow extending to the right and outside of Layer 1 753 of the memory device. The generated private identification 767 is used as a key input into an encryptor 773. The encryptor 773 can be any processor, computing device, etc. used to encrypt data.[0079] Layer 1 753 of a memory device can include an asymmetric key generator 763. In at least one example, a random number generator (RND) 736 can optionally input a random number into the asymmetric key generator 763. The asymmetric key generator 763 can generate a public key ("KLk public") 769 (referred to as an external public key) and a private key ("KLk private") 771 (referred to as an external private key) associated with a memory device such as memory device 606 in Figure 6. The external public key 769 can be an input (as "data") into the encryptor 773. The encryptor 773 can generate a result K' 775 using the inputs of the external private identification 767 and the external public
key 769. The external private key 771 and the result K' 775 can be input into an additional encryptor 777, resulting in output K'' 779. The output K'' 779 is the external certificate ("IDL1 certificate") 781 transmitted to Layer 2 (655 of Figure 6). The external certificate 781 can provide an ability to verify and/or authenticate an origin of data sent from a device. As an example, data sent from the memory device can be associated with an identity of the memory device by verifying the certificate, as will be described further in association with Figure 9. Further, the external public key ("KL1 public key") 783 can be transmitted to Layer 2. Therefore, the public identification 765, the certificate 781, and the external public key 783 of a memory device can be transmitted to Layer 2 of a host device.[0080] Figure 8 is a block diagram of an example process to determine a number of parameters in accordance with an embodiment of the present disclosure. Figure 8 illustrates a Layer 2 855 of a host (e.g., host 602 in Figure 6) generating a device identification ("IDL2 public") 866, a device certificate ("IDL2 Certificate") 882, and a device public key ("KL2 public key") 884.[0081] The external public key ("KL1 public key") 883 transmitted from Layer 1 of the memory device to Layer 2 855 of a host, as described in Figure 7, is used by an asymmetric ID generator 862 of the host to generate a public identification ("IDlk public") 866 and a private identification 868 of the host. In the abbreviation "IDlk public," the "lk" indicates Layer k (in this example Layer 2), and the "public" indicates that the identification is openly shared. The public identification 866 is illustrated as shared by the arrow extending to the right and outside Layer 2 855. The generated private identification 868 is used as a key input into an encryptor 874.[0082] As shown in Figure 8, the external certificate 881 and external identification 865, along with the external public key 883, are used by a certificate verifier 824. The certificate verifier 824 can verify the external certificate 881 received from a memory device (e.g., memory device 606), and determine, in response to the external certificate 881 being verified or not being verified, whether to accept or discard data received from the memory device. Further details of verifying the external certificate 881 are described in connection with Figure 9.
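The derivation given in connection with Figure 6 maps directly onto standard primitives. A minimal Python sketch follows, treating a single HMAC-SHA256 invocation as the KDF (a simplification of the NIST SP 800-108 construction mentioned above); the device-secret value is a placeholder:

```python
import hashlib
import hmac

# Sketch of FDS = HMAC-SHA256 [ Fs(s), SHA256("immutable information") ]
# from the Figure 6 discussion. The device secret below is a placeholder.

device_secret = bytes(32)  # Fs(s): the device secret (e.g., 658)

immutable_info_hash = hashlib.sha256(b"immutable information").digest()
fds = hmac.new(device_secret, immutable_info_hash, hashlib.sha256).digest()

# KL1 = KDF [ Fs(s), Hash("immutable information") ] would be derived
# analogously, with the full SP 800-108 KDF in place of the single HMAC call.
```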
[0083] Layer 2 855 of the host can include an asymmetric key generator 864. In at least one example, a random number generator (RND) 838 can optionally input a random number into the asymmetric key generator 864. The asymmetric key generator 864 can generate a public key ("KLk public") 870 (referred to as a device public key) and a private key ("KLk private") 872 (referred to as a device private key) associated with a host device such as host 602 in Figure 6. The device public key 870 can be an input (as "data") into the encryptor 874. The encryptor 874 can generate a result K' 876 using the inputs of the device private identification 868 and the device public key 870. The device private key 872 and the result K' 876 can be input into an additional encryptor 878, resulting in output K'' 880. The output K'' 880 is the device certificate ("IDL2 certificate") 882 transmitted back to Layer 1 (653 of Figure 6). The device certificate 882 can provide an ability to verify and/or authenticate an origin of data sent from a device. As an example, data sent from the host can be associated with an identity of the host by verifying the certificate, as will be described further in association with Figure 9. Further, the device public key ("KL2 public key") 884 can be transmitted to Layer 1. Therefore, the public identification 866, the certificate 882, and the device public key 884 of the host can be transmitted to Layer 1 of a memory device.[0084] In an example, in response to a memory device receiving a public key from a host, the memory device can encrypt data to be sent to the host using the device public key. Vice versa, the host can encrypt data to be sent to the memory device using the external public key. In response to the host receiving data encrypted using the device public key, the host can decrypt the data using its own device private key. Likewise, in response to the memory device receiving data encrypted using the external public key, the memory device can decrypt the data using its own external private key. As the device private key is not shared with another device outside the host and the external private key is not shared with another device outside the memory device, the data sent to the host and the memory device remains secure.[0085] Figure 9 is a block diagram of an example process to verify a certificate in accordance with an embodiment of the present disclosure. In the illustrated example of Figure 9, a public key 983, a certificate 981, and a public identification 965 are provided from a memory device (e.g., from Layer 1 653 of
memory device 606 in Figure 6). The data of the certificate 981 and the external public key 983 can be used as inputs into a decryptor 985. The decryptor 985 can be any processor, computing device, etc., used to decrypt data. The result of the decryption of the certificate 981 and the external public key 983 can be used as an input into a secondary decryptor 987 along with the public identification 965, resulting in an output. The external public key 983 and the output from the decryptor 987 can indicate, as illustrated at 989, whether the certificate is verified by a comparison, resulting in a yes or no 991 as an output. In response to the certificate being verified, data received from the device being verified can be accepted, decrypted, and processed. In response to the certificate not being verified, data received from the device being verified can be discarded, removed, and/or ignored. In this way, nefarious devices sending nefarious data can be detected and avoided. As an example, a hacker sending data to be processed can be identified and the hacking data not processed.[0086] Figure 10 is a block diagram of an example process to verify a signature in accordance with an embodiment of the present disclosure. In the instance where a device is sending data that may be verified in order to avoid subsequent repudiation, a signature can be generated and sent with data. As an example, a first device may make a request of a second device, and once the second device performs the request, the first device may indicate that the first device never made such a request. An anti-repudiation approach, such as using a signature, can avoid repudiation by the first device and ensure that the second device can perform the requested task without subsequent difficulty.[0087] A memory device 1006 (such as memory device 306 in Figure 3) can send data 1090 to a host (such as host 302 in Figure 3). The memory device 1006 can generate, at 1094, a signature 1096 using a device private key 1071. The signature 1096 can be transmitted to the host 1002. The host 1002 can verify, at 1098, the signature using data 1092 and the external public key 1069 previously received. In this way, the signature is generated using a private key and verified using a public key. In this way, the private key used to generate a unique signature can remain private to the device sending the signature while allowing the receiving device to be able to decrypt the signature using the public key of the sending device for verification. This is in contrast to encryption/decryption of the data, which is encrypted by the sending device
using the public key of the receiving device and decrypted by the receiving device using the private key of the receiver. In at least one example, the device can verify the digital signature by using an internal cryptography process (e.g., the Elliptic Curve Digital Signature Algorithm (ECDSA)) or a similar process. [0088] Figure 11 is a block diagram of an example memory device 1106 in accordance with an embodiment of the present disclosure. Memory device 1106 can be, for example, memory device 306 previously described in connection with Figure 3. [0089] As shown in Figure 11, memory device 1106 can include a number of memory arrays 1101-1 through 1101-7. Memory arrays 1101-1 through 1101-7 can be analogous to memory array 101 previously described in connection with Figure 1. Further, in the example illustrated in Figure 11, memory array 1101-3 is a secure array, subset 1111 of memory array 1101-6 comprises a secure array, and subsets 1113 and 1115 of memory array 1101-7 comprise a secure array. Subsets 1111, 1113, and 1115 can each include, for instance, 4 kilobytes of data. However, embodiments of the present disclosure are not limited to a particular number or arrangement of memory arrays or secure arrays. [0090] As shown in Figure 11, memory device 1106 can include a remediation (e.g., recovery) block 1117. Remediation block 1117 can be used as a source of data in case of errors (e.g., mismatches) that may occur during operation of memory device 1106. Remediation block 1117 may be outside of the area of memory device 1106 that is addressable by a host. [0091] As shown in Figure 11, memory device 1106 can include a serial peripheral interface (SPI) 1104 and a controller 1108. Memory device 1106 can use SPI 1104 and controller 1108 to communicate with a host and memory arrays 1101-1 through 1101-7, as previously described herein (e.g., in connection with Figure 3). [0092] As shown in Figure 11, memory device 1106 can include a secure register 1119 for managing the security of memory device 1106. For example, secure register 1119 can be used to configure, and communicate externally with, an application controller. Further, secure register 1119 may be modifiable by an authentication command.
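By way of illustration only (this sketch is not part of the described memory device, and the key and message names are hypothetical), the private-key signing and public-key verification flow of Figure 10 can be sketched in Python using the widely available cryptography package:

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Sender (e.g., the memory device): keeps the private key, publishes the public key.
device_private_key = ec.generate_private_key(ec.SECP256R1())
device_public_key = device_private_key.public_key()

data = b"command payload"  # stands in for data 1090/1092
signature = device_private_key.sign(data, ec.ECDSA(hashes.SHA256()))

# Receiver (e.g., the host): verifies with the sender's public key.
try:
    device_public_key.verify(signature, data, ec.ECDSA(hashes.SHA256()))
    verified = True   # accept and process the data
except InvalidSignature:
    verified = False  # discard and/or ignore the data

Because only the holder of the private key could have produced a valid signature over the data, verification with the public key provides the anti-repudiation property described above.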
[0093] As shown in Figure 11, memory device 1106 can include keys 1121. For instance, memory device 1106 can include eight different slots to store keys such as root keys, DICE-RIOT keys, and/or other external session keys. [0094] As shown in Figure 11, memory device 1106 can include an electronically erasable programmable read-only memory (EEPROM) 1123. EEPROM 1123 can provide a secure non-volatile area available for a host, in which individual bytes of data can be erased and programmed. [0095] As shown in Figure 11, memory device 1106 can include counters (e.g., monotonic counters) 1125. Counters 1125 can be used as an anti-replay mechanism (e.g., freshness generator) for commands (e.g., to sign a command set or sequence) received from and/or sent to a host. For instance, memory device 1106 can include six different monotonic counters, two of which may be used by memory device 1106 for the authenticated commands, and four of which may be used by the host. [0096] As shown in Figure 11, memory device 1106 can include an SHA-256 cryptographic hash function 1127 and/or an HMAC-SHA256 cryptographic hash function 1129. SHA-256 and/or HMAC-SHA256 cryptographic hash functions 1127 and 1129 can be used by memory device 1106 to generate cryptographic hashes, such as, for instance, the cryptographic hashes of the update 220 previously described herein in connection with Figure 2, and/or a golden hash used to validate the data stored in memory arrays 1101-1 through 1101-7 as previously described herein. Further, memory device 1106 can support L0 and L1 of DICE-RIOT 1131. [0097] Although specific embodiments have been illustrated and described herein, those of ordinary skill in the art will appreciate that an arrangement calculated to achieve the same results can be substituted for the specific embodiments shown. This disclosure is intended to cover adaptations or variations of a number of embodiments of the present disclosure. It is to be understood that the above description has been made in an illustrative fashion, and not a restrictive one. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of ordinary skill in the art upon reviewing the above description. The scope of a number of embodiments of the present disclosure includes other applications in which the above structures and methods are used. Therefore, the scope of a
number of embodiments of the present disclosure should be determined with reference to the appended claims, along with the full range of equivalents to which such claims are entitled. [0098] In the foregoing Detailed Description, some features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the disclosed embodiments of the present disclosure have to use more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment. |
Methods, apparatus and systems for facilitating explicit flow control for RDMA transfers using implicit memory registration. To set up an RDMA data transfer, a source RNIC sends a request to allocate a destination buffer at a destination RNIC using implicit memory registration. Under implicit memory registration, the page or pages to be registered are not explicitly identified by the source RNIC, and may correspond to pages that are paged out to virtual memory. As a result, registration of such pages results in page faults, leading to a page fault delay before registration and pinning of the pages are completed. In response to detection of a page fault, the destination RNIC returns an acknowledgment indicating that a page fault delay is occurring. In response to receiving the acknowledgment, the source RNIC temporarily stops sending packets, and does not retransmit packets for which ACKs are not received prior to retransmission timeout expiration. |
CLAIMS What is claimed is: 1. A method, comprising: receiving, at a first Remote Direct Memory Access (RDMA) Network Interface Controller (RNIC), a first message containing a request for registration of memory for use as a destination buffer to be employed in connection with an RDMA data transfer from a second RNIC to the first RNIC using a reliable transport protocol; and in response to a memory registration operation relating to the request and resulting in a page fault event, sending a first acknowledgement message to the second RNIC acknowledging the first message has been received and containing information indicating a page fault delay event is occurring. 2. The method of claim 1, wherein the first acknowledgement message includes a backoff time and comprises a request for the second RNIC to not send packets corresponding to the RDMA data transfer until the backoff time has expired. 3. The method of claim 1 or 2, further comprising: receiving a plurality of packets from the second RNIC during the page fault delay; temporarily buffering the packets on the first RNIC while the page fault delay is occurring; and, after the page fault delay has completed, sending a second acknowledgement message to the second RNIC indicating the plurality of packets have been received. 4. The method of any of the preceding claims, wherein the first RNIC is installed in a host platform having system memory used for RDMA buffers and a processor including a memory management unit (MMU) used to manage access to the system memory, and wherein an operating system that employs virtual memory is running on the host platform, the method further comprising implementing a mechanism to synchronize a portion of page table information employed by the MMU for pages allocated to RDMA destination buffers with a cached copy of the page table information accessed locally by the first RNIC. 5. The method of claim 4, wherein the memory to be registered corresponding to the request comprises one or more memory pages, the method further comprising providing indicia to the MMU requesting pinning of the one or more memory pages. 6. The method of claim 5, further comprising providing indicia to the MMU after the RDMA data transfer has been completed identifying the one or more memory pages may be unpinned. 7. The method of any of the preceding claims, wherein the first RNIC is installed in a host platform having system memory used for RDMA buffers and a processor including a memory management unit (MMU) used to manage access to the system memory, and wherein an operating system that employs virtual memory is running on the host platform and employs a paging table in kernel memory, the method further comprising updating page table entries in the paging table via the first RNIC to identify corresponding memory pages are pinned. 8. The method of claim 7, further comprising updating page table entries via the first RNIC to identify corresponding memory pages are unpinned after usage of an RDMA destination buffer employing the memory pages is complete. 9. The method of any of the preceding claims, wherein the first RNIC is installed in a host platform having system memory used for RDMA buffers and a processor including a memory management unit (MMU) used to manage access to the system memory, and wherein an operating system that employs virtual memory is running on the host platform and employs a paging table in kernel memory, the method further comprising detecting, via the first RNIC, that the page fault has occurred. 10. 
The method of any of the preceding claims, further comprising determining that a page fault will result prior to attempting to register memory to be used for the destination buffer. 11. The method of any of the preceding claims, further comprising: receiving a request to allocate a destination buffer or extend the size of an existing destination buffer during an ongoing RDMA data transfer corresponding to a single RDMA work request; and allocating the destination buffer or extending the size of an existing destination buffer using one or more pages of memory that are currently paged out, wherein the one or more pages are paged in, registered, and pinned. 12. The method of any of the preceding claims, wherein the request to allocate the destination buffer or extend the size of an existing destination buffer contains indicia identifying it as a buffer pre-allocation request and the use of a paged-out page of memory results in a page fault, the method further comprising not sending an acknowledgment message to the second RNIC including a backoff time in response to the page fault. 13. An apparatus, comprising: a network interface, configured to send and receive packetized data using a reliable transport protocol; and Remote Direct Memory Access (RDMA) logic, configured to be employed to facilitate performing operations when the apparatus is operating including, receiving a first message containing a request for registration of memory for use as a destination buffer to be employed in connection with an RDMA data transfer from a remote apparatus using a reliable transport protocol; and in response to a memory registration operation relating to the request and resulting in a page fault event, sending a first acknowledgement message to the remote apparatus acknowledging the first message has been received and containing information indicating a page fault delay event is occurring, wherein the first acknowledgement message includes a backoff time and comprises a request for the remote apparatus to not send packets corresponding to the RDMA data transfer until the backoff time has expired. 14. The apparatus of claim 13, wherein the apparatus comprises an RDMA-enabled Network Interface Controller (RNIC). 15. The apparatus of claim 13 or 14, wherein the apparatus is configured to be installed in a host platform having system memory used for RDMA buffers and a processor including a memory management unit (MMU) used to manage access to the system memory, wherein, during operation of the host platform an operating system that employs virtual memory is running on the host platform, and wherein the apparatus further comprises a mechanism to synchronize a portion of page table information employed by the MMU for pages allocated to RDMA destination buffers with a cached copy of the page table information accessed locally by the apparatus. 16. The apparatus of any of claims 13-15, wherein the apparatus is configured to be installed in a host platform having system memory used for RDMA buffers and a processor including a memory management unit (MMU) used to manage access to the system memory, wherein, during operation of the host platform an operating system that employs virtual memory is running on the host platform and employs a paging table in kernel memory, and wherein the apparatus is further configured to update page table entries in the paging table to identify corresponding memory pages are pinned. 17. 
The apparatus of any of claims 13-15, wherein the apparatus is further configured to perform operations, when operating, comprising: receiving a request to dynamically allocate a destination buffer or extend the size of an existing destination buffer during an ongoing RDMA data transfer corresponding to a single RDMA work request; and allocating the destination buffer or extending the size of an existing destination buffer using one or more pages of memory that are currently paged out, wherein the one or more pages are paged in, registered, and pinned. 18. A method, comprising: sending, from a first Remote Direct Memory Access (RDMA) Network Interface Controller (RNIC) to a second RNIC, a first message containing a request for registration of memory for use as a destination buffer to be employed in connection with an RDMA data transfer from the first RNIC to the second RNIC using a reliable transport protocol; streaming a first plurality of packets from the first RNIC to the second RNIC corresponding to the RDMA data transfer; receiving a first acknowledgement message from the second RNIC acknowledging the first message has been received and containing information indicating a page fault delay event is occurring and a backoff time; and in response thereto, employing the backoff time for use by a backoff timer and not sending any additional packets for the RDMA data transfer from the first RNIC to the second RNIC until the backoff timer has expired. 19. The method of claim 18, further comprising not retransmitting any of the first plurality of packets during a backoff period associated with use of the backoff time. 20. The method of claim 18 or 19, wherein the RDMA data transfer corresponds to an RDMA work request to transfer a file having a file size, and wherein the destination buffer for which registration of memory is requested in the first message has a size that is less than the file size and comprises a first destination buffer, the method further comprising: transmitting a stream of packets from the first RNIC to the second RNIC corresponding to an RDMA data transfer of the file; during the RDMA data transfer of the file, sending a second message containing a request for implicit registration of memory for use as a second destination buffer to be employed in connection with the RDMA data transfer of the file. 21. The method of claim 20, wherein the second message is sent in advance of the second destination buffer being needed using an advance time period that is greater than a projected page fault delay that might result during the implicit registration of the memory for the second destination buffer. 22. 
An apparatus, comprising: a network interface, configured to send and receive packetized data using a reliable transport protocol; and Remote Direct Memory Access (RDMA) logic, configured to be employed to facilitate performing operations when the apparatus is operating including, sending, from the apparatus to a remote apparatus, a first message containing a request for registration of memory for use as a destination buffer to be employed in connection with an RDMA data transfer from the apparatus to the remote apparatus using a reliable transport protocol; streaming a first plurality of packets corresponding to the RDMA data transfer to the remote apparatus; receiving a first acknowledgement message from the remote apparatus acknowledging the first message has been received and containing information indicating a page fault delay event is occurring and a backoff time; and in response thereto, employing the backoff time for use by a backoff timer and not sending any additional packets for the RDMA data transfer to the remote apparatus until the backoff timer has expired. 23. The apparatus of claim 22, wherein the apparatus is further configured to not retransmit any of the first plurality of packets during a backoff period associated with use of the backoff time. 24. The apparatus of claim 22 or 23, wherein the RDMA data transfer corresponds to an RDMA work request to transfer a file having a file size, and wherein the destination buffer for which registration of memory is requested in the first message has a size that is less than the file size and comprises a first destination buffer, the operations further comprising: transmitting a stream of packets to the remote apparatus corresponding to an RDMA data transfer of the file; during the RDMA data transfer of the file, sending a second message containing a request for implicit registration of memory for use as a second destination buffer to be employed in connection with the RDMA data transfer of the file. 25. The apparatus of claim 24, wherein the second message is sent in advance of the second destination buffer being needed using an advance time period that is greater than a projected page fault delay that might result during the implicit registration of the memory for the second destination buffer. 26. 
A computer system, comprising: system memory comprising a plurality of memory pages; a processor, operatively coupled to the system memory, including a memory management unit (MMU) used for managing access to pages of system memory; a secondary storage device; an Input Output (IO) interface component, operatively coupled to or integrated in the processor and operatively coupled to the memory and the secondary storage device; and a network adaptor, operatively coupled to the IO interface component and including logic configured to interface with the MMU via a driver, wherein the network adaptor further includes logic for performing Remote Direct Memory Access (RDMA) network operations including, receiving, from a remote computer system, a first message containing a request for registration of a memory buffer to be employed for storing data corresponding to a first RDMA data transfer from the remote computer system to the computer system using a reliable transport protocol; in connection with registering memory to be employed for the buffer, identifying a page fault has resulted; and sending a first acknowledgement message to the remote computer system acknowledging the first message has been received and containing information indicating a page fault delay event is occurring. 27. The computer system of claim 26, wherein the first acknowledgement message includes a backoff time and comprises a request for the remote computer system to not send packets corresponding to the first RDMA data transfer until the backoff time has expired. 28. The system of claim 26 or 27, wherein the logic used to interface with the MMU is configured to synchronize a portion of page table information employed by the MMU for pages allocated to RDMA destination buffers with a cached copy of the page table information accessed locally by the network adaptor. 29. The system of any of claims 26-28, wherein the network adaptor is operatively coupled to the IO interface component via a Peripheral Component Interconnect Express (PCIe) link. 30. The system of any of claims 26-29, wherein the network adaptor is further configured to perform operations, comprising: sending a second message containing a request for registration of memory for use as a destination buffer to be employed in connection with a second RDMA data transfer from the system to the remote computer system using a reliable transport protocol; streaming a first plurality of packets corresponding to the second RDMA data transfer to the remote computer system; receiving a first acknowledgement message from the remote computer system acknowledging the second message has been received and containing information indicating a page fault delay event is occurring and a backoff time; and in response thereto, employing the backoff time for use by a backoff timer and not sending any additional packets for the second RDMA data transfer to the remote computer system until the backoff timer has expired. |
EXPLICIT FLOW CONTROL FOR IMPLICIT MEMORY REGISTRATION FIELD OF THE INVENTION The field of invention relates generally to computer networking and, more specifically but not exclusively, relates to techniques for performing flow control for RDMA transfers using implicit memory registration. BACKGROUND INFORMATION Remote Direct Memory Access (RDMA) is a direct memory access mechanism that enables a computer to access memory from another computer without involving the computers' operating systems. RDMA supports zero-copy networking by enabling a network adapter to transfer data directly to or from application memory, eliminating the need to copy data between application memory and the data buffers in the operating system. Such transfers require no work to be done by CPUs, caches, or context switches, and transfers continue in parallel with other system operations. When an application performs an RDMA Read or Write request, the application data is delivered directly to the network, reducing latency and enabling fast message transfer. To efficiently communicate with remote systems via user space (i.e., the non-kernel memory space allocated for applications by an operating system), conventional RDMA devices require pre-registered, pre-pinned memory regions for all data transfers over the fabric or network. This consumes large amounts of system memory that could be used by other applications. In order to avoid page faults, memory may often be overallocated to (hopefully) address worst-case traffic conditions. However, under heavy traffic loads even this approach may fail, leading to page faults under which the amount of memory allocated to a pre-pinned memory region is insufficient, resulting in temporary use of virtual memory that is accessed from local or remote secondary storage devices rather than system memory; these devices, such as hard disk drives, have access speeds that are an order of magnitude or more slower than typical system memory. Under conventional approaches, page faults are either transparent to RDMA senders or are otherwise identified indirectly well after the page fault has occurred (e.g., lack of ACKnowledgements within a timeout period may indicate some type of fault or congestion). There are several RDMA-capable network interface cards (RNICs) available on today's market that provide both open source and proprietary methods for implicit memory registration. They all attempt to remove the requirement of pre-pinning memory regions for RDMA transfers. In these cases, the RNIC essentially acts as a memory management unit (MMU) and provides some form of synchronization with the system MMU. This MMU synchronization comes in many forms but essentially guarantees that the adapter will participate in all user memory region accesses and tolerate a page fault and page pinning during data transfers. These paging events are indeterminate and can stall the data stream significantly, especially if the system is busy or if the fault requires paging from a local or network attached drive. 
BRIEF DESCRIPTION OF THE DRAWINGS The foregoing aspects and many of the attendant advantages of this invention will become more readily appreciated as the same becomes better understood by reference to the following detailed description, when taken in conjunction with the accompanying drawings, wherein like reference numerals refer to like parts throughout the various views unless otherwise specified: Figure 1 is a schematic diagram illustrating the result of a page fault in connection with an implicit memory registration for an RDMA destination buffer according to a conventional approach using a standard transport flow-control mechanism; Figure 2 is a schematic diagram illustrating how a page fault in connection with an implicit memory registration for an RDMA destination buffer is handled using explicit flow control, according to one embodiment; Figure 3 is a schematic diagram illustrating a platform configuration that may be used to implement aspects of the embodiments described herein; Figure 4 is a message flow diagram illustrating aspects of an RDMA data transfer employing implicit memory registration and pipelined buffer pre-allocation, according to one embodiment; and Figure 5 is a schematic diagram illustrating an architecture for an RNIC that may be used for implementing aspects of the embodiments disclosed herein. DETAILED DESCRIPTION Embodiments of methods and apparatus for performing flow control for RDMA transfers using implicit memory registration are described herein. In the following description, numerous specific details are set forth to provide a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention can be practiced without one or more of the specific details, or with other methods, components, materials, etc. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of the invention. Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In accordance with aspects of the embodiments now described, a novel communications method is provided that enables RDMA devices to avoid pre-pinning and better tolerate page faults by providing explicit data flow acknowledgements on the wire to avoid transfer of unnecessary packets and congestion. This disclosure describes an architecture and method for explicit flow control allowing optimized back pressure to the remote RDMA device or devices. Explicit Memory Registration and Memory Page Pinning To better understand and appreciate the advantages of the embodiments, a comparison to existing conventional approaches is first provided. As discussed above, RDMA enables direct memory access to memory on a remote system in a manner that bypasses the system CPU and operating system. 
RDMA supports zero-copy networking by enabling an RNIC to transfer data directly to or from application memory (i.e., a memory space in system memory allocated to an application) that is maintained separate from kernel memory used by an operating system, eliminating the need to copy data between application memory and data buffers in kernel memory employed by the operating system. This is facilitated via DMA operations under which a DMA engine on an RNIC is enabled to directly write to and read from data buffers in system memory that have been allocated to the RNICs. Modern operating systems implement memory management by allocating pages in virtual memory, and handling the mappings between logical addresses employed by the virtual memory address space and physical addresses corresponding to physical memory (i.e., system memory hardware addresses). This provides several advantages, including the ability to extend the size of memory beyond the physical memory in the system. Also, each process is run in its own logical address space. Typically, page tables are used to translate the virtual addresses seen by applications into physical addresses used by the hardware to process instructions; the hardware that usually handles this specific translation is a memory management unit (MMU). Each entry in the page table holds a flag indicating whether the corresponding page is in real (i.e., physical) memory or not. If it is in real memory, the page table entry will contain the real memory address at which the page is stored. When a reference is made to a page by the hardware, if the page table entry for the page indicates that it is not currently in real memory, the hardware raises a page fault exception, invoking the paging supervisor component of the operating system. In response to the page fault, the paging supervisor accesses secondary storage (or whatever storage the virtual memory is mapped to), returns the page that has the virtual address that resulted in the page fault, updates the page tables to reflect the physical location of the virtual address, and tells the translation mechanism to restart the request. When all physical memory is already in use, the paging supervisor must free a page in physical memory to hold the swapped-in (aka "paged in") page. At the same time, for each page that is paged in, an existing page in physical memory must be paged out. In essence, paging out a memory page involves copying the data in the memory page from its location in physical memory to a file stored in a secondary storage device. Paging in a memory page accomplishes the reverse of paging out - in this case the data corresponding to a page stored in a secondary storage device file is copied to a page in physical memory. The paging supervisor uses one of a variety of page replacement algorithms, such as least recently used, to determine which page to free. Ideally, pages with low utilization are paged out first, but the result of paging invariably increases memory access latencies. If the situation gets bad enough, disk "thrashing" may occur under which pages are constantly being paged in and out of memory. As discussed above, a zero-copy memory access is designed to bypass the CPU, meaning it also bypasses the MMU (at least during RDMA memory writes and reads). This creates a problem with respect to normal virtual memory usage, which is addressed through use of explicit memory registration and "pinning" memory pages allocated for RDMA usage. 
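As a rough illustration of the present/pinned bookkeeping just described (a minimal Python sketch under assumed structures and names, not any particular operating system's implementation):

from dataclasses import dataclass
from typing import Callable, Dict, Optional

PAGE_SIZE = 4096

@dataclass
class PageTableEntry:
    present: bool             # flag: is the page resident in physical memory?
    pinned: bool              # pinned pages are excluded from page-out
    phys_addr: Optional[int]  # physical frame address when present

class PageFault(Exception):
    """Raised when a referenced page is not resident in physical memory."""

def translate(page_table: Dict[int, PageTableEntry], vaddr: int) -> int:
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    pte = page_table.get(vpn)
    if pte is None or not pte.present:
        raise PageFault(vpn)  # the paging supervisor must page the data in
    return pte.phys_addr + offset

def page_in_and_pin(page_table: Dict[int, PageTableEntry], vpn: int,
                    alloc_frame: Callable[[], int]) -> PageTableEntry:
    # Fault handler for an RDMA buffer page: page it in, then pin it, since a
    # zero-copy DMA transfer cannot be stopped and restarted on a later fault.
    frame = alloc_frame()     # may require paging out a victim page first
    # (copying the page contents in from secondary storage is elided)
    page_table[vpn] = PageTableEntry(present=True, pinned=True, phys_addr=frame)
    return page_table[vpn]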
The RDMA Consortium has published the RDMA Protocol Verbs Specification that describes the behavior of RNIC hardware, firmware, and software as viewed by the RNIC host (i.e., computer system or platform in which an RNIC is implemented). The behavior description is specified in the form of an RNIC Interface and a set of RNIC Verbs. An RNIC Interface defines the semantics of the RDMA services that are provided by an RNIC that supports the RNIC Verb Specification, and can be implemented through a combination of hardware, firmware, and software. A Verb is an operation that an RNIC Interface is expected to perform. The current draft RDMA Verbs specification is published at http://tools.ietf.org/html/draft-hilland-rddp-verbs-00. As used herein below, the specification is referred to as RDMA Verbs. RDMA Verbs defines a mechanism for allocating memory called Memory Registration. Memory registration enables access to a Memory Region by a specific RNIC. Binding a Memory Window enables the specific RNIC to access memory represented by that Memory Window. Memory registration provides mechanisms that allow consumers (i.e., the applications that employ RDMA for data transfers) to describe a set of virtually contiguous memory locations or a set of physically contiguous locations to the RI in order to allow the RNIC to access either as a virtually contiguous buffer using a Steering Tag (STag) and a Tagged Offset. Memory registration provides the RNIC with a mapping between a STag and Tagged Offset and a Physical Memory Address. It also provides the RNIC with a description of the access control associated with the memory location. The set of memory locations that have been registered are referred to as a Memory Region. Before an RNIC can use a Memory Region, the resources associated with the Memory Region must be allocated and the Memory Region must be registered with the RNIC. Under a conventional use of RDMA, the RDMA components at both ends of an RDMA communication channel (i.e., components at a sending and a receiving computer platform, such as a server) allocate (or request allocation from the OS of) buffers in system memory for a given application. A data transfer between applications is performed by copying data from a source buffer on the sender's computer to a destination buffer on the receiver's computer. Since the OS, CPU, and MMU are not involved during a transfer, the addresses for the buffers cannot be changed during a transfer. This is accomplished by pinning the memory pages associated with the memory region. Pinned memory pages cannot be swapped to secondary storage. In particular, under conventional usage, data buffers that are accessed directly by peripheral devices that use direct memory access or Input-Output (IO) channels must reside in pinned pages while the IO operation is in progress because such devices and the buses to which they are attached expect to find data buffers located at physical memory addresses; regardless of whether the bus has a memory management unit for IO, transfers cannot be stopped if a page fault occurs and then restarted when the page fault has been processed. Accordingly, not pinning the pages in a zero-copy RDMA system may result in corruption of the contents of memory. Under explicit memory registration, buffers used for an RDMA data transfer are registered with the RNICs prior to initiating the actual data transfer. This is facilitated, in part, through use of work requests (WRs). 
Each WR defines: 1) the data transfer operation type (Send, Receive, RDMA Read, RDMA Write); 2) the source buffer for Sends, RDMA Reads and RDMA Writes; and 3) the destination buffer for Receives, RDMA Reads and RDMA Writes. In turn, each of the source and destination buffers has an explicitly defined location (i.e., address range) within a pinned memory region. After registration, these buffers are referred to as "tagged buffers" and are identified by unique STags, as discussed above. While use of explicit memory registration and pinned memory has historically been the most common way that RDMA is implemented, it has significant drawbacks. In particular, pinning memory takes time and additional memory to set up, reduces the quantity of memory the operating system can allocate to other processes, limits the overall flexibility of the memory system to adapt over time, and may even lead to underutilization of memory if processes unnecessarily pin pages. Implicit Memory Registration with Conventional Flow Control In order to address some of the drawbacks of requiring pinned memory regions, an implicit memory registration scheme has been developed. Under an implicit memory registration approach, an RDMA data transfer may be initiated prior to allocation of a destination buffer, where the buffer is allocated on the fly. This may result in a page fault if one or more memory pages allocated for the buffer are paged-out to secondary storage. Under such a page fault event, a memory page or pages must first be paged in to system memory prior to writing data to the destination buffer. An example of an architecture configured to support implicit memory registration and conventional flow control in response to page fault events is shown in Figure 1. The architecture includes a computer platform 100 having a central processing unit (CPU) 102 coupled to system memory 104 and an IO chipset 106 via respective interconnects 105 and 107, while IO chipset 106 is operatively coupled to system memory 104 via an interconnect 109. IO chipset 106 is also connected to an RNIC 108 via a Peripheral Component Interconnect (PCI) interconnect 110, such as a PCI Express (PCIe) link. Similarly, IO chipset 106 is connected to a mass storage device (e.g., hard disk or solid-state disk) comprising secondary storage 112 via a PCI interconnect 113. Platform 100 further includes components for facilitating memory management and memory access, as depicted by a memory management unit 114 on CPU 102, an RNIC driver 116 including MMU sync logic 118, and an operating system (OS) 120. As discussed above, RDMA-enabled systems support direct memory access to memory on a remote system (e.g., platform 100) in a manner that bypasses the system CPU. This is implemented via DMA operations under which a DMA engine in RNIC 108 is enabled to directly write to and read from buffers in an RDMA memory region 122 in system memory 104. The connection between RNIC 108 and system memory 104 is facilitated via IO chipset 106, interconnect 109, and PCI link 110, wherein the IO chipset operates as an IO interface between RNIC 108 and system memory 104. Since a conventional RDMA memory access bypasses the CPU, it also bypasses the MMU. This is OK when all of the pages for the buffer in memory region 122 are pinned. 
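To make the WR fields enumerated above concrete, a work request and the STag-identified tagged buffers it references might be modeled as follows (a Python sketch with hypothetical field names; RDMA Verbs defines the authoritative structures):

from dataclasses import dataclass
from enum import Enum
from typing import Optional

class OpType(Enum):
    SEND = "Send"
    RECEIVE = "Receive"
    RDMA_READ = "RDMA Read"
    RDMA_WRITE = "RDMA Write"

@dataclass(frozen=True)
class TaggedBuffer:
    stag: int           # Steering Tag identifying a registered Memory Region
    tagged_offset: int  # offset of the buffer within that region
    length: int         # buffer length in bytes

@dataclass
class WorkRequest:
    op: OpType
    src: Optional[TaggedBuffer]  # source buffer for Sends, RDMA Reads/Writes
    dst: Optional[TaggedBuffer]  # destination buffer for Receives, RDMA Reads/Writes

# Example: an RDMA Write from a local tagged buffer to a remote one.
wr = WorkRequest(op=OpType.RDMA_WRITE,
                 src=TaggedBuffer(stag=0x1A, tagged_offset=0, length=8192),
                 dst=TaggedBuffer(stag=0x2B, tagged_offset=4096, length=8192))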
However, in order to enable buffers to be allocated to pages in virtual memory, a mechanism needs to be provided that both apprises an RNIC of when page faults occur and how the MMU is mapping pages between virtual memory and physical memory once the buffer is paged back in to system memory. This is facilitated through the use of MMU sync logic 118 in RNIC driver 116 in combination with operations performed by MMU 114 and use of a page table 125 in the kernel space of system memory 104. To initiate an implicit memory registration, a second RDMA host (not shown) sends a buffer registration request message 126 comprising a first packet 'P1' in a packet stream 128 associated with the RDMA data transfer including second and third packets 'P2' and 'P3' containing data to be written to the buffer requested to be allocated. In one embodiment the buffer address information in packet P1 includes an address, a length (len) requested for the buffer, and a key value referred to as an "rkey" that is used to validate access rights and provides adapter-side address translation. Unlike the case with explicit memory registration, under which memory pages for the destination buffer are pre-registered prior to commencing the RDMA data transfer, implicit memory registration may result in a page fault if the identified location (address) and size (len) of the requested buffer cannot be allocated from an unused portion of memory region 122 (or otherwise existing in physical system memory) at the time a request for allocation of a destination buffer is made. An example of this situation is illustrated in Figure 1, which shows a buffer 124 (corresponding to a requested buffer allocation) being paged in from secondary storage 112 to memory region 122 in response to a page fault event. Existing RNIC implementations rely on standard transport flow-control mechanisms and existing link level timers to provide back pressure on the wire. Under a page fault event in connection with an implicit memory registration, this may result in premature packet retransmission, congestion, and the termination of a reliable connection. In further detail, Figure 1 illustrates an example of the result of a page fault in connection with an RDMA data transfer employing an implicit memory registration under a conventional approach using a standard transport flow-control mechanism, and proceeds as follows. In response to receiving packet P1, an implicit memory registration for the requested buffer is initiated. This results in a memory page fault, and the page of virtual memory in secondary storage must be paged in to physical memory before any of the data in packet stream 128 may be written to buffer 124. As discussed above, a memory page-in involves latency during which data may not be written into memory; this latency is depicted as Page-in Time (Ptime) in Figure 1. In accordance with a common type of reliable transport protocol, receipt of a packet or sequence of packets is acknowledged using an ACK message or the like. Under this approach, a source or sending side retransmits packets for which it does not receive an ACK message upon expiration of a retransmission timeout period. 
Under some protocols, such as TCP, the length of the retransmission timeout period initially begins as a function of a round-trip time calculation for the connection (e.g., set to some delta above an averaged round-trip calculation or otherwise through use of an algorithm employing round-trip time calculations), followed by an exponential timeout backoff sequence under which the timeout period for each subsequent retransmission is doubled. In response to expiration of the timeout, the packet is queued for retransmission. A given packet may be retransmitted several times, until either an ACK for the packet is received by the sender or the connection itself times out, requiring a reset. As further illustrated in Figure 1, the first ACK message 130 (corresponding to packet P1) is not sent from RNIC 108 until after paging in of the memory pages for the destination buffer has been completed, resulting in a page fault delay of Ptime. During this Ptime page fault delay period either packet P1 or the packet sequence P1, P2, and P3 is retransmitted several times (depending on the protocol used; both cases are depicted by packets labeled 'P1' with a gray background), followed by a connection reset (RST). The foregoing conventional approach is fairly wasteful and inefficient. Any time packets have to be retransmitted, a corresponding portion of network bandwidth is lost. In addition, extra buffering and/or operations may be required by one or both of the sending and receiving RNICs whenever packets are retransmitted - particularly if the same packets are retransmitted multiple times during an extended Ptime period. Implicit Memory Registration with Explicit Flow Control In accordance with teachings and principles disclosed via the embodiments herein, the foregoing deficiencies are addressed via use of an explicit flow control mechanism that is implemented in response to implicit memory registration page faults and is configured so as to substantially reduce or eliminate the retransmission of packets during Ptime periods. The teachings and principles also provide enhanced memory management by providing greater granularity with respect to allocation of memory pages for RDMA purposes. According to one aspect, a more efficient data flow is facilitated by the use of an explicit flow control mechanism that employs a new type of ACK message that is provided as part of the transport wire protocol. During an implicit memory registration that results in a paging event, this new ACK, called receipt acknowledgement (RACK), is used to acknowledge the receipt of the corresponding RDMA implicit memory registration message and signify a page fault "delay" event is occurring. In addition to performing an acknowledgement function, the RACK includes a calculated back-off time based on standard ACK timer timeout values defined by the underlying fabric or network protocol. As a result, the transport engine in the remote RNIC will temporarily adjust the ACK timeout for the connection's data stream until a normal ACK is received. If the back-off is too significant, the sending RNIC may choose to abort. Once the memory page fault page-in is complete and the page is pinned, a normal ACK will be sent and the remote RNIC will resume transmitting based on the original ACK timeout set for the reliable data channel. An exemplary use of a RACK acknowledgement is shown in Figure 2, which depicts a platform 100a having similar components to platform 100 sharing common reference numbers. 
Differences between platforms 100 and 100a include a modified RNIC 200 including hardware-based MMU sync logic 202 and an RNIC driver 204 including an MMU sync driver 206. As before, a stream of packets P1, P2, and P3 are sent from a remote sending RNIC (not shown) and received by RNIC 200, resulting in a memory page fault event. In response to detection of the memory fault, RNIC 200 returns a RACK message 206 including a Ptime value. The Ptime value corresponds to a back-off time during which the sending RNIC is requested to not send (i.e., back off sending) any additional packets. After the faulting memory page has been paged in to system memory 104 (thus allocating buffer space for data in packets P2 and P3), RNIC 200 returns a conventional ACK message 208 indicating packets P1, P2, and P3 have been successfully received. At this point, the sending RNIC resumes sending packets corresponding to the packet stream, as depicted by a data packet labeled 'P4-data.' To support explicit flow control, MMU sync logic 202 and MMU sync driver 206 are configured such that the MMU sync logic will synchronize with the system MMU 114 and discern the current state of memory mappings related to RDMA memory usage (i.e., as depicted, in part, by pages allocated for memory region 122). In one embodiment, MMU sync logic 202 sets a bit in its local cached page table 125a signifying pinning states of the active RDMA address space in system memory 104. As shown by the dashed outline box 126, this portion of local cached page table 125a corresponds to page table information that is synchronized with corresponding page table information maintained by MMU 114 that pertains to RDMA buffer usage. In some embodiments, the local cached page table also includes min/max times of preceding paging events for dynamic control of data streams. When an RDMA write or read operation arrives, the RNIC checks the mapping state of the RDMA address using its cached page table entries. If it is mapped and the page is pinned, the RNIC will respond immediately with normal ACKs. If it is not pinned, the RNIC will return a RACK message and request page pinning via MMU 114. The RACK message may include back-off times, which in one embodiment are based on the cached min/max times of previous paging events. Once the data flow has begun, it is assumed pages remain pinned for the life of the transfers and they will not be paged out or moved. This frozen mapping state is important due to the in-order requirements of RDMA data transfers. 
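The destination-side ACK/RACK decision just described might be sketched as follows (a simplified Python model with assumed names and one plausible back-off estimator; an actual RNIC implements this in hardware/firmware):

from dataclasses import dataclass
from typing import Callable, Union

@dataclass
class CachedPte:
    present: bool      # synchronized from the system MMU's page table
    pinned: bool       # pinning bit maintained by the MMU sync logic (cf. table 125a)
    min_ptime_us: int  # fastest page-in previously observed for this page
    max_ptime_us: int  # slowest page-in previously observed

@dataclass
class Ack:             # normal acknowledgment: page mapped and pinned
    pass

@dataclass
class Rack:            # receipt acknowledgment: page fault delay under way
    backoff_us: int    # requested sender back-off time (Ptime)

def on_rdma_write(pte: CachedPte,
                  request_pinning: Callable[[], None]) -> Union[Ack, Rack]:
    if pte.present and pte.pinned:
        return Ack()       # data can be placed immediately
    request_pinning()      # ask the system MMU to page in and pin the page
    # One plausible estimator: the midpoint of the cached min/max paging times.
    return Rack(backoff_us=(pte.min_ptime_us + pte.max_ptime_us) // 2)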
In some embodiments, DMA data communication between RNIC 200 and system memory 104 may involve an additional internal interface and interconnect 214 between a memory controller 216 (used to access system memory 104) on CPU 102a and 10 Interfaces 106a. In respective embodiments, this internal interconnect comprises an Intel® QuickPath Interconnect® (QPI) or an Intel® Keizer Technology Interconnect® (KTI). In one embodiment, cores 210 and memory controller 216 are coupled to a ring interconnect employing QPI or KTI interconnect wiring and employing the QPI or KTI protocol, and an 10 interface in 10 interfaces 106a is configured to receive PCIe memory write packets read requests and convert them to QPI or KTI memory write transactions and memory read transactions. Although the messages/write packets for these DMA operations go through CPU 102a, they bypass cores 210 and do not involve use of an operating system running on platform 100a. Figure 400 is a method flow and processing diagram illustrating various operations and logic performed by a source RNIC and a Destination RNIC during an RDMA data transfer employing aspects of the explicit flow control mechanism described above. The process on the source RNIC begins in a block 402 in which a determination is made a buffer needs to be allocated for an upcoming RDMA data transfer. In a block 404 a determination is made to whether the buffer allocation corresponds to the start of a data transfer or is made in connection with a dynamic additional buffer allocation that occurs during the data transfer. If the buffer allocation corresponds to the start of a transfer, the source RNIC sends a request for a destination buffer allocation in a block 406, and begins streaming packets in a block 408. The packets corresponding to both of these operations are received at an input buffer 409 of the destination RNIC, where they are processed using packet processing logic and RDMA processing logic, as depicted by the processing block shown in input buffer 409. As shown in a block 410, the destination RNIC receives the destination buffer request and initiates registration of one or more memory pages to be used for the buffer. In connection with the memory registration operation, a page fault may result if the corresponding page is paged- out, as discussed above. Accordingly, a determination is made in decision block 412 to whether a page fault occurs. If a page fault occurs, the logic proceeds to a block 414 in which a Ptime backoff calculation is performed. As discusses above, in one embodiment this calculation is based on prior page fault events, such as minimum and maximum durations of such events. As shown in cached page table 125a, there may be minimum and maximum values for each memory page. Optionally, minimum and maximum values and/or other statistical data may be maintained for a group of memory pages or the entire memory region. The Ptime calculation may also include consideration of the link round trip time or a value derived thereform. Once the Ptime is calculated, a RACK acknowledgment message including a Ptime backoff is sent in a block 416 from the destination RNIC to the source RNIC. In response to receiving the RACK message, the source RNIC sets a backoff time and holds of sending additional packets until the time expires, as shown in a block 418. Upon expiration of the time, streaming of packets from the source RNIC to the destination RNIC is resumed in a block 422. 
During the Ptime period, one or more pages of memory that are allocated for the destination buffer are paged in to system memory, whereupon they are registered and pinned, as depicted in a block 420. At this stage, the destination buffer is ready to receive data, as depicted by DMA write data to buffer in a block 424. In addition to processing streamed packets from block 422, the previously streamed packets from block 408 that have been temporarily buffered in input buffer 409 are also processed. In accordance with the RDMA specification, the packets are streamed in order and written in their streamed order. If necessary, the source RNIC may have to resend packets from among the packets sent in block 408 if they are dropped by the destination RNIC. The source and destination RNICs may also be configured to support dynamic allocation of buffers (e.g., using a pipeline approach) corresponding to the same data transfer (i.e., the same work request). Under one embodiment, the request for a buffer (or additional buffer space) is made in advance of an anticipated need for the buffer such that if a page fault event delays registration of a memory page or pages for the buffer, the memory page(s) will still be registered and pinned prior to being needed. Accordingly, there is no disruption in the data transfer. Operations and logic supporting this functionality are depicted in Figure 4, beginning with a determination in a decision block 426 as to whether a next buffer is to be allocated. If the answer is YES, the logic returns to block 402 to begin the buffer allocation process. However, in this case, the allocation of a buffer does not correspond to the start of a data transfer, and thus the answer to decision block 404 is NO, with the logic proceeding to a block 428 in which a request for pre-allocation of a buffer is sent to the destination RNIC. In one embodiment, a request for allocation of a buffer and a request for pre-allocation of a buffer are one and the same - from the perspective of the destination RNIC they appear identical. In another embodiment, a buffer pre-allocation request is marked with a flag or the like to inform the destination RNIC of what type of buffer allocation request it is. Accordingly, in one embodiment the destination RNIC will not return a RACK message in response to detection of a page fault event, since the source RNIC is not planning on streaming packets to be stored in the buffer (to be allocated) until after the buffer is projected to be registered and pinned (assuming a page fault will result). Optionally, the destination RNIC may return a RACK that will simply be ignored by the source RNIC. During the time period corresponding to the anticipated page fault delay, the source RNIC does not begin streaming the portion of data that is to be stored in the pre-allocated buffer, as depicted by a block 430. Preferably, the timing of the buffer pre-allocation will be such that a continuous stream of packets for the data transfer proceeds uninterrupted. At the same time, it is preferred the buffer not be pre-allocated significantly in advance of when it will be needed, such that utilization of the memory space used for RDMA buffering is made more efficient. In addition to the foregoing embodiments, other methods may be implemented to optimize the data flow and limit the use of back-off periods. 
For example, an RNIC adapter could request pinning on segments of the RDMA memory regions instead of the entire region as long as the order is preserved and subsequent pinning is scheduled ahead of arriving data (similar to the pipelining example above). As another option, a protocol could also provide reliable connection attributes during setup that signify the use of implicit memory registration. This would tell the source RNIC to delay start of the data stream until a first ACK is received, or RACK Ptime has expired, when starting a new RDMA transfer operation. An exemplary system architecture for an RNIC 500 is shown in Figure 5. RNIC 500 includes a NIC system board 502 on which a network processor/controller 504 and memory comprising Dynamic Random Access Memory (DRAM) 506 and SRAM 508 are mounted. In one embodiment, SRAM 508 is integrated on processor/controller 504. Under various embodiments, NIC system board 502 is representative of an Ethernet controller card, a daughter board, a multi-chip module board or substrate, or it may be part of a computer system board, such as a main board or motherboard for a computer server. Processor/controller 504 is representative of an Ethernet processing and/or control unit, and may be embodied in various forms, including as an Ethernet controller chip or a network processor unit (NPU). In the illustrated embodiment, processor/controller 504 includes an instruction store 510, a cluster of processor engines 512, an SRAM controller 514, a DRAM controller 516, a Write DMA block 518, a Read DMA block 520, a PCIe interface 522, a scratch memory 524, a hash unit 526, Serializer/Deserializers (SerDes) 528 and 530, and Physical Layer (PHY) interfaces 532 and 534. Each of the components is interconnected to one or more other components via applicable interconnect structure and logic that is collectively depicted as an internal interconnect cloud 535. Instruction store 510 includes various instructions that are executed by processor engines cluster 512, including packet identification/classification instructions 536, RDMA main logic 538, MMU sync logic 202 and packet assembling logic 540. Processor engines cluster 512 includes a plurality of microengines 542, each coupled to a local control store 544. Under one embodiment, various operations such as packet identification and classification are performed using a pipelined architecture, such as illustrated in Figure 5, with each microengine performing an associated operation in the pipeline. As an alternative, processor engines cluster 512 is representative of one or more processor cores in a central processing unit or controller. As yet another option, the combination of processor engines 512 and instruction store 510 may be implemented as embedded logic, such as via a Field Programmable Gate Array (FPGA) or the like. In one embodiment, instruction store 510 is implemented as an on-chip store, such as depicted in Figure 5. Optionally, a portion or all of the instructions depicted in instruction store 510 may be stored in SRAM 508 (if off-chip) and accessed using SRAM controller 514 via an interface 546. SRAM 508 may also be used for storing selected data and/or instructions relating to packet processing operations, as well as cached page table entries. DRAM 506 is used for implementing one or more Input Buffers 409 and one or more Output Buffers 548, and is accessed using DRAM controller 516 via an interface 550. 
Write DMA block 518 and Read DMA block 520 are respectively configured to support DMA Write and Read operations in accordance with the embodiments described herein. In the illustrated embodiment, DMA communication between DRAM 506 and platform host circuitry is facilitated over PCIe interface 522 via a PCIe link 552 coupled to a PCIe interconnect or PCIe expansion slot 554, enabling DMA Write and Read transfers between DRAM 506 and system memory for a host 556 using the PCIe protocol. Scratch memory 524 and hash unit 526 are illustrative of components employed by NICs for facilitating scratch memory and hashing operations relating to packet processing. For example, a hash operation may be implemented for deriving flow IDs and for packet identification.

PHYs 532 and 534 facilitate Physical layer operations for the RNIC, and operate as a bridge between the digital domain employed by the RNIC logic and components and the analog domain employed for transmitting data via electrical, optical, or wireless signals. For example, in the illustrated embodiment of Figure 5, each of PHYs 532 and 534 is coupled to a pair of I/O ports configured to send electrical signals over a wire cable such as a Cat5e or Cat6 cable. Optical and wireless signal embodiments would employ additional circuitry and interfaces for facilitating connection via optical and wireless signals (not shown). In conjunction with PHY operations, SerDes 528 and 530 are used to serialize output packet streams and deserialize inbound packet streams.

In addition to the instructions shown in instruction store 510, other instructions may be implemented via execution of processor engines 512 or other processing means to facilitate additional operations. For example, in one embodiment, RNIC 500 is configured to implement a TCP/IP stack on the RNIC itself. RNIC 500 may also be configured to facilitate TCP operations in a manner that is offloaded from the Operating System TCP facilities, whereby once a packet is sent outbound, RNIC 500 is responsible for processing an ACK message and resending the packet if an ACK message is not received within an applicable TCP timeout period. RDMA main logic 538 comprises instructions and logic for facilitating RDMA data transfer operations, which may include conventional RDMA operations in addition to the augmentation to RDMA data transfer processes described herein. MMU sync logic 202 is configured to implement the MMU sync logic and operations described herein.

In addition to support for RDMA operations, an RNIC may be configured to perform conventional NIC operations, including operations relating to packet forwarding. Accordingly, RNIC 500 may be configured to store data for facilitating packet identification and classification, including forwarding filters and rules, either locally or using a Memory-Mapped IO (MMIO) address space in system memory. When stored locally, this data may be stored in either DRAM 506 or SRAM 508. Data stored in a MMIO address space may be accessed by RNIC 500 via Read DMA operations. Generally, setting up MMIO address space mapping may be facilitated by an RNIC device driver in coordination with the operating system. The RNIC device driver may also be configured to enable instructions in instruction store 510 to be updated via the operating system. Optionally, the instructions in instruction store 510 may comprise firmware instructions that are stored in non-volatile memory, such as Flash memory, which may either be integrated on processor/controller 504 or mounted to NIC system board 502 (not shown).
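As a rough illustration of the offloaded TCP behavior described above (the RNIC, not the host OS, tracking ACKs and resending on timeout), consider the following C sketch; the packet structure, clock, and send helper are all hypothetical.

```c
#include <stdint.h>
#include <stdbool.h>

struct tx_pkt {
    uint32_t seq;            /* TCP sequence number of first byte  */
    uint32_t len;            /* payload length                     */
    uint64_t deadline_usec;  /* send time + retransmission timeout */
    bool     acked;
};

/* hypothetical primitives provided by the RNIC datapath */
void     wire_send(const struct tx_pkt *p);
uint64_t now_usec(void);

/* ACK processing happens on the RNIC; the host OS is never involved.
 * (Sequence-number wraparound is ignored for brevity.) */
void on_ack(struct tx_pkt *p, uint32_t ack_seq)
{
    if (ack_seq >= p->seq + p->len)
        p->acked = true;
}

/* Periodic timer: resend from the RNIC-local output buffer on timeout. */
void on_timer_tick(struct tx_pkt *p, uint64_t rto_usec)
{
    if (!p->acked && now_usec() >= p->deadline_usec) {
        wire_send(p);
        p->deadline_usec = now_usec() + rto_usec;
    }
}
```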
Generally, aspects of the embodiments disclosed herein may apply to any existing or future network protocol that supports RDMA implementations and flow control. These include but are not limited to TCP or other reliable transport protocols over Ethernet, iWARP, and InfiniBand. Moreover, any existing physical transport layer used to facilitate the physical transmission of communication may be employed, including wired, optical, and wireless transmissions.

Although some embodiments have been described in reference to particular implementations, other implementations are possible according to some embodiments. Additionally, the arrangement and/or order of elements or other features illustrated in the drawings and/or described herein need not be arranged in the particular way illustrated and described. Many other arrangements are possible according to some embodiments.

In each system shown in a figure, the elements in some cases may each have a same reference number or a different reference number to suggest that the elements represented could be different and/or similar. However, an element may be flexible enough to have different implementations and work with some or all of the systems shown or described herein. The various elements shown in the figures may be the same or different. Which one is referred to as a first element and which is called a second element is arbitrary.

In the description and claims, the terms "coupled" and "connected," along with their derivatives, may be used. It should be understood that these terms are not intended as synonyms for each other. Rather, in particular embodiments, "connected" may be used to indicate that two or more elements are in direct physical or electrical contact with each other. "Coupled" may mean that two or more elements are in direct physical or electrical contact. However, "coupled" may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other.

An algorithm is here, and generally, considered to be a self-consistent sequence of acts or operations leading to a desired result. These include physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be understood, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities.

An embodiment is an implementation or example of the inventions. Reference in the specification to "an embodiment," "one embodiment," "some embodiments," or "other embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least some embodiments, but not necessarily all embodiments, of the inventions. The various appearances of "an embodiment," "one embodiment," or "some embodiments" are not necessarily all referring to the same embodiments.
Not all components, features, structures, characteristics, etc. described and illustrated herein need be included in a particular embodiment or embodiments. If the specification states a component, feature, structure, or characteristic "may", "might", "can", or "could" be included, for example, that particular component, feature, structure, or characteristic is not required to be included. If the specification or claim refers to "a" or "an" element, that does not mean there is only one of the element. If the specification or claims refer to "an additional" element, that does not preclude there being more than one of the additional element.

As presented in the text and drawings herein, aspects of some embodiments may be implemented in an RNIC that includes one or more integrated components (e.g., semiconductor chips) via which logic for facilitating RDMA-related operations is implemented. Moreover, embodiments of the present description may be implemented not only within a semiconductor chip but also within machine-readable media. For example, the designs described above may be stored upon and/or embedded within machine-readable media associated with a design tool used for designing semiconductor devices. Examples include a netlist formatted in the VHSIC Hardware Description Language (VHDL), Verilog, or SPICE language. Some netlist examples include: a behavioral level netlist, a register transfer level (RTL) netlist, a gate level netlist, and a transistor level netlist. Machine-readable media also include media having layout information, such as a GDS-II file. Furthermore, netlist files or other machine-readable media for semiconductor chip design may be used in a simulation environment to perform the methods of the teachings described above.

In addition, aspects of some embodiments herein may be facilitated by corresponding software and/or firmware components and applications, such as but not limited to RNIC drivers, MMU sync drivers, and firmware implemented on RNICs. Thus, embodiments of this invention may be used as or to support a software program, software modules, firmware, and/or distributed software executed upon some form of processing core (such as the CPU of a computer or one or more cores of a multi-core processor), a virtual machine running on a processor or core, or otherwise implemented or realized upon or within a machine-readable medium. A machine-readable medium includes any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer). For example, a machine-readable medium may include a read only memory (ROM); a random access memory (RAM); magnetic disk storage media; optical storage media; a flash memory device; etc.

The above description of illustrated embodiments of the invention, including what is described in the Abstract, is not intended to be exhaustive or to limit the invention to the precise forms disclosed. While specific embodiments of, and examples for, the invention are described herein for illustrative purposes, various equivalent modifications are possible within the scope of the invention, as those skilled in the relevant art will recognize. These modifications can be made to the invention in light of the above detailed description. The terms used in the following claims should not be construed to limit the invention to the specific embodiments disclosed in the specification and the drawings.
Rather, the scope of the invention is to be determined entirely by the following claims, which are to be construed in accordance with established doctrines of claim interpretation. |
A system includes two or more agents and a distributed arbitration scheme for the bus to which the agents are connected. Thus, an arbiter corresponding to each agent is provided. The arbiters are reset using a first reset signal, while the agents are reset using a separate reset signal or signals. The arbiters are concurrently released from reset when the first reset signal is deasserted, and may have a consistent reset state to provide for synchronization of the arbiters. The agents may be independently released from reset by the separate reset signals. Accordingly, the arbiters may be synchronized and may remain synchronized even if the corresponding agents are released from reset at different times, or are temporarily held in reset for any reason. |
1. A system comprising:
a reset control circuit configured to generate a first reset signal, a second reset signal different from said first reset signal, and a third reset signal different from said first reset signal and said second reset signal;
a first agent coupled to receive said first reset signal and configured to reset in response to an assertion of said first reset signal;
a first arbiter configured to determine if said first agent wins an arbitration for a bus, wherein said first arbiter is coupled to receive said second reset signal and is configured to reset in response to an assertion of said second reset signal;
a second agent coupled to receive said third reset signal, and wherein said second agent is configured to reset in response to an assertion of said third reset signal; and
a second arbiter coupled to receive said second reset signal and configured to reset in response to said assertion of said second reset signal, wherein said second arbiter is configured to determine if said second agent wins an arbitration for said bus.
2. The system as recited in claim 1 wherein said reset control circuit is configured to deassert said first reset signal prior to deasserting said third reset signal.
3. The system as recited in claim 2 wherein said reset control circuit comprises a register, wherein said register is configured to store an indication corresponding to said second agent, and wherein said indication is indicative of whether said second agent is to be held in reset or released from reset, and wherein said reset control circuit is configured to deassert said third reset signal responsive to said indication being indicative that said second agent is to be released from reset.
4. The system as recited in claim 3 wherein said first agent comprises a processor, and wherein said processor is configured to execute an instruction to update said indication in said register to indicate that said second agent is to be released from reset.
5. The system as recited in claim 4 wherein said second agent comprises a second processor.
6. The system as recited in claim 1 wherein said reset control circuit comprises a register, wherein said register is configured to store a system reset indication, and wherein said reset control circuit is configured to assert said first reset signal, said second reset signal, and said third reset signal responsive to said system reset indication indicating that said system is to be reset.
7. The system as recited in claim 1 wherein said reset control circuit is coupled to receive a system reset signal, and wherein said reset control circuit is configured to assert said first reset signal, said second reset signal, and said third reset signal responsive to said system reset signal indicating that said system is to be reset.
8. The system as recited in claim 1 wherein said reset control circuit is configured to deassert said second reset signal prior to or coincident with deasserting said first reset signal.
9. The system as recited in claim 1 wherein said first arbiter comprises an address arbiter corresponding to an address portion of said bus,
and wherein said second arbiter comprises an address arbiter corresponding to an address portion of said bus.
10. The system as recited in claim 1 wherein said first arbiter comprises a data arbiter corresponding to a data portion of said bus, and wherein said second arbiter comprises a data arbiter corresponding to a data portion of said bus.
11. The system as recited in claim 1 further comprising a plurality of agents including said first agent and said second agent and still further comprising a plurality of arbiters including said first arbiter and said second arbiter, each of said plurality of arbiters coupled to receive said second reset signal and configured to reset in response to said second reset signal, and each of said plurality of agents coupled to receive a different one of a plurality of reset signals including said first reset signal and said third reset signal, and wherein said plurality of agents are configured to reset in response to said one of said plurality of reset signals.
12. The system as recited in claim 1 wherein said reset control circuit, said first agent, said first arbiter, said second agent, and said second arbiter are integrated onto a single chip.
13. A circuit defining mechanism comprising one or more databases representing the system as recited in any of claims 1-12.
14. A carrier medium carrying the circuit defining mechanism as recited in claim 13.
15. In a system including (i) a first agent; (ii) a first arbiter configured to determine if said first agent wins an arbitration for a bus; (iii) a second agent; and (iv) a second arbiter configured to determine if said second agent wins an arbitration for said bus, a method comprising:
resetting said first agent and said second agent; and
independently resetting said first arbiter and said second arbiter.
16. The method as recited in claim 15 further comprising releasing said first arbiter and said second arbiter from reset concurrently.
17. The method as recited in claim 16 further comprising releasing at least said first agent from reset coincident with or subsequent to said releasing said first arbiter and said second arbiter.
18. The method as recited in claim 16 further comprising releasing said second agent from reset subsequent to said releasing said first agent.
19. The method as recited in claim 18 wherein said system includes a register storing an indication of whether said second agent is to be held in reset or released from reset, the method further comprising:
updating said indication to indicate that said second agent is to be released from reset; and
releasing said second agent from reset responsive to said updating.
20. The method as recited in claim 15 wherein said system includes a register storing a reset indication, wherein said resetting and said independently resetting are responsive to said reset indication.
21. The method as recited in claim 15 further comprising receiving a system reset signal, wherein said resetting and said independently resetting are responsive to said receiving. |
BACKGROUND OF THE INVENTION

1. Field of the Invention

This invention is related to the field of digital systems and, more particularly, to reset of digital systems including two or more arbiters for a bus.

2. Description of the Related Art

A bus is frequently used in digital systems to interconnect a variety of devices included in the digital system. Generally, one or more devices are connected to the bus, and use the bus to communicate with other devices connected to the bus. As used herein, the term "agent" refers to a device which is capable of communicating on the bus. The agent may be a requesting agent if the agent is capable of initiating transactions on the bus and may be a responding agent if the agent is capable of responding to a transaction initiated by a requesting agent. A given agent may be capable of being both a requesting agent and a responding agent. Additionally, a "transaction" is a communication on the bus. The transaction may include an address transfer and optionally a data transfer. Transactions may be read transactions (transfers of data from the responding agent to the requesting agent) and write transactions (transfers of data from the requesting agent to the responding agent). Transactions may further include various coherency commands which may or may not involve a transfer of data.

The bus is a shared resource among the agents, and thus a mechanism for determining which agent is permitted to use the bus at any given time is needed. Generally, determining which of several agents is permitted to use the bus (often referred to as "mastering the bus") is referred to as "arbitration". An agent desiring to use the bus may signal its request to use the bus, referred to as "arbitrating". The circuitry for performing arbitration is referred to as an "arbiter". One or more agents may arbitrate for the bus, and the arbiter determines which of the arbitrating agents is permitted to use the bus. The agent granted use of the bus by the arbiter is referred to as the winner of the arbitration.

Arbitration may be centralized or distributed. In centralized arbitration, all arbitration requests are sent to a central arbiter which provides a grant to one of the agents. In distributed arbitration, each agent includes an arbiter which receives arbitration requests and determines the winner of the arbitration. If the agent corresponding to the arbiter is the winner, the arbiter informs the agent that it has won and that agent uses the bus. Distributed arbitration may reduce the time required from request to grant as compared to centralized arbitration, since the grant may be transmitted to the winning agent locally from the distributed arbiter at the winning agent.

In a distributed arbitration scheme, the distributed arbiters must remain synchronized with each other unless fixed priority is the arbitration policy. If synchronization is not maintained, two or more of the arbiters may signal a grant to their respective agents for the same arbitration. The agents would then simultaneously attempt to perform transactions on the bus. Such a situation is erroneous, and would lead to unpredictable results.

Additionally, it may be desirable to temporarily delay or disable access by one or more agents to the bus. For example, it may be desirable in a multiprocessing system (in which two or more processors are connected to a bus) for one of the processors to be permitted access to the bus while other processors are not permitted access.
This may be useful during boot of the system, to allow the processor permitted access to read boot code from a boot read-only memory (ROM) while the other processors are held off.

Even though the agents are temporarily delayed, the arbiters corresponding to the delayed agents must remain synchronized with the other arbiters. Thus, when the agents are subsequently permitted to use the bus, the arbiters corresponding to the agents may participate correctly in the distributed arbitration scheme.

SUMMARY OF THE INVENTION

The problems outlined above are in large part solved by a system as described herein. The system includes two or more agents and a distributed arbitration scheme for the bus to which the agents are connected. Thus, an arbiter corresponding to each agent is provided. The arbiters are reset using a first reset signal, while the agents are reset using a separate reset signal or signals. The arbiters are concurrently released from reset when the first reset signal is deasserted, and may have a consistent reset state to provide for synchronization of the arbiters. The agents may be independently released from reset by the separate reset signals. Accordingly, the arbiters may be synchronized and may remain synchronized even if the corresponding agents are released from reset at different times, or are temporarily held in reset for any reason. When the corresponding agents are released from reset and arbitrate, the arbiters are synchronized and arbitration may operate properly.

Providing for holding one or more agents in reset while other agents and the arbiters are operating may have a variety of uses. For example, in a multiprocessor system, one of the processors may be released from reset while the remaining processors are held in reset. The released processor may, for example, read its boot code from a boot ROM before the other processors and/or perform system initialization before the remaining processors are released. As another example, debug and testing may be simplified by allowing agents not involved in the test to be disabled. Furthermore, a defective agent may be isolated by being held in reset while other agents operate normally.

Broadly speaking, a system is contemplated. The system comprises a reset control circuit, a first agent and a second agent, and a first arbiter and a second arbiter. The reset control circuit is configured to generate a first reset signal and a second reset signal different from the first reset signal. The first agent is coupled to receive the first reset signal and configured to reset in response to an assertion of the first reset signal. The first arbiter is configured to determine if the first agent wins an arbitration for a bus, and is coupled to receive the second reset signal. The first arbiter is configured to reset in response to an assertion of the second reset signal. The second arbiter is likewise coupled to receive the second reset signal and configured to reset in response to the assertion of the second reset signal, and is configured to determine if the second agent wins an arbitration for the bus.

Additionally, in a system including (i) a first agent; (ii) a first arbiter configured to determine if the first agent wins an arbitration for a bus; (iii) a second agent; and (iv) a second arbiter configured to determine if the second agent wins an arbitration for the bus, a method is contemplated.
The first agent and the second agent are reset; and the first arbiter and the second arbiter are independently reset.

BRIEF DESCRIPTION OF THE DRAWINGS

Other objects and advantages of the invention will become apparent upon reading the following detailed description and upon reference to the accompanying drawings in which:

Fig. 1 is a block diagram of one embodiment of a system.
Fig. 2 is a block diagram of one embodiment of a reset control circuit shown in Fig. 1.
Fig. 3 is a timing diagram illustrating various reset signals for the circuits shown in Figs. 1 and 2.
Fig. 4 is a flowchart illustrating exemplary code which may be executed by one embodiment of a processor shown in Fig. 1.
Fig. 5 is a block diagram of a carrier medium.

While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope of the present invention as defined by the appended claims.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

Turning now to Fig. 1, a block diagram of one embodiment of a system 10 is shown. Other embodiments are possible and contemplated. In the embodiment of Fig. 1, system 10 includes processors 12A-12B, an L2 cache 14, a memory controller 16, a high speed input/output (I/O) bridge 18, an I/O bridge 20, I/O interfaces 22A-22B, and a reset control circuit 30. System 10 may include a bus 24 for interconnecting the various components of system 10. As illustrated in Fig. 1, each of processors 12A-12B, L2 cache 14, memory controller 16, high speed I/O bridge 18, I/O bridge 20, and reset control circuit 30 is coupled to bus 24. Each of processors 12A-12B, L2 cache 14, high speed I/O bridge 18, and I/O bridge 20 includes an address bus arbiter (A Arb) labeled with reference numerals 26A-26E as illustrated in Fig. 1. Each of processors 12A-12B, L2 cache 14, memory controller 16, high speed I/O bridge 18, and I/O bridge 20 includes a data bus arbiter (D Arb) labeled with reference numerals 28A-28F as illustrated in Fig. 1. I/O bridge 20 is coupled to I/O interfaces 22A-22B. L2 cache 14 is coupled to memory controller 16, which is further coupled to a memory 126. Reset control circuit 30 is coupled to receive a system reset signal and is coupled to provide reset signals to other components of system 10. More specifically, reset control circuit 30 provides a reset signal (Reset_Arb) to the arbiters 26A-26E and 28A-28F. Additionally, reset control circuit 30 provides reset signals separate from the Reset_Arb signal to processors 12A-12B, L2 cache 14, memory controller 16, I/O bridge 20, and high speed I/O bridge 18. Reset control circuit 30 may further provide reset signals for other circuitry (e.g. I/O interfaces 22A-22B), as desired. In the illustrated embodiment, I/O interface 22B may include circuitry for interfacing to one or more ROMs 32A-32B, coupled to provide data to I/O interface 22B through a multiplexor (mux) 34.
Each of processors 12A-12B, L2 cache 14, memory controller 16, I/O bridge 20, and high speed I/O bridge 18 may be an agent on bus 24 for the illustrated embodiment.

Generally, reset control circuit 30 is configured to generate a reset signal for the arbiters within system 10 and to generate a different reset signal or signals for the agents corresponding to those arbiters. Accordingly, the arbiters may be reset independently of the agents to which those arbiters correspond. In the illustrated embodiment, a single reset signal (Reset_Arb) is provided to all the arbiters. Thus, the arbiters are reset concurrently, and are also released from reset concurrently. Each arbiter may establish a reset state in response to the Reset_Arb signal, and that reset state is consistent with the reset state of the other arbiters, so that only one arbitration winner will be determined in each arbitration. For example, the reset state may be that the priority of the agents in the arbitration is arranged in order of their agent identifier numbers (e.g. agent 0 is highest priority, agent 1 is next highest, etc.). Since the arbiters are released from reset concurrently, they are synchronized.

Additionally, since reset signals separate from the Reset_Arb signal are provided to the agents, the arbiters may be released from reset while one or more of the agents are held in reset. While an agent is held in reset, it is not operating and thus may not access the bus 24. However, the arbiter for that agent is operating (since it is unaffected by the agent's reset signal). Accordingly, if the agent is subsequently released from reset and arbitrates for the bus 24, that agent's arbiter is correctly synchronized with the other arbiters. Thus, that agent's arbiter may determine that the agent wins an arbitration which none of the other arbiters determine is won by their respective agents. Accordingly, proper arbitration operation is achievable.

Reset control circuit 30 generates the reset signals responsive to the system reset signal, and may also generate the reset signals responsive to an indication in a register within reset control circuit 30 (shown in Fig. 2 below). Reset control circuit 30 may be configured to assert each of the reset signals in response to the system reset signal. In one embodiment, reset control circuit 30 controls the length of the reset assertion to be at least a minimum period of time guaranteed to reset the receiving circuitry. Subsequent to the minimum period of reset assertion, reset control circuit 30 may deassert the reset signal to the arbiters (Reset_Arb) and may also deassert one or more of the reset signals to the agents. The arbiters and those agents for which reset is deasserted may begin operation, including arbitrating for bus 24.

Additionally, reset control circuit 30 may be configured to hold one or more agents in reset after the deassertion of the other reset signals. Since each agent receives a separate reset signal from reset control circuit 30 in the illustrated embodiment, reset control circuit 30 may hold any combination of agents in reset while other agents are released from reset. In one embodiment, the register within reset control circuit 30 may include indications for each agent, which may be set to a state indicating that the agent is to be held in reset or to a state indicating that the agent is to be released from reset. The register may have a predefined state established in response to reset, and the state may be modified by executing instructions in a processor 12A-12B which update that register.
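To make the synchronization property concrete, the following is a minimal C sketch (not taken from the patent) of the consistent arbiter reset state described above, assuming six agents and a full priority ordering; the structure and names are hypothetical.

```c
#define NUM_AGENTS 6   /* processors 12A-12B, L2, MC, and the two bridges */

struct arbiter {
    int my_id;                 /* agent identifier of the corresponding agent */
    int priority[NUM_AGENTS];  /* priority[0] holds the highest-priority id   */
};

/* Invoked on assertion of Reset_Arb in every arbiter at once. Because the
 * state is deterministic and identical everywhere (agent 0 highest, agent 1
 * next, and so on), concurrently released arbiters start out synchronized. */
void arbiter_reset(struct arbiter *a)
{
    for (int i = 0; i < NUM_AGENTS; i++)
        a->priority[i] = i;
}
```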
For example, reset control circuit 30 may be configured to hold processor 12B in reset while other agents and the arbiters (including arbiters 26B and 28B) are released from reset. Holding processor 12B in reset may allow processor 12A to read boot code from ROM 32A and then remap the boot addresses to correspond to ROM 32B for processor 12B to read its boot code. In this manner, processor 12A and processor 12B may read different code. In the illustrated example, the remapping of ROM addresses is provided via mux 34 on the output of the ROMs 32A-32B. Both ROMs may receive the same input address from I/O interface 22B. ROM 32A stores the code for processor 12A, and ROM 32B stores the code for processor 12B. Since processor 12B is held in reset after the arbiters and processor 12A are released from reset, reset control circuit 30 may initially select ROM 32A to output data (the instructions to be executed) to I/O interface 22B, which routes the data through I/O bridge 20 to bus 24. Processor 12A's code may include an instruction to remap the ROM addresses to ROM 32B (e.g. by updating a configuration register in reset control circuit 30 to change the selection control of mux 34 to select ROM 32B). Then, processor 12A's code may update the register storing the indication of processor 12B's reset status to release processor 12B. Processor 12B may then read its code from ROM 32B. It is noted that the register storing the selection control for mux 34 could be located anywhere within system 10 or external to system 10, as desired.

While Fig. 1 illustrates two separate ROMs for processor 12A and processor 12B, respectively, a single ROM could be used, with different portions of the address range of the ROM used to store code for each of the processors 12A and 12B. Address decode circuitry could be signalled to determine which of the portions to read in response to boot addresses. Furthermore, the boot code in the ROM could be programmed to determine which processor 12A-12B is executing the code (e.g. by reading a processor identification register or some similar resource) and branch to the appropriate code. However, holding processor 12B in reset while releasing processor 12A may still be desirable to allow processor 12A to initialize various system resources before processor 12B begins operating.

Generally, the ability to hold one or more agents in reset while allowing other agents to operate, and then to release the agents held in reset with the corresponding arbiters operating properly, may have a variety of uses. For example, during debugging and testing of system 10, it may be advantageous to hold one or more agents in reset while allowing others to operate. An agent having a defect (logical, manufacturing, or otherwise) could be held in reset while other agents are tested. Also, testing may be eased by holding in reset those agents not involved in the test.

It is noted that, while the same signal (Reset_Arb) is delivered to each arbiter in the illustrated embodiment, various embodiments may use multiple separate conductors to convey the signal to the arbiters. The use of multiple conductors may reduce the electrical loading on any one conductor, improving timing characteristics. However, the separate conductors may convey the same logical signal (e.g.
the signals on each conductor may assert and deassert concurrently).

It is further noted that, while the illustrated embodiment provides separate reset signals to each agent to allow flexibility as to which agents are held in reset and which agents are released from reset, other embodiments may consolidate reset signals and use a single reset signal for all agents which are to be held in reset and released from reset concurrently.

Bus 24 may be a split transaction bus in the illustrated embodiment. A split transaction bus splits the address and data portions of each transaction and allows the address portion (referred to as the address phase) and the data portion (referred to as the data phase) to proceed independently. In the illustrated embodiment, the address bus and data bus are independently arbitrated for, allowing for out of order data phases with respect to the corresponding address phases. Each transaction including both address and data thus includes an arbitration for the address bus, an address phase, an arbitration for the data bus, and a data phase. Additionally, coherent transactions may include a response phase for communicating coherency information after the address phase.

Accordingly, an address arbiter (A Arb) 26A-26E for arbitrating for the address portion of the bus 24 is included in each agent in Fig. 1 which is capable of being a requesting agent. Similarly, a data arbiter (D Arb) 28A-28F for arbitrating for the data portion of the bus 24 is included in each agent in Fig. 1 which is capable of being a responding agent. Each requesting agent is assigned an address request signal, and each responding agent is assigned a data request signal. More particularly, each agent is assigned an agent identifier, and the corresponding address request signal and/or data request signal may be used by that agent. Additionally, the agent identifier may be driven by the agent as part of the corresponding address or data phase to identify that agent as the winner of the preceding arbitration.

The fairness scheme implemented by one embodiment of system 10 may be one in which the agent granted the bus is made lowest priority for being granted the bus again. The highest priority agent which is requesting the bus is granted the bus. Since the address and data buses are separately arbitrated, separate priority states are maintained for the address and data buses.

Each address arbiter 26A-26E is coupled to receive at least the address request signals corresponding to each other requesting agent besides the requesting agent to which that address arbiter corresponds (the "corresponding agent"). In various embodiments, the address arbiters may also receive the address request signal of the corresponding agent. For example, the corresponding agent of address arbiter 26A is processor 12A, and address arbiter 26A receives the address request signals from each other agent (including the address request signals from processor 12B, L2 cache 14, I/O bridge 20, and high speed I/O bridge 18). The address arbiter tracks which of the agents are higher priority than the corresponding agent and which agents are lower priority than the corresponding agent for address bus arbitration. Thus, given the request signals from each other agent, the address arbiter can determine whether or not the corresponding agent wins the arbitration for the address bus. This determination may be relatively quick, and thus arbitration may be performed rapidly. Rather than attempt to calculate which other agent did win the arbitration, the address arbiter uses the agent identifier in the address phase of the transaction performed by the arbitration winner to update the priority state for the corresponding agent. More particularly, the agent which won the arbitration is marked as lower priority than the corresponding agent. On the other hand, if the corresponding agent does win the arbitration, the address arbiter updates the priority state to indicate that each other agent is higher priority than the corresponding agent.
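The win-determination and priority-update rules just described can be sketched in C as follows, reusing the hypothetical struct arbiter and NUM_AGENTS definitions from the previous sketch. Moving the observed winner to the lowest-priority slot realizes both rules at once: the winner becomes lower priority than the corresponding agent, and when the corresponding agent itself wins, every other agent ends up higher priority than it.

```c
#include <stdbool.h>

/* Each arbiter decides only whether its OWN agent won this arbitration. */
bool my_agent_wins(const struct arbiter *a, const bool request[NUM_AGENTS])
{
    /* Scan from highest priority; the first requester found is the winner. */
    for (int i = 0; i < NUM_AGENTS; i++) {
        int agent = a->priority[i];
        if (request[agent])
            return agent == a->my_id;
    }
    return false;  /* no agent is requesting */
}

/* Called with the agent identifier observed in the winner's address (or
 * data) phase; the winner is demoted to lowest priority. */
void update_priority(struct arbiter *a, int winner_id)
{
    int slot = 0;
    for (int i = 0; i < NUM_AGENTS; i++)
        if (a->priority[i] != winner_id)
            a->priority[slot++] = a->priority[i];
    a->priority[NUM_AGENTS - 1] = winner_id;
}
```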
Each data arbiter 28A-28F is similarly coupled to receive at least the data request signals corresponding to each other responding agent besides the responding agent to which that data arbiter corresponds. In various embodiments, the arbiters may further be coupled to receive the data request signal of the corresponding agent as well. The data arbiter tracks which of the agents are higher priority than the corresponding agent and which agents are lower priority than the corresponding agent for data bus arbitration. Thus, given the request signals from each other agent, the data arbiter can determine whether or not the corresponding agent wins the arbitration for the data bus. This determination may be relatively quick, and thus arbitration may be performed rapidly. Rather than attempt to calculate which other agent did win the arbitration, the data arbiter uses the agent identifier in the data phase of the transaction performed by the arbitration winner to update the priority state for the corresponding agent. More particularly, the agent which won the arbitration is marked as lower priority than the corresponding agent. On the other hand, if the corresponding agent does win the arbitration, the data arbiter updates the priority state to indicate that each other agent is higher priority than the corresponding agent.

While the above discussion illustrates a particular embodiment of address arbiters and data arbiters implementing a particular arbitration scheme, any arbitration scheme may be employed as desired.

Bus 24 may employ any suitable signalling technique. For example, in one embodiment, bus 24 may employ differential signalling. For example, in one implementation, each signal within bus 24 may be a differential pair of signals for high speed signal transmission. Other embodiments may employ any other signalling technique (e.g. TTL, CMOS, GTL, HSTL, etc.).

Processors 12A-12B may be designed to any instruction set architecture, and may execute programs written to that instruction set architecture. Exemplary instruction set architectures may include the MIPS instruction set architecture (including the MIPS-3D and MIPS MDMX application specific extensions), the IA-32 or IA-64 instruction set architectures developed by Intel Corp., the PowerPC instruction set architecture, the Alpha instruction set architecture, the ARM instruction set architecture, or any other instruction set architecture.

L2 cache 14 is a high speed cache memory. L2 cache 14 is referred to as "L2" since processors 12A-12B may employ internal level 1 ("L1") caches. If L1 caches are not included in processors 12A-12B, L2 cache 14 may be an L1 cache. Furthermore, if multiple levels of caching are included in processors 12A-12B, L2 cache 14 may be a lower level cache than L2. L2 cache 14 may employ any organization, including direct mapped, set associative, and fully associative organizations.
In one particular implementation, L2 cache 14 may be a 512 kilobyte, 4 way set associative cache having 32 byte cache lines. A set associative cache is a cache arranged into multiple sets, each set comprising two or more entries. A portion of the address (the "index") is used to select one of the sets (i.e. each encoding of the index selects a different set). The entries in the selected set are eligible to store the cache line accessed by the address. Each of the entries within the set is referred to as a "way" of the set. The portion of the address remaining after removing the index (and the offset within the cache line) is referred to as the "tag", and is stored in each entry to identify the cache line in that entry. The stored tags are compared to the corresponding tag portion of the address of a memory transaction to determine if the memory transaction hits or misses in the cache, and the comparison is used to select the way in which the hit is detected (if a hit is detected).

Memory controller 16 is configured to access memory 126 in response to memory transactions received on bus 24. Memory controller 16 receives a hit signal from L2 cache 14, and if a hit is detected in L2 cache 14 for a memory transaction, memory controller 16 does not respond to that memory transaction. If a miss is detected by L2 cache 14, or the memory transaction is non-cacheable, memory controller 16 may access memory 126 to perform the read or write operation. Memory controller 16 may be designed to access any of a variety of types of memory. For example, memory controller 16 may be designed for synchronous dynamic random access memory (SDRAM), and more particularly double data rate (DDR) SDRAM. Alternatively, memory controller 16 may be designed for DRAM, Rambus DRAM (RDRAM), SRAM, or any other suitable memory device.

High speed I/O bridge 18 may be an interface to a high speed I/O interconnect. For example, high speed I/O bridge 18 may implement the Lightning Data Transport (LDT) I/O fabric developed by Advanced Micro Devices, Inc. Other high speed interfaces may be alternatively used.

I/O bridge 20 is used to link one or more I/O interfaces (e.g. I/O interfaces 22A-22B) to bus 24. I/O bridge 20 may serve to reduce the electrical loading on bus 24 if more than one I/O interface 22A-22B is bridged by I/O bridge 20. Generally, I/O bridge 20 performs transactions on bus 24 on behalf of I/O interfaces 22A-22B and relays transactions targeted at an I/O interface 22A-22B from bus 24 to that I/O interface 22A-22B. I/O interfaces 22A-22B may be lower bandwidth, higher latency interfaces. For example, I/O interfaces 22A-22B may include one or more serial interfaces, Personal Computer Memory Card International Association (PCMCIA) interfaces, Ethernet interfaces (e.g. media access control level interfaces), Peripheral Component Interconnect (PCI) interfaces, etc.

It is noted that system 10 (and more particularly processors 12A-12B, L2 cache 14, memory controller 16, I/O interfaces 22A-22B, I/O bridge 20, I/O bridge 18, and bus 24) may be integrated onto a single integrated circuit as a system on a chip configuration. In another configuration, memory 126 may be integrated as well. Alternatively, one or more of the components may be implemented as separate integrated circuits, or all components may be separate integrated circuits, as desired. Any level of integration may be used.
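Returning briefly to the set associative lookup described above, a minimal C sketch follows, using the example parameters given (512 kilobytes, 4 ways, 32 byte lines, hence 4096 sets); the structure names are hypothetical and the sketch ignores replacement and write policy.

```c
#include <stdint.h>
#include <stdbool.h>

#define LINE_BYTES  32
#define NUM_WAYS    4
#define NUM_SETS    (512 * 1024 / LINE_BYTES / NUM_WAYS)  /* 4096 sets */

struct cache_line { bool valid; uint32_t tag; };
struct cache { struct cache_line set[NUM_SETS][NUM_WAYS]; };

/* Returns the way in which the hit is detected, or -1 on a miss. */
int lookup(const struct cache *c, uint32_t addr)
{
    uint32_t index = (addr / LINE_BYTES) % NUM_SETS;  /* selects the set */
    uint32_t tag   = addr / LINE_BYTES / NUM_SETS;    /* remaining bits  */

    for (int way = 0; way < NUM_WAYS; way++)
        if (c->set[index][way].valid && c->set[index][way].tag == tag)
            return way;                               /* hit in this way */
    return -1;                                        /* miss */
}
```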
As used herein, a transaction "targets" a location or device if the location or device is the provider of data for the transaction (for a read transaction) or the receiver of data for the transaction (for a write transaction). Viewed in another way, a transaction may target a location or device if the address of the transaction is mapped to that location or device.

It is noted that, while the illustrated embodiment employs a split transaction bus with separate arbitration for the address and data buses, other embodiments may employ non-split transaction buses arbitrated with a single arbitration for address and data and/or a split transaction bus in which the data bus is not explicitly arbitrated.

The above discussion refers to the assertion and deassertion of a reset signal. As used herein, a reset signal is "asserted" if the state of the reset signal indicates that reset is to be performed. The reset signal is "deasserted" if the state of the reset signal indicates that reset is not to be performed. The reset signal may be asserted when it is in a logically high or logically low state, as desired, and may be deasserted in the opposite state. Furthermore, a circuit is "reset" if it is forced into a predetermined initial state from which predictable operation may occur based on the inputs to that circuit and its predetermined initial state. A circuit is "held in reset" if the reset signal to that circuit remains asserted after the minimum period of time used to establish the predetermined state. A circuit that is held in reset may remain in the predetermined state and may not begin operation. A circuit is "released from reset" when the reset signal to that circuit is deasserted. The circuit may begin operation from its predetermined initial state.

Turning now to Fig. 2, a block diagram of a portion of one embodiment of reset control circuit 30 is shown. Other embodiments are possible and contemplated. In the illustrated embodiment, reset control circuit 30 includes a reset control register 40, a reset pulse generator circuit 42, and an agent reset circuit 44. Reset pulse generator circuit 42 is coupled to receive the system reset signal. Reset pulse generator circuit 42 is further coupled to reset control register 40 and agent reset circuit 44. Reset pulse generator circuit 42 is coupled to provide the Reset_Arb signal, and agent reset circuit 44 is coupled to provide the reset signals to each agent.

Generally, in response to a system reset signalled on the system reset signal or a software initiated reset via a software reset (SWR) indication in reset control register 40, reset pulse generator 42 generates a reset pulse of at least the minimum width required to reset the agents and arbiters in system 10. More particularly, the reset pulse is an assertion of the reset signals for at least the minimum period of time to reset the agents and arbiters, followed by a deassertion of the reset. The reset pulse may be provided directly as the Reset_Arb signal to the arbiter circuits, and may be provided to agent reset circuit 44. Agent reset circuit 44 may assert each of its output reset signals to the agents for at least the duration of the reset pulse, and may continue assertion of the reset signals to one or more agents responsive to corresponding indications in reset control register 40.
Thus, in the illustrated embodiment, agent reset circuit 44 may include an OR gate for each reset signal, ORing the reset pulse from reset pulse generator circuit 42 with the corresponding indication from reset control register 40.

Reset control register 40 may be used by software (e.g. code sequences executing in processor 12A and/or 12B) to control which agents are held in or released from reset. Reset control register 40 may be memory mapped to an address which may be read and/or written by instructions executing on the processors to determine the contents of reset control register 40 and to update the contents therein. In the illustrated embodiment, reset control register 40 includes an indication for each agent on bus 24. The indication may have at least two states: one indicating that the corresponding agent is to be held in reset and one indicating that the corresponding agent is to be released from reset. Thus, the indication may be a bit, for example, with the set state of the bit indicating that the agent should be held in reset and the clear state indicating that the agent should be released from reset. Similarly, the SWR indication may be a bit indicating, when set, that a system reset is being initiated by software and indicating, when clear, that the reset is not being initiated. Other embodiments may use the opposite sense of the set and clear states or may use other encodings.

Reset control register 40 may have a reset state established in response to a system reset (either initiated via the SWR indication or the system reset signal). More particularly, in one embodiment, bits P0 (corresponding to processor 12A), L2 (corresponding to L2 cache 14), MC (corresponding to memory controller 16), IO0 (corresponding to I/O bridge 18), and IO1 (corresponding to I/O bridge 20) may reset to a clear state (the "release from reset" state in the present example). Bit P1 (corresponding to processor 12B) may reset to the set state (the "hold in reset" state in the present example), causing processor 12B to be held in reset until an instruction executed by processor 12A clears the P1 bit. Other embodiments may have reset states in which other indications are reset to the set ("hold in reset") state, as desired.

Thus, after a system reset, the arbiters and all agents not indicated as held in reset may be released from reset and may begin operation. As desired by software, the agents that are held in reset may be released via updates to reset control register 40. Furthermore, agents which are released from reset but which software desires to deactivate may be deactivated by setting the corresponding bit in reset control register 40. The reset signal corresponding to that agent would then be asserted, and the agent would be held in reset until subsequently released by clearing the bit or causing a system reset.

It is noted that, if both processors 12A and 12B were indicated as being held in reset via the bits P0 and P1 in reset control register 40, system 10 may lock up (from a software point of view) since there is no processor active to clear one of the bits P0 or P1. To prevent such a condition, one embodiment of reset control circuit 30 may treat the P0 bit somewhat differently than the other bits. If the P0 bit is set, a reset pulse of at least the required width may be sent to processor 12A, but then the P0 bit may be automatically cleared by reset control circuit 30, allowing processor 12A to be released from reset.
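The following behavioral C model is a sketch (not RTL from the patent) of the reset fan-out just described: one OR gate per agent reset output, the Reset_Arb pulse passed through directly, and the P0 auto-clear that avoids the lockup condition. Bit positions and names are assumptions; the patent does not fix an encoding.

```c
#include <stdint.h>
#include <stdbool.h>

enum { BIT_P0, BIT_P1, BIT_L2, BIT_MC, BIT_IO0, BIT_IO1, BIT_SWR, NUM_BITS };

struct reset_ctrl {
    uint8_t reg;          /* reset control register 40: 1 = hold in reset */
};

/* Evaluate the reset signal driven to one agent: the OR of the shared
 * reset pulse with that agent's hold indication. */
bool agent_reset_out(const struct reset_ctrl *rc, bool reset_pulse, int bit)
{
    return reset_pulse || ((rc->reg >> bit) & 1);
}

/* Reset_Arb is the pulse alone: arbiters are never held in reset. */
bool reset_arb_out(bool reset_pulse) { return reset_pulse; }

/* State established by a system reset: P1 held, everything else released. */
void reset_ctrl_on_system_reset(struct reset_ctrl *rc)
{
    rc->reg = (uint8_t)(1u << BIT_P1);
}

/* If software sets P0, processor 12A is pulsed and the bit auto-clears,
 * so a processor is always available to update the register. */
void reset_ctrl_after_pulse(struct reset_ctrl *rc)
{
    rc->reg &= (uint8_t)~(1u << BIT_P0);
}
```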
It is noted that various agents may require reset pulses of different widths. Reset pulse generator circuit 42 may be configured to provide different reset pulses for different agents, as desired. Alternatively, reset pulse generator circuit 42 may be configured to generate a reset pulse having a duration at least as long as the longest required reset duration. Still further, reset pulse generator 42 may be configured to generate a separate reset pulse for the arbiters than is generated for the agents. The arbiters' reset pulse may be terminated (causing the Reset_Arb signal to deassert) prior to or coincident with the deassertion of reset of the agents.

It is noted that, while agent reset circuit 44 is illustrated with specific logic gates in Fig. 2, any suitable circuitry may be used. Particularly, any Boolean equivalents to the circuitry illustrated in Fig. 2 may be used.

Turning next to Fig. 3, a timing diagram is shown illustrating operation of one embodiment of reset control circuit 30 for a system reset. Other embodiments are possible and contemplated. In the illustrated embodiment, the system reset and Reset_Arb signals are shown, as well as reset signals to each agent. The reset signals to the other agents are suffixed with a label similar to the labels used in reset control register 40. Thus, Reset_P0 is provided to processor 12A, Reset_P1 is provided to processor 12B, Reset_L2 is provided to L2 cache 14, Reset_MC is provided to memory controller 16, Reset_IO0 is provided to I/O bridge 18, and Reset_IO1 is provided to I/O bridge 20. Time is the horizontal axis of the timing diagram (in arbitrary units).

The system reset is signalled in Fig. 3 via assertion of the system reset signal. Reset pulse generator circuit 42 detects the system reset signal assertion, and generates a reset pulse. Thus, each of the reset signals Reset_Arb, Reset_P0, Reset_P1, Reset_L2, Reset_MC, Reset_IO0, and Reset_IO1 is asserted for the duration of the reset pulse (illustrated as TR in Fig. 3).

Once the reset pulse is terminated, each of the reset signals shown in Fig. 3 deasserts except for the Reset_P1 signal. As discussed above, processor 12B may remain held in reset until processor 12A updates the reset control register 40 to indicate releasing processor 12B from reset. At a time TP1R illustrated in Fig. 3, reset control register 40 is updated and Reset_P1 deasserts.

As mentioned above, different reset pulse durations may be provided to different agents, and the duration of the reset pulse to the arbiters may be less than the duration of the reset pulse to any of the agents, as desired.

Turning now to Fig. 4, a flowchart is shown illustrating an exemplary code sequence that may be executed by processor 12A after being released from reset. The code may be located at the reset vector from which processor 12A fetches code after being released from reset. These addresses may be mapped, e.g., to ROM 32A shown in Fig. 1 for the embodiment of Fig. 1. Other embodiments are possible and contemplated.

Processor 12A may read the code to be executed by processor 12A from ROM 32A (block 50) and may store the code in memory. The code may include the code that processor 12A will execute during normal operation. The code may also include, for example, code to initialize various system resources. Processor 12A, subsequent to reading the code and executing any system initialization code, may then set the configuration register which selects the code for processor 12B in the ROM for fetching in response to reset vector addresses (block 52). Block 52 may be eliminated in embodiments which use a read of a processor identification register and a branch in the code to distinguish between the processors. Finally, processor 12A may update reset control register 40 to release processor 12B from reset (block 54).
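A minimal C sketch of the Fig. 4 boot sequence follows. All addresses, register layouts, and helper names are hypothetical (the patent describes the flow only at flowchart level); the bit layout follows the reset-control sketch above.

```c
#include <stdint.h>

/* hypothetical helpers and memory-mapped register accessors */
void     copy_boot_code_from_rom(void);        /* block 50 */
void     init_system_resources(void);
uint32_t mmio_read(uintptr_t addr);
void     mmio_write(uintptr_t addr, uint32_t val);

#define ROM_SELECT_REG  0x10009000u  /* hypothetical: selection control, mux 34 */
#define RESET_CTRL_REG  0x10009004u  /* hypothetical: reset control register 40 */
#define P1_HOLD_BIT     (1u << 1)    /* hypothetical position of the P1 bit     */

void boot_cpu0(void)
{
    copy_boot_code_from_rom();       /* block 50: read code from ROM 32A */
    init_system_resources();

    /* Block 52: remap reset-vector fetches so processor 12B sees ROM 32B. */
    mmio_write(ROM_SELECT_REG, 1);

    /* Block 54: clear the P1 bit to release processor 12B from reset. */
    mmio_write(RESET_CTRL_REG, mmio_read(RESET_CTRL_REG) & ~P1_HOLD_BIT);
}
```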
Turning next to Fig. 5, a block diagram of a carrier medium 60 including a database representative of system 10 is shown. Generally speaking, a carrier medium may include storage media such as magnetic or optical media, e.g., disk or CD-ROM, volatile or non-volatile memory media such as RAM (e.g. SDRAM, RDRAM, SRAM, etc.), ROM, etc., as well as transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as a network and/or a wireless link.

Generally, the database of system 10 carried on carrier medium 60 may be a database which can be read by a program and used, directly or indirectly, to fabricate the hardware comprising system 10. For example, the database may be a behavioral-level description or register-transfer level (RTL) description of the hardware functionality in a high level design language (HDL) such as Verilog or VHDL. The description may be read by a synthesis tool which may synthesize the description to produce a netlist comprising a list of gates in a synthesis library. The netlist comprises a set of gates which also represent the functionality of the hardware comprising system 10. The netlist may then be placed and routed to produce a data set describing geometric shapes to be applied to masks. The masks may then be used in various semiconductor fabrication steps to produce a semiconductor circuit or circuits corresponding to system 10. Alternatively, the database on carrier medium 60 may be the netlist (with or without the synthesis library) or the data set, as desired.

While carrier medium 60 carries a representation of system 10, other embodiments may carry a representation of any portion of system 10, as desired, including arbiters, agents, reset control circuits, etc. The databases described above may comprise a circuit defining mechanism for the system 10 or portions thereof.

Numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. It is intended that the following claims be interpreted to embrace all such variations and modifications. |
PROBLEM TO BE SOLVED: To suitably control devices remotely by gesture input.

SOLUTION: A method includes dynamically selecting a set of mappings defining how a gesture made by a movement of a wearable item will be interpreted as a command; determining whether the gesture has a mapping in the set of mappings; and translating the gesture into a command on the basis of the determination. Also disclosed is interpreting movements of the wearable item as gestures associated with a command for controlling a controlled device, including interpreting a movement of the wearable item as a gesture related to the command on the basis of a first context. |
1. A method for controlling a device, comprising: dynamically selecting a mapping set specifying how a gesture made by movement of at least one wearable item will be interpreted as one or more commands; determining whether the gesture has a mapping in the mapping set; and converting the gesture into a command for the device based on the determination.
2. The method of claim 1, wherein the dynamic selection of the mapping set is based on application context.
3. The method of claim 1, further comprising: detecting movement of the at least one wearable item; and interpreting the movement as the gesture.
4. The method of claim 3, wherein detecting the movement of the at least one wearable item comprises detecting raw sensor data corresponding to the movement.
5. The method of claim 1, further comprising providing feedback acknowledging the gesture.
6. An apparatus for controlling a device, comprising a processing system configured to: make a dynamic selection of a mapping set specifying how a gesture made by movement of at least one wearable item will be interpreted as one or more commands; determine whether the gesture has a mapping in the mapping set; and convert the gesture into a command for the device based on the determination.
7. The apparatus of claim 6, wherein the dynamic selection of the mapping set is based on application context.
8. The apparatus of claim 6, wherein the processing system is further configured to detect movement of the at least one wearable item and to interpret the movement as the gesture.
9. The apparatus of claim 8, wherein the detecting comprises detecting raw sensor data corresponding to the movement.
10. The apparatus of claim 6, wherein the processing system is further configured to provide feedback acknowledging the gesture.
11. An apparatus for controlling a device, comprising: means for dynamically selecting a mapping set specifying how a gesture made by movement of at least one wearable item will be interpreted as one or more commands; means for determining whether the gesture has a mapping in the mapping set; and means for converting the gesture into a command for the device based on the determination.
12. The apparatus of claim 11, wherein the dynamic selection of the mapping set is based on application context.
13. The apparatus of claim 11, further comprising: means for detecting movement of the at least one wearable item; and means for interpreting the movement as the gesture.
14. The apparatus of claim 13, wherein the means for detecting the movement of the at least one wearable item comprises means for detecting raw sensor data corresponding to the movement.
15. The apparatus of claim 11, further comprising means for providing feedback acknowledging the gesture.
16. A computer program product for controlling a device, comprising a computer readable medium having instructions executable to: dynamically select a mapping set specifying how a gesture made by movement of at least one wearable item will be interpreted as one or more commands; determine whether the gesture has a mapping in the mapping set; and convert the gesture into a command for the device based on the determination.
17. A watch for controlling a device, comprising: a receiver configured to receive a signal for a gesture made by movement of at least one wearable item; and a processing system configured to make a dynamic selection of a mapping set specifying how the gesture will be interpreted as one or more commands, to determine whether the gesture has a mapping in the mapping set, and to convert the gesture into a command for the device based on the determination.
18. A method for interpreting movement of a wearable item as a gesture associated with a command for controlling a controlled device, comprising: sensing movement of the wearable item; determining a first context in which the movement is sensed; and interpreting the movement as indicating a gesture associated with the command based on the first context.
19. The method of claim 18, wherein if the movement is not sensed in the first context, the movement is interpreted as not indicating a gesture associated with a command for controlling the controlled device.
20. The method of claim 18, wherein the movement comprises at least one of a slide, rotation, tilt, flex, or tap.
21. The method of claim 18, wherein the first context comprises at least one of position, application, time, or environment.
22. The method of claim 18, wherein the determination is based on an operational state of the wearable item.
23. The method of claim 18, wherein the determination is based on an application state of the wearable item, the application state being selected by the wearable item or the controlled device based on a direct user action or an implicit condition.
24. The method of claim 18, wherein the determination is based on a set of applicable contexts previously configured by another wearable item.
25. The method of claim 18, wherein the first context in which the movement is sensed is used to interpret data sensed by a plurality of sensors within the wearable item or sensors mounted in a plurality of wearable items.
26. The method of claim 18, wherein the first context in which the movement is sensed is used to interpret data received from a plurality of sensors mounted in one or more wearable items.
27. The method of claim 18, wherein the first context in which the movement is sensed is used to initiate gesture detection.
28. The method of claim 18, wherein the first context in which the movement is sensed is used to determine which steps of a process for performing gesture detection need to be initiated.
29. The method of claim 18, further comprising: receiving data from a plurality of sensors; and prioritizing the data based on the first context to aid in interpretation of the data.
30. An apparatus for interpreting movement of a wearable item as a gesture associated with a command for controlling a controlled device, comprising: a sensor configured to sense movement of the wearable item; and a processing system coupled to the sensor, the processing system being configured to determine a first context in which the movement is sensed and to interpret the movement as indicating a gesture associated with the command based on the first context.
31. The apparatus of claim 30, wherein the processing system is further configured to interpret the movement as not indicating a gesture associated with a command for controlling the controlled device if the movement is not sensed in the first context.
32. The apparatus of claim 30, wherein the movement comprises at least one of a slide, rotation, tilt, flex, or tap.
33. The apparatus of claim 30, wherein the first context comprises at least one of position, application, time, or environment.
34. The apparatus of claim 30, wherein the determination is based on an operational state of the wearable item.
35. The apparatus of claim 30, wherein the determination is based on an application state of the wearable item, the application state being selected by the wearable item or the controlled device based on a direct user action or an implicit condition.
36. The apparatus of claim 30, wherein the determination is based on a set of applicable contexts previously configured by another wearable item.
37. The apparatus of claim 30, wherein the first context in which the movement is sensed is used to interpret data sensed by a plurality of sensors within the wearable item or sensors mounted in a plurality of wearable items.
38. The apparatus of claim 30, wherein the first context in which the movement is sensed is used to interpret data received from a plurality of sensors mounted in one or more wearable items.
39. The apparatus of claim 30, wherein the first context in which the movement is sensed is used to initiate gesture detection.
40. The apparatus of claim 30, wherein the first context in which the movement is sensed is used to determine which steps of a process for performing gesture detection need to be initiated.
41. The apparatus of claim 30, wherein the processing system is further configured to receive data from a plurality of sensors and to prioritize the data based on the first context to aid in interpretation of the data.
42. An apparatus for interpreting movement of a wearable item as a gesture associated with a command for controlling a controlled device, comprising: means for sensing movement of the wearable item; means for determining a first context in which the movement is sensed; and means for interpreting the movement as indicating a gesture associated with the command based on the first context.
43. The apparatus of claim 42, further comprising means for interpreting the movement as not indicating a gesture associated with a command for controlling the controlled device if the movement is not sensed in the first context.
44. The apparatus of claim 42, wherein the movement comprises at least one of a slide, rotation, tilt, flex, or tap.
45. The apparatus of claim 42, wherein the first context comprises at least one of position, application, time, or environment.
46. The apparatus of claim 42, wherein the determination is based on an operational state of the wearable item.
47. The apparatus of claim 42, wherein the determination is based on an application state of the wearable item, the application state being selected by the wearable item or the controlled device based on a direct user action or an implicit condition.
48. The apparatus of claim 42, wherein the determination is based on a set of applicable contexts previously configured by another wearable item.
49. The apparatus of claim 42, wherein the first context in which the movement is sensed is used to interpret data sensed by a plurality of sensors within the wearable item or sensors mounted in a plurality of wearable items.
50. The apparatus of claim 42, wherein the first context in which the movement is sensed is used to interpret data received from a plurality of sensors mounted in one or more wearable items.
51. The apparatus of claim 42, wherein the first context in which the movement is sensed is used to initiate gesture detection.
52. The apparatus of claim 42, wherein the first context in which the movement is sensed is used to determine which steps of a process for performing gesture detection need to be initiated.
53. The apparatus of claim 42, further comprising: means for receiving data from a plurality of sensors; and means for prioritizing the data based on the first context to aid in interpretation of the data.
54. A computer program product for interpreting movement of a wearable item as a gesture associated with a command, comprising a computer readable medium having instructions executable to: sense movement of the wearable item; determine a first context in which the movement is sensed; and interpret the movement as indicating a gesture associated with a command for controlling a controlled device based on the first context.
55. A watch, comprising: a sensor configured to sense movement of the watch; a processing system coupled to the sensor, the processing system being configured to determine a first context in which the movement is sensed and to interpret the movement as indicating a gesture associated with a command based on the first context; and a transmitter coupled to the processing system and configured to transmit the command.
56. A method for communicating control information by a wearable device, comprising: determining an agreed control gesture set between a first device and a second device, wherein the control gestures are executable using the first device and supportable by the second device; and engaging, via wireless transmission, in a control sequence for controlling the second device in response to at least one of the control gestures being performed using the first device.
57. The method of claim 56, wherein the control gestures are based on user interaction with the first device.
58. The method of claim 56, wherein the agreed control gesture set is a subset of the control gestures executable by the first device.
59. The method of claim 56, wherein the agreed control gesture set is a subset of the control gestures supportable by the second device.
60. The method of claim 56, wherein the determination is initiated by the first device.
61. The method of claim 60, wherein the initiation comprises an advertisement of executable control gestures.
62. The method of claim 56, wherein the determination is initiated by the second device.
63. The method of claim 62, wherein the initiation comprises an advertisement of supportable control gestures.
64. The method of claim 56, further comprising providing user feedback using at least one of tactile, visual, or audible feedback upon detection of the at least one control gesture.
65. The method of claim 56, further comprising processing sensor data into an intermediate format for detection of the at least one control gesture.
66. The method of claim 56, further comprising processing sensor data for detection of the at least one control gesture.
67. An apparatus for communicating control information by a wearable device, comprising: means for determining an agreed control gesture set between a first device and a second device, wherein the control gestures are executable using the first device and supportable by the second device; and means for engaging, via wireless transmission, in a control sequence for controlling the second device in response to at least one of the control gestures being performed using the first device.
68. The apparatus of claim 67, wherein the control gestures are based on user interaction with the first device.
69. The apparatus of claim 67, wherein the agreed control gesture set is a subset of the control gestures executable by the first device.
70. The apparatus of claim 67, wherein the agreed control gesture set is a subset of the control gestures supportable by the second device.
71. The apparatus of claim 67, wherein the determination is initiated by the first device.
72. The apparatus of claim 71, wherein the initiation comprises an advertisement of executable control gestures.
73. The apparatus of claim 67, wherein the determination is initiated by the second device.
74. The apparatus of claim 73, wherein the initiation comprises an advertisement of supportable control gestures.
75. The apparatus of claim 67, further comprising means for providing user feedback using at least one of tactile, visual, or audible feedback upon detection of the at least one control gesture.
76. The apparatus of claim 67, further comprising means for processing sensor data into an intermediate format for detection of the at least one control gesture.
77. The apparatus of claim 67, further comprising means for processing sensor data for detection of the at least one control gesture.
78. An apparatus for communicating control information by a wearable device, comprising a processing system configured to: determine an agreed control gesture set between a first device and a second device, wherein the control gestures are executable using the first device and supportable by the second device; and engage, via wireless transmission, in a control sequence for controlling the second device in response to at least one of the control gestures being performed using the first device.
79. The apparatus of claim 78, wherein the control gestures are based on user interaction with the first device.
80. The apparatus of claim 78, wherein the agreed control gesture set is a subset of the control gestures executable by the first device.
81. The apparatus of claim 78, wherein the agreed control gesture set is a subset of the control gestures supportable by the second device.
82. The apparatus of claim 78, wherein the determination is initiated by the first device.
83. The apparatus of claim 82, wherein the initiation comprises an advertisement of executable control gestures.
84. The apparatus of claim 78, wherein the determination is initiated by the second device.
85. The apparatus of claim 84, wherein the initiation comprises an advertisement of supportable control gestures.
86. The apparatus of claim 78, wherein the processing system is further configured to provide user feedback using at least one of tactile, visual, or audible feedback upon detection of the at least one control gesture.
87. The apparatus of claim 78, wherein the processing system is further configured to process sensor data into an intermediate format for detection of the at least one control gesture.
88. The apparatus of claim 78, wherein the processing system is further configured to process sensor data for detection of the at least one control gesture.
89. A watch, comprising: at least one antenna; and a processing system configured to determine an agreed control gesture set between a first device and a second device, wherein the control gestures are executable using the first device and supportable by the second device, and to engage, via wireless transmission using the at least one antenna, in a control sequence for controlling the second device in response to at least one of the control gestures being performed using the first device.
90. A computer program product for communicating control information by a wearable device, comprising a computer readable medium having instructions executable to: determine an agreed control gesture set between a first device and a second device, wherein the control gestures are executable using the first device and supportable by the second device; and engage, via wireless transmission, in a control sequence for controlling the second device in response to at least one of the control gestures being performed using the first device. |
Method and apparatus for controlling mobile devices and home electronic devices

Claim of Priority under 35 U.S.C. §119: The present application claims priority to Provisional Application No. 61/228,119, entitled "Apparatus for Distributed User Interfaces to Mobile and Consumer Electronic Devices", filed July 23, 2009, assigned to the assignee of the present application and expressly incorporated herein by reference.

Certain aspects of the present disclosure generally relate to controlling devices via wireless communications.

As mobile devices, computing devices, and home electronic devices continue to expand their capabilities, the mechanisms by which users interact with these devices and control their capabilities are becoming increasingly constrained.

FIG. 1 illustrates an example wireless communication system, in accordance with certain aspects of the present disclosure.
FIG. 2 illustrates various components that may be utilized in a wireless device, in accordance with certain aspects of the present disclosure.
FIG. 3 illustrates an exemplary transmitter and an exemplary receiver that may be used within a wireless communication system, in accordance with certain aspects of the present disclosure.
FIG. 4 illustrates an example of a body area network (BAN), in accordance with certain aspects of the present disclosure.
FIG. 5 illustrates an example block diagram of a BAN, in accordance with certain aspects of the present disclosure.
FIG. 6 is an exemplary distributed user interface (UI) control command flow diagram, in accordance with certain aspects of the present disclosure.
FIG. 7 illustrates an example state diagram of a distributed UI device, in accordance with certain aspects of the present disclosure.
FIG. 8 illustrates an example control processing flow of a distributed UI device, in accordance with certain aspects of the present disclosure.
FIG. 9 illustrates example operations that may be performed by a wearable device, in accordance with certain aspects of the present disclosure.
FIG. 9A illustrates exemplary components capable of performing the operations shown in FIG. 9.
FIG. 10 illustrates example operations for communicating control information by a wearable device, in accordance with certain aspects of the present disclosure.
FIG. 10A illustrates exemplary components capable of performing the operations shown in FIG. 10.
FIG. 11 illustrates an example operation of the distributed user interface, in accordance with certain aspects of the present disclosure.
FIG. 11A illustrates exemplary components capable of performing the operations shown in FIG. 11.
FIG. 12 illustrates example operations that may be performed by a controlled device, in accordance with certain aspects of the present disclosure.
FIG. 12A illustrates exemplary components capable of performing the operations shown in FIG. 12.
FIG. 13 illustrates example operations for controlling a device, in accordance with certain aspects of the present disclosure.
FIG. 13A illustrates exemplary components capable of performing the operations shown in FIG. 13.
FIG. 14 illustrates an example operation of the distributed user interface for interpreting gestures as commands, in accordance with certain aspects of the present disclosure.
FIG. 14A illustrates exemplary components capable of performing the operations shown in FIG. 14.

Various aspects of the disclosure are described more fully hereinafter with reference to the accompanying drawings. The disclosure may, however, be embodied in many different forms and should not be construed as limited to any specific structure or function presented throughout this disclosure.
Rather, these aspects are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art. Based on the teachings herein, one skilled in the art should appreciate that the scope of the disclosure is intended to cover any aspect of the disclosure disclosed herein, whether implemented independently of, or combined with, any other aspect of the disclosure. For example, an apparatus may be implemented or a method may be practiced using any number of the aspects set forth herein. In addition, the scope of the disclosure is intended to cover such an apparatus or method which is practiced using other structure, functionality, or structure and functionality in addition to or other than the various aspects of the disclosure set forth herein. It should be understood that any aspect of the disclosure disclosed herein may be embodied by one or more elements of a claim.

The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects.

Although particular aspects are described herein, many variations and permutations of these aspects fall within the scope of the disclosure. Although some benefits and advantages of the preferred aspects are mentioned, the scope of the disclosure is not intended to be limited to particular benefits, uses, or objectives. Rather, aspects of the disclosure are intended to be broadly applicable to different wireless technologies, system configurations, networks, and transmission protocols, some of which are illustrated by way of example in the figures and in the following description of the preferred aspects. The detailed description and drawings are merely illustrative of the disclosure rather than limiting, the scope of the disclosure being defined by the appended claims and equivalents thereof.

The techniques described herein may be used for various broadband wireless communication systems, including communication systems that are based on an orthogonal multiplexing scheme and a single carrier transmission. Examples of such communication systems include Orthogonal Frequency Division Multiple Access (OFDMA) systems, Single-Carrier Frequency Division Multiple Access (SC-FDMA) systems, Code Division Multiple Access (CDMA) systems, and so forth. An OFDMA system utilizes orthogonal frequency division multiplexing (OFDM), which is a modulation technique that partitions the overall system bandwidth into multiple orthogonal subcarriers. These subcarriers may also be called tones, bins, etc. With OFDM, each subcarrier may be independently modulated with data. An SC-FDMA system may utilize interleaved FDMA (IFDMA) to transmit on subcarriers that are distributed across the system bandwidth, localized FDMA (LFDMA) to transmit on a block of adjacent subcarriers, or enhanced FDMA (EFDMA) to transmit on multiple blocks of adjacent subcarriers. In general, modulation symbols are sent in the frequency domain with OFDM and in the time domain with SC-FDMA. A CDMA system utilizes spread-spectrum techniques and a coding scheme in which each transmitter (i.e., user) is assigned a code in order to allow multiple users to be multiplexed over the same physical channel.

One specific example of a communication system based on orthogonal multiplexing is a WiMAX system.
WiMAX, which stands for Worldwide Interoperability for Microwave Access, is a standards-based broadband wireless technology that provides high-throughput broadband connectivity over long distances. There are two main applications of WiMAX today: fixed WiMAX and mobile WiMAX. Fixed WiMAX applications are point-to-multipoint, enabling broadband access to homes and businesses, for example. Mobile WiMAX offers the full mobility of cellular networks at broadband speeds.

IEEE 802.16x is an emerging standards body defining an air interface for fixed and mobile broadband wireless access (BWA) systems. IEEE 802.16x approved "IEEE P802.16d/D5-2004" in May 2004 for fixed BWA systems and published "IEEE P802.16e/D12 Oct. 2005" in October 2005 for mobile BWA systems. The latest revision of IEEE 802.16, "IEEE P802.16Rev2/D8 December 2008", a draft standard, now consolidates materials and corrections from IEEE 802.16e. The standards define four different physical layers (PHYs) and one medium access control (MAC) layer. The OFDM and OFDMA physical layers, of the four physical layers, are the most popular in the fixed and mobile BWA areas, respectively.

The teachings herein may be incorporated into (e.g., implemented within or performed by) a variety of wired or wireless apparatuses (e.g., nodes). In some aspects, a node implemented in accordance with the teachings herein may comprise an access point or an access terminal.

An access point ("AP") may comprise, be implemented as, or be known as a Node B, a radio network controller ("RNC"), an eNode B, a base station controller ("BSC"), a base transceiver station ("BTS"), a base station ("BS"), a transceiver function ("TF"), a wireless router, a wireless transceiver, a basic service set ("BSS"), an extended service set ("ESS"), a wireless base station ("RBS"), or some other terminology.

An access terminal ("AT") may comprise, be implemented as, or be known as an access terminal, a subscriber station, a subscriber unit, a mobile station, a remote station, a remote terminal, a user terminal, a user agent, a user device, user equipment, or some other terminology. In some implementations, an access terminal may comprise a cellular telephone, a cordless telephone, a Session Initiation Protocol ("SIP") phone, a wireless local loop ("WLL") station, a personal digital assistant ("PDA"), a handheld device having wireless connection capability, or some other suitable processing device connected to a wireless modem. Accordingly, one or more aspects taught herein may be incorporated into a phone (e.g., a cellular phone or smartphone), a computer (e.g., a laptop), a portable communication device, a portable computing device (e.g., a personal digital assistant), an entertainment device (e.g., a music or video device, or a satellite radio), a global positioning system device, or any other suitable device that is configured to communicate via a wireless or wired medium. In some aspects, the node is a wireless node. Such a wireless node may provide, for example, connectivity for or to a network (e.g., a wide area network such as the Internet or a cellular network) via a wired or wireless communication link.

FIG. 1 shows an example of a wireless communication system 100 in which aspects of the present disclosure may be employed. The wireless communication system 100 may be a broadband wireless communication system. The wireless communication system 100 may provide communication for a number of cells 102, each of which is serviced by a base station 104.
A base station 104 may be a fixed station that communicates with user terminals 106. The base station 104 may alternatively be referred to as an access point, a Node B, or some other terminology.

FIG. 1 depicts various user terminals 106 dispersed throughout the system 100. The user terminals 106 may be fixed (i.e., stationary) or mobile. The user terminals 106 may alternatively be referred to as remote stations, access terminals, terminals, subscriber units, mobile stations, stations, user equipment, etc. The user terminals 106 may be wireless devices, such as cellular phones, personal digital assistants (PDAs), handheld devices, wireless modems, laptop computers, personal computers, etc.

A variety of algorithms and methods may be used for transmissions in the wireless communication system 100 between the base station 104 and the user terminals 106. For example, signals may be sent and received between the base station 104 and the user terminals 106 in accordance with OFDM/OFDMA techniques, in which case the wireless communication system 100 may be referred to as an OFDM/OFDMA system. Alternatively, signals may be sent and received between the base station 104 and the user terminals 106 in accordance with CDMA techniques, in which case the wireless communication system 100 may be referred to as a CDMA system.

A communication link that facilitates transmission from the base station 104 to the user terminals 106 may be referred to as a downlink (DL) 108, and a communication link that facilitates transmission from the user terminals 106 to the base station 104 may be referred to as an uplink (UL) 110. Alternatively, a downlink 108 may be referred to as a forward link or a forward channel, and an uplink 110 may be referred to as a reverse link or a reverse channel.

A cell 102 may be divided into multiple sectors 112. A sector 112 is a physical coverage area within a cell 102. Base stations 104 within the wireless communication system 100 may utilize antennas that concentrate the flow of power within a particular sector 112 of the cell 102. Such antennas may be referred to as directional antennas.

FIG. 2 illustrates various components that may be utilized in a wireless device 202 that may be employed within the wireless communication system 100. The wireless device 202 is an example of a device that may be configured to implement the various methods described herein. The wireless device 202 may be a base station 104 or a user terminal 106.

The wireless device 202 may include a processor 204 which controls operation of the wireless device 202. The processor 204 may also be referred to as a central processing unit (CPU). Memory 206, which may include both read-only memory (ROM) and random access memory (RAM), provides instructions and data to the processor 204. A portion of the memory 206 may also include non-volatile random access memory (NVRAM). The processor 204 typically performs logical and arithmetic operations based on program instructions stored within the memory 206. The instructions in the memory 206 may be executable to implement the methods described herein.

The wireless device 202 may also include a housing 208 that may include a transmitter 210 and a receiver 212 to allow transmission and reception of data between the wireless device 202 and a remote location. The transmitter 210 and receiver 212 may be combined into a transceiver 214. An antenna 216 may be attached to the housing 208 and electrically coupled to the transceiver 214.
The wireless device 202 may also include multiple transmitters, multiple receivers, multiple transceivers, and/or multiple antennas (not shown).

The wireless device 202 may also include a signal detector 218 that may be used in an effort to detect and quantify the level of signals received by the transceiver 214. The signal detector 218 may detect such signals as total energy, energy per subcarrier per symbol, power spectral density, and other signals. The wireless device 202 may also include a digital signal processor (DSP) 220 for use in processing signals.

The various components of the wireless device 202 may be coupled together by a bus system 222, which may include a power bus, a control signal bus, and a status signal bus in addition to a data bus.

FIG. 3 illustrates an example of a transmitter 302 that may be used within a wireless communication system 100 that utilizes OFDM/OFDMA. Portions of the transmitter 302 may be implemented in the transmitter 210 of a wireless device 202. The transmitter 302 may be implemented in a base station 104 for transmitting data 306 to a user terminal 106 on a downlink 108. The transmitter 302 may also be implemented in a user terminal 106 for transmitting data 306 to a base station 104 on an uplink 110.

Data 306 to be transmitted is shown being provided as input to a serial-to-parallel (S/P) converter 308. The S/P converter 308 splits the transmission data into N parallel data streams 310.

The N parallel data streams 310 may then be provided as input to a mapper 312. The mapper 312 may map the N parallel data streams 310 onto N constellation points. The mapping may be done using some modulation constellation, such as binary phase-shift keying (BPSK), quadrature phase-shift keying (QPSK), 8 phase-shift keying (8PSK), quadrature amplitude modulation (QAM), etc. Thus, the mapper 312 may output N parallel symbol streams 316, each symbol stream 316 corresponding to one of the N orthogonal subcarriers of an inverse fast Fourier transform (IFFT) 320. These N parallel symbol streams 316 are represented in the frequency domain and may be converted into N parallel time-domain sample streams 318 by the IFFT component 320.

A brief note about terminology: N parallel modulations in the frequency domain are equal to N modulation symbols in the frequency domain, which are equal to N mapping operations plus an N-point IFFT in the frequency domain, which is equal to one (useful) OFDM symbol in the time domain, which is equal to N samples in the time domain. One OFDM symbol in the time domain, NS, is equal to NCP (the number of cyclic prefix (CP) samples per OFDM symbol) plus N (the number of useful samples per OFDM symbol).

The N parallel time-domain sample streams 318 may be converted into an OFDM/OFDMA symbol stream 322 by a parallel-to-serial (P/S) converter 324. A cyclic prefix insertion component 326 may insert a CP between successive OFDM/OFDMA symbols in the OFDM/OFDMA symbol stream 322. The output of the CP insertion component 326 may then be upconverted to a desired transmit frequency band by a radio frequency (RF) front end 328. An antenna 330 may then transmit the resulting signal 332.
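To make the map/IFFT/cyclic-prefix pipeline concrete, here is a minimal Python/NumPy sketch of one OFDM symbol's transmit path. The QPSK constellation, N = 64, and NCP = 16 are illustrative assumptions for exposition, not parameters drawn from the standards discussed above.

```python
import numpy as np

N, N_CP = 64, 16                      # assumed: 64 useful samples, 16-sample cyclic prefix

def qpsk_map(bits):
    """Map pairs of bits onto unit-energy QPSK constellation points."""
    b = bits.reshape(-1, 2)
    return ((1 - 2 * b[:, 0]) + 1j * (1 - 2 * b[:, 1])) / np.sqrt(2)

def ofdm_symbol(bits):
    """Build one time-domain OFDM symbol: S/P + mapping -> IFFT -> CP insertion."""
    symbols = qpsk_map(bits)          # N frequency-domain modulation symbols
    assert symbols.size == N
    time = np.fft.ifft(symbols)       # N-point IFFT -> N useful time-domain samples
    return np.concatenate([time[-N_CP:], time])  # prepend CP: NS = NCP + N samples

tx = ofdm_symbol(np.random.randint(0, 2, 2 * N))
print(len(tx))                        # 80 samples, i.e., NS = 16 + 64
```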
FIG. 3 also illustrates an example of a receiver 304 that may be used within a wireless device 202 that utilizes OFDM/OFDMA. Portions of the receiver 304 may be implemented in the receiver 212 of a wireless device 202. The receiver 304 may be implemented in a user terminal 106 for receiving data 306 from a base station 104 on a downlink 108. The receiver 304 may also be implemented in a base station 104 for receiving data 306 from a user terminal 106 on an uplink 110.

The transmitted signal 332 is shown traveling over a wireless channel 334. When a signal 332' is received by an antenna 330', the received signal 332' may be downconverted to a baseband signal by an RF front end 328'. A CP removal component 326' may then remove the CP that was inserted between OFDM/OFDMA symbols by the CP insertion component 326.

The output of the CP removal component 326' may be provided to an S/P converter 324'. The S/P converter 324' may divide the OFDM/OFDMA symbol stream 322' into N parallel time-domain symbol streams 318', each of which corresponds to one of the N orthogonal subcarriers. A fast Fourier transform (FFT) component 320' may convert the N parallel time-domain symbol streams 318' into the frequency domain and output N parallel frequency-domain symbol streams 316'.

A demapper 312' may perform the inverse of the symbol mapping operation that was performed by the mapper 312, thereby outputting N parallel data streams 310'. A P/S converter 308' may combine the N parallel data streams 310' into a single data stream 306'. Ideally, this data stream 306' corresponds to the data 306 that was provided as input to the transmitter 302. Note that elements 308', 310', 312', 316', 320', 318', and 324' may all be found in a baseband processor 340'.

FIG. 4 illustrates an example of a body area network (BAN) 400 that may correspond to the wireless system 100 shown in FIG. 1. BANs represent a promising concept for consumer applications. Certain aspects of the present disclosure describe how a BAN may provide a platform for controlling various devices through a distributed user interface formed by one or more wearable devices.

The BAN may consist of various wearable devices worn on a body 400. For example, the BAN may include a ring 404 and a watch 406 that may communicate wirelessly to form a distributed user interface (UI) for controlling a device 410, such as a mobile device, a computing device, or a consumer electronics (CE) device (e.g., a media player).

As described in more detail below, certain gestures performed using the ring 404 (such as rotating, sliding, or tilting it) may be translated into commands for controlling the device 410. As a simple example, rotation of the ring 404 may be used to control the volume of audio signals (e.g., for songs or voice calls) output from the device 410 to a headset 402 or a speaker (not shown).

In some cases, feedback may be provided confirming commands effected using gestures made with the ring 404. For example, haptic or audible feedback may be provided as a vibration (or mild electrical stimulation) via the ring 404, the watch 406, or the headset 402. Other wearable (or otherwise portable) devices that may be operated to control the device include necklaces, pendants, cufflinks, buttons, bracelets, and the like.

Such devices may also be supplemented by dedicated UI devices, such as active surface areas (ASAs) that are sensitive to and respond to touches or gestures. Note that an ASA may be incorporated as part of a display (e.g., as a touch screen on a watch, mobile phone, or PDA). These objects can be manipulated in several different ways, resulting in different types of output for each action.
Examples of such actions are shown in the following table.

By linking and coupling one or more of the objects described above with a mobile, computing, or CE device, these actions and the corresponding outputs can be translated (or mapped) into a set of control commands available to the user for managing the interaction with these devices and with the functions and applications they provide.

FIG. 5 shows an exemplary block diagram of a system in which a certain set of features may be controlled by gestures (e.g., spatial gesture movements) of one or more distributed UI devices, in accordance with certain aspects of the present disclosure.

Dedicated input devices I1 510 and I2 520 may be distinguished from input devices that may also present information, such as ID1 530. Finally, the system may also include a pure information display device D1 540 and the actual controller/controlled device CD1 550. The terms "controller" and "controlled device" are used interchangeably herein. Of course, various embodiments may have any combination of such devices, with subsets of them employed.

In the particular example shown, I1 510 may be capable of converting user gestures into input interpretable by the controller CD1 550 (via wireless connection C1). I2 520 may also be capable of such conversion, and I2 520 may be able to provide such input to the controller CD1 550 (via wireless connection C3), as well as directly to the display device D1 540 (via wireless connection C5).

CD1 550 may use such input as controls for its operation, while D1 540 may provide more detailed feedback on user actions (e.g., based on input from CD1 550 via wireless connection C4). One example may be scrolling through a list (displayed at D1 540) through repetitive gestures (e.g., tapping or rotation) with I2 520. A further example of an input device comes in the form of the integrated input/display device ID1 530, which offers the option of direct user feedback (both via the bidirectional wireless connection C2).

Thus, the techniques presented herein may involve operations performed by various elements operating together. Such operations may include: selection by the controlled device (e.g., mobile/computing/CE device) of the functions/applications to control and of the appropriate control channels for one or more distributed UI devices to control those functions/applications; controlled-device-based selection of power modes and sleep cycle management; definition of a gesture and motion command set for a specific function or application; detection of motions and/or gestures corresponding to the command set; and provision of haptic, audible, or visual feedback to the user.

FIG. 6 is an exemplary distributed user interface (UI) control command flow diagram, in accordance with certain aspects of the present disclosure. In this example, an application identifies, at step 602, the particular input/output devices to associate based on an activity. Note that this step may be user initiated or initiated by other events, such as machine-to-machine communication, timed events, etc. Given the control requirements of the particular activity, an applicable subset of the available input and/or output devices may be selected, and at step 604, appropriate control channels are established.
This operation may also include setup of security-related credentials for control channel authentication and confidentiality.

Following this, at step 606, the available gesture translations and their meanings may be negotiated between the input devices and the controller, and the available output devices are configured for their corresponding roles. Further, at step 614, parameters for power modes and sleep cycles may be configured based on the requirements of the activity. After such configuration is performed, the input devices may convert user gestures into actual control commands to manage the behavior of the controller at step 608, optionally triggering user feedback on a display or other feedback device at step 612. Finally, the set of control channels may be reconfigured or deactivated at step 610, based directly on user gestures or on other higher-level application control.

Note that gesture detection, translation, and interpretation may be distributed in different ways between the input devices and the controller, based on the capabilities and timing requirements of each device. In other words, rather than actually performing translation and/or interpretation at the input device, raw sensor data may be sent to another device (and/or the controller) for translation and/or interpretation.

Various gestures can be used as human triggers corresponding to various commands. An almost unlimited range of possible gestures may be used to trigger a command. However, given common accessory form factors, three major movements can be exploited: rotation, tilt, and longitudinal shift. Rotation is a very intuitive movement and can be applied to many items having some aspect of circular symmetry, such as a ring rotating around a finger, a twisting earring, or a rotating watch bezel. Tilt likewise applies to several items, such as a ring that is tilted against a finger or a bracelet that is tilted against an arm. Finally, a longitudinal shift may be applied, for example, by moving a ring along a finger, moving an earring up and down, or moving a watch along an arm. Rotational gestures are well suited for selective input (e.g., scrolling through a list of items or adjusting the volume of music playback), while tilts and shifts are better suited for confirming an input, such as selecting a specific item or skipping to the next music track.

As described in more detail below, in some cases, inputs from multiple devices (or the context in which the inputs were received) may be considered before a gesture is deemed detected. For example, when a user rotates the wrist, the ring may produce raw sensor data indicative of rotation as if the ring itself had been rotated around the finger. Thus, if the watch is also being rotated, sensor data from the watch may be taken into account and the rotation of the ring may not be interpreted as corresponding to a command. Alternatively, sensor data indicating movement relative to the body may also be considered, and absent relative movement between the ring and the body, a rotation may not be interpreted as corresponding to a command.

Similarly, various contextual information may also be used. If the device is not powered on or is not playing, various gestures may not be interpreted as commands but may simply be ignored.
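As a rough illustration of the multi-sensor disambiguation just described, the following Python sketch gates a ring's rotation reading on watch and body-relative readings before declaring a gesture. The sensor fields, thresholds, and gating rules are invented for exposition and are not part of the disclosed design.

```python
from dataclasses import dataclass

ROTATION_THRESHOLD = 30.0  # degrees; assumed detection threshold

@dataclass
class SensorReading:
    ring_rotation: float    # rotation reported by the ring, in degrees
    watch_rotation: float   # rotation reported by the watch (wrist movement)
    ring_vs_body: float     # ring movement relative to the body

def detect_ring_rotation(r: SensorReading, device_powered_on: bool) -> bool:
    """Treat ring rotation as a gesture only when the context supports it."""
    if not device_powered_on:
        return False                     # context: ignore gestures when the device is off
    if abs(r.ring_rotation) < ROTATION_THRESHOLD:
        return False                     # movement too small to be intentional
    if abs(r.watch_rotation) > 0.5 * abs(r.ring_rotation):
        return False                     # whole wrist rotated; not a ring gesture
    if abs(r.ring_vs_body) < ROTATION_THRESHOLD / 2:
        return False                     # no movement relative to the body
    return True

# A wrist turn (watch rotating along with the ring) is rejected:
print(detect_ring_rotation(SensorReading(45.0, 40.0, 2.0), True))   # False
# A deliberate ring twist is accepted:
print(detect_ring_rotation(SensorReading(45.0, 3.0, 40.0), True))   # True
```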
An implicit example may be an auxiliary display that displays the list being scrolled. The highlighted item change may then implicitly indicate user action. An explicit example may be an input use case where the result of the action is not immediately apparent. As an example, if the input device is used to confirm that a telephone number has been dialed, then the feedback of the subsequent ringing tone may be for a good user experience (as there may be a significant delay before the ringing tone) It is not enough. An example of immediate feedback is display change or feedback by the input device itself.With displayable input devices, achieving user feedback may be relatively easy. However, in non-display devices, various methods may also be identified. For example, the device may employ mild electrical stimulation (e.g., surface charge slightly above general human perception levels) to provide haptic feedback, or may employ direct motion feedback such as vibration. It should be noted that, to save power, the feedback may consist of only a single actuation.The power efficiency of the device can be an important consideration given the many size constraints of the applicable form factor (eg, given battery space limitations). An exemplary state machine of power management function 700 is shown in FIG.To conserve power, the device may generally be maintained in the off state 702 when not in use. The device then enters a wait state 704, configured as part of one or more control channels. In this state, the device's mechanism for gesture detection and interpretation, mechanisms for wake and sleep triggers, and mechanisms for connectivity to other devices may be configured. Based on the activation trigger, the device may enter an on state 706 where it attempts to actively detect the gesture.When a gesture is detected, the device may enter an active state 708 where any interpretation algorithm may be applied. If correctly interpreted, the corresponding information is sent. The transition to the on state 706 may be gesture dependent. For continuous selective gestures, such as rotation, the transition may be based on the minimum time of no movement. For selective gestures, this transition may be applied immediately after the user action.The transition from the on state 706 to the wait state 704 may be based on some configured parameters such as an inactivity timer or a dedicated user action. The transition to the off state 702 is based on the deselection of the device from the current control channel set. This may be based on higher level application actions or instant gestures to turn off the device.As described above, one exemplary embodiment of a distributed UI involves a ring for user input and a watch display utilized as a remote UI for a cellular telephone or other device. The following table details the various possible roles of the individual components in such an embodiment.In another exemplary embodiment, a ring and a watch display are employed as remote UIs for mobile music players. The following table details the possible roles of the individual components in such an embodiment.FIG. 8 illustrates an exemplary process flow 800 for detecting and interpreting gestures based on sensor data from a device. As mentioned above, various operations may be distributed among different devices. At step 802, the sensor is activated and at step 804, raw sensor data is acquired. Sensor data is processed at step 806 and interpreted at step 808. 
As described above, one exemplary embodiment of a distributed UI involves a ring used for user input and a watch display utilized as a remote UI for a cellular telephone or other device. The following table details various possible roles of the individual components in such an embodiment. In another exemplary embodiment, a ring and a watch display are employed as remote UIs for a mobile music player. The following table details possible roles of the individual components in such an embodiment.

FIG. 8 illustrates an exemplary process flow 800 for detecting and interpreting gestures based on sensor data from a device. As noted above, the various operations may be distributed among different devices. At step 802, a sensor is activated, and at step 804, raw sensor data is acquired. The sensor data is processed at step 806 and interpreted at step 808. The interpreted data may, for example, be combined with other sensor data at step 810 (e.g., to distinguish between rotation of the wrist and rotation of the actual ring on the finger). Finally, at step 812, based on the combined data, a determination may be made as to whether the gesture corresponds to a control command.

FIG. 9 illustrates example operations 900 that may be performed by a wearable device, in accordance with certain aspects of the present disclosure. As shown, the operations may include, at step 902, determining a set of agreed control gestures between a first device and a second device, wherein the control gestures can be performed using the first device and can be supported by the second device. For example, the second device, an audio player, may advertise that it supports, from wearable devices, volume control by rotation, song selection by tilting or tapping, and the like; in response, an indication of one or more control gestures that may be performed utilizing the first device may be transmitted. At step 904, the first device then participates, via wireless transmission, in a control sequence for controlling the second device in response to at least one of the control gestures being performed using the first device.

According to certain aspects, the wearable device may provide user feedback via one or more of haptic feedback (actuation or vibration), visual feedback, or audible feedback when a gesture is detected. According to various aspects, the wearable device may transmit raw sensor data to the controlled device, which may perform the data processing needed to carry out gesture detection. Upon successful detection, the controlled device can send a confirmation to the wearable device, enabling the feedback mechanism.

According to certain aspects, the wearable device may process the raw sensor data into an intermediate format to be sent to the controlled device, with the controlled device performing the gesture detection process. Upon successful detection, the controlled device can send a confirmation to the wearable device, enabling the feedback mechanism.

According to certain aspects, the wearable device may process the sensor data and perform gesture detection itself. The wearable device may then notify the controlled device of the detected gesture and may provide user feedback.
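A minimal sketch of the gesture-set agreement in operations 900, assuming a simple advertise-and-intersect negotiation; the gesture names, command names, and message shapes are illustrative only.

```python
# Gestures the wearable (first device) can perform, e.g., a ring.
WEARABLE_GESTURES = {"rotate", "tilt", "tap", "slide"}

# Gestures the controlled device (second device) advertises support for,
# e.g., an audio player mapping gestures to commands (names are assumptions).
PLAYER_SUPPORTED = {"rotate": "volume", "tap": "next_track", "shake": "shuffle"}

def agree_gesture_set(executable, supportable):
    """Agreed set = gestures both executable by the wearable and supported
    by the controlled device (a subset of each side's capabilities)."""
    return {g: cmd for g, cmd in supportable.items() if g in executable}

agreed = agree_gesture_set(WEARABLE_GESTURES, PLAYER_SUPPORTED)
print(agreed)  # {'rotate': 'volume', 'tap': 'next_track'}

def on_gesture(gesture, agreed_set):
    """Engage the control sequence only for gestures in the agreed set."""
    if gesture in agreed_set:
        return f"send command: {agreed_set[gesture]}"  # via wireless transmission
    return "ignored"

print(on_gesture("rotate", agreed))  # send command: volume
print(on_gesture("shake", agreed))   # ignored (the wearable cannot perform it)
```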
FIG. 10 illustrates example operations for communicating control information by a wearable device, in accordance with certain aspects of the present disclosure. As shown, the operations include, at step 1002, determining a first movement according to a first degree of freedom and a second movement according to a second degree of freedom as being related to a gesture; at step 1004, generating a first set of gestures possibly performed based on the first movement; at step 1006, generating a second set of gestures possibly performed based on the second movement; at step 1008, inferring from the first and second sets the command intended by the movements; and at step 1010, transmitting information based on the inference. As noted above, various forms of feedback may be provided by the controlled device.

FIG. 11 illustrates example operations of a distributed user interface for controlling a controlled device with a wearable device on a portion of a body, in accordance with certain aspects of the present disclosure. As shown, the operations include, at step 1102, detecting relative movement between the wearable device and the portion of the body and, at step 1104, generating an indication of the relative movement for use in controlling the controlled device. As noted above, the wearable device may transmit raw data indicative of the relative movement or may transmit another indication of the movement.

According to certain aspects, the wearable device may include a sensor capable of detecting contact, a sensor capable of detecting lateral movement of two surfaces relative to each other, a sensor capable of detecting spatial movement such as orientation or rotation (e.g., an accelerometer), a sensor capable of detecting spatial movement such as tilt, a sensor capable of detecting lateral or longitudinal structural force (flexing), or any combination of one or more thereof.

According to certain aspects, the wearable device may combine the sensor data from any of its sensors and perform the gesture detection process.

According to certain aspects, the controlled device may, in order to perform the gesture detection process, combine sensor data received from a plurality of sensors in a first wearable device or data received from sensors embedded in a plurality of wearable devices.

According to certain aspects, a first wearable device and a second wearable device may communicate directly to exchange data from a plurality of sensors embedded in the plurality of wearable devices in order to perform the gesture detection process.

FIG. 12 illustrates exemplary operations that may be performed to control a controlled device using a wearable device on a portion of a body, in accordance with certain aspects of the present disclosure. As shown, the operations include, at step 1202, receiving a message indicating relative movement between the wearable device and the portion of the body and, at step 1204, generating an action to control the controlled device based on a command related to the relative movement.

As noted above, the wearable device may interpret raw data as gestures, interpret gestures as commands (based on a translation), and/or send only raw data. Furthermore, as also noted above, gestures may be interpreted based on other sensor data or context.

FIG. 13 illustrates example operations 1300 for controlling a device, in accordance with certain aspects of the present disclosure. As shown, the operations may include, at step 1302, dynamically selecting a mapping set that defines how a gesture performed by movement of at least one wearable item is interpreted as one or more commands; at step 1304, determining whether the gesture has a mapping in the mapping set; and at step 1306, converting the gesture into a command for the device based on the determination. As noted above, the various operations may be distributed among multiple devices.

FIG. 14 illustrates example operations 1400 for interpreting movement of a wearable item as a gesture associated with a command for controlling a controlled device, in accordance with certain aspects of the present disclosure. As shown, the operations include, at step 1402, sensing movement of the wearable item comprising at least one of sliding, rotating, tilting, flexing, or tapping; at step 1404, determining a first context in which the movement is sensed; and at step 1406, interpreting the movement as indicating a gesture related to the command based on the first context. As noted above, the various operations may be distributed among multiple devices.
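The following Python sketch illustrates the dynamic mapping selection of operations 1300 (and the context dependence of operations 1400): an application context picks which mapping set is active, and a gesture is converted to a command only if it has a mapping in that set. The contexts, gestures, and command names are invented for illustration.

```python
# Assumed per-application mapping sets: gesture -> command.
MAPPING_SETS = {
    "music_player": {"rotate": "adjust_volume", "tap": "next_track"},
    "phone_call":   {"rotate": "adjust_volume", "tilt": "answer_call"},
}

def select_mapping_set(app_context):
    """Step 1302: dynamically select the mapping set for the current context."""
    return MAPPING_SETS.get(app_context, {})

def gesture_to_command(gesture, app_context):
    """Steps 1304-1306: check for a mapping, then convert to a command."""
    mapping = select_mapping_set(app_context)
    if gesture not in mapping:   # no mapping: the gesture is ignored
        return None
    return mapping[gesture]

print(gesture_to_command("tilt", "phone_call"))    # answer_call
print(gesture_to_command("tilt", "music_player"))  # None: no mapping in this set
```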
The context may correspond to various states and/or sensor data. According to certain aspects, the first context in which the movement is sensed is defined by an operational state (e.g., on/off) of the wearable device. According to certain aspects, the first context in which the movement is sensed is defined by an application context of the wearable device, the application context being selected by the wearable device or the controlled device based on a direct user action or an implicit condition. According to certain aspects, the first context in which the movement is sensed is defined by, or selected from, a set of applicable contexts previously configured by movement of another wearable item. According to certain aspects, the first context in which the movement is sensed is used to interpret data sensed by a plurality of sensors within a first wearable device or embedded in a plurality of wearable devices (e.g., to distinguish between a rotating ring and a rotating wrist). According to certain aspects, the first context in which the movement is sensed is used by the controlled device to interpret data received from a plurality of sensors embedded in one or more wearable devices. According to certain aspects, the first context in which the movement is sensed is used by the wearable device to determine whether any steps of the process for performing gesture detection need to be initiated. According to certain aspects, the first context in which the movement is sensed is used by the controlled device to determine whether any steps of the process for performing gesture detection need to be initiated.

According to certain aspects, data received from a plurality of sensors may be prioritized based on the first context in which the movement is sensed, to aid in combining and interpreting the data.

The various operations of the methods described above may be performed by any suitable means capable of performing the corresponding functions. The means may include various hardware and/or software component(s) and/or module(s), including, but not limited to, a circuit, an application specific integrated circuit (ASIC), or a processor. Generally, where there are operations illustrated in the figures, those operations may have corresponding counterpart means-plus-function components with similar numbering. For example, operations 900, 1000, 1100, 1200, 1300, and 1400 illustrated in FIGS. 9-14 may correspond to circuit blocks 900A, 1000A, 1100A, 1200A, 1300A, and 1400A illustrated in FIGS. 9A-14A.

As used herein, the term "determining" encompasses a wide variety of actions. For example, "determining" may include calculating, computing, processing, deriving, investigating, looking up (e.g., looking up in a table, a database, or another data structure), ascertaining, and the like. Also, "determining" may include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory), and the like. Also, "determining" may include resolving, selecting, choosing, establishing, and the like.

The various operations of the methods described above may be performed by any suitable means capable of performing the operations, such as various hardware and/or software component(s), circuit(s), and/or module(s).
In general, any operation illustrated in the figures may be performed by corresponding functional means capable of performing that operation.

The various illustrative logic blocks, modules, and circuits described in connection with the present disclosure may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device (PLD), discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any commercially available processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, eg, a combination of a DSP and a microprocessor, a plurality of microprocessors, a plurality of DSP cores, one or more microprocessors in conjunction with one or more DSP cores, or any other such configuration.

The steps of a method or algorithm described in connection with the present disclosure may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in any form of storage medium that is known in the art. Some examples of storage media that may be used include random access memory (RAM), read only memory (ROM), flash memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, and so forth. A software module may comprise a single instruction, or many instructions, and may be distributed over several different code segments, among different programs, and across multiple storage media. A storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor.

The methods disclosed herein comprise one or more steps or actions for achieving the described method. The method steps and/or actions may be interchanged with one another without departing from the scope of the claims. In other words, unless a specific order of steps or actions is specified, the order and/or use of particular steps and/or actions may be modified without departing from the scope of the claims.

The functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on a computer readable medium as one or more instructions. A storage medium may be any available medium that can be accessed by a computer. By way of example, and not limitation, such computer readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray® disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers.

Thus, certain aspects may comprise a computer program product for performing the operations presented herein.
For example, such a computer program product may comprise a computer readable medium having instructions stored (and/or encoded) thereon, the instructions being executable by one or more processors to perform the operations described herein. In certain aspects, the computer program product may include packaging material.

Software or instructions may also be transmitted over a transmission medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of transmission medium.

Further, it should be appreciated that modules and/or other appropriate means for performing the methods and techniques described herein can be downloaded and/or otherwise obtained by an access terminal and/or access point, as applicable. For example, such a device can be coupled to a server to facilitate the transfer of means for performing the methods described herein. Alternatively, various methods described herein can be provided via storage means (eg, RAM, ROM, or a physical storage medium such as a compact disc (CD) or floppy disk), such that an access terminal and/or access point can obtain the various methods upon coupling or providing the storage means to the device. Moreover, any other suitable technique for providing the methods and techniques described herein to a device can be utilized.

It is to be understood that the claims are not limited to the precise configuration and components illustrated above. Various modifications, changes, and variations may be made in the arrangement, operation, and details of the methods and apparatus described above without departing from the scope of the claims.

A wireless device in the present disclosure may include various components that perform functions based on signals transmitted by or received at the wireless device. A wireless device may also refer to a wearable wireless device. In some aspects the wearable wireless device may comprise a wireless headset or a wireless watch. For example, a wireless headset may include a transducer adapted to provide audio output based on data received via a receiver. A wireless watch may include a user interface adapted to provide an indication based on data received via a receiver. A wireless sensing device may include a sensor adapted to provide data to be transmitted by a transmitter.

A wireless device may communicate via one or more wireless communication links that are based on or otherwise support any suitable wireless communication technology. For example, in some aspects a wireless device may associate with a network. In some aspects the network may comprise a personal area network (eg, supporting a wireless coverage area on the order of 30 meters) or a body area network (eg, supporting a wireless coverage area on the order of 10 meters) implemented using ultra-wideband technology or some other suitable technology. In some aspects the network may comprise a local area network or a wide area network. A wireless device may support or otherwise use one or more of a variety of wireless communication technologies, protocols, or standards such as, for example, CDMA, TDMA, OFDM, OFDMA, WiMAX, and Wi-Fi.
Similarly, a wireless device may support or otherwise use one or more of a variety of corresponding modulation or multiplexing schemes. A wireless device may thus include appropriate components (eg, air interfaces) to establish and communicate via one or more wireless communication links using the above or other wireless communication technologies. For example, a device may comprise a wireless transceiver with associated transmitter and receiver components (eg, transmitter 210 or 302 and receiver 212 or 304) that may include various components (eg, signal generators and signal processors) that facilitate communication over a wireless medium.

The teachings herein may be incorporated into (eg, implemented within or performed by) a variety of apparatuses (eg, devices). For example, one or more aspects taught herein may be incorporated into a phone (eg, a cellular phone), a personal digital assistant ("PDA") or so-called smart phone, an entertainment device (eg, a portable media device, including music and video players), a headset (eg, headphones, an earpiece, etc.), a microphone, a medical sensing device (eg, a biometric sensor, a heart rate monitor, a pedometer, an ECG device, a smart bandage, etc.), a user input/output device (eg, a light switch, a keyboard, a mouse, etc.), an environment sensing device (eg, a tire pressure monitor), a monitoring device that may receive data from the medical or environment sensing device (eg, a desktop, a mobile computer, etc.), a point-of-care device, a hearing aid, a set-top box, or any other suitable device. The monitoring device may also have access to data from different sensing devices via connection with a network.

These devices may have different power and data requirements. In some aspects, the teachings herein may be adapted for use in low power applications (eg, through the use of an impulse-based signaling scheme and low duty cycle modes), and may support a variety of data rates, including relatively high data rates.

In some aspects a wireless device may comprise an access device (eg, an access point) for a communication system. Such an access device may provide, for example, connectivity to another network (eg, a wide area network such as the Internet or a cellular network) via a wired or wireless communication link. Accordingly, the access device may enable another device (eg, a wireless station) to access the other network or some other functionality. In addition, it should be appreciated that one or both of the devices may be portable or, in some cases, relatively non-portable. Also, it should be appreciated that a wireless device may also be capable of transmitting and/or receiving information in a non-wireless manner (eg, via a wired connection) via an appropriate communication interface.

The previous description is provided to enable any person skilled in the art to fully understand the full scope of the disclosure. Modifications to the various configurations disclosed herein will be readily apparent to those skilled in the art. Thus, the claims are not intended to be limited to the various aspects of the disclosure described herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean "one and only one" unless specifically so stated, but rather "one or more." The term "some" refers to one or more unless specifically stated otherwise. A claim that recites at least one of a combination of elements (eg, "at least one of A, B, or C") refers to one or more of the recited elements (eg, A, or B, or C, or any combination thereof).
All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those skilled in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public, regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed under the provisions of 35 U.S.C. § 112, sixth paragraph, unless the element is expressly recited using the phrase "means for" or, in the case of a method claim, the element is recited using the phrase "step for". |
Conductive channel technology is disclosed. In one example, a memory component can include a source line, a conductive channel having first and second conductive layers electrically coupled to the source line, and memory cells adjacent to the conductive channel. In one aspect, channel conductivity and reliability are improved over a single-layer conductive channel formation scheme by preventing unwanted oxide formation, increasing the interface contact area, and modulating material grain size and boundaries via a multiple thin channel integration scheme. Associated systems and methods are also disclosed. |
1. A memory component comprising:
a source line;
a conductive channel having a first conductive layer and a second conductive layer electrically coupled to the source line; and
a memory cell adjacent to the conductive channel.

2. The memory component of claim 1, wherein the first conductive layer is spaced apart from the source line.

3. The memory component of claim 2, wherein a portion of the second conductive layer is disposed between the source line and the first conductive layer and interfaces with the source line and the first conductive layer.

4. The memory component of claim 3, wherein the interface of the second conductive layer and the source line is substantially free of oxide material.

5. The memory component of claim 3, wherein the interface of the second conductive layer and the source line is planar.

6. The memory component of claim 5, wherein the interface of the second conductive layer and the source line has a diameter greater than or equal to 25 nm.

7. The memory component of claim 2, wherein a portion of the second conductive layer surrounds a portion of the first conductive layer.

8. The memory component of claim 1, wherein the first conductive layer and the second conductive layer each comprise a doped polysilicon material.

9. The memory component of claim 8, wherein the doped polysilicon materials of the first conductive layer and the second conductive layer are different.

10. The memory component of claim 9, wherein the first conductive layer is one of P-type doped or N-type doped, and the second conductive layer is the other of P-type doped or N-type doped.

11. The memory component of claim 1, further comprising an insulating material disposed within the conductive channel.

12. The memory component of claim 1, further comprising a dielectric layer adjacent to the conductive channel.

13. The memory component of claim 12, wherein the dielectric layer forms a tunnel dielectric.

14. The memory component of claim 1, wherein each memory cell comprises:
a tunnel dielectric adjacent to the conductive channel;
a charge storage structure adjacent to the tunnel dielectric;
a control gate; and
a blocking dielectric between the charge storage structure and the control gate.

15. The memory component of claim 1, further comprising a source select gate adjacent to the conductive channel.

16. A memory device comprising:
a substrate; and
the memory component of claim 1 operatively coupled to the substrate.

17. The memory device of claim 16, wherein each memory cell comprises:
a tunnel dielectric adjacent to the conductive channel;
a charge storage structure adjacent to the tunnel dielectric;
a control gate; and
a blocking dielectric between the charge storage structure and the control gate.

18. The memory device of claim 16, further comprising a source select gate adjacent to the conductive channel.

19. A method for fabricating a memory component, comprising:
forming a first conductive layer on a dielectric layer, the dielectric layer having a bottom portion adjacent to a source line;
exposing the source line through an opening in the bottom portion of the dielectric layer; and
forming a second conductive layer on the first conductive layer and the exposed portion of the source line such that the first conductive layer and the second conductive layer are electrically coupled to the source line.

20. The method of claim 19, wherein exposing the source line comprises:
forming an opening through a bottom portion of the first conductive layer; and
removing the bottom portion of the dielectric layer proximate the opening.

21. The method of claim 20, wherein forming the opening through the bottom portion of the first conductive layer comprises etching through the bottom portion of the first conductive layer.

22. The method of claim 21, further comprising protecting an upper portion of the first conductive layer from etching.

23. The method of claim 22, wherein protecting the upper portion of the first conductive layer comprises forming a sacrificial layer on the first conductive layer.

24. The method of claim 23, wherein exposing the source line further comprises forming an opening through a bottom portion of the sacrificial layer to expose the bottom portion of the dielectric layer.

25. The method of claim 23, further comprising removing the sacrificial layer prior to forming the second conductive layer on the first conductive layer.

26. The method of claim 20, wherein:
removing the bottom portion of the dielectric layer proximate the opening comprises forming a recess between the first conductive layer and the source line; and
forming the second conductive layer on the first conductive layer such that the first conductive layer and the second conductive layer are electrically coupled to the source line comprises forming the second conductive layer in the recess such that the second conductive layer interfaces with the source line and the first conductive layer.

27. The method of claim 19, wherein the first conductive layer and the second conductive layer form a conductive channel.

28. The method of claim 27, wherein the conductive channel is vertically oriented.

29. The method of claim 27, further comprising forming a source select gate adjacent to the conductive channel. |
Conductive channel and source line coupling

Technical field

The embodiments described herein relate generally to semiconductor electronic circuits and, more particularly, to conductive channel and source line coupling.

Background

Semiconductor materials (eg, polysilicon) are used to form electrical conduits or channels in a variety of electronic devices, such as devices employing complementary metal oxide semiconductor (CMOS) materials. CMOS technology is used in many electronic devices and components, including microprocessors, microcontrollers, computer memory, and digital logic circuits.

Various computer memory types, such as static random access memory (SRAM) and flash memory (eg, NOR, NAND, and charge trap), utilize CMOS materials and have architectures that electrically couple source lines to memory cell arrays. Typically, the memory cells in a flash array are arranged such that the control gates of each memory cell in a row of the array are connected to form an access line, such as a word line. The columns of the array include strings of source-to-drain connected memory cells between a pair of select lines, a source select line and a drain select line.

A flash array can have a two-dimensional configuration or a three-dimensional (3D) configuration (eg, a stacked memory array including pillars of stacked memory elements, such as vertical NAND strings). The source select line includes a source select gate at each intersection between a memory cell string and the source select line, and the drain select line includes a drain select gate at each intersection between a memory cell string and the drain select line. Each source select gate is connected to a source line, and each drain select gate is connected to a data line, such as a column bit line. Typically, the source and data lines are formed from polysilicon, and the memory cells are connected via a polysilicon channel that is electrically coupled to the source and data lines.

DRAWINGS

The features and advantages of the present disclosure will be apparent from the detailed description which follows, taken in conjunction with the accompanying drawings, in which:

FIG. 1 shows a portion of a 3D NAND memory component, in accordance with an example;

FIG. 2A shows a top view of a memory pillar and memory cell of the 3D NAND memory component of FIG. 1;

FIG. 2B shows a side view of the memory pillar and memory cell of the 3D NAND memory component of FIG. 1;

FIG. 3 is a detailed view of a conductive channel electrically coupled to a source line, in accordance with an example;

FIGS. 4A-4D illustrate a method for fabricating a flash memory component, in accordance with an example;

FIG. 5 is a flow chart of a method for fabricating a flash memory component, in accordance with an example;

FIG. 6 is a schematic diagram of an exemplary memory device; and

FIG. 7 is a schematic diagram of an exemplary computing system.

Reference will now be made to the exemplary embodiments illustrated, and specific language will be used herein to describe the same. It will nevertheless be understood that no limitation of the scope of the disclosure or of the specific embodiments of the invention is thereby intended.

Detailed description

It is to be understood that the specific structures, process steps, or materials disclosed herein are merely exemplary and are not intended to be limiting. It should also be understood that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting. The same reference numerals in different drawings denote the same elements. The figures are provided to clearly illustrate steps and operations and do not necessarily indicate a particular order or sequence.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs.

As used in this written description, the singular forms "a," "an," and "the" include express support for plural referents unless the context clearly dictates otherwise. Thus, for example, reference to "a layer" includes a plurality of such layers.

In this disclosure, "comprises," "comprising," "containing," "having," and the like can have the meaning ascribed to them in U.S. patent law and can mean "includes," "including," and the like, and are generally interpreted to be open-ended terms. The terms "consisting of" or "consists of" are closed terms and include only the components, structures, steps, and the like specifically listed in conjunction with such terms, in accordance with U.S. patent law. "Consisting essentially of" or "consists essentially of" have the meaning generally ascribed to them by U.S. patent law. In particular, such terms are generally closed terms, with the exception of allowing inclusion of additional items, materials, components, steps, or elements that do not materially affect the basic and novel characteristics or function of the item(s) used in connection therewith. For example, trace elements present in a composition, but not affecting the composition's nature or characteristics, would be permissible under "consisting essentially of" language, even though not expressly recited in a list of items following such terminology. When using an open-ended term in this written description, like "comprising" or "including," it is understood that direct support should also be afforded to "consisting essentially of" language as well as "consisting of" language, as if stated explicitly, and vice versa.

The terms "first," "second," "third," "fourth," and the like in the description and in the claims, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances, such that the embodiments described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Similarly, if a method is described herein as comprising a series of steps, the order of such steps as presented herein is not necessarily the only order in which such steps may be performed, and certain of the stated steps may possibly be omitted and/or certain other steps not described herein may possibly be added to the method.

The terms "left," "right," "front," "back," "top," "bottom," "over," "under," and the like in the description and in the claims, if any, are used for descriptive purposes and not necessarily for describing permanent relative positions. It is to be understood that the terms so used are interchangeable under appropriate circumstances, such that the embodiments described herein are, for example, capable of operation in other orientations than those illustrated or otherwise described herein.

The term "coupled," as used herein, is defined as directly or indirectly connected in an electrical or nonelectrical manner. "Directly coupled" structures or elements are in physical contact with one another. Objects described herein as being "adjacent to" each other may be in physical contact with each other, in close proximity to each other, or in the same general region or area as each other, as appropriate for the context in which the phrase is used.

As used herein, comparative terms such as "increased," "decreased," "better," "worse," "higher," "lower," "enhanced," "maximized,"
The terms "substantially" and "minimized" refer to significantly different properties of a device, component or activity that are distinct from other comparable devices, components or activities, or that are distinctly different from different iterations or embodiments of the same device. Nature refers to properties well known in the art. For example, a data region with an "increased" risk of corruption may refer to a region of a memory device that is more likely to have a write error than other regions in the same memory device. Many factors contribute to this increased risk, including location, manufacturing process, number of programming pulses applied to the area, and so on.As used herein, the term "substantially" refers to the degree or degree of complete or near completeness of an action, feature, property, state, structure, item, or result. For example, a "substantially" closed object means that the object is either completely closed or almost completely closed. The exact allowable degree of deviation from absolute completeness may in some cases depend on the specific situation. However, in general, the “complete” approach will achieve the same overall effect as if it were absolute and total “complete”. The use of "substantially" is equally applicable when used in a negative sense to refer to a complete or near complete absence of an action, characteristic, property, state, structure, item, or result. For example, a composition that is "substantially free" of particles will be completely free of particles, or almost completely free of particles, with the same effect as it is completely free of particles. That is, a composition that is "substantially free" of ingredients or elements may still actually contain such items as long as they have no significant effect.As used herein, the term "about" is used to provide flexibility to a numerical range endpoint by assuming that a given value may be "slightly above" or "slightly below" the endpoint.As used herein, a plurality of items, structural elements, constituent elements and/or materials may be presented in a common list for convenience. However, these lists should be interpreted as each item in the list being individually identified as a separate and unique item. Therefore, any individual item in such a list should not be interpreted as any other item that is actually equivalent to the same list based solely on their existence in a common group without the opposite representation.Concentrations, amounts, sizes, and other numerical data may be represented or presented herein in a range format. It should be understood that such a range format is used for convenience and conciseness only, and therefore should be construed as a As clearly stated each value and sub-range. As an illustration, a numerical range of "about 1 to about 5" should be interpreted to include not only the values of about 1 to about 5 that are explicitly recited, but also the individual values and sub-ranges within the indicated range. Therefore, included in this numerical range are individual values such as 2, 3, and 4, and sub-ranges such as from 1-3, from 2-4, and from 3-5, and individual 1, 2, 3, 4, and 5.This same principle applies to the range in which only one value is recorded as the minimum or maximum value. 
Furthermore, such an interpretation should apply regardless of the breadth of the range or the nature of the characteristic being described.

Reference throughout this specification to "an example" means that a particular feature, structure, or characteristic described in connection with the example is included in at least one embodiment. Thus, appearances of the phrases "in an example" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment.

Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In this description, numerous specific details are provided, such as examples of layouts, distances, network examples, and so forth. One skilled in the relevant art will recognize, however, that many variations are possible without one or more of the specific details, or with other methods, components, layouts, measures, and the like. In other instances, well-known structures, materials, or operations are not shown or described in detail, but are considered well within the scope of the present disclosure.

Example embodiments

An initial overview of the technology is provided below, and specific technology embodiments are then described in further detail. This initial summary is intended to aid readers in understanding the technology more quickly, but is not intended to identify key or essential features of the technology, nor to limit the scope of the claimed subject matter. Moreover, although various embodiments of the invention are utilized and exemplified herein in the context of flash memory, particularly NAND and 3D NAND memory devices, it should be understood that the general technical aspects and inventive principles shown are equally applicable to other electronic devices having similar components, features, materials, or operations (eg, CMOS devices having conductive channels that electrically couple components to source lines).

A typical electrical coupling (ie, conductive channel) between a string of flash cells and a source line has an oxide material at the interface, which results in an undesirably reduced current (eg, string current) and increased resistance, degrading reliability. This oxide material results from damage to the source line during the processing used to establish the electrical coupling. In addition, the electrical interface between the conductive channel and the source line produced by typical methods is relatively small, which limits the current carrying capacity of the interface. Furthermore, increasing the thickness of the conductive channel also increases the crystalline grain size of the channel material, which degrades performance. The thickness of a typical conductive channel is therefore limited by crystalline grain size and boundary constraints in order to meet performance specifications, and is thus limited in current carrying capacity. Accordingly, memory components are disclosed that provide improved conductive channel performance and reliability by addressing the electrical interface with the source line and the structure of the conductive channel.

One exemplary mechanism by which the performance of a conductive channel can be improved is by improving the current carrying capacity and/or characteristics of the conductive channel. In one aspect, undesirable oxide formation at the interface between the conductive channel and the source line due to source line damage can be minimized or eliminated, thereby providing performance and reliability benefits.
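The dependence of the interface's current carrying capacity on contact size can be made concrete with simple geometry. This is illustrative arithmetic only: the 25 nm figure is the minimum interface diameter recited in the claims above, while the smaller comparison diameter is an assumption standing in for a conventional, smaller contact.

import math

def contact_area_nm2(diameter_nm):
    # Area of a planar, circular channel/source-line interface.
    return math.pi * (diameter_nm / 2.0) ** 2

# Minimum interface diameter recited in the claims vs. an assumed smaller
# contact; a wider planar interface offers proportionally more area for
# current to cross, reducing contact resistance.
for d in (25.0, 15.0):
    print(d, "nm ->", round(contact_area_nm2(d)), "nm^2")
# 25 nm -> ~491 nm^2; 15 nm -> ~177 nm^2 (nearly 2.8x less contact area)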
In one example, a memory component can include a source line, a conductive channel having first and second conductive layers electrically coupled to the source line, and a memory cell adjacent the conductive channel. Related systems and methods are also disclosed.Referring to Figure 1, a portion of a 3D NAND memory component 100 is schematically illustrated. Typically, the portion of the memory component includes a pillar (ie, a conductive or semi-conductive channel) 110 and memory cells 120a-n (ie, memory cell string 126) positioned adjacent to the conductive channel 110. Any suitable number of memory cells can be included. Conductive channel 110 can be made of any suitable material (eg, polysilicon) such that the conductive channel can act as a channel region of memory cells 120a-n that can be coupled in series. For example, a channel may be formed in the (semi)conductive channel 110 during operation of one or more memory cells 120a-n of the string. The strings of conductive channel 110 and memory cells 120a-n can be oriented vertically, such as in a three dimensional memory array. For example, memory cell 120a is located at a vertical level (eg, near the top of the memory array) above the vertical level at which memory cell 120n is located (eg, near the bottom of the memory array). Typically, the conductive channel 110 will have a generally cylindrical configuration and the structure of each memory cell 120a-n will be disposed in a concentric annular structure that is radially outward from the conductive channel. Memory cells 120a-n can have any suitable structure. A memory cell structure is provided for background description and as an example. Accordingly, it should be appreciated that suitable memory cell structures can be different than the memory cell structures disclosed herein.Each of the memory cells 120a-n in this example can have a charge storage structure (eg, it can be a conductive floating gate, such as a floating gate metal oxide semiconductor transistor (FGMOSFET), a dielectric charge trap, etc.). For example, as shown in FIGS. 2A and 2B, which illustrate top and side views, respectively, of conductive channel 110 and representative memory cell 120, memory cell 120 can have a charge storage structure 121. Each of the memory cells 120a-n can also have a tunnel dielectric between its charge storage structure and the conductive channel 110. For example, memory cell 120 can have a tunnel dielectric 122 between charge storage structure 121 and conductive channel 110. Additionally, each memory cell 120a-n can have a control gate (eg, as part of an access line such as a word line or coupled to an access line such as a word line). For example, memory unit 120 can include control gate 130. Each memory cell can have one or more dielectric materials or dielectric layers between its charge storage structure and the control gate. For example, memory cell 120 can include a dielectric layer (eg, an inter-poly dielectric (IPD) layer) 123-125 between charge storage structure 121 and control gate 130.Each memory cell 120 can have a charge storage structure 121, such as a floating gate that can be a conductor (eg, polysilicon), a charge trap that can be a dielectric, and the like. Non-limiting examples of conductive or semiconductive materials suitable for use with floating gates include polysilicon, silicate or non-silicate metals such as Ru, Pt, Ge, etc., where the metal is continuous or discontinuous. 
Non-limiting examples of dielectrics suitable for charge traps include nitrides, silicon-rich dielectrics, or SiON/Si3N4.

With further reference to FIG. 1, a dielectric 140 can be between successively adjacent memory cells 120a-n in the string 126. For example, the dielectric 140 can be between at least the floating gates 121, the dielectrics 123-125, and the control gates 130 of successively adjacent memory cells 120a-n. A dielectric 141 may be interposed between one end of the string 126 (eg, memory cell 120a) and the select gate 111, and a dielectric 142 may be interposed between the opposite end of the string 126 (eg, memory cell 120n) and the select gate 112, as shown in FIG. 1.

In some embodiments where the charge storage structure 121 is a charge trap, the tunnel dielectric 122, the charge storage structure 121, and the dielectrics 123-125 can form a continuous structure that can be shared by two or more of the memory cells 120a-n (eg, it may be common to two or more of the memory cells 120a-n). For example, such a structure can be shared by or common to all of the memory cells 120a-n. The tunnel dielectric for a charge-trap-based device can be multiple layers (eg, oxide/nitride/oxide (O/N/O)) rather than the typical single dielectric layer of a floating gate tunnel dielectric.

In some embodiments, the string 126 can be interposed between and coupled in series with "dummy" memory cells (not shown) to form a string of memory cells comprising the string 126 and the "dummy" memory cells. For example, one or more "dummy" memory cells can be interposed between and coupled in series with memory cell 120a and select gate 111 of string 126, and/or one or more "dummy" memory cells can be interposed between and coupled in series with memory cell 120n and select gate 112 of string 126. Each "dummy" memory cell can be configured in a similar manner to memory cells 120a-n and can have the same components as memory cells 120a-n. In some embodiments, a set of dummy memory cells can be substituted for the select gates, or can be provided in addition to the select gates.

The memory cells 120a-n of the string 126 can be coupled (eg, in series) between a select gate (eg, drain select gate) 111 adjacent to (eg, contacting) the conductive channel 110 and a select gate (eg, source select gate) 112 adjacent to (eg, contacting) the conductive channel 110, and may be located between the drain select gate 111 and the source select gate 112. The conductive channel 110 is electrically coupled to a data line (eg, bit line) 116, as indicated at 117. Accordingly, the select gate 111 can selectively couple the string 126 to the data line (eg, bit line) 116. Additionally, the conductive channel 110 is electrically coupled to a source line 118, as indicated at 119. Accordingly, the select gate 112 can selectively couple the string 126 to the source line 118. For example, the select gate 111 can be coupled in series with memory cell 120a, and the select gate 112 can be coupled in series with memory cell 120n. The select gates 111 and 112 may each include a gate dielectric 113 adjacent to the conductive channel 110 and a control gate 114 adjacent to the respective gate dielectric 113.
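The string architecture just described — a series string of cells along a shared channel, gated to a bit line and a source line by the two select gates — can be summarized as a simple data model. This is a toy sketch under assumed names (MemoryCell, VerticalString, and the 32-cell count are all hypothetical), intended only to make the topology of FIG. 1 explicit; it is not the disclosed structure itself.

# Illustrative data model of the vertical string of FIG. 1 (hypothetical names).
from dataclasses import dataclass, field

@dataclass
class MemoryCell:
    level: int                   # vertical position in the stack (0 = top, near 120a)
    charge_stored: bool = False  # state of the charge storage structure (121)

@dataclass
class VerticalString:
    source_line: str             # source line 118, coupled via source select gate 112
    bit_line: str                # bit line 116, coupled via drain select gate 111
    cells: list = field(default_factory=list)  # series-coupled cells 120a-n

    def conducts(self, drain_select_on: bool, source_select_on: bool) -> bool:
        # The string couples to both lines only when both select gates are on.
        return drain_select_on and source_select_on

string = VerticalString(source_line="SL0", bit_line="BL0",
                        cells=[MemoryCell(level=i) for i in range(32)])
print(len(string.cells), string.conducts(True, True))  # 32 True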
FIG. 1 schematically illustrates various components and structures of the memory component 100, and thus lacks some detail regarding the conductive channel 110 and the electrical coupling between the conductive channel 110 and the source line 118. Additional details are provided in FIG. 3, which shows a detailed view of a conductive channel 210 electrically coupled to a source line 218 (similar to the coupling 119 in FIG. 1), in accordance with an example of the present disclosure. As shown in FIG. 3, the conductive channel 210 can have a plurality of conductive layers 250, 251 that are electrically coupled to the source line 218. The source line 218 can comprise any suitable electrically conductive material, such as a doped polysilicon material. The conductive layers 250, 251 can be constructed of the same or different materials, as discussed in more detail below.

At least one of the conductive layers 250, 251 can be in direct contact (eg, interfacing electrical contact) with the source line 218. As shown, the conductive layer 250 can be physically separated or spaced apart from the source line 218 (as indicated by arrow 252), without direct contact or interface with the source line 218. However, a portion 254 of the conductive layer 251 can be disposed between the source line 218 and the conductive layer 250. With the conductive layer 251 disposed in the space 252 between the conductive layer 250 and the source line 218, a portion 253 of the conductive layer 250 can be between portions 254, 254' of the conductive layer 251 (ie, "sandwiched" or surrounded by the conductive layer 251). That is, the portion 254' of the conductive layer 251 may be disposed over the portion 253 of the conductive layer 250, and the portion 254 of the conductive layer 251 may be disposed under the portion 253 of the conductive layer 250. The portion 254 of the conductive layer 251 between the source line 218 and the conductive layer 250 may be in direct contact (ie, direct physical contact) or interface with the source line 218 at 255 and with the conductive layer 250 at 256. The interface 255 of the conductive layer 251 and the source line 218 may be planar or flat (eg, free of surface irregularities, such as pits, depressions, bumps, etc., that may be caused by certain etch processes).

As further shown in FIG. 3, a dielectric layer 260 can be adjacent to the conductive channel 210. The dielectric layer 260 can form a tunnel dielectric for memory cells adjacent to the channel 210 and/or a gate dielectric for select gates. The dielectric layer 260 can comprise any suitable dielectric material, such as an oxide material (eg, silicon oxide). Additionally, an insulating material 261 may be disposed within the conductive channel 210 (eg, filling a space or void inside the conductive channel 210). The insulating material 261 can be any suitable insulating material, such as silicon oxide.

As described above, the conductive layers 250, 251 may be composed of the same or different materials. Thus, the conductive layers 250, 251 can be individually configured to achieve desired performance goals. In some embodiments, the conductive layers 250, 251 can each comprise a polysilicon material, which can be undoped, similarly or identically doped, or differently doped relative to one another with respect to dopant type and/or dopant concentration. For example, one conductive layer may be doped while the other conductive layer is undoped, both conductive layers may be doped with the same N-type or P-type doping, or one conductive layer may be N-type doped while the other conductive layer is P-type doped. Similar examples apply to dopant concentration.
In one aspect, different doping combinations can be selected to control which conductive layer will conduct electrons (ie, by limiting charge carrier depth and position via an electrostatic barrier, as in P-type and N-type junctions). This can help reduce scattering at the channel interfaces and improve current (eg, string current) by balancing crystalline grain size and grain boundaries, which can also result in increased reliability. Ge or SiGe may also be suitable materials for the conductive layers of the conductive channel 210.

Although two conductive layers are shown in the conductive channel 210, it should be appreciated that the channel 210 can include any suitable number of conductive layers. For example, the conductive channel can have three conductive layers, where the layers are arranged in alternating doping types (eg, PNP or NPN). This can be used to provide a PN junction effect, thereby controlling the flow of charge carriers through the conductive layers with respect to depth and position (eg, at the tunnel oxide, toward the tunnel oxide, or away from the tunnel oxide) in a manner that minimizes interface scattering. The number of conductive layers can be scaled as needed for electron carrier confinement. Thus, as disclosed herein, the conductive channel 210 can be configured to function as a multi-channel (eg, dual or triple channel) by varying the number of conductive layers.

In one aspect, the conductive layers 250, 251 of the conductive channel 210 can provide control of the amorphous/crystalline material phase (eg, polycrystalline silicon grain) of each individual conductive layer, which can be configured to provide desired properties, such as varying material volume properties to maximize string current and minimize interfacial scattering at adjacent layers. The crystalline grain size of the material (eg, polysilicon) can be individually controlled for the conductive layers 250, 251 of the conductive channel 210. In general, a thicker conductive channel is desirable to provide a higher current (eg, string current). In a single thick layer, however, thicker material is detrimental to performance due to the larger crystalline grain size that occurs during channel formation (eg, program Vt sigma (PVS) and subthreshold swing (SS) degradation), which limits channel thickening. PVS is an electrical metric for programming speed variation due to various factors, such as cell doping type/concentration, charge tunneling, structural configuration, and the like. SS is a measure of transistor on/off switching performance based on the exponential behavior of the drain current as a function of gate voltage. With a plurality of conductive layers 250, 251, the crystalline grain size can be maintained within acceptable limits while effectively thickening the conductive channel 210, providing a higher current at a given thickness without degradation. This benefit can be multiplied by the number of conductive layers in the conductive channel.

Thus, material type, dopant concentration, dopant combination, and/or material crystal size/structure can be controlled in the channel 210 to provide performance benefits.
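The thickness/grain-size trade-off described above can be sketched with a small piece of arithmetic: split the desired total channel thickness across enough thin layers that no single layer exceeds a grain-size-limited thickness. The specific numbers below are illustrative assumptions, not values from this disclosure.

import math

# Assumed, illustrative values: a target effective channel thickness and a
# per-layer thickness limit that keeps crystalline grain size acceptable.
TARGET_TOTAL_THICKNESS_NM = 30.0
MAX_LAYER_THICKNESS_NM = 12.0

# Number of thin conductive layers so that each stays under the limit while
# the stack still reaches the target thickness (the multi-thin-channel idea).
layers = math.ceil(TARGET_TOTAL_THICKNESS_NM / MAX_LAYER_THICKNESS_NM)
per_layer_nm = TARGET_TOTAL_THICKNESS_NM / layers
print(layers, "layers of", per_layer_nm, "nm")  # -> 3 layers of 10.0 nm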
As noted, the conductive channel can include any suitable number of conductive layers, having any suitable material properties, in any combination.

In one aspect, the interface 255 between the source line 218 and the conductive layer 251 can have a relatively large diameter 272, which can improve the electrical contact area and reduce electrical resistance. In one example, the diameter 272 can be greater than or equal to 25 nm.

FIGS. 4A-4D illustrate aspects of an exemplary method or process for fabricating a conductive channel as disclosed herein. In particular, the figures illustrate the electrical coupling of the channel to the source line and the formation of a channel with multiple layers, as shown in FIG. 3. FIG. 4A shows a conductive layer 250 formed on a dielectric layer 260. The dielectric layer 260 can have a bottom portion 262 that is adjacent to the source line 218. The conductive layer 250 can be formed on the dielectric layer 260 by any suitable technique or process, such as a deposition process. As shown in FIG. 3, the conductive layer 250 in its final form will be part of the conductive channel 210, and the dielectric layer 260 in its final form will serve as a tunnel and/or gate dielectric. The conductive layer 250 is therefore not used as a sacrificial layer, but is used together with the subsequently deposited conductive layer 251 to form the conductive channel 210. By maintaining the conductive layer 250 on the dielectric layer 260 throughout the disclosed fabrication process, the dielectric layer 260 can be protected from exposure to deleterious etch chemistries, which could cause harmful surface property changes that reduce the performance and reliability of the tunnel and/or gate dielectric. In other words, the tunnel and/or gate dielectric can maintain its as-deposited quality (being exposed only to the deposition of the conductive layer 250) and thus remain unaffected and uncompromised by subsequent processing.

FIGS. 4B-4D illustrate how the source line 218 can be exposed through the bottom portion 262 of the dielectric layer 260 in order to electrically couple the conductive layer 250 to the source line 218, as shown in FIG. 3. In one aspect, the source line 218 can be exposed by forming an opening 257 in the conductive layer 250 (see FIG. 4C). In some embodiments, the opening 257 can be formed by etching, which can expose the material of the conductive layer 250 to undesirable damage. FIG. 4B illustrates a process in which an upper portion 258 of the conductive layer 250 can be protected from etching while the opening 257 is formed through the conductive layer 250. For example, the upper portion 258 of the conductive layer 250 can be protected by forming a sacrificial layer 270 on the conductive layer 250. The sacrificial layer 270 can comprise any suitable material, such as an oxide material (eg, silicon oxide), a nitride material (eg, SiN), and the like. The sacrificial layer 270 can be formed on the conductive layer 250 by any suitable technique or process, such as a deposition or growth process. With the upper portion 258 of the conductive layer 250 protected by the sacrificial layer 270, the opening 257 may be formed through a bottom portion 271 of the sacrificial layer 270 and through a bottom portion 259 of the conductive layer 250 to expose the bottom portion 262 of the dielectric layer 260, as shown in FIG. 4C.
The opening 257 may be formed through the bottom portion 271 of the sacrificial layer 270 and through the bottom portion 259 of the conductive layer 250 by any suitable technique or process, such as etching (eg, dry and/or wet etching), as mentioned above. In one embodiment, the opening 257 may be formed by a dry punch etch through the sacrificial layer 270 and the conductive layer 250, the etch selectively stopping on the dielectric layer 260 overlying the source line 218. Thus, the bottom portion 262 of the dielectric layer 260 can protect the source line 218 from the dry etch.

The bottom portion 262 of the dielectric layer 260 proximate the opening 257 can then be removed to expose the source line 218, as shown in FIG. 4D. Exposing the source line 218 can form a recess 263 between the conductive layer 250 and the source line 218. The recess 263 can provide a large exposed area on the source line 218 for the contact interface 255 (see FIG. 3) with the conductive layer 251, thereby improving the electrical contact area and reducing electrical resistance. In one aspect, the sacrificial layer 270 can be removed from the conductive layer 250 in the same process that exposes the source line 218 and forms the recess 263. The bottom portion 262 of the dielectric layer 260 and the sacrificial layer 270 can be removed by any suitable technique or process, such as etching (eg, dry and/or wet etching). In one embodiment, the bottom portion 262 of the dielectric layer 260 and the sacrificial layer 270 may be removed by a wet etch process using, for example, hydrofluoric acid (HF) (eg, for silicon oxide), hot HF, buffered oxide etch (BOE) (eg, for silicon oxide), hot phosphoric acid (eg, if a SiN sacrificial layer is used), etc. Thus, the relatively small opening 257 resulting from the dry etch, due to the presence of the sacrificial layer 270, can be effectively enlarged by the wet etch that forms the recess 263, exposing more of the source line 218. In one aspect, the etch processes can be selected and configured such that the source line 218 remains undamaged, so that the exposed portion (ie, surface) of the source line 218 is not damaged (eg, remains planar or flat, without surface irregularities such as pits, depressions, bumps, etc., that could be caused by etching). This prevents the formation of oxides, which cannot be removed, at the interface 255 between the source line 218 and the conductive layer 251 (see FIG. 3), thereby avoiding or minimizing the resistance increase and associated current reduction that such oxides would cause. Thus, a hybrid dry and wet etch process can be used, as discussed with respect to FIGS. 4C and 4D, to expose the source line 218 through the conductive layer 250 and the dielectric layer 260. In some embodiments, any other type of etch that has directionality during etching can be used in place of the dry etch step.

With the source line 218 exposed, the conductive layer 251 can be formed over the conductive layer 250 such that the conductive layers 250, 251 are electrically coupled to the source line 218 (eg, via the interfaces 255 and 256) to achieve the final configuration shown in FIG. 3. This may include forming the conductive layer 251 in the recess 263 such that the conductive layer 251 interfaces with the source line 218 and the conductive layer 250.
The recess 263 formed by the hybrid dry and wet etch process can enable the portions 254, 254' of the conductive layer 251 to sandwich or surround the portion 253 of the conductive layer 250. The conductive layer 251 can thereby electrically connect the conductive layer 250, which is physically separated from the source line 218, to the source line 218. The conductive layer 251 can be formed on the conductive layer 250 by any suitable technique or process, such as a deposition process and/or an epitaxial growth process. In one aspect, the bottom thickness 273 of the conductive layer 251 can be thicker than the sidewall thickness 274 of the conductive channel 210. The exposed surface of the source line 218 at the interface 255 can facilitate forming a thicker deposition of conductive layer 251 material at the bottom of the conductive channel 210, similar to an epitaxial growth, possibly due to surface conditioning or modification of the source line 218 material by the etch process, which provides very clean and uniform reaction bonding sites. After forming the final conductive layer (eg, conductive layer 251), if the conductive channel is hollow, the conductive channel can optionally be filled with a suitable insulating material, such as silicon oxide.

Although the structures and methods of FIGS. 3-4D are described above in the context of electrical coupling between a conductive channel and a source line, it should be appreciated that these structures and methods can be applied elsewhere, such as in electrically coupling a plurality of vertically stacked "decks" of memory strings (eg, coupling one pillar or conductive channel to another). Such multi-deck stacks can be formed using "plugs" (eg, of a conductive material such as polysilicon) between the decks. The present technique facilitates forming and electrically coupling the conductive channel of a deck to the plug with little or no damage to the plug.

Referring again to FIG. 1, the formation of the conductive channel 110 and the coupling of the channel 110 to the source line 118 will typically occur after the memory cells 120a-n are formed. The memory cells can be formed by any suitable method. For example, pillar openings can be formed by etching through a plurality of alternating layers or levels of conductive and dielectric materials. The conductive layers can comprise any suitable electrically conductive material, such as polysilicon, which can be conductively doped (eg, doped to have N+ type conductivity). The dielectric layers can comprise any suitable dielectric material, such as an oxide (eg, silicon oxide), an oxynitride (eg, silicon oxynitride), and the like. To form the memory cells adjacent to the conductive channel, a series of processes, including etch and deposition processes, can be performed through the pillar openings at the approximate memory cell locations. Memory cell structures that may be formed include charge storage structures (eg, floating gates), control gates, tunnel dielectrics, blocking dielectrics, and the like.

A method or process of fabricating a flash memory component or device is summarized in the flow chart of FIG. 5. As shown in block 301, a first conductive layer can be formed on a dielectric layer, the dielectric layer having a bottom portion proximate a source line. The source line can be exposed through an opening in the bottom portion of the dielectric layer, as shown in block 302.
A second conductive layer can be formed on the first conductive layer and the exposed portion of the source line, such that the first conductive layer and the second conductive layer are electrically coupled to the source line, as shown in block 303.

Again, although the present disclosure is provided primarily in the context of 3D NAND flash memory devices, it should be understood that certain aspects of the technology are also applicable to any device that utilizes a semiconductor material (eg, polysilicon) to form an electrical channel or conduit. In particular, the technology is applicable to many devices that include CMOS components.

FIG. 6 is a schematic diagram of a memory device 480 in accordance with an example of the present disclosure. The memory device can include a substrate 481 and a memory component 400 as disclosed herein operatively coupled to the substrate 481. In one aspect, the memory device 480 can include any suitable electronic component 482, such as a CPU, a GPU, a memory controller, a video decoder, an audio decoder, a video encoder, a camera processor, system memory, and/or a modem.

FIG. 7 shows an exemplary computing system 590. The computing system 590 can include a memory device 580 as disclosed herein coupled to a motherboard 591. In one aspect, the computing system 590 can also include a processor 592, a memory device 593, a radio 594, a heat sink 595, a port 596, a slot, or any other suitable device or component, which can be operatively coupled to the motherboard 591. The computing system 590 can comprise any type of computing system, such as a desktop computer, a laptop, a tablet, a smartphone, a wearable device, a server, and the like. Other embodiments need not include all of the features specified in FIG. 7, and may include alternative features not specified in FIG. 7.

Circuitry used in electronic components or devices (eg, a die) of a memory device can include hardware, firmware, program code, executable code, computer instructions, and/or software. Electronic components and devices can include a non-transitory computer readable storage medium, which can be a computer readable storage medium that does not include signals. In the case of program code execution on programmable computers, the computing devices described herein can include a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. The volatile and non-volatile memory and/or storage elements can be a RAM, an EPROM, a flash drive, an optical drive, a magnetic hard drive, a solid state drive, or other medium for storing electronic data. Nodes and wireless devices can also include a transceiver module, a counter module, a processing module, and/or a clock module or timer module. One or more programs that can implement or utilize any of the techniques described herein can use an application programming interface (API), reusable controls, and the like. Such programs can be implemented in a high-level procedural or object-oriented programming language to communicate with a computer system. However, the program(s) can be implemented in assembly or machine language, if desired; in any case, the language can be a compiled or interpreted language, combined with a hardware implementation.
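Before turning to the enumerated examples, the fabrication flow of FIG. 5 (blocks 301-303), together with the optional protective steps of FIGS. 4B-4D, can be restated schematically. This is a sketch only: the function and step descriptions are hypothetical stand-ins for the deposition and etch processes described above, not tool-level recipes.

# Schematic restatement of the FIG. 5 flow plus the steps of FIGS. 4B-4D.
# All names are hypothetical; the reference numerals match the text above.

def fabricate_channel_stack():
    stack = ["source line (218)", "dielectric layer (260)"]
    # Block 301: form the first conductive layer on the dielectric layer.
    stack.append("first conductive layer (250)")
    # FIG. 4B: protect the upper portion with a sacrificial layer.
    stack.append("sacrificial layer (270)")
    # Block 302 / FIGS. 4C-4D: dry punch through 270/250, stopping on the
    # dielectric; then wet etch the bottom dielectric (262) to form the
    # recess (263), exposing the source line undamaged and stripping the
    # sacrificial layer in the same process.
    stack.remove("sacrificial layer (270)")
    exposed = "source line exposed via recess (263)"
    # Block 303: deposit the second conductive layer so that both conductive
    # layers couple to the source line via interfaces (255, 256).
    stack.append("second conductive layer (251), contacting " + exposed)
    return stack

for layer in fabricate_channel_stack():
    print(layer)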
Examples
The following examples relate to further embodiments.
In one example, a memory component is provided that includes a source line, a conductive channel having first and second conductive layers electrically coupled to the source line, and a memory cell adjacent the conductive channel.
In one example of a memory component, the first conductive layer is spaced apart from the source line.
In one example of the memory component, a portion of the second conductive layer is disposed between the source line and the first conductive layer and interfaces with the source line and the first conductive layer.
In one example of the memory component, the interface of the second conductive layer and the source line is substantially free of oxide material.
In one example of a memory component, the interface of the second conductive layer and the source line is planar.
In one example of the memory component, the interface of the second conductive layer and the source line has a diameter greater than or equal to 25 nm.
In one example of the memory component, a portion of the second conductive layer surrounds a portion of the first conductive layer.
In one example of the memory component, the first conductive layer and the second conductive layer each comprise a doped polysilicon material.
In one example of the memory component, the doped polysilicon materials of the first and second conductive layers are different.
In one example of the memory component, the first conductive layer is P-type doped or N-type doped, and the second conductive layer is the other of P-type doped or N-type doped.
In one example of the memory component, at least one of the first conductive layer and the second conductive layer comprises Ge, SiGe, or a combination thereof.
In one example of a memory component, the source line comprises a doped polysilicon material.
In one example of a memory component, the conductive channel is vertically oriented.
In one example, the memory component includes an insulating material disposed within the conductive channel.
In one example, the memory component includes a dielectric layer adjacent to the conductive channel.
In one example of a memory component, the dielectric layer forms a tunnel dielectric.
In one example of a memory component, each memory cell includes a charge storage structure.
In one example of a memory component, the charge storage structure comprises a floating gate.
In one example of a memory component, the charge storage structure includes a charge trap.
In one example of a memory component, each memory cell includes a control gate.
In one example of a memory component, each memory cell includes a tunnel dielectric adjacent the conductive channel, a charge storage structure adjacent the tunnel dielectric, a control gate, and a blocking dielectric between the charge storage structure and the control gate.
In one example, the memory component includes a source select gate adjacent the conductive channel.
In one example of a memory component, the memory component is a flash memory component.
In one example of a memory component, the flash component is a NAND memory component or a NOR memory component.
In one example, a memory device is provided that includes a substrate and a memory component operatively coupled to the substrate, the memory component having a source line and a conductive channel having first and second conductive layers electrically coupled to the source line.
In one example of a memory device, the first conductive layer is spaced apart from the source line.
In one example of the memory device, a portion of the second conductive layer is disposed between the source line and the first conductive layer and interfaces with the source line and the first conductive layer.
In one example of a memory device, the interface of the second conductive layer and the source line is planar.
In one example of a memory device, the interface of the second conductive layer and the source line has a diameter greater than or equal to 25 nm.
In one example of a memory device, a portion of the second conductive layer surrounds a portion of the first conductive layer.
In one example of a memory device, the first conductive layer and the second conductive layer each comprise a doped polysilicon material.
In one example of a memory device, the doped polysilicon materials of the first and second conductive layers are different.
In one example of the memory device, the first conductive layer is P-type doped or N-type doped, and the second conductive layer is the other of P-type doped or N-type doped.
In one example of the memory device, at least one of the first conductive layer and the second conductive layer comprises Ge, SiGe, or a combination thereof.
In one example of a memory device, the source line comprises a doped polysilicon material.
In one example of a memory device, the conductive channel is vertically oriented.
In one example, the memory device includes an insulating material disposed within the conductive channel.
In one example, the memory device includes a dielectric layer adjacent to the conductive channel.
In one example of a memory device, the dielectric layer forms a tunnel dielectric.
In one example, a memory device includes a memory cell adjacent to a conductive channel.
In one example of a memory device, each memory cell includes a charge storage structure.
In one example of a memory device, the charge storage structure includes a floating gate.
In one example of a memory device, each memory cell includes a control gate.
In one example of a memory device, each memory cell includes a tunnel dielectric adjacent the conductive channel, a charge storage structure adjacent the tunnel dielectric, a control gate, and a blocking dielectric between the charge storage structure and the control gate.
In one example, the memory device includes a source select gate adjacent to the conductive channel.
In one example, the memory device includes a CPU, a GPU, a memory controller, a video decoder, an audio decoder, a video encoder, a camera processor, a system memory, a modem, or a combination thereof.
In one example, a computing system is provided that includes a motherboard and a memory device operatively coupled to the motherboard.
The memory device includes a substrate and a memory component operatively coupled to the substrate, the memory component having a source line and a conductive channel having first and second conductive layers electrically coupled to the source line.
In one example of a computing system, the computing system includes a desktop computer, a laptop, a tablet, a smart phone, a wearable device, a server, or a combination thereof.
In one example of a computing system, the computing system includes a processor, a memory device, a heat sink, a radio, a slot, a port, or a combination thereof that is operably coupled to the motherboard.
In one example, a method for fabricating a memory component is provided, comprising: forming a first conductive layer on a dielectric layer, the dielectric layer having a bottom portion proximate to a source line; exposing the source line through an opening in the bottom portion of the dielectric layer; and forming a second conductive layer on the first conductive layer and the exposed portion of the source line such that the first conductive layer and the second conductive layer are electrically coupled to the source line.
In one example of a method for fabricating a memory component, exposing the source line includes forming an opening through a bottom portion of the first conductive layer and removing a bottom portion of the dielectric layer proximate the opening.
In one example of a method for fabricating a memory component, forming an opening through a bottom portion of the first conductive layer includes etching through a bottom portion of the first conductive layer.
In one example of a method for fabricating a memory component, the etching includes dry etching.
In one example, a method for fabricating a memory component includes protecting an upper portion of the first conductive layer from etching.
In one example of a method for fabricating a memory component, protecting an upper portion of the first conductive layer includes forming a sacrificial layer on the first conductive layer.
In one example of a method for fabricating a memory component, exposing the source line further includes forming an opening through a bottom portion of the sacrificial layer to expose a bottom portion of the dielectric layer.
In one example, a method for fabricating a memory component includes removing the sacrificial layer prior to forming the second conductive layer on the first conductive layer.
In one example of a method for fabricating a memory component, the sacrificial layer comprises an oxide material.
In one example of a method for fabricating a memory component, removing a bottom portion of the dielectric layer proximate the opening includes etching the bottom portion of the dielectric layer.
In one example of a method for fabricating a memory component, the etching includes wet etching.
In one example of a method for fabricating a memory component, the wet etching includes hydrofluoric acid etching.
In one example of a method for fabricating a memory component, removing a bottom portion of the dielectric layer proximate the opening includes forming a recess between the first conductive layer and the source line, and forming the second conductive layer on the first conductive layer such that the first and second conductive layers are electrically coupled to the source line includes forming the second conductive layer in the recess such that the second conductive layer interfaces with both the source line and the first conductive layer.
In one example of a method for fabricating a memory component, forming the first conductive layer on the dielectric layer includes depositing a first conductive material on the dielectric layer.
In one example of a method for fabricating a memory component, forming a second conductive layer on the first conductive layer includes depositing a second conductive material on the first conductive layer.
In one example of a method for fabricating a memory component, the exposed portion of the source line has a flat surface.
In one example of a method for fabricating a memory component, the exposed portion of the source line has a diameter greater than or equal to 25 nm.
In one example of a method for fabricating a memory component, the first conductive layer and the second conductive layer each comprise a doped polysilicon material.
In one example of a method for fabricating a memory component, the doped polysilicon materials of the first and second conductive layers are different.
In one example of a method for fabricating a memory component, the first conductive layer is P-type doped or N-type doped, and the second conductive layer is the other of P-type doped or N-type doped.
In one example of a method for fabricating a memory component, at least one of the first conductive layer and the second conductive layer comprises Ge, SiGe, or a combination thereof.
In one example of a method for fabricating a memory component, the source line comprises a doped polysilicon material.
In one example of a method for fabricating a memory component, the first conductive layer and the second conductive layer form a conductive channel.
In one example of a method for fabricating a memory component, the conductive channel is vertically oriented.
In one example, a method for fabricating a memory component includes disposing an insulating material within a conductive channel.
In one example, a method for fabricating a memory component includes forming a memory cell adjacent to a conductive channel.
In one example of a method for fabricating a memory component, forming a memory cell includes forming a charge storage structure.
In one example of a method for fabricating a memory component, the charge storage structure is a floating gate.
In one example of a method for fabricating a memory component, forming a memory cell includes forming a control gate.
In one example of a method for fabricating a memory component, the dielectric layer forms a tunnel dielectric.
In one example of a method for fabricating a memory component, forming a memory cell includes forming a control gate, forming a blocking dielectric adjacent the control gate, and forming a charge storage structure adjacent the blocking dielectric, wherein the charge storage structure is adjacent to the tunnel dielectric.
In one example, a method for fabricating a memory component includes forming a source select gate adjacent to the conductive channel.
In one example, a conductive channel is provided that includes a plurality of layers of doped conductive material.
In one example of a conductive channel, each of the plurality of doped conductive material layers comprises a doped polysilicon material.
In one example of a conductive channel, the doped polysilicon materials of at least two of the plurality of doped conductive material layers are different.
In one example of a conductive channel, one doped conductive material layer is P-type doped or N-type doped, and the other doped conductive material layer is the other of P-type doped or N-type doped.
In one example of the conductive channel, at least one of the plurality of doped conductive material layers comprises Ge, SiGe, or a combination thereof.
In one example, a method of forming a conductive channel to electrically connect a memory cell string to a source line is provided, the method comprising: obtaining a first doped conductive material layer; and forming a second doped conductive material layer on the first doped conductive material layer.
In one example of a method of forming a conductive channel, forming a second doped conductive material layer on the first doped conductive material layer includes depositing the second doped conductive material layer on the first doped conductive material layer.
In one example of a method of forming a conductive channel, the first doped conductive material layer and the second doped conductive material layer each comprise a doped polysilicon material.
In one example of a method of forming a conductive channel, the doped polysilicon materials of the first doped conductive material layer and the second doped conductive material layer are different.
In one example of a method of forming a conductive channel, the first doped conductive material layer is P-type doped or N-type doped, and the second doped conductive material layer is the other of P-type doped or N-type doped.
In one example of a method of forming a conductive channel, at least one of the first doped conductive material layer and the second doped conductive material layer comprises Ge, SiGe, or a combination thereof.
While the foregoing examples are illustrative of specific embodiments in particular applications, it will be apparent to those skilled in the art that numerous modifications can be made without departing from the principles and concepts set forth herein.
Some embodiments include an integrated assembly having an active region which contains semiconductor material. The active region includes first, second and third source/drain regions within the semiconductor material, includes a first channel region within the semiconductor material and between the first and second source/drain regions, and includes a second channel region within the semiconductor material and between the second and third source/drain regions. The semiconductor material includes at least one element selected from Group 13 of the periodic table. A digit line is electrically coupled with the second source/drain region. A first transistor gate is operatively proximate the first channel region. A second transistor gate is operatively proximate the second channel region. A first storage-element is electrically coupled with the first source/drain region. A second storage-element is electrically coupled with the third source/drain region. Some embodiments include methods of forming integrated assemblies. |
CLAIMS
I/we claim:
1. An integrated assembly, comprising: a digit line comprising a first conductive material and extending substantially horizontally; an interconnect extending upwardly from the digit line and comprising a second conductive material; an active region over the interconnect and extending substantially horizontally; the active region comprising semiconductor material, including first and second source/drain regions within the semiconductor material, and including a channel region within the semiconductor material and between the first and second source/drain regions; the interconnect electrically coupling the second source/drain region with the digit line; a transistor gate operatively proximate the channel region; and a storage-element electrically coupled with the first source/drain region.
2. The integrated assembly of claim 1 wherein the second conductive material is compositionally different from the first conductive material.
3. The integrated assembly of claim 2 wherein the first and second conductive materials are metal-containing materials.
4. The integrated assembly of claim 2 wherein the first conductive material comprises tungsten, and wherein the second conductive material comprises a metal nitride directly against the tungsten.
5. The integrated assembly of claim 1 wherein:
the channel region is a first channel region, the transistor gate is a first transistor gate, and the storage-element is a first storage-element; the active region includes a second channel region on an opposing side of the second source/drain region from the first channel region, and includes a third source/drain region on an opposing side of the second channel region from the second source/drain region; wherein a second transistor gate is operatively proximate the second channel region; and wherein a second storage-element is electrically coupled with the third source/drain region.
6. The integrated assembly of claim 5 wherein the first and second storage-elements include one or more of capacitors, resistive-memory devices, conductive-bridging devices, phase-change-memory devices and programmable metallization cells.
7. The integrated assembly of claim 5 wherein the first and second storage-elements are capacitors.
8. The integrated assembly of claim 1 wherein the semiconductor material comprises at least one element from Group 13 of the periodic table in combination with at least one element from Group 15 of the periodic table.
9. The integrated assembly of claim 8 wherein the semiconductor material comprises one or more of GaP, AlAs, GaAs, AlP, InP, AlSb, GaAlAs, GaInAs and GaInP; where the chemical formulas indicate primary constituents rather than specific stoichiometries.
10. The integrated assembly of claim 1 wherein the semiconductor material comprises a metal selected from the group consisting of aluminum, gallium, indium, thallium, tin, cadmium, zinc
and mixtures thereof, in combination with one or more of oxygen, sulfur, selenium and tellurium.
11. The integrated assembly of claim 1 wherein the semiconductor material comprises at least one element from Group 13 of the periodic table in combination with at least one element from Group 16 of the periodic table.
12. The integrated assembly of claim 11 wherein the semiconductor material comprises: at least one element selected from the group consisting of gallium, indium and mixtures thereof; and at least one element selected from the group consisting of oxygen, sulfur, selenium, tellurium and mixtures thereof.
13. The integrated assembly of claim 1 wherein the digit line is supported by a base having a planar, horizontal upper surface; and wherein the substantially horizontally extending digit line extends along a direction which is within 10° of parallel to the horizontal upper surface.
14. An integrated assembly, comprising: an active region comprising semiconductor material; the active region including first, second and third source/drain regions within the semiconductor material, including a first channel region within the semiconductor material and between the first and second source/drain regions, and including a second channel region within the semiconductor material and between the second and third source/drain regions; the semiconductor material including at least one element selected from Group 13 of the periodic table; a digit line electrically coupled with the second source/drain region; a first transistor gate operatively proximate the first channel region;
a second transistor gate operatively proximate the second channel region; a first storage-element electrically coupled with the first source/drain region; and a second storage-element electrically coupled with the third source/drain region.
15. The integrated assembly of claim 14 wherein the semiconductor material further includes at least one element selected from Group 15 of the periodic table.
16. The integrated assembly of claim 15 wherein the semiconductor material comprises one or more of GaP, AlAs, GaAs, AlP, InP, AlSb, GaAlAs, GaInAs and GaInP; where the chemical formulas indicate primary constituents rather than specific stoichiometries.
17. The integrated assembly of claim 14 wherein the semiconductor material further includes at least one element selected from Group 16 of the periodic table.
18. The integrated assembly of claim 17 wherein the semiconductor material comprises: at least one element selected from the group consisting of gallium, indium and mixtures thereof; and at least one element selected from the group consisting of oxygen, sulfur, selenium, tellurium and mixtures thereof.
19. The integrated assembly of claim 14 wherein: the digit line is under the second source/drain region; the digit line is electrically coupled with sense-amplifier-circuitry; the first and second transistor gates are electrically coupled with first and second wordlines, respectively;
the wordlines are electrically coupled with wordline-driver-circuitry; and the first and second storage-elements are within first and second memory cells of a memory array.
20. The integrated assembly of claim 19 wherein: the first and second wordlines are two of many substantially identical wordlines; the digit line is one of many substantially identical digit lines; the first and second memory cells are two of many substantially identical memory cells, with each of the memory cells being uniquely addressed through one of the digit lines in combination with one of the wordlines; and conductive shield lines are between the digit lines.
21. The integrated assembly of claim 19 wherein the first and second storage-elements are capacitors.
22. The integrated assembly of claim 19 wherein: the sense-amplifier-circuitry and the wordline-driver-circuitry are within a first tier of a vertically-stacked arrangement of tiers; the memory array is within a second tier of the vertically-stacked arrangement of tiers; and the second tier is over the first tier.
23. The integrated assembly of claim 14 wherein the first and second transistor gates entirely surround outer peripheries of the first and second channel regions.
24. The integrated assembly of claim 14 wherein the first and second transistor gates do not entirely surround outer peripheries of the first and second channel regions.
25. A method of forming an integrated assembly, comprising:
forming spaced-apart digit lines extending along a first direction; forming conductive interconnect material over the digit lines; patterning the conductive interconnect material into spaced-apart contacts electrically coupled with the digit lines; forming semiconductor material over the spaced-apart contacts; patterning the semiconductor material into active regions, the active regions being in one-to-one correspondence with the contacts; each active region having a central region over an associated one of the contacts, and having a pair of distal regions horizontally offset from the central region; forming outer source/drain regions within the distal regions of the active regions, and forming inner source/drain regions within the central regions of the active regions; channel regions being between the inner and outer source/drain regions; the inner source/drain regions being electrically coupled to the digit lines through the contacts; forming wordlines to extend along a second direction, the second direction crossing the first direction; the wordlines comprising transistor gates along the channel regions; and forming storage-elements to be electrically coupled with the outer source/drain regions.
26. The method of claim 25 wherein the digit lines comprise a first metal-containing material; wherein the conductive interconnect material comprises a second metal-containing material; and wherein the second metal-containing material is compositionally different from the first metal-containing material.
27. The method of claim 25 wherein the second direction is substantially orthogonal to the first direction.
28. The method of claim 25 wherein the forming of the spaced-apart digit lines utilizes damascene processing.
29. The method of claim 25 wherein the semiconductor material includes at least one element selected from Group 13 of the periodic table.
30. The method of claim 29 wherein the semiconductor material further includes at least one element selected from Group 15 of the periodic table.
31. The method of claim 30 wherein the semiconductor material comprises one or more of GaP, AlAs, GaAs, AlP, InP, AlSb, GaAlAs, GaInAs and GaInP; where the chemical formulas indicate primary constituents rather than specific stoichiometries.
32. The method of claim 29 wherein the semiconductor material further includes at least one element selected from Group 16 of the periodic table.
33. The method of claim 32 wherein the semiconductor material comprises: at least one element selected from the group consisting of gallium, indium and mixtures thereof; and at least one element selected from the group consisting of oxygen, sulfur, selenium, tellurium and mixtures thereof.
INTEGRATED ASSEMBLIES, AND METHODS OF FORMING INTEGRATED ASSEMBLIES
RELATED PATENT DATA
This application is related to U.S. Patent Application Serial No. 16/666,709 filed October 29, 2019, entitled “Integrated Assemblies, and Methods of Forming Integrated Assemblies”, the entirety of which is incorporated by reference herein.
TECHNICAL FIELD
Integrated assemblies (e.g., integrated memory), and methods of forming integrated assemblies.
BACKGROUND
Semiconductor materials may be incorporated into integrated assemblies. For instance, the semiconductor materials may be utilized as active regions which comprise channel regions and source/drain regions of transistors. The transistors may be utilized as access devices in memory arrays, or in other applications.
It would be desirable to develop improved active region arrangements suitable for utilization in integrated assemblies, and to develop integrated components utilizing the improved arrangements. It would also be desirable to develop improved memory-cell configurations, and improved memory-array configurations.
BRIEF DESCRIPTION OF THE DRAWINGS
FIGS. 1 and 1A are a diagrammatic top-down view and a diagrammatic cross-sectional side view, respectively, of a region of an example integrated assembly at an example process stage of an example method for forming an example memory array. The cross-sectional view of FIG. 1A is along the line A-A of FIG. 1.
FIGS. 2 and 2A are a diagrammatic top-down view and a diagrammatic cross-sectional side view, respectively, of the region of the example integrated assembly of FIGS. 1 and 1A at an example
process stage following the process stage of FIGS. 1 and 1A. The cross-sectional view of FIG. 2A is along the line A-A of FIG. 2.
FIGS. 3 and 3A are a diagrammatic top-down view and a diagrammatic cross-sectional side view, respectively, of the region of the example integrated assembly of FIGS. 1 and 1A at an example process stage following the process stage of FIGS. 2 and 2A. The cross-sectional view of FIG. 3A is along the line A-A of FIG. 3.
FIGS. 4 and 4A are a diagrammatic top-down view and a diagrammatic cross-sectional side view, respectively, of the region of the example integrated assembly of FIGS. 1 and 1A at an example process stage following the process stage of FIGS. 3 and 3A. The cross-sectional view of FIG. 4A is along the line A-A of FIG. 4.
FIGS. 5 and 5A are a diagrammatic top-down view and a diagrammatic cross-sectional side view, respectively, of the region of the example integrated assembly of FIGS. 1 and 1A at an example process stage following the process stage of FIGS. 4 and 4A. The cross-sectional view of FIG. 5A is along the line A-A of FIG. 5.
FIGS. 6 and 6A are a diagrammatic top-down view and a diagrammatic cross-sectional side view, respectively, of the region of the example integrated assembly of FIGS. 1 and 1A at an example process stage following the process stage of FIGS. 5 and 5A. The cross-sectional view of FIG. 6A is along the line A-A of FIG. 6.
FIGS. 7 and 7A are a diagrammatic top-down view and a diagrammatic cross-sectional side view, respectively, of the region of the example integrated assembly of FIGS. 1 and 1A at an example process stage following the process stage of FIGS. 6 and 6A. The cross-sectional view of FIG. 7A is along the line A-A of FIG. 7.
FIGS. 8, 8A and 8B are a diagrammatic top-down view (FIG. 8) and diagrammatic cross-sectional side views (FIGS. 8A and 8B) of the region of the example integrated assembly of FIGS. 1 and 1A at an example process stage following the process stage of FIGS. 7 and 7A. The cross-sectional views of FIGS. 8A and 8B are along the lines A-A and B-B of FIG. 8, respectively.
FIGS. 9, 9A and 9B are a diagrammatic top-down view (FIG. 9) and diagrammatic cross-sectional side views (FIGS. 9A and 9B) of the region of the example integrated assembly of FIGS. 1 and 1A at an example process stage following the process stage of FIGS. 8, 8A and 8B. The cross-sectional views of FIGS. 9A and 9B are along the lines A-A and B-B of FIG. 9, respectively.
FIGS. 10, 10A and 10B are a diagrammatic top-down view (FIG. 10) and diagrammatic cross-sectional side views (FIGS. 10A and 10B) of the region of the example integrated assembly of FIGS. 1 and 1A at an example process stage following the process stage of FIGS. 9, 9A and 9B. The cross-sectional views of FIGS. 10A and 10B are along the lines A-A and B-B of FIG. 10, respectively.
FIGS. 11, 11A, 11B and 11C are a diagrammatic top-down view (FIG. 11) and diagrammatic cross-sectional side views (FIGS. 11A, 11B and 11C) of the region of the example integrated assembly of FIGS. 1 and 1A at an example process stage following the process stage of FIGS. 10, 10A and 10B. The cross-sectional views of FIGS. 11A, 11B and 11C are along the lines A-A, B-B and C-C of FIG. 11, respectively. FIG. 11C is at a different scale than FIG. 11.
FIG. 12 is a diagrammatic top-down view of the region of the example integrated assembly of FIG. 11 incorporated into an example memory array.
FIG. 13 is a diagrammatic cross-sectional side view of a region of an example integrated assembly alternative to the region shown in FIG. 11C.
FIG. 14 is a diagrammatic cross-sectional side view of a region of an example integrated assembly alternative to the region shown in FIG. 11A.
FIG. 15 is a diagrammatic cross-sectional side view of an assembly comprising a vertical stack of tiers.
DETAILED DESCRIPTION OF THE ILLUSTRATED EMBODIMENTS
Some embodiments include memory architectures having memory active regions supported over digit lines (i.e., sense lines,
bitlines, etc.), and having wordlines (i.e., access lines, etc.) extending across the digit lines and the active regions. The memory active regions may be incorporated into memory cells, and each of the memory cells may be uniquely addressed utilizing one of the digit lines and one of the wordlines. Some embodiments include memory architectures in which memory active regions comprise semiconductor material which includes at least one element selected from Group 13 of the periodic table (e.g., gallium (Ga), indium (In), thallium (Tl), etc.). The memory active regions may be over digit lines. The digit lines may extend horizontally, and the memory active regions may also extend horizontally. Example embodiments are described with reference to FIGS. 1-15.
FIGS. 1-12 illustrate an example method of forming an example memory array.
Referring to FIGS. 1 and 1A, an integrated assembly 10 includes a mass 14 over a supporting insulative structure 16.
The mass 14 comprises material (mass material) 15. Such material may comprise any suitable composition(s); and in some embodiments may comprise one or more of silicon dioxide, low-k dielectric material, etc. The term “low-k” means a dielectric constant less than that typically associated with silicon dioxide (i.e., less than about 3.9). Example low-k materials are porous silicon dioxide, carbon-doped silicon dioxide, boron-doped silicon dioxide, etc.
The insulative structure 16 comprises an insulative material 17. Such insulative material may comprise any suitable composition(s); and in some embodiments may comprise, consist essentially of, or consist of one or more of silicon nitride, silicon dioxide, low-k dielectric material, high-k dielectric material, etc.
The insulative structure 16 is supported by an underlying base 12. The base 12 may comprise semiconductor material; and may, for example, comprise, consist essentially of, or consist of monocrystalline silicon. The base 12 may be referred to as a semiconductor substrate. The term "semiconductor substrate" means any construction comprising semiconductive material, including, but not limited to, bulk
semiconductive materials such as a semiconductive wafer (either alone or in assemblies comprising other materials), and semiconductive material layers (either alone or in assemblies comprising other materials). The term "substrate" refers to any supporting structure, including, but not limited to, the semiconductor substrates described above. In some applications, the base 12 may correspond to a semiconductor substrate containing one or more materials associated with integrated circuit fabrication. Such materials may include, for example, one or more of refractory metal materials, barrier materials, diffusion materials, insulator materials, etc.
A gap is provided between the base 12 and the insulative structure 16 to indicate that other materials, components, etc., may be provided between the base 12 and the insulative structure 16. In some embodiments, the insulative structure 16 may be provided directly against an upper surface of the base 12.
Referring to FIGS. 2 and 2A, the mass 14 (FIGS. 1 and 1A) is patterned into a plurality of linear features 18, with such linear features being spaced from one another by intervening gaps 20. The linear features extend along a first direction which is indicated to be a y-axis direction relative to the top-down view of FIG. 2.
Referring to FIGS. 3 and 3A, conductive digit line material 22 is provided within the gaps 20. The digit line material within the gaps is patterned into digit lines 24. Such digit lines extend along the first direction (y-axis direction) defined by the gaps 20. In some embodiments the digit line material 22 may be formed over the material 15, and may then be removed from over the material 15 utilizing planarization (e.g., chemical-mechanical processing) or other suitable processing. The process of forming trenches, followed by forming of material within the trenches to a level which overfills the trenches, and then removing excess material with planarization or other suitable processing, may be referred to as damascene processing.
The digit lines 24 may be recessed within the gaps 20 utilizing any suitable processing; including, for example, etch chemistry which is selective for the conductive material 22 relative to the material 15.
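The damascene-plus-recess sequence described above can be pictured with a toy one-dimensional model. The following Python sketch is illustrative only; the column indices, heights and names are invented for this example and are not part of the disclosure:

    def damascene_with_recess(mass_heights, gap_positions, overfill, recess):
        """mass_heights: dict column -> height of mass material (arbitrary units)."""
        surface = dict(mass_heights)
        for g in gap_positions:     # etch gaps 20 through the mass 14
            surface[g] = 0
        for g in gap_positions:     # overfill the gaps with digit line material 22
            surface[g] = overfill
        top = max(mass_heights.values())
        for g in gap_positions:     # planarize excess material back to the mass top
            surface[g] = min(surface[g], top)
        for g in gap_positions:     # recess the digit lines 24 below the mass top
            surface[g] -= recess
        return surface

    # Three columns of mass material with one intervening gap; the conductive
    # material in the gap ends up recessed below the surrounding mass.
    print(damascene_with_recess({0: 10, 1: 10, 2: 10}, gap_positions=[1],
                                overfill=14, recess=3))
    # -> {0: 10, 1: 7, 2: 10}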
The digit lines 24 are spaced apart from one another, and specifically are spaced from one another along the x-axis direction of FIG. 3.
The digit line material 22 may comprise any suitable electrically conductive composition(s); such as, for example, one or more of various metals (e.g., titanium, tungsten, cobalt, nickel, platinum, ruthenium, etc.), metal-containing compositions (e.g., metal silicide, metal nitride, metal carbide, etc.), and/or conductively-doped semiconductor materials (e.g., conductively-doped silicon, conductively-doped germanium, etc.). In some embodiments, the digit line material 22 may be a metal-containing material. Such metal-containing material may comprise any suitable composition(s); such as, for example, one or more of titanium, tungsten, titanium nitride, tungsten nitride, tantalum nitride, etc.
Referring to FIGS. 4 and 4A, conductive interconnect material 26 is formed within the gaps 20 (FIGS. 3 and 3A), and over the digit lines 24. The conductive interconnect material 26 may be formed within the gaps 20 utilizing damascene processing.
The conductive interconnect material 26 may comprise any suitable electrically conductive composition(s); such as, for example, one or more of various metals (e.g., titanium, tungsten, cobalt, nickel, platinum, ruthenium, etc.), metal-containing compositions (e.g., metal silicide, metal nitride, metal carbide, etc.), and/or conductively-doped semiconductor materials (e.g., conductively-doped silicon, conductively-doped germanium, etc.). In some embodiments, the conductive interconnect material 26 may be a metal-containing material. Such metal-containing material may comprise any suitable composition(s); such as, for example, one or more of titanium, tungsten, titanium nitride, tungsten nitride, tantalum nitride, etc.
The conductive interconnect material 26 may comprise a same composition as the digit line material 22, or may comprise a different composition relative to the digit line material. In some embodiments, the conductive interconnect material 26 and the digit line material 22 may both be metal-containing materials, but may be different
compositions relative to one another. For instance, the conductive interconnect material 26 may comprise a metal nitride (e.g., titanium nitride, tantalum nitride, tungsten nitride, etc.), and the digit line material 22 may comprise, consist essentially of, or consist of tungsten. In some embodiments, the digit line material 22 may be referred to as a first conductive material, and the interconnect material 26 may be referred to as a second conductive material.
Referring to FIGS. 5 and 5A, regions of the conductive interconnect material 26 are removed to pattern remaining portions of the conductive interconnect material 26 into conductive contacts (interconnects) 28. The conductive interconnect material 26 may be patterned with any suitable processing. For instance, a patterned mask (e.g., a photolithographically-patterned photoresist mask) may be utilized to protect the material 26 within the locations of the conductive contacts 28, and then unprotected segments of the material 26 may be removed with an etch selective for the material 26 relative to the underlying digit line material 22. Subsequently, the protective mask may be removed to leave the configuration shown in FIGS. 5 and 5A. In some embodiments, it may be advantageous for the conductive interconnect material 26 to comprise a different composition than the digit line material 22 so that the conductive interconnect material 26 may be selectively removed relative to the digit line material 22.
The contacts 28 are spaced apart from one another, and are electrically coupled with the digit lines 24. In the illustrated embodiment, the contacts 28 are directly against the digit lines 24. In some embodiments, the digit lines 24 may be considered to extend substantially horizontally, and the contacts 28 may be considered to extend substantially vertically (i.e., upwardly) from the digit lines. The term “substantially horizontally” means horizontally to within reasonable tolerances of fabrication and measurement, and the term “substantially vertically” means vertically to within reasonable tolerances of fabrication and measurement. In some embodiments, the base 12 may have a planar, horizontal upper surface (as shown), the term “substantially horizontal” may mean to within 10° of being parallel to the planar upper surface, and the term “substantially vertical” may mean to within 10° of being orthogonal to the planar upper surface.
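The 10° tolerances in the preceding paragraph reduce to a simple angle check. A minimal Python sketch of that check follows (the two-component direction vector and the function names are assumptions made for this illustration, not definitions from the disclosure):

    import math

    def angle_from_horizontal_deg(dx, dz):
        # Angle of direction (dx, dz) relative to the horizontal plane,
        # folded into the range [0, 90] degrees.
        return math.degrees(math.atan2(abs(dz), abs(dx)))

    def substantially_horizontal(dx, dz, tol_deg=10.0):
        # Within 10 degrees of parallel to the planar upper surface.
        return angle_from_horizontal_deg(dx, dz) <= tol_deg

    def substantially_vertical(dx, dz, tol_deg=10.0):
        # Within 10 degrees of orthogonal to the planar upper surface.
        return 90.0 - angle_from_horizontal_deg(dx, dz) <= tol_deg

    print(substantially_horizontal(1.0, 0.1))   # ~5.7 degrees -> True
    print(substantially_vertical(0.05, 1.0))    # ~2.9 degrees from vertical -> True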
The removal of the segments of the conductive material 26 from over the digit lines 24 leaves gaps 30 over regions of the digit lines.
Referring to FIGS. 6 and 6A, an insulative material 32 is formed within the gaps 30 (FIGS. 5 and 5A). The insulative material 32 may comprise any suitable composition(s); and in some embodiments may comprise, consist essentially of, or consist of silicon dioxide.
The insulative material 32 may be initially formed to extend across the material 15, as well as within the gaps 30; and may then be removed from over the material 15 with a planarization process (e.g., chemical mechanical polishing). The planarization process forms a planarized surface 31 extending across the materials 15, 26 and 32.
Referring to FIGS. 7 and 7A, a semiconductor material 34 is formed on the planarized surface 31, and specifically is formed over the spaced-apart contacts 28.
The semiconductor material 34 may comprise any suitable composition(s); and in some embodiments may comprise, consist essentially of, or consist of material comprising at least one element selected from Group 13 of the periodic table (e.g., one or more of aluminum (Al), gallium (Ga), indium (In) and thallium (Tl)). The semiconductor material 34 may further include at least one element selected from Group 15 of the periodic table (e.g., one or more of phosphorus (P), arsenic (As) and antimony (Sb)). For instance, the semiconductor material may comprise one or more of GaP, AlAs, GaAs, AlP, InP, AlSb, GaAlAs, GaInAs and GaInP; where the chemical formulas indicate primary constituents rather than specific stoichiometries.
In some embodiments, the semiconductor material 34 may comprise at least one element selected from Group 13 of the periodic table, and at least one element selected from Group 16 of the periodic table (e.g., one or more of oxygen (O), sulfur (S), selenium (Se) and tellurium (Te)). In some embodiments, the semiconductor material 34
may comprise one or more elements selected from Group 14 of the periodic table (e.g., one or more of silicon, germanium, etc.).
In some embodiments, the semiconductor material 34 may comprise a metal selected from the group consisting of aluminum, gallium, indium, thallium, tin, cadmium, zinc and mixtures thereof, in combination with one or more of oxygen, sulfur, selenium and tellurium.
Referring to FIGS. 8, 8A and 8B, the semiconductor material 34 is patterned into active regions 36. Such active regions may be considered to extend horizontally (or at least substantially horizontally) along the planarized upper surface 31. The contacts 28 are shown in dashed-line view in FIG. 8 to indicate that such contacts are under the active regions 36. The insulative material beneath the active regions 36 is generically indicated to be “15/32” to indicate that such insulative material comprises both the material 15 and the material 32. The insulative material 32 is not specifically diagrammed in FIG. 8 so that the emphasis of such figure is on the active regions 36 and the general layout of such active regions.
One of the active regions 36 is designated as 36a so that it may be distinguished from the other active regions in the description which follows. All of the active regions are substantially identical to one another, with the term “substantially identical” meaning identical to within reasonable tolerances of fabrication and measurement.
The active regions 36 are in one-to-one correspondence with the contacts 28. Each active region has a central region 38 over an associated one of the contacts 28, and has a pair of distal regions 40 and 42 which are horizontally offset from the central region. The regions 38, 40 and 42 are only labeled relative to the active region 36a, but are present relative to all of the active regions 36. In some embodiments, the regions 38, 40 and 42 may correspond to source/drain regions, and accordingly may be doped with suitable conductivity-enhancing dopant. The doping of the regions 38, 40 and 42 may be conducted at any suitable process stage; including, for example, one or more implants conducted after the patterning of the active regions 36, and/or after the patterning of the wordlines (FIGS. 10, 10A and 10B). Suitable dopants may include one or both of sulfur and selenium in applications in which the semiconductor material 34 comprises elements from Groups 13 and 15 of the periodic table; and may include one or both of nitrogen and fluorine in applications in which the semiconductor material 34 comprises elements from Groups 13 and 16 of the periodic table.
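The dopant guidance just given maps cleanly to a small lookup. The following Python helper is purely illustrative (the function name and the set-based encoding of periodic-table groups are invented for this sketch, not part of the disclosure):

    def candidate_dopants(groups_present):
        """groups_present: set of periodic-table group numbers in material 34."""
        if {13, 15} <= groups_present:
            return ("sulfur", "selenium")
        if {13, 16} <= groups_present:
            return ("nitrogen", "fluorine")
        raise ValueError("no dopant guidance described for this material family")

    print(candidate_dopants({13, 15}))  # e.g., a GaAs-type material -> ('sulfur', 'selenium')
    print(candidate_dopants({13, 16}))  # e.g., a Ga/In oxide -> ('nitrogen', 'fluorine')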
In some embodiments, the central source/drain region 38 may be referred to as an inner source/drain region, and the distal source/drain regions 40 and 42 may be referred to as outer source/drain regions. In some embodiments, the source/drain regions 40, 38 and 42 may be referred to as first, second and third source/drain regions, respectively.
The inner source/drain region 38 (or alternatively, the second source/drain region 38) is electrically coupled to the underlying digit line 24 through one of the conductive interconnects 28. In the shown embodiment, the conductive interconnect 28 directly contacts both the central source/drain region 38 and the digit line 24.
A region 44 is between the source/drain regions 38 and 40, and another region 46 is between the source/drain regions 38 and 42. The regions 44 and 46 may ultimately correspond to channel regions, and may be doped to an appropriate level with an appropriate dopant to achieve a desired threshold voltage. The doping, if any, of the regions 44 and 46 may be conducted at any suitable process stage. The regions 44 and 46 may be referred to as first and second channel regions, respectively.
Referring to FIGS. 9, 9A and 9B, gate dielectric material (also referred to as dielectric material, or insulative material) 48 is formed over and between the active regions 36, conductive gate material 50 is formed over the gate dielectric material 48, and insulative capping material 52 is formed over the gate material 50.
The gate dielectric material 48 may comprise any suitable composition(s); and in some embodiments may comprise, consist essentially of, or consist of silicon dioxide.
The conductive gate material 50 may comprise any suitable electrically conductive composition(s); such as, for example, one or more of various metals (e.g., titanium, tungsten, cobalt, nickel,
platinum, ruthenium, etc.), metal-containing compositions (e.g., metal silicide, metal nitride, metal carbide, etc.), and/or conductively-doped semiconductor materials (e.g., conductively-doped silicon, conductively-doped germanium, etc.). In some embodiments, the conductive gate material 50 may comprise one or more metal-containing materials; such as, for example, one or more of tungsten, titanium nitride, tantalum nitride, tungsten nitride, etc.
The insulative capping material 52 may comprise any suitable composition(s); and in some embodiments may comprise, consist essentially of, or consist of one or both of silicon dioxide and silicon nitride.
Referring to FIGS. 10, 10A and 10B, the conductive material 50 is patterned into wordlines 54. The wordlines extend along a second direction (the x-axis direction of FIG. 10). The second direction of the wordlines crosses the first direction of the digit lines (with such first direction being along the y-axis direction as shown in, for example, FIG. 3). In the illustrated embodiment of FIGS. 1-10, the second direction of the wordlines is substantially orthogonal to the first direction of the digit lines; with the term “substantially orthogonal” meaning orthogonal to within reasonable tolerances of fabrication and measurement. In other embodiments, the wordlines may cross the digit lines without extending substantially orthogonal to such digit lines.
The active regions 36 are shown in dashed-line (phantom) view relative to the top view of FIG. 10 to indicate that such active regions are under other materials.
Two of the wordlines 54 are labeled as 54a and 54b so that such wordlines may be distinguished from the other wordlines. The wordlines 54a and 54b are along the channel regions 44 and 46 associated with the active region 36a as shown in FIG. 10B. Regions of the wordlines 54a and 54b proximate the active region 36a may be considered to comprise transistor gates 56a and 56b. Each of the wordlines 54 will comprise a transistor gate where it crosses an active region, and the transistor gates 56a and 56b are to be understood as being representative of such transistor gates.
A transistor 60a comprises the transistor gate 56a, the channel region 44, and the source/drain regions 38 and 40. The transistor gate 56a may be considered to be operatively adjacent to (operatively proximate to) the channel region 44 such that a sufficient voltage applied to the gate 56a will induce an electric field which enables current flow through the channel region 44 to electrically couple the source/drain regions 38 and 40 with one another. If the voltage to the gate is below a threshold level, the current will not flow through the channel region 44, and the source/drain regions 38 and 40 will not be electrically coupled with one another. The selective control of the coupling/decoupling of the source/drain regions 38 and 40 through the level of voltage applied to the gate 56a may be referred to as gated coupling of the source/drain regions. In other words, the source/drain regions 38 and 40 may be considered to be gatedly coupled to one another through the channel region 44 during operation of the transistor 60a. Similarly, the gate 56b may be considered to be operatively adjacent to the channel region 46 such that source/drain regions 38 and 42 of a second transistor 60b may be gatedly coupled to one another through operation of the gate 56b. The gates 56a and 56b may be considered to be representative of a large number of transistor gates formed across the active regions 36 and associated with the wordlines 54.
Referring to FIGS. 11, 11A, 11B and 11C, insulative material 62 is formed along sidewalls of the wordlines 54 and along edges of the active regions 36. The material 62 may comprise any suitable composition(s); and in some embodiments may comprise, consist essentially of, or consist of one or both of silicon dioxide and silicon nitride. In some embodiments, the insulative material 62 may be referred to as spacer material. The insulative material 62 may be formed with any suitable processing. For instance, the material 62 may be deposited as a layer across a surface of the assembly 10, and may then be anisotropically etched into the illustrated configuration.
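The gated-coupling behavior described above amounts to a threshold comparison. Here is a minimal Python model of it (the 0.7 V threshold and all names are hypothetical values chosen for illustration; this is a sketch, not a device model from the disclosure):

    from dataclasses import dataclass

    @dataclass
    class AccessTransistor:
        threshold_v: float  # gate voltage needed to enable channel conduction

        def gatedly_coupled(self, gate_v: float) -> bool:
            # At or above threshold, the gate's field enables current flow
            # through the channel region, coupling the source/drain pair;
            # below threshold the pair is decoupled.
            return gate_v >= self.threshold_v

    t60a = AccessTransistor(threshold_v=0.7)
    print(t60a.gatedly_coupled(1.2))   # True  -> regions 38 and 40 coupled
    print(t60a.gatedly_coupled(0.3))   # False -> regions 38 and 40 decoupled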
The cross-sectional view of FIG. 11A shows that the digit lines 24 may be electrically coupled with sense-amplifier-circuitry (SA), and that such sense-amplifier-circuitry may be supported by the base 12.
The cross-sectional view of FIG. 11B shows that the wordlines 54 may be electrically coupled with wordline-driver-circuitry (DRIVER), and that such wordline-driver-circuitry may be supported by the base 12.
The illustrated wordlines 54 and digit lines 24 may be representative of a large number of wordlines and digit lines formed across a memory array. For instance, the memory array may have hundreds, thousands, millions, etc., of substantially identical wordlines, and substantially identical digit lines. The wordlines may be considered to extend along rows of the memory array, and the digit lines may be considered to extend along columns of the memory array.
FIG. 11C shows a cross-section through the channel region 44 (with FIG. 11C being at a different scale than FIG. 11). The structure under the channel region is generically illustrated as 15/32/22/26 to indicate that such structure may comprise one or more of the materials 15, 32, 22 and 26. However, the specific materials 15, 32, 22 and 26 are not shown in FIG. 11C so that the emphasis of the drawing is on the channel region 44 and the wordline 54a extending along such channel region. In the illustrated embodiment, the wordline 54a comprises a gate 56a, and such gate extends along a top of the channel region 44 and along sidewalls of the channel region 44. However, the gate only extends partially around the channel region, and does not extend entirely around the channel region (specifically, does not extend along a bottom of the channel region). In other embodiments (described below with reference to FIG. 13) the gate may extend entirely around the channel region.
Referring to FIG. 12, the integrated assembly 10 is illustrated at a process stage which may follow the process stage of FIG. 11. Specifically, contacts 70 (only some of which are labeled) are formed to extend to the distal (outer) source/drain regions associated with the active regions 36 (e.g., the source/drain regions 40 and 42 associated
with the active region 36a), and then storage-elements 72 (only some of which are labeled) are formed over the active regions and electrically coupled with the distal source/drain regions through the contacts 70. The storage-elements may be any suitable devices having at least two detectable states; and in some embodiments may be, for example, capacitors (as shown), resistive-memory devices, conductive-bridging devices, phase-change-memory (PCM) devices, programmable metallization cells (PMCs), etc. In some embodiments, one or more of the contacts 70 could be spatially shifted with a redistribution layer in order to closely pack the storage-elements.
In some embodiments, each of the active regions 36 may be considered to be associated with two of the storage-elements 72, with one of the storage-elements being a first storage-element and the other being a second storage-element. For instance, the active region 36a is shown to be associated with two storage-elements 72a and 72b. The storage-element 72a may be considered to be a first storage-element which is electrically coupled with the first source/drain region 40. The storage-element 72b may be considered to be a second storage-element which is electrically coupled with the third source/drain region 42.
Memory cells 80 (only some of which are labeled) may comprise the storage-elements 72. In some embodiments, the transistors (e.g., 60a and 60b) may be considered to be access transistors of the memory cells 80. For instance, the transistor 60a may be considered to be an access device for a memory cell labeled 80a, and the transistor 60b may be considered to be an access device for the memory cell labeled 80b.
A memory array 82 comprises the memory cells 80. The illustrated memory cells may be representative of a large number of substantially identical memory cells formed across the memory array. Each of the memory cells may be uniquely addressed by one of the wordlines 54 and one of the digit lines 24.
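Unique addressing by a (wordline, digit line) pair can be sketched as a dictionary keyed on that pair. The Python model below is illustrative only (the class and parameter names are invented; real array access involves the sense amplifiers and drivers described above):

    class MemoryArrayModel:
        def __init__(self, n_wordlines, n_digit_lines):
            # One storage location per unique (wordline, digit line) address.
            self.cells = {(wl, dl): 0
                          for wl in range(n_wordlines)
                          for dl in range(n_digit_lines)}

        def write(self, wordline, digit_line, value):
            self.cells[(wordline, digit_line)] = value

        def read(self, wordline, digit_line):
            return self.cells[(wordline, digit_line)]

    array = MemoryArrayModel(n_wordlines=4, n_digit_lines=4)
    array.write(wordline=2, digit_line=1, value=1)
    print(array.read(2, 1))  # -> 1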
The configurations of the structures described above with reference to FIGS. 11 and 12 are example configurations, and other suitable configurations may be utilized in other embodiments. For instance, FIG. 13 shows a view of a channel region 44 analogous to that described above with reference to FIG. 11C. However, the embodiment of FIG. 13 has the transistor gate 56a entirely surrounding an outer periphery of the channel region 44.
As another example, FIG. 14 shows digit lines 24 in an arrangement similar to that of FIG. 11A, but shows shield lines 84 between the digit lines. The shield lines 84 may comprise any suitable electrically conductive composition(s); such as, for example, one or more of various metals (e.g., titanium, tungsten, cobalt, nickel, platinum, ruthenium, etc.), metal-containing compositions (e.g., metal silicide, metal nitride, metal carbide, etc.), and/or conductively-doped semiconductor materials (e.g., conductively-doped silicon, conductively-doped germanium, etc.). The shield lines 84 may be electrically coupled with a reference voltage (e.g., ground, VCC/2), may be electrically floating, or may be electrically coupled with an active circuit. The shield lines may assist in precluding capacitive coupling (crosstalk) between adjacent digit lines during operation of a memory array.
The formation of the digit lines 24 beneath the active regions 36 in the embodiments described above (e.g., the embodiments of FIGS. 11 and 12) may enable additional room to be provided for the digit lines as compared to conventional structures. Such may enable larger digit lines to be utilized, which may reduce resistance along the digit lines.
The utilization of semiconductor material comprising elements from Groups 13 and 15 of the periodic table may enable low leakage (e.g., negligible gate-induced drain leakage (GIDL)), which may improve device refresh as compared to conventional configurations.
The configurations described above (e.g., the memory array configurations of FIGS. 11 and 12) may enable low coupling (crosstalk) between adjacent wordlines, which may alleviate a so-called “row hammer” problem associated with conventional architectures.
In some embodiments, the memory arrays described herein (e.g., the memory array 82 of FIG. 12) may be within a memory tier (i.e.,
memory deck) which is within a vertically-stacked arrangement of tiers (or decks). For instance, FIG. 15 shows a portion of an integrated assembly 100 comprising a vertically-stacked arrangement of tiers 110, 120 and 130. The vertically-stacked arrangement may extend upwardly to include additional tiers. The tiers 110, 120 and 130 may be considered to be examples of levels that are stacked one atop the other. The levels may be within different semiconductor dies, or at least two of the levels may be within the same semiconductor die. The bottom tier (first tier) 110 may include control circuitry and/or sensing circuitry (e.g., may include wordline-driver-circuitry, sense-amplifier-circuitry, etc.); and in some applications may comprise CMOS circuitry. The upper tiers (second and third tiers) 120 and 130 may include memory arrays, such as, for example, the memory array 82 described above with reference to FIG. 12. The memory arrays within the various tiers may be the same as one another (e.g., may all be DRAM arrays), or may be different relative to one another (e.g., some may be DRAM arrays, while others are NAND arrays). Also, one or more of the upper tiers may include control circuitry or other logic circuitry. The assemblies and structures discussed above may be utilized within integrated circuits (with the term "integrated circuit" meaning an electronic circuit supported by a semiconductor substrate); and may be incorporated into electronic systems. Such electronic systems may be used in, for example, memory modules, device drivers, power modules, communication modems, processor modules, and application-specific modules, and may include multilayer, multichip modules. The electronic systems may be any of a broad range of systems, such as, for example, cameras, wireless devices, displays, chip sets, set top boxes, games, lighting, vehicles, clocks, televisions, cell phones, personal computers, automobiles, industrial control systems, aircraft, etc. Unless specified otherwise, the various materials, substances, compositions, etc. described herein may be formed with any suitable methodologies, either now known or yet to be developed, including, for example, atomic layer deposition (ALD), chemical vapor deposition (CVD), physical vapor deposition (PVD), etc.
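The tiered organization described above lends itself to a simple software model. The Python sketch below is purely illustrative and is not part of the disclosure: the tier numbers, the roles, and the idea of addressing a cell by a (wordline, digit line) pair follow the description of FIGS. 12 and 15, while every class and function name is invented for the example.

from dataclasses import dataclass, field

@dataclass
class Tier:
    level: int
    role: str                       # e.g., "CMOS control" or "DRAM array"
    cells: dict = field(default_factory=dict)

    def cell(self, wordline: int, digit_line: int) -> dict:
        # Each memory cell is uniquely addressed by one wordline and one digit line.
        return self.cells.setdefault((wordline, digit_line), {"state": 0})

# Assumed arrangement: first tier 110 carries control/sensing circuitry,
# while tiers 120 and 130 carry memory arrays stacked above it.
assembly = [Tier(110, "CMOS control"), Tier(120, "DRAM array"), Tier(130, "DRAM array")]
assembly[1].cell(wordline=54, digit_line=24)["state"] = 1  # touch one cell in tier 120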
The terms “dielectric” and “insulative” may be utilized to describe materials having insulative electrical properties. The terms are considered synonymous in this disclosure. The utilization of the term “dielectric” in some instances, and the term “insulative” (or “electrically insulative”) in other instances, may be to provide language variation within this disclosure to simplify antecedent basis within the claims that follow, and is not utilized to indicate any significant chemical or electrical differences. The terms “electrically connected” and “electrically coupled” may both be utilized in this disclosure. The terms are considered synonymous. The utilization of one term in some instances and the other in other instances may be to provide language variation within this disclosure to simplify antecedent basis within the claims that follow. The particular orientation of the various embodiments in the drawings is for illustrative purposes only, and the embodiments may be rotated relative to the shown orientations in some applications. The descriptions provided herein, and the claims that follow, pertain to any structures that have the described relationships between various features, regardless of whether the structures are in the particular orientation of the drawings, or are rotated relative to such orientation. The cross-sectional views of the accompanying illustrations only show features within the planes of the cross-sections, and do not show materials behind the planes of the cross-sections, unless indicated otherwise, in order to simplify the drawings. When a structure is referred to above as being “on”, “adjacent” or “against” another structure, it can be directly on the other structure or intervening structures may also be present. In contrast, when a structure is referred to as being “directly on”, “directly adjacent” or “directly against” another structure, there are no intervening structures present. The terms "directly under", "directly over", etc., do not indicate direct physical contact (unless expressly stated otherwise), but instead indicate upright alignment. Structures (e.g., layers, materials, etc.) may be referred to as “extending vertically” to indicate that the structures generally extend
upwardly from an underlying base (e.g., substrate). The vertically-extending structures may extend substantially orthogonally relative to an upper surface of the base, or not. Some embodiments include an integrated assembly having a digit line which includes a first conductive material, and which extends substantially horizontally. An interconnect extends upwardly from the digit line and includes a second conductive material. An active region is over the interconnect and extends substantially horizontally. The active region includes semiconductor material. The active region includes first and second source/drain regions within the semiconductor material, and includes a channel region within the semiconductor material and between the first and second source/drain regions. The interconnect electrically couples the second source/drain region with the digit line. A transistor gate is operatively proximate the channel region. A storage-element is electrically coupled with the first source/drain region. Some embodiments include an integrated assembly having an active region which contains semiconductor material. The active region includes first, second and third source/drain regions within the semiconductor material, includes a first channel region within the semiconductor material and between the first and second source/drain regions, and includes a second channel region within the semiconductor material and between the second and third source/drain regions. The semiconductor material includes at least one element selected from Group 13 of the periodic table. A digit line is electrically coupled with the second source/drain region. A first transistor gate is operatively proximate the first channel region. A second transistor gate is operatively proximate the second channel region. A first storage-element is electrically coupled with the first source/drain region. A second storage-element is electrically coupled with the third source/drain region. Some embodiments include a method of forming an integrated assembly. Spaced-apart digit lines are formed to extend along a first direction. Conductive interconnect material is formed over the digit
lines. The conductive interconnect material is patterned into spaced-apart contacts which are electrically coupled with the digit lines. Semiconductor material is formed over the spaced-apart contacts. The semiconductor material is patterned into active regions. The active regions are in one-to-one correspondence with the contacts. Each active region has a central region over an associated one of the contacts, and has a pair of distal regions horizontally offset from the central region. Outer source/drain regions are formed within the distal regions of the active regions. Inner source/drain regions are formed within the central regions of the active regions. Channel regions are between the inner and outer source/drain regions. The inner source/drain regions are electrically coupled to the digit lines through the contacts. Wordlines are formed to extend along a second direction. The second direction crosses the first direction. The wordlines comprise transistor gates along the channel regions. Storage-elements are formed to be electrically coupled with the outer source/drain regions. In compliance with the statute, the subject matter disclosed herein has been described in language more or less specific as to structural and methodical features. It is to be understood, however, that the claims are not limited to the specific features shown and described, since the means herein disclosed comprise example embodiments. The claims are thus to be afforded full scope as literally worded, and to be appropriately interpreted in accordance with the doctrine of equivalents. |
Methods of forming conductive elements, such as interconnects and electrodes, for semiconductor structures and memory cells. The methods include forming a first conductive material and a second conductive material comprising silver in a portion of at least one opening and performing a polishing process to fill the at least one opening with at least one of the first and second conductive materials. An annealing process may be performed to form a mixture or an alloy of the silver and the first conductive material. The methods enable formation of silver-containing conductive elements having reduced dimensions (e.g., less than about 20 nm). The resulting conductive elements have a desirable resistivity. The methods may be used, for example, to form interconnects for electrically connecting active devices and to form electrodes for memory cells. A semiconductor structure and a memory cell including such a conductive structure are also disclosed. |
CLAIMS What is claimed is: 1. A method of forming a semiconductor structure, comprising: forming a first conductive material over a structure comprising at least one opening defined by sidewalls of a dielectric material; forming a second conductive material over the first conductive material; and conducting at least one of annealing the structure to form a material comprising at least a portion of the first conductive material and the second conductive material and performing a polishing process to substantially redistribute at least one of the first conductive material and the second conductive material into an unfilled region of the at least one opening. 2. The method of claim 1, wherein forming a first conductive material over a structure comprising at least one opening defined by sidewalls of a dielectric material comprises forming the first conductive material over the sidewalls of the dielectric material and a surface of an electrode therebetween. 3. The method of claim 1, wherein forming a first conductive material over a structure comprising at least one opening defined by sidewalls of a dielectric material comprises forming the first conductive material over the at least one opening having at least one dimension of less than about 20 nm. 4. The method of claim 1, wherein forming a second conductive material over the first conductive material comprises forming silver over the first conductive material without substantially filling the at least one opening. 5. The method of claim 1, wherein annealing the structure to form a material comprising at least a portion of the first conductive material and the second conductive material comprises annealing the structure to form a material comprising a mixture of silver and tantalum. 6. The method of claim 1, wherein annealing the structure to form a material comprising at least a portion of the first conductive material and the second conductive material comprises annealing the structure to form a material comprising an alloy consisting of silver and at least one of platinum, aluminum, tin, copper, iridium, titanium, nickel, cobalt, ruthenium, and rhodium. 7. The method of claim 1, wherein annealing the structure to form a material comprising at least a portion of the first conductive material and the second conductive material comprises exposing the structure to a temperature of between about 200°C and about 600°C. 8. The method of claim 1, wherein performing a polishing process to substantially redistribute at least one of the first conductive material and the second conductive material into an unfilled region of the at least one opening comprises at least one of substantially filling the at least one opening with at least one of the first conductive material and the second conductive material and removing material from surfaces of the dielectric material adjacent to the at least one opening. 9. The method of claim 1, wherein performing a polishing process to redistribute at least one of the first conductive material and the second conductive material into an unfilled region of the at least one opening comprises performing the polishing process using a liquid component consisting of water. 10. The method of claim 1, further comprising forming a memory material over the structure comprising the at least one opening defined by sidewalls of the dielectric material. 11. 
The method of claim 10, wherein forming a memory material over the structure comprises forming at least one of a chalcogenide material and an oxide material over the structure. 12. The method of claim 10, wherein forming a memory material over the structure comprises forming at least one of germanium sulfide, germanium selenide, silicon dioxide, tantalum oxide, titanium oxide, nitrogen oxide, zirconium oxide, and hafnium oxide over the structure. 13. The method of claim 1, wherein forming a first conductive material over a structure comprising at least one opening defined by sidewalls of a dielectric material comprises forming the at least one opening having an aspect ratio of between 1:1 and about 20:1. 14. The method of claim 1, wherein forming a first conductive material over a structure comprises forming the first conductive material comprising silver over the structure. 15. The method of claim 1, wherein forming a first conductive material over a structure comprises forming the first conductive material comprising at least one of platinum, tantalum, aluminum, tin, copper, iridium, titanium, nickel, cobalt, ruthenium, and rhodium over the structure. 16. The method of claim 1, wherein forming a second conductive material over the first conductive material comprises forming the second conductive material comprising silver over the first conductive material. 17. The method of claim 1, wherein forming a second conductive material over the first conductive material comprises forming the second conductive material comprising at least one of platinum, tantalum, aluminum, tin, copper, iridium, titanium, nickel, cobalt, ruthenium, and rhodium over the first conductive material. 18. A semiconductor structure, comprising: a conductive structure overlying an electrode; at least one of a chalcogenide material and an oxide material in contact with the conductive structure; and a conductive material overlying the at least one of a chalcogenide material and an oxide material, the conductive material comprising silver and tantalum and at least one region comprising another material. 19. The semiconductor structure of claim 18, wherein the conductive material comprises tantalum overlying the silver. 20. The semiconductor structure of claim 18 or claim 19, wherein the conductive structure overlying the electrode comprises an alloy of silver and at least one of platinum, aluminum, tin, copper, iridium, titanium, nickel, cobalt, ruthenium, and rhodium. |
TITLE METHODS OF FORMING AT LEAST ONE CONDUCTIVE ELEMENT, METHODS OF FORMING A SEMICONDUCTOR STRUCTURE, METHODS OF FORMING A MEMORY CELL AND RELATED SEMICONDUCTOR STRUCTURES PRIORITY CLAIM This application claims the benefit of the filing date of United States Patent Application Serial Number 13/050,725, filed March 17, 2011, for "METHODS OF FORMING AT LEAST ONE CONDUCTIVE ELEMENT, METHODS OF FORMING A SEMICONDUCTOR STRUCTURE, METHODS OF FORMING A MEMORY CELL AND RELATED SEMICONDUCTOR STRUCTURES." TECHNICAL FIELD Embodiments of the present disclosure relate to methods of forming conductive elements for semiconductor devices and, in addition, to semiconductor structures that include such conductive elements. BACKGROUND Integrated circuits (ICs), the key components in thousands of electronic systems, generally include interconnected networks of electrical components fabricated on a common foundation, or substrate. Conductive interconnects are used to electrically connect semiconductor devices, such as capacitors or transistors, or to define a specific IC, such as a computer memory or microprocessor. The quality of the conductive interconnects greatly affects overall manufacturability, performance and lifetime of the IC. Thus, the material used to form the conductive interconnects is increasingly determining the limits in performance, density and reliability of integrated circuits. For example, electrical conductivity of interconnects is extremely significant to the operational speed of the integrated circuit (IC). Aluminum (Al) and alloys thereof have been widely used as interconnect materials in semiconductor devices based on their low resistivity and ready adhesion to interlayer dielectric materials, such as silicon dioxide (SiO2). Unfortunately, aluminum is susceptible to corrosion and offers poor resistance to electromigration, which increases the potential for open circuits from voids or short circuits. In an attempt to improve the performance, reliability, and density of the conductive interconnects, alternative metals to aluminum and aluminum alloys are being explored. To improve conductivity in the wiring, it has been proposed that copper (Cu) and alloys thereof be used to form conductive interconnects. However, copper rapidly diffuses through many conventional dielectric materials to form undesired copper oxide compounds. In addition, copper does not adhere well to conventional dielectric materials or to itself. Silver (Ag) has also been proposed as a substitute for aluminum-containing conductive interconnects and is becoming increasingly significant in use as an electrochemically active material in electrodes of programmable memory cells, such as those of a conductive bridge random access memory (CBRAM) cell. Silver has an extremely low resistivity, but is difficult to deposit in narrow gaps (e.g., gaps having a dimension of 20 nm or less) due to limitations on currently available deposition techniques. While silver may be deposited by sputtering (physical) deposition techniques, these techniques are not suitable for filling narrow gaps with silver. Furthermore, interconnects have been difficult to form from silver due to adhesion issues and agglomeration at increased temperatures. Since silver is resistant to dry etch processes, conventional techniques for forming semiconductor conductive elements (e.g., interconnects and electrodes) are impractical for making such conductive elements from silver. 
SUMMARY In one embodiment, the present disclosure includes methods of forming at least one conductive element. Such a method may include forming a first conductive material over a structure comprising at least one opening defined by sidewalls of a dielectric material, forming a second conductive material comprising silver over the first conductive material and annealing the structure to form a material comprising at least a portion of the first conductive material and the second conductive material. A method of forming the conductive element may also include forming a conductive material comprising silver over surfaces of a structure comprising at least one opening defined by sidewalls of a dielectric material, forming another conductive material over the conductive material and performing a polishing process to substantially redistribute at least one of the conductive material and the other conductive material into an unfilled region of the at least one opening. In a further embodiment, the present disclosure includes a method of forming a semiconductor structure. The method may include removing a portion of a dielectric material overlying a substrate to form at least one opening therein, forming a first conductive material over the dielectric material and exposed surfaces of the at least one opening, forming a second conductive material comprising silver over the first conductive material, a portion of the at least one opening remaining unfilled, and performing a polishing process to substantially fill the unfilled portion of the at least one opening. In yet another embodiment, the present disclosure includes a method of forming a memory cell. The method includes forming a first conductive material over surfaces of a structure comprising at least one opening overlying a first electrode, forming a memory material over the first conductive material, forming a second conductive material comprising silver over the memory material, a portion of the at least one opening remaining unfilled, and performing a process to substantially fill the at least one opening with the first conductive material and the second conductive material. The method of forming the memory cell may also include forming a first conductive material comprising silver over surfaces of a memory material exposed by at least one opening overlying a first electrode, forming a second conductive material over the first conductive material, a portion of the at least one opening remaining unfilled, and performing a process to substantially fill the at least one opening with the first conductive material and the second conductive material. In yet another embodiment, the present disclosure includes a semiconductor structure. The semiconductor structure may include a conductive structure overlying an electrode, at least one of a chalcogenide material and an oxide material in contact with the conductive structure and a conductive material overlying the chalcogenide material, the conductive material comprising silver and at least one region comprising another material. In further embodiments, the present disclosure includes a memory cell. The memory cell includes a memory material overlying an electrode and a conductive material comprising silver and another material, the conductive material overlying the memory material and disposed in at least one opening. BRIEF DESCRIPTION OF THE DRAWINGS FIGS. 1A through 1E are partial cross-sectional views of a semiconductor structure and illustrate a method of forming an interconnect in accordance with embodiments of the present disclosure; FIGS. 
2A through 2E are partial cross-sectional views of a semiconductor structure and illustrate another method of forming an interconnect in accordance with embodiments of the present disclosure; FIG. 3A is a partial cross-sectional view of a conductive bridge random access memory (CBRAM) cell; and FIGS. 3B through 3D are partial cross-sectional views of a semiconductor structure and illustrate a method of forming the CBRAM cell shown in FIG. 3A in accordance with embodiments of the present disclosure. MODE(S) FOR CARRYING OUT THE INVENTION Methods of forming conductive elements, such as interconnects and electrodes, are disclosed, as are semiconductor structures and memory devices that include such conductive elements. The conductive element is formed from a silver material, such as silver or a silver alloy. Since silver has low resistivity, as do its alloys and mixtures with other materials, the resistivity of the conductive element may be less than or equal to that of a conductive element formed from copper. In addition, use of a silver alloy or silver mixture may substantially reduce or eliminate issues with agglomeration associated with silver during thermal processing acts conducted at a later stage of semiconductor processing including such conductive elements. Using silver, a silver alloy or a silver mixture may also enable narrow openings, such as those having at least one dimension of less than about 20 nm, to be filled. As used herein, the term "alloy" means and includes a homogeneous mixture or solid solution of a plurality of materials (e.g., metals or nonmetals), atoms of one of the materials occupying interstitial positions between atoms of another one of the materials. By way of example and not limitation, an alloy may include a mixture of silver and a metal selected from platinum, aluminum, tin, copper, iridium, titanium, nickel, cobalt, ruthenium and rhodium. As used herein, the term "mixture" means and includes a material formed by mixing a plurality of metals, or a metal and a nonmetal. By way of example and not limitation, a mixture may include a mixture of silver and a metal such as tungsten. As used herein, the term "liner" means and includes any structure that overlies a surface of at least one material. By way of example and not limitation, a liner may include a layer of material disposed over another material. As used herein, the term "adhesion material" means and includes a material selected to facilitate adhesion of a first material to a second material immediately adjacent the first material. As used herein, the term "chalcogenide" means and includes a material, including a glass or crystalline material, that includes an element from Group VIA (also identifiable as Group 16) of the periodic table of elements. Group VIA elements, often referred to as "chalcogens," include sulfur (S), selenium (Se), tellurium (Te), polonium (Po) and oxygen (O). Examples of chalcogenides include, but are not limited to, germanium selenide (GeSe), germanium sulfide (GeS), germanium telluride (GeTe), indium selenide (InSe) and antimony selenide (SbSe). While the exemplary chalcogenides have a stoichiometry of one atom of each element, the chalcogenide may have other stoichiometries. As used herein, the terms "redistribute" and "redistributing" mean and include spreading or smearing a material across a surface and into a partially filled, lined, or previously unfilled opening (e.g., via, trench) in a structure to fill or substantially fill the opening with the material. 
As used herein, the term "substrate" means and includes a base material or construction upon which additional materials are formed. The substrate may be a semiconductor substrate, a base semiconductor layer on a supporting structure, a metal electrode or a semiconductor substrate having one or more layers, structures or regions formed thereon. The substrate may be a conventional silicon substrate or other bulk substrate comprising a layer of semiconductive material. As used herein, the term "bulk substrate" means and includes not only silicon wafers, but also silicon-on-insulator ("SOI") substrates, such as silicon-on-sapphire ("SOS") substrates and silicon-on-glass ("SOG") substrates, epitaxial layers of silicon on a base semiconductor foundation, and other semiconductor or optoelectronic materials, such as silicon-germanium, germanium, gallium arsenide, gallium nitride, and indium phosphide. The substrate may be doped or undoped. The following description provides specific details, such as material types and processing conditions, in order to provide a thorough description of embodiments of the present disclosure. However, a person of ordinary skill in the art will understand that the embodiments of the present disclosure may be practiced without employing these specific details. Indeed, the embodiments of the present disclosure may be practiced in conjunction with conventional semiconductor fabrication techniques employed in the industry. In addition, the description provided below does not form a complete process flow for manufacturing a semiconductor device. The semiconductor structures described below do not necessarily form a complete semiconductor device. Only those process acts and structures necessary to understand the embodiments of the present disclosure are described in detail below. Additional acts to form a complete semiconductor device from the semiconductor structures may be performed by conventional fabrication techniques. FIGS. 1A through 1E are simplified partial cross-sectional views of a semiconductor structure 100 illustrating embodiments of a method of forming interconnects. Referring to FIG. 1A, the semiconductor structure 100 may include an opening 106 in a material 104 overlying a substrate 102. The material 104 may be formed from silicon nitride (Si3N4), silicon dioxide (SiO2) or a silicon oxynitride (SiOxNy), for example. The material 104 may be formed over the substrate 102 using a conventional deposition process, such as a chemical vapor deposition process, an atomic layer deposition process or a physical vapor deposition process. The semiconductor structure 100 may, optionally, include an electrode material 108 (shown in broken lines) between the material 104 and the substrate 102. The electrode material 108 may be formed from a conductive material, such as, tungsten (W), platinum (Pt), titanium nitride (TiN) or nickel (Ni). The electrode material 108 may be formed over the substrate 102 using a conventional deposition process, such as, a chemical vapor deposition process or an atomic layer deposition process. While FIGS. 1A through 1E indicate that the electrode material 108 is present, it is understood that the electrode material 108 is optional and that material 104 may be in direct contact with substrate 102 with the opening 106 extending at least partially through material 104. 
The opening 106 may be formed by removing a portion of the material 104 using, for example, conventional photolithography techniques (e.g., masking and etching) known in the art of integrated circuit fabrication. By way of non-limiting example, the opening 106 may extend longitudinally into a plane of FIG. 1A. Removing the portion of the material 104 may expose a surface of the material 104 or, if present, a surface of the electrode material 108. By way of example and not limitation, the opening 106 may have a width w1 of less than about 100 nm and, more particularly, less than about 20 nm. The aspect ratio of the opening 106 may be between about 1:1 and about 20:1 and, more particularly, between about 5:1 and about 10:1. The elements shown in FIG. 1A and the following figures have been drawn for the purposes of illustration and should not be understood as being drawn to scale. Referring to FIG. 1B, a liner material 110 may be formed over surfaces of the semiconductor structure 100 (i.e., exposed surfaces of the material 104 and, if present, the electrode material 108). For example, the liner material 110 may be formed over surfaces exposed within the opening 106 (i.e., exposed sidewalls of the material 104 and an exposed surface of the electrode material 108, if present) as well as exposed, unrecessed surfaces of the material 104. In embodiments in which the electrode material 108 is present, the liner material 110 may be formed from a material that facilitates adhesion to and reduces contact resistance in the electrode material 108, or provides both characteristics. For example, the liner material 110 may be formed from at least one of platinum (Pt), tantalum (Ta), aluminum (Al), tin (Sn), copper (Cu), iridium (Ir), titanium (Ti), nickel (Ni), cobalt (Co), ruthenium (Ru) and rhodium (Rh). The liner material 110 may be formed using a conventional deposition process, such as, a chemical vapor deposition process, a physical vapor deposition process or a sputtering process. By way of example and not limitation, the liner material 110 may be formed having a thickness of between about 0.5 nm and about 20 nm and, more particularly, between about 1 nm and about 5 nm. Referring to FIG. 1C, a conductive material 112 may be formed over the liner material 110. The conductive material 112 may be formed from silver (Ag) or an alloy or a mixture thereof using a conventional deposition process, such as, a physical vapor deposition process or a physical deposition process. Conventional vapor deposition processes (e.g., chemical vapor deposition and physical vapor deposition) may not effectively deposit silver in narrow openings (e.g., openings having at least one dimension of less than or equal to 20 nm). Thus, in embodiments in which at least one dimension (i.e., the width w1) of the opening 106 is less than or equal to about 20 nm, a sputtering process may be used to form the conductive material 112 within the opening 106. By way of non-limiting example, the conductive material 112 may be substantially conformally deposited over an entire exposed surface of the liner material 110. The conductive material 112 may be formed having a thickness sufficient to at least partially fill the remaining portion of the opening 106. As shown in FIG. 1C, a portion of the opening 106 may remain unfilled (i.e., unfilled region 116) after the conductive material 112 has been formed on the semiconductor structure 100. 
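Because the liner and the conductive material deposit on both sidewalls of the opening, each film narrows the opening by roughly twice its thickness. The following back-of-the-envelope Python sketch is illustrative only; the specific widths and thicknesses are assumptions drawn from the ranges stated above. It shows why an unfilled region 116 can remain in a wide opening, and why the simple estimate predicts complete closure for very narrow openings, where sputtered films in practice tend to pinch off instead of filling cleanly.

def remaining_gap_nm(opening_width_nm: float, liner_nm: float, conductive_nm: float) -> float:
    # A conformal film grows on both sidewalls, so each deposition step
    # narrows the opening by about twice its thickness.
    return max(opening_width_nm - 2.0 * (liner_nm + conductive_nm), 0.0)

print(remaining_gap_nm(100, 3, 15))  # 64.0 -> an unfilled region remains in a 100 nm opening
print(remaining_gap_nm(20, 3, 15))   # 0.0  -> a ~20 nm opening would close (or pinch off)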
By way of example and not limitation, the conductive material 112 may be formed from silver and have a thickness of between about 5 nm and about 30 nm and, more particularly, between about 10 nm and about 20 nm. The thicknesses of the liner material 110 and the conductive material 112 may be selected based on a desired ratio of materials. In embodiments in which the liner material 110 includes platinum and the conductive material 112 includes silver, a ratio of the liner material 110 to the conductive material 112 may be less than or equal to about 1 to 2. Referring to FIG. 1D, in embodiments in which the liner material 110 (shown in broken lines) includes a material that forms an alloy with the conductive material 112, an annealing process may optionally be performed to form an alloy of the liner material 110 and the conductive material 112. By reacting the liner material 110 and the conductive material 112, an intermetallic compound is formed. For example, the conductive material 112 may include silver, the liner material 110 may include at least one material, such as, platinum, aluminum, tin, copper, iridium, titanium, nickel, cobalt, ruthenium and rhodium, which reacts with the silver to form the alloy. By way of example and not limitation, the annealing process may include exposing the semiconductor structure 100 to a temperature of between about 100°C and about 500°C and, more particularly, a temperature of about 200°C. During the annealing process, a material 114 (shown in broken lines) that includes the alloy may be formed at an interface between the conductive material 112 and material 104 underlying the remaining (i.e., non-alloyed) portions of the conductive material 112. The alloy may include a substantially homogeneous mixture of the liner material 110 and the conductive material 112, or may be a heterogeneous mixture that includes regions having different ratios of the liner material 110 to the conductive material 112. In embodiments in which the liner material 110 includes platinum and the conductive material 112 includes silver, the semiconductor structure 100 may be exposed to a temperature of about 200°C such that the platinum and the silver combine to form a silver-platinum alloy. The liner material 110 may be at least substantially completely alloyed with the conductive material 112 to form the material 114, or a portion of the liner material 110 may remain at an interface between the material 114 and surfaces of the material 104 and the electrode material 108, if present. In embodiments in which the liner material 110 is formed from a material that does not form an alloy with the conductive material 112, the annealing process may be bypassed and the liner material 110 may remain at the interface between the conductive material 112 and the material 104 and, if present, the electrode material 108 (as shown in FIG. 1C). For example, the conductive material 112 may include silver and the liner material 110 may comprise tantalum and the tantalum may be disposed between the silver and the material 104 and, if present, the electrode material 108. An exposed surface of the semiconductor structure 100 may be subjected to a material removal process, such as a so-called polishing process in the form of, for example, a chemical mechanical polishing (CMP) process or a mechanical polishing process, to form an interconnect 120, as shown in FIG. 1E. 
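The stated thickness budget (a liner-to-conductive ratio of no more than about 1 to 2) is easy to check programmatically. A minimal Python helper follows; the function name and the example values are assumptions chosen from the ranges above, not part of the disclosure.

def ratio_within_target(liner_nm: float, conductive_nm: float, max_ratio: float = 0.5) -> bool:
    # True when liner:conductive does not exceed the ~1:2 target ratio.
    return liner_nm / conductive_nm <= max_ratio

print(ratio_within_target(5, 15))  # True  (1:3 is within the target)
print(ratio_within_target(5, 8))   # False (1:1.6 exceeds 1:2)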
For example, the employed process may be used to remove portions of each of the liner material 110, the conductive material 112 and, if present, the material 114 overlying the material 104 (FIG. 1D). In addition, the process may be used to redistribute at least one of the conductive material 112, the liner material 110 and the material 114, if present, into the unfilled region 116 (FIG. 1D) of the opening 106 to substantially completely fill the opening 106. Without wishing to be bound by any particular theory, it is believed that malleable materials, such as the conductive material 112 and, optionally, the liner material 110 and the material 114, may be mechanically pushed or redistributed into voids (e.g., the unfilled region 116) during the polishing process, thus filling the unfilled region 116 of the opening 106. However, mechanical stresses exerted on the malleable materials during the polishing process may cause the malleable materials to pull out of the opening 106. Such mechanical stresses may be substantially reduced or eliminated by leaving a portion of the opening 106 unfilled and by improving adhesion between the conductive material 112 and the underlying material (i.e., material 104 or, if present, the electrode material 108). For example, in embodiments in which the conductive material 112 is formed from a material (e.g., silver) that exhibits poor adhesion with an underlying region (e.g., the electrode material 108), the liner material 110 may substantially improve adhesion between the conductive material 112 and the underlying region to prevent the conductive material 112 from being removed from the opening 106 by the mechanical stresses. The polishing process may be a chemical mechanical polishing process that is performed using a conventional chemical mechanical polishing apparatus and a slurry that enables redistributing of the malleable materials (e.g., the conductive material 112 and, optionally, the liner material 110) into the unfilled region 116 of the opening 106 to form the interconnect 120. Such a slurry may be, for example, an alumina-based slurry at a neutral or slightly basic pH that is substantially free of oxidizer. The polishing process may also be a mechanical polishing process performed using the conventional chemical mechanical polishing apparatus and water (e.g., deionized water) instead of a chemical slurry. Using water as the liquid component in the polishing process, without addition of chemical etching agents, may enable redistribution of the conductive material 112 and the liner material 110, if present, into the unfilled region of the opening 106 without substantially removing such materials. After forming the interconnect 120, another annealing process may, optionally, be performed. By way of example and not limitation, this annealing process may include exposing the semiconductor structure 100 of FIG. 1E to a temperature of between about 100°C and about 500°C and, more particularly, about 200°C. The annealing process may result in formation of an alloy of the materials of the interconnect 120 (conductive material 112 and the liner material 110), as previously discussed. After annealing, the interconnect 120 may include regions of the conductive material 112, the liner material 110 and the alloy or may substantially include the alloy. For the sake of simplicity, the methods described with respect to FIGS. 1A through 1E illustrate a method of forming a single interconnect 120. 
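Whether the polishing step can actually fill the void can be reasoned about as a volume balance: per unit of trench length, the malleable film swept off the field surfaces must at least match the cross-sectional area of the unfilled region. The Python toy estimate below is illustrative only; all dimensions are assumptions chosen for the example, not values from the disclosure.

def overburden_can_fill(void_width_nm, void_depth_nm, field_width_nm, film_nm) -> bool:
    # Per unit trench length, compare the void's cross-section with the
    # cross-section of film available on the adjacent field surfaces.
    return field_width_nm * film_nm >= void_width_nm * void_depth_nm

# Assumed: 10 nm x 30 nm unfilled region, 200 nm of field on each side, 15 nm film.
print(overburden_can_fill(10, 30, 2 * 200, 15))  # True -> redistribution can fill the void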
However, as would be understood by one of ordinary skill in the art, a plurality of interconnects or a network of metal routing (e.g., a metallization layer) may be formed using the methods described with respect to FIGS. 1A through 1E. The interconnect 120 may be present in various semiconductor devices, as would be understood by one of ordinary skill in the art. For example, the interconnect 120 may be used to electrically connect active devices, such as transistors, capacitors, etc. The interconnect 120 may include a portion of a network of metal routing electrically connecting such active devices. FIGS. 2A through 2E are simplified partial cross-sectional views of a semiconductor structure 200 illustrating embodiments of another method of forming an interconnect. As shown in FIG. 2A, the semiconductor structure 200 may be formed including an opening 206 in a material 204 overlying a substrate 202. The opening 206 may have a width w2 of less than about 100 nm and, more particularly, less than about 20 nm. The opening 206 may expose a surface of the material 204 or, if present, an optional electrode material 208 disposed between the material 204 and the substrate 202. The semiconductor structure 200 shown in FIG. 2A may be formed using substantially the same methods used to form the semiconductor structure 100 shown in FIG. 1A. While FIGS. 2A through 2E indicate that the electrode material 208 is present, it is understood that the electrode material 208 is optional and that material 204 may be in direct contact with substrate 202 with the opening 206 extending at least partially through material 204. Referring to FIG. 2B, a conductive material 212 may be formed over the semiconductor structure 200 (e.g., over exposed surfaces of each of the material 204 and, if present, the electrode material 208). The conductive material 212 may be formed from silver (Ag) or an alloy thereof using a conventional deposition process, such as, a chemical vapor deposition process, a physical vapor deposition process or a physical deposition process. Conventional vapor deposition processes (e.g., chemical vapor deposition and physical vapor deposition) may not effectively deposit silver in narrow openings (e.g., openings having at least one dimension of less than or equal to 20 nm). Thus, in embodiments in which at least one dimension (i.e., the width w2) of the opening 206 is less than or equal to about 20 nm, a sputtering process may be used to form the conductive material 212 within the opening 206. By way of non-limiting example, the conductive material 212 may be substantially conformally deposited over an entire exposed surface of the semiconductor structure 200. The conductive material 212 may be formed having a thickness sufficient to at least partially fill the opening 206. A portion of the opening 206 may remain unfilled (i.e., unfilled region 216) after deposition of the conductive material 212. By way of example and not limitation, the conductive material 212 may be formed from silver and have a thickness of between about 5 nm and about 30 nm and, more particularly, between about 10 nm and about 20 nm. Referring to FIG. 2C, a liner material 210 may be formed over surfaces of the conductive material 212. The liner material 210 may be formed from a material that facilitates adhesion to and/or reduces contact resistance in an upper electrode (not shown) that may be formed over a completed interconnect, as will be discussed in further detail. 
For example, the liner material 210 may be formed from at least one of platinum, aluminum, tin, copper, iridium, titanium, nickel, cobalt, ruthenium and rhodium. The liner material 210 may be formed using a conventional deposition process, such as, a chemical vapor deposition process, a physical vapor deposition process or a sputtering process. As shown in FIG. 2C, a portion of the unfilled region 216 of the opening 206 may remain after the liner material 210 has been formed over the conductive material 212. By way of example and not limitation, the liner material 210 may be formed having a thickness of between about 0.5 nm and about 20 nm and, more particularly, between about 1 nm and about 5 nm. The thicknesses of the liner material 210 and the conductive material 212 may be selected based on a desired ratio of materials. In embodiments in which the liner material 210 includes platinum and the conductive material 212 includes silver, a ratio of the liner material 210 to the conductive material 212 may be less than or equal to about 1 to 2. Referring to FIG. 2D, in embodiments in which the liner material 210 (shown in broken lines) includes a material that forms an alloy with the conductive material 212, an annealing process may optionally be performed to form an alloy of the conductive material 212 and the liner material 210. For example, the conductive material 212 may include silver, the liner material 210 may include at least one material, such as, platinum, aluminum, tin, copper, iridium, titanium, nickel, cobalt, ruthenium and rhodium, which reacts with the silver to form the alloy. By way of example and not limitation, the annealing process may include exposing the semiconductor structure 200 to a temperature of between about 100°C and about 500°C and, more particularly, about 200°C. During the annealing process, at least a portion of the conductive material 212 and the liner material 210 may be converted to form a material 214 (shown in broken lines) that includes the alloy. The alloy in the material 214 may include a substantially homogeneous mixture of the liner material 210 and the conductive material 212, or may be a heterogeneous mixture that includes regions having different ratios of the liner material 210 to the conductive material 212. In embodiments in which the liner material 210 includes platinum and the conductive material 212 includes silver, the semiconductor structure 200 may be exposed to a temperature of about 200°C such that the platinum and the silver combine to form a silver-platinum alloy. The liner material 210 may be at least substantially completely alloyed with the conductive material 212 to form the material 214, or a portion of the liner material 210 may remain overlying the material 214. In embodiments in which the liner material 210 is formed from a material that does not form an alloy with the conductive material 212, the annealing process may be bypassed and the liner material 210 may remain over the conductive material 212 (as shown in FIG. 2C). For example, the conductive material 212 may include silver and the liner material 210 may comprise tantalum and the tantalum may be disposed over the silver. An exposed surface of the semiconductor structure 200 may be subjected to a material removal process, such as a so-called polishing process in the form of a chemical mechanical polishing (CMP) process or a mechanical polishing process, to form an interconnect 220, as shown in FIG. 2E. 
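When the liner and the conductive material fully interdiffuse during the anneal, the alloy composition is fixed by the deposited thicknesses. A rough atomic-fraction estimate for a platinum/silver stack follows, using handbook densities and molar masses; the thicknesses are assumptions within the stated ranges, and this Python sketch is illustrative, not the disclosed process.

def atomic_fraction_a(t_a_nm, rho_a, molar_a, t_b_nm, rho_b, molar_b) -> float:
    # Moles per unit area scale as thickness * density / molar mass.
    n_a = t_a_nm * rho_a / molar_a
    n_b = t_b_nm * rho_b / molar_b
    return n_a / (n_a + n_b)

# Pt: 21.45 g/cm^3, 195.08 g/mol; Ag: 10.49 g/cm^3, 107.87 g/mol.
x_pt = atomic_fraction_a(5, 21.45, 195.08, 15, 10.49, 107.87)
print(round(x_pt, 2))  # ~0.27 -> a 5 nm Pt / 15 nm Ag stack gives roughly a 1:3 Pt:Ag alloy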
For example, the employed process may be used to remove portions of each of the conductive material 212 and, if present, the material 214 and/or the liner material 210 overlying the material 204 (FIG. 2D). In addition, the polishing process may be used to redistribute at least one of the conductive material 212, the material 214 and/or the liner material 210 into the unfilled region 216 of the opening 206 (FIG. 2D) to substantially completely fill the opening 206. Without wishing to be bound by any particular theory, it is believed that malleable materials (e.g., the conductive material 212 and, optionally, the liner material 210 and/or the material 214), may be mechanically pushed or redistributed into voids (e.g., the unfilled region 216 of the opening 206) during the polishing process, thus filling the unfilled region 216 of the opening 206. However, mechanical stresses exerted on the malleable materials during the polishing process may cause the malleable materials to pull out of the opening 206. Such mechanical stresses may be substantially reduced or eliminated by leaving a portion of the opening 206 unfilled and by improving adhesion between the conductive material 212 and the underlying material (i.e., the material 204 or, if present, the electrode material 208). The polishing process may be a chemical mechanical polishing process or mechanical polishing process, as previously discussed with respect to FIG. 1E. After forming the interconnect 220, another annealing process may, optionally, be performed. By way of example and not limitation, the annealing process may include exposing the semiconductor structure 200 to a temperature of between about 100°C and about 500°C and, more particularly, to a temperature of about 200°C. The annealing process may result in formation of an alloy of the conductive material 212 and the liner material 210, as previously discussed. After annealing, the interconnect 220 may include regions of the conductive material 212, the liner material 210 and the alloy or may substantially include the alloy. For the sake of simplicity, the methods described with respect to FIGS. 2A through 2E illustrate a method of forming a single interconnect 220. However, as would be understood by one of ordinary skill in the art, a plurality of interconnects or a network of metal routing (e.g., a metallization layer) may be formed using the methods described with respect to FIGS. 2A through 2E. The interconnect 220 may be present in various semiconductor devices, as would be understood by one of ordinary skill in the art. For example, the interconnect 220 may be used to electrically connect active devices, such as transistors, capacitors, etc. The interconnect 220 may include a portion of a network of metal routing electrically connecting such active devices. FIGS. 3A through 3D are simplified partial cross-sectional views of a semiconductor structure 300 illustrating embodiments of a method of forming a conductive element for a semiconductor device, such as an electrode 311 of a conductive bridge random access memory (CBRAM) device. A CBRAM may include a plurality of memory cells, one of which is shown in FIG. 3A. The CBRAM cell 330 may include a memory material 309, disposed between a first electrode 308 and a second electrode 311. For example, the memory material 309 may be disposed over a surface of an underlying material or over exposed surfaces of an opening 306, as will be described in further detail. 
The memory material 309 and the second electrode 311 may overlie a conductive structure 303 that provides an electrical connection between the first and second electrodes 308 and 311. The second electrode 311 may be formed from silver. While not wishing to be bound by any particular theory, it is believed that operation of the CBRAM cell 330 occurs due to selective formation and dissolution of a conductive bridge formed by electromigration of silver into the memory material 309. Thus, it is important to control diffusion of silver ions into the memory material 309 during deposition of the second electrode 311. FIGS. 3B through 3D illustrate embodiments of a method of forming the CBRAM cell 330 shown in FIG. 3A. As shown in FIG. 3B1, a semiconductor structure 300 may be formed that includes an opening 306 in a dielectric material 304, the opening 306 overlying a conductive structure 303 in an interlayer dielectric material 305 overlying the first electrode 308. The first electrode 308 may be formed from a conductive material, such as, tungsten, platinum, titanium nitride (TiN) or nickel. The first electrode 308 may be formed over a substrate (not shown) using a conventional deposition process, such as, a chemical vapor deposition process or an atomic layer deposition process. The semiconductor structure 300 may include the memory material 309 overlying surfaces of the conductive structure 303 and the interlayer dielectric material 305. The interlayer dielectric material 305 may be formed from, for example, silicon nitride, silicon dioxide or a silicon oxynitride. The interlayer dielectric material 305 may be formed over the first electrode 308 using a conventional deposition process, such as a chemical vapor deposition process, an atomic layer deposition process or a physical vapor deposition process. The conductive structure 303 may be formed from a conductive material, such as, at least one of titanium nitride, tungsten, tungsten nitride, tantalum and tantalum nitride. The conductive structure 303 may be formed in electrical connection with the first electrode 308. The conductive structure 303 may be formed in the interlayer dielectric material 305 using conventional techniques, the details of which are known in the art and, therefore, are not described in detail herein. For example, a conventional damascene process may be used to form the conductive structure 303 in the interlayer dielectric material 305 by forming a trench in the interlayer dielectric material 305, forming the conductive material over interlayer dielectric material 305 to fill the trench, and performing a chemical mechanical polishing (CMP) process to remove portions of the conductive material overlying the interlayer dielectric material 305. The memory material 309 may be formed from a chalcogenide material, such as germanium selenide or germanium sulfide, or an oxide material, such as a high-k oxide material. Examples of suitable high-k dielectric materials include, but are not limited to, silicon dioxide, tantalum oxide, titanium oxide, nitrogen oxide, zirconium oxide and hafnium oxide. For example, the memory material 309 may be deposited using a conventional deposition process, such as, a physical vapor deposition process, a chemical vapor deposition process or an atomic layer deposition process. The dielectric material 304 may be formed from, for example, silicon nitride, tetraethyl orthosilicate (TEOS), silicon dioxide or a silicon oxynitride. 
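The operating principle described above (selective formation and dissolution of a silver bridge in the memory material) can be captured by a two-state behavioral sketch. The Python below is a toy model only; the bias thresholds and resistance values are invented for illustration and are not taken from the disclosure.

class CBRAMCellModel:
    R_ON, R_OFF = 1e3, 1e8  # ohms; illustrative low/high resistance states

    def __init__(self):
        self.bridge = False  # no conductive bridge initially

    def apply_bias(self, volts: float) -> None:
        # Positive bias on the silver electrode electromigrates Ag into the
        # memory material and forms a bridge; reverse bias dissolves it.
        if volts > 0.3:       # assumed set threshold
            self.bridge = True
        elif volts < -0.3:    # assumed reset threshold
            self.bridge = False

    @property
    def resistance(self) -> float:
        return self.R_ON if self.bridge else self.R_OFF

cell = CBRAMCellModel()
cell.apply_bias(0.5)    # program: bridge forms, low resistance
assert cell.resistance == CBRAMCellModel.R_ON
cell.apply_bias(-0.5)   # erase: bridge dissolves, high resistance
assert cell.resistance == CBRAMCellModel.R_OFF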
The dielectric material 304 may be formed over the interlayer dielectric material 305 and the conductive structure 303 using a conventional deposition process, such as, a chemical vapor deposition process, an atomic layer deposition process or a physical vapor deposition process. In some embodiments, the dielectric material 304 may be formed as a monolithic structure. In other embodiments, the dielectric material 304 may be formed as a stacked structure that includes a plurality of materials 304A, 304B, 304C, as shown in broken lines. For example, the materials 304A and 304C may be formed from silicon nitride and the material 304B may be formed from tetraethyl orthosilicate. The opening 306 may be formed in the dielectric material 304 by removing a portion of the dielectric material 304 using, for example, conventional photolithography techniques (e.g., masking and etching) known in the art of integrated circuit fabrication. The portion of the dielectric material 304 removed to form the opening 306 may overlie the conductive structure 303 such that the opening 306 exposes a surface of the conductive structure 303 and, optionally, surfaces of the interlayer dielectric material 305 adjacent the surface of the conductive structure 303. By way of example and not limitation, the opening 306 may have a width w3 of less than about 100 nm and, more particularly, less than about 20 nm. Referring to FIG. 3B2, the memory material 309 may alternatively be formed over sidewalls of the dielectric material 304 and surfaces of the conductive structure 303 and the interlayer dielectric material 305 after forming the dielectric material 304 and the opening 306 in the dielectric material 304. As previously discussed with respect to FIG. 3B1, the memory material 309 may be formed from a chalcogenide material, such as germanium selenide or germanium sulfide, or an oxide material, such as a high-k oxide material, using a conventional deposition process, such as, a physical vapor deposition process, a chemical vapor deposition process or an atomic layer deposition process. After deposition of the memory material 309, an annealing process may optionally be performed. By way of example and not limitation, the annealing process may include exposing the semiconductor structure 300 to a temperature of between about 100°C and about 500°C and, more particularly, a temperature of about 200°C. As shown in FIG. 3C, a conductive material 312 that includes silver may be formed over the memory material 309. For simplicity, the semiconductor structure 300 is shown with the memory material 309 (shown in broken lines) disposed over surfaces in the opening 306 and over surfaces of the dielectric material 304. However, as configured, the memory material 309 may also be disposed between the interlayer dielectric material 305 and the dielectric material 304, as shown in FIG. 3B1. Forming silver using a conventional vapor deposition process, such as a physical vapor deposition (PVD) process or a chemical vapor deposition (CVD) process, may cause undesirable diffusion of the silver into the memory material 309 during formation of the second electrode 311. Such diffusion of the silver may result in variability in cell-to-cell operation of the CBRAM device. Thus, the conductive material 312 may be formed from silver (Ag) or a silver alloy using a conventional sputtering process. 
By way of example and not limitation, the conductive material 312 may be substantially conformally deposited over an entire exposed surface of the memory material 309. A thickness of the conductive material 312 may be such that a portion of the opening 306 remains unfilled (i.e., unfilled region 316). By way of example and not limitation, the conductive material 312 may be formed having a thickness of between about 10 nm and about 20 nm. Referring to FIG. 3D, a liner material 310 may be formed over surfaces of the conductive material 312. For example, the liner material 310 may be formed from at least one of platinum, tantalum, aluminum (Al), tin (Sn), copper, iridium, titanium, nickel, cobalt, ruthenium and rhodium. The liner material 310 may be formed using a conventional deposition process, such as, a chemical vapor deposition process, a physical vapor deposition process or a sputtering process. By way of example and not limitation, the liner material 310 may be formed having a thickness of between about 0.5 nm and about 20 nm and, more particularly, between about 1 nm and about 5 nm. Removal of silver from unwanted areas may be complicated as there are currently no known etchants for selectively removing the silver with respect to the other materials. Thus, material (i.e., the conductive material 312 and the liner material 310) may be pushed or redistributed from upper surfaces of the dielectric material 304 into voids (e.g., the unfilled region 316 of the opening 306) by subjecting an exposed surface of the semiconductor structure 300 to a polishing process, as described with respect to FIG. 3D. During the polishing process, the unfilled region 316 (FIGS. 3C and 3D) may be filled to form the second electrode 311 shown in FIG. 3A. Optionally, an annealing process may then be performed to form an alloy of the conductive material 312 and the liner material 310. For example, in embodiments in which the liner material 310 comprises platinum, aluminum (Al), tin (Sn), copper, iridium, titanium, nickel, cobalt, ruthenium and rhodium, the annealing process may be performed to form the alloy. In embodiments in which the annealing process is performed before deposition of the conductive material 312, the annealing process may be bypassed at this stage. The annealing process may include exposing the semiconductor structure 300 to a temperature of between about 100°C and about 500°C and, more particularly, about 200°C. By way of example and not limitation, the conductive material 312 may be formed from silver, the liner material 310 may be formed from platinum and a silver-platinum alloy may be formed during the annealing process. A majority of the alloy or substantially all of the alloy may be located in a region of the second electrode 311 opposite a surface of the memory material 309 such that a region of the second electrode 311 in contact with or adjacent to the memory material 309 substantially includes silver. In FIGS. 3A through 3D, embodiments of methods of forming a silver-containing conductive element (i.e., second electrode 311) are illustrated in the CBRAM cell 330. However, such methods may also be used to form other conductive elements in a multitude of semiconductor structures and devices, as would be understood by one of ordinary skill in the art. EXAMPLES Example 1 A plurality of trenches was formed in a silicon dioxide material overlying a silicon wafer. The trenches of the plurality each had a depth of about 50 nm. 
Silver was deposited over the surface of the silicon wafer using a conventional sputtering process. The sputtering process was performed using a conventional sputter coater. The silver was sputtered over the surface of the silicon wafer for about two minutes, during which time the silver reached a thickness of about 15 nm. Platinum was then formed over the silver using the sputter coater. The platinum was sputtered over the surface of the silicon wafer for about 30 seconds, during which time the platinum reached a thickness of about 6 nm. A mechanical polishing process was performed on the silicon wafer having the silver and platinum thereon using deionized water and a conventional polishing pad. No chemical slurry was used during the mechanical polishing process. The surface of the platinum was polished using a pad rotation of about 100 RPM. After the mechanical polishing process, a scanning electron microscope (SEM) was used to observe that the trenches were substantially filled with material (e.g., the silver and the platinum). An annealing process was then performed using a conventional industrial oven. The industrial oven was set to 200°C and the silicon wafer having the silver and platinum thereon was placed therein for about 10 minutes. It was confirmed that the post-annealed silver-platinum alloy was substantially smooth with low resistance. While the invention is susceptible to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and have been described in detail herein. However, the invention is not intended to be limited to the particular forms disclosed. Rather, the invention is to cover all modifications, equivalents, and alternatives falling within the scope of the invention as defined by the following appended claims and their legal equivalents. |
Methods, systems, and devices for bank-configurable power modes are described. Aspects include operating a memory device that has multiple memory banks in a first mode. While operating in the first mode, the memory device may receive a command to enter a second mode having a lower power consumption level than the first mode. The memory device may enter the second mode by switching a first subset of the memory banks to a first low power mode that operates at a first power consumption level and a second subset of the memory banks to a second low power mode that operates at a second power consumption level that may be lower than the first power consumption level. In some cases, the memory device may switch the first subset of memory banks out of the first low power mode while maintaining the second subset of memory banks in the second low power mode. |
CLAIMSWhat is claimed is:1. A method, comprising: operating a memory device in a first mode, the memory device comprising a plurality of memory banks; receiving, while operating the memory device in the first mode, a command for the memory device to enter a second mode corresponding to less power consumption by the memory device than the first mode; and switching, based at least in part on receiving the command for the memory device to enter the second mode, the memory device into the second mode by switching a first subset of memory banks of the plurality to a first low power mode corresponding to a first power consumption level and a second subset of memory banks of the plurality to a second low power mode corresponding to a second power consumption level that is lower than the first power consumption level.2. The method of claim 1, further comprising: receiving, while operating the memory device in the second mode, a second command to switch the first subset of memory banks from the first low power mode to the first mode; and switching, based at least in part on receiving the second command, the first subset of memory banks out of the first low power mode.3. The method of claim 2, further comprising: maintaining the second subset of memory banks in the second low power mode while switching the first subset of memory banks out of the first low power mode.4. The method of claim 2, further comprising: performing one or more access operations on the first subset of memory banks after switching the first subset of memory banks out of the first low power mode; and maintaining the second subset of memory banks in the second low power mode while performing the one or more access operations on the first subset of memory banks.5. The method of claim 2, further comprising: receiving, while operating the memory device in the second mode, a third command for the memory device to exit the second mode; and switching, based at least in part on receiving the third command, the memory device out of the second mode by switching the first subset of memory banks out of the first low power mode and the second subset of memory banks out of the second low power mode.6. The method of claim 1, wherein the first low power mode corresponds to a quicker wakeup time than the second low power mode.7. The method of claim 1, further comprising: receiving an indication of the second power consumption level, wherein the second power consumption level corresponds to one of a plurality of power consumption levels supported by the memory device for the second low power mode.8. The method of claim 1, further comprising: receiving information indicating an assignment of the first low power mode to the first subset of memory banks and the second low power mode to the second subset of memory banks; and writing an indication of the assignment to one or more registers.9. The method of claim 8, further comprising: accessing the one or more registers based at least in part on receiving the command for the memory device to enter the second mode; and identifying the first low power mode for the first subset of memory banks and the second low power mode for the second subset of memory banks based at least in part on the accessing, wherein the switching the first subset of memory banks to the first low power mode and the second subset of memory banks to the second low power mode is based at least in part on the identifying.10.
The method of claim 8, wherein the indication of the assignment comprises one or more bitmaps that associate the first subset of memory banks with the first low power mode and the second subset of memory banks with the second low power mode.11. The method of claim 10, wherein the information further indicates a power consumption level associated with the second low power mode, further comprising: writing an indication of the power consumption level associated with the second low power mode to the one or more registers.12. A method, comprising: writing, to one or more registers of a memory device, information that assigns a first low power mode to a first memory bank of the memory device and a second low power mode to a second memory bank of the memory device; receiving, at the memory device, a command to reduce a level of power consumption for the memory device; and operating, based at least in part on receiving the command and the information, the first memory bank in the first low power mode and the second memory bank in the second low power mode.13. The method of claim 12, further comprising: reading, based at least in part on receiving the command, the one or more registers; and determining to operate the first memory bank in the first low power mode and the second memory bank in the second low power mode based at least in part on reading the one or more registers, wherein the operating is based at least in part on the determining.14. The method of claim 12, wherein writing the information to the one or more registers comprises: writing a first set of values and a second set of values, wherein each of a plurality of memory banks included in the memory device is associated with a corresponding low power mode based at least in part on a respective combination of a first value from the first set of values and a second value from the second set of values.15. The method of claim 12, wherein writing the information to the one or more registers comprises: writing an indication of a power consumption level associated with the second low power mode.16. The method of claim 12, wherein writing the information to the one or more registers comprises: writing, for each of a plurality of memory banks included in the memory device, a respective indication of the first low power mode or the second low power mode.17. The method of claim 12, wherein writing the information to the one or more registers comprises: writing, for each of a plurality of memory banks included in the memory device, a respective indication of one of a plurality of low power modes, the plurality of low power modes comprising the first low power mode, the second low power mode with a first power consumption level, and the second low power mode with a second power consumption level.18. A method, comprising: operating a plurality of memory banks in respective first modes, wherein the plurality of memory banks are within a memory device; receiving, at the memory device while operating the plurality of memory banks in the respective first modes, signaling that indicates to operate a first memory bank of the plurality in a second mode corresponding to a lower power consumption level than a respective first mode for the first memory bank; and switching, based at least in part on receiving the signaling, the first memory bank from the respective first mode for the first memory bank to the second mode while maintaining a second memory bank of the plurality in a respective first mode for the second memory bank.19. 
The method of claim 18, wherein: the second mode is one of a plurality of low power modes supported by the memory device for the plurality of memory banks, each of the plurality of low power modes corresponding to a respective power consumption level that is lower than a power consumption level corresponding to an idle mode supported by the memory device for the plurality of memory banks; and the signaling comprises an indication of a selected low power mode from the plurality of low power modes, the selected low power mode being the second mode.20. The method of claim 19, further comprising: receiving, at the memory device, second signaling that indicates to operate a third memory bank of the plurality in a third mode included in the plurality of low power modes; and switching, based at least in part on receiving the second signaling, the third memory bank from a respective first mode for the third memory bank to the third mode while maintaining the first memory bank in the second mode.21. The method of claim 18, wherein the signaling comprises an identifier specific to the first memory bank.22. The method of claim 18, wherein the signaling comprises an identifier of a group of banks that includes the first memory bank.23. The method of claim 18, wherein the signaling comprises one or more identifiers corresponding to a range of bank addresses that includes a bank address for the first memory bank.24. A method, comprising: receiving a command for a memory device to enter a reduced power mode from a first power mode; switching a first memory bank of the memory device to a first low power mode based at least in part on receiving the command, the first low power mode associated with a first power consumption level; switching a second memory bank of the memory device to a second low power mode based at least in part on receiving the command, the second low power mode associated with a second power consumption level that is lower than the first power consumption level; receiving, while the first memory bank is in the first low power mode and the second memory bank is in the second low power mode, an exit command associated with the first low power mode; switching, based at least in part on receiving the exit command, the first memory bank out of the first low power mode while maintaining the second memory bank in the second low power mode; and
performing an access operation on the first memory bank while the second memory bank is in the second low power mode.25. The method of claim 24, further comprising: receiving, after switching the first memory bank out of the first low power mode, a second command for the memory device to enter the reduced power mode; and switching the first memory bank to the first low power mode based at least in part on receiving the second command.26. The method of claim 25, further comprising: receiving, while the first memory bank is in the first low power mode and the second memory bank is in the second low power mode, a command for the memory device to exit the reduced power mode; and switching the first memory bank out of the first low power mode and the second memory bank out of the second low power mode based at least in part on receiving the command for the memory device to exit the reduced power mode.27. The method of claim 26, wherein, based at least in part on the command for the memory device to exit the reduced power mode, the first memory bank is available for access before the second memory bank is available for access.28. The method of claim 24, further comprising: receiving, while the first memory bank is not in the first low power mode and the second memory bank is in the second low power mode, a command for the memory device to exit the reduced power mode; and switching the second memory bank out of the second low power mode based at least in part on receiving the command for the memory device to exit the reduced power mode.29. An apparatus, comprising: a plurality of memory banks within a memory device, wherein each memory bank of the plurality supports an access mode, a first low power mode corresponding to less power consumption than the access mode, and a second low power mode corresponding to less power consumption than the first low power mode; and
a controller coupled with the plurality of memory banks and configured to cause the apparatus to operate at least one memory bank of the plurality in a selected mode comprising one of the access mode, the first low power mode, or the second low power mode independent of whether other memory banks of the plurality are in the access mode, the first low power mode, or the second low power mode.30. The apparatus of claim 29, further comprising: one or more registers configured to store an assignment of the first low power mode to a first subset of the plurality of memory banks and the second low power mode to a second subset of the plurality of memory banks.31. The apparatus of claim 30, wherein the controller is further configured to cause the apparatus to: access the one or more registers based at least in part on the memory device receiving a command to reduce an amount of power consumption for the memory device; and operate the first subset of the plurality of memory banks in the first low power mode and the second subset of the plurality of memory banks in the second low power mode based at least in part on accessing the one or more registers.32. The apparatus of claim 30, wherein: a power consumption level for the second low power mode is selectable from among a plurality of power consumption levels; and the one or more registers are further configured to store an indication of a selected power consumption level for the second low power mode.33. The apparatus of claim 29, wherein the controller is further configured to cause the apparatus to: switch a first subset of the plurality of memory banks out of the first low power mode and maintain a second subset of the plurality of memory banks in the second low power mode based at least in part on the memory device receiving an exit command for the first low power mode.34. The apparatus of claim 29, wherein each of the plurality of memory banks is configured to be available for access operations with a first latency when switched out of the first low power mode and available for access operations with a second latency
when switched out of the second low power mode, the first latency shorter than the second latency. 35. The apparatus of claim 29, wherein the controller is further configured to cause the apparatus to: operate a first subset of the plurality of memory banks in the first low power mode and a second subset of the plurality of memory banks in the second low power mode based at least in part on the memory device receiving one or more commands indicating the first low power mode for the first subset of the plurality of memory banks and the second low power mode for the second subset of the plurality of memory banks. |
BANK-CONFIGURABLE POWER MODESTECHNICAL FIELD[0001] The present Application for Patent claims priority to U.S. Patent Application No. 16/551,581 by Mirichigni, et al., entitled “BANK CONFIGURABLE POWER MODES,” filed August 26, 2019, which is assigned to the assignee hereof and which is expressly incorporated by reference in its entirety herein.BACKGROUND[0002] The following relates generally to a system that includes at least one memory device and more specifically to bank-configurable power modes.[0003] Memory devices are widely used to store information in various electronic devices such as computers, wireless communication devices, cameras, digital displays, and the like. Information is stored by programming different states of a memory device. For example, binary devices most often store one of two states, often denoted by a logic 1 or a logic 0. In other devices, more than two states may be stored. To access the stored information, a component of the device may read, or sense, at least one stored state in the memory device. To store information, a component of the device may write, or program, the state in the memory device.[0004] Various types of memory devices exist, including magnetic hard disks, random access memory (RAM), read-only memory (ROM), dynamic RAM (DRAM), synchronous dynamic RAM (SDRAM), ferroelectric RAM (FeRAM), magnetic RAM (MRAM), resistive RAM (RRAM), flash memory, phase change memory (PCM), and others. Memory devices may be volatile or non-volatile. Non-volatile memory, e.g., FeRAM, may maintain its stored logic state for extended periods of time even in the absence of an external power source. Volatile memory devices, e.g., DRAM, may lose their stored state when disconnected from an external power source. FeRAM may be able to achieve densities similar to volatile memory but may have non-volatile properties due to the use of a ferroelectric capacitor as a storage device.[0005] Improving memory devices, generally, may include increasing memory cell density, increasing read/write speeds, increasing reliability, increasing data retention, reducing
power consumption, or improving manufacturing processes, among other metrics. Solutions for improving power consumption at a memory device may be desired.BRIEF DESCRIPTION OF THE DRAWINGS[0006] FIG. 1 illustrates an example of a system that supports bank-configurable power modes in accordance with examples as disclosed herein.[0007] FIG. 2 illustrates an example of a memory die that supports bank-configurable power modes in accordance with examples as disclosed herein.[0008] FIG. 3 illustrates an example of a memory device state diagram that supports bank-configurable power modes in accordance with examples as disclosed herein. [0009] FIGs. 4A-C illustrate examples of command mode state diagrams that support bank-configurable power modes in accordance with examples as disclosed herein.[0010] FIG. 5 illustrates an example of a process flow that supports bank-configurable power modes in accordance with examples as disclosed herein.[0011] FIGs. 6A-C illustrate examples of a power mode bitmap for a memory device that supports bank-configurable power modes in accordance with examples as disclosed herein.[0012] FIGs. 7A-C illustrate examples of a power mode bitmap for a memory device that supports bank-configurable power modes in accordance with examples as disclosed herein.[0013] FIGs. 8A-C illustrate examples of a power mode bitmap for a memory device that supports bank-configurable power modes in accordance with examples as disclosed herein. [0014] FIG. 9 illustrates an example of a command mode state diagram that supports bank-configurable power modes in accordance with examples as disclosed herein.[0015] FIG. 10 illustrates an example of a power level consumption profile for a memory device that supports bank-configurable power modes in accordance with examples as disclosed herein. [0016] FIG. 11 shows a block diagram of a memory device that supports bank-configurable power modes in accordance with examples as disclosed herein.[0017] FIGs. 12 through 15 show flowcharts illustrating a method or methods that support bank-configurable power modes in accordance with examples as disclosed herein.
DETAILED DESCRIPTION[0018] Some memory devices may operate in one or more low power modes where the memory device may disable or change operation of circuitry supporting the memory cells to reduce power consumption. For example, a FeRAM device may transition from an idle state to a low power state that has lower power consumption than the idle state based on deactivating some amount of circuitry. In the low power state, the memory device may not be able to perform access operations (e.g., read operations, write operations, etc.) on memory cells of the memory device or to transition directly to an active state in which such operations may be performed (e.g., when in the low power state, the memory device may have to first transition to the idle state, then to the active state). In some cases, a FeRAM device may support different low power states associated with different decreased levels of current consumption (e.g., associated with different amounts of deactivated circuitry).[0019] Similarly, a DRAM device may also transition its memory banks from an idle state to a low power state, which may include powering down one or more circuitry components to decrease the current consumption at the DRAM device. In some cases, a DRAM may maintain a self-refresh mode while powering down other components that are used to perform access operations (e.g., read operations, write operations, etc.).[0020] Transitioning a memory device from a low power mode to an active mode (e.g., to an idle mode and then to an active mode) may take a period of time. For example, the memory device may need to perform one or more procedures to activate circuitry that was deactivated while in the low power state in order to access memory cells. In some cases, low power modes having lower current consumption (i.e., using less energy, having more circuitry deactivated) may take longer to transition to an idle or active mode.[0021] Some memory devices (e.g., some FeRAM and DRAM devices) may control low power modes at a device or die level. That is, when entering a low power mode, circuitry for the entire device or die may change operating modes to use less power. Accordingly, if a memory device receives a command to perform an operation at the memory array, there may be a latency associated with transitioning the memory device or die from a low power mode to an idle mode and then transitioning at least a portion of the memory array (e.g., a bank of the memory array) to an active mode. Further, although the entire memory device may switch in or out of a low power mode, the access operations (e.g., read, write, etc.) may only be performed on the active portion of the memory array. As a result, the memory die may only
switch to a low power mode if it will stay in that low power mode for a minimum duration, such as a long enough duration to achieve an overall reduction in power consumption (e.g., the reduced power use from operating in the low power mode is greater than the power loss associated with transitioning the die into and out of the low power mode). Accordingly, a memory die may achieve less power savings than desired due to losses associated with switching the entire device or die into and out of a low power mode. Further, a memory device or die may operate in a low power mode less frequently due to the latency required to wake up the entire device or die, further increasing power consumption.[0022] A memory device may achieve greater power savings (e.g., less current consumption) by operating different portions of the memory device or a die therein (e.g., different portions of a single memory array) in different power modes. For example, a first portion of the memory device or die may be operated in a first power mode and a second portion of the memory device or die may be operated in a second power mode. The first power mode could be an active mode or a low power mode that can be accessed with a shorter latency (e.g., a quicker wake up time) than other low power modes. Such a low power mode with a relatively shorter latency may be referred to as a power down (PD) mode. The memory device may operate the second portion of the memory in a second low power mode that provides greater power savings than the PD mode, where the second low power mode may be accessed with a longer latency. The low power mode with a relatively longer latency may be referred to as a deep sleep (DS) mode. In some cases, a memory device may support multiple DS modes, which may respectively correspond to different amounts of power consumption (e.g., different amounts of deactivated circuitry for a portion of the memory device in the DS mode), and different portions of the memory device may concurrently be in different DS modes. In some cases, the memory device may switch the first portion of the memory from the PD mode to an idle or active mode while maintaining the second portion of memory in the DS mode (or vice versa). In this regard, the memory device may achieve increased power savings by operating different portions of the device in different power modes and in some cases only transitioning a portion of the memory cells to an active mode to perform access operations. Accordingly, the memory device may reduce losses from transitioning into and out of low power modes and operate in one or more low power modes more often and for a greater amount of time.
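The bank-level control described above lends itself to a small firmware-style model. The following C sketch is illustrative only and rests on assumed names and structures: the type and function names (bank_mode_t, ds_assign_bitmap, write_power_mode_bitmap, enter_low_power, exit_pd_banks, bank_accessible), the 16-bank size, and the single-register bitmap encoding are hypothetical conveniences, not structures taken from this disclosure. The sketch shows a bitmap-driven entry into per-bank low power modes and a selective exit that wakes only the PD banks while the DS banks remain asleep.

#include <stdint.h>
#include <stdbool.h>

#define NUM_BANKS 16u

/* Hypothetical per-bank power modes; names and encodings are assumptions. */
typedef enum { MODE_IDLE, MODE_ACTIVE, MODE_PD, MODE_DS } bank_mode_t;

static bank_mode_t bank_mode[NUM_BANKS];

/* A power mode bitmap held in a hypothetical mode register: bit b set means
 * bank b is assigned the deep sleep (DS) mode on entry to the device-level
 * low power mode; a clear bit assigns the power down (PD) mode. */
static uint16_t ds_assign_bitmap;

/* The host writes the assignment once (e.g., via a mode register write). */
void write_power_mode_bitmap(uint16_t bitmap)
{
    ds_assign_bitmap = bitmap;
}

/* On a device-level "enter low power" command, each bank drops to the low
 * power mode the bitmap assigns it, rather than the whole die taking a
 * single mode. */
void enter_low_power(void)
{
    for (unsigned b = 0; b < NUM_BANKS; b++)
        bank_mode[b] = ((ds_assign_bitmap >> b) & 1u) ? MODE_DS : MODE_PD;
}

/* Selective exit: wake only the PD banks (short wakeup latency) while the
 * DS banks stay in their deeper low power mode. */
void exit_pd_banks(void)
{
    for (unsigned b = 0; b < NUM_BANKS; b++)
        if (bank_mode[b] == MODE_PD)
            bank_mode[b] = MODE_IDLE;
}

/* Access operations may target a bank only after it has left both low
 * power modes. */
bool bank_accessible(unsigned b)
{
    return b < NUM_BANKS &&
           (bank_mode[b] == MODE_IDLE || bank_mode[b] == MODE_ACTIVE);
}

Under this model, a host that concentrates frequently accessed data in the PD-assigned banks can repeatedly wake and re-sleep just those banks around bursts of access operations, leaving the DS banks in their deeper low power mode throughout, which is the power-saving behavior that motivates the power mode bitmap introduced next.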
[0023] Aspects of the teachings herein include using a power mode bitmap to indicate the portions (e.g., memory banks) that are to be operated in various low power modes (e.g., PD and DS power modes). The power mode bitmap may be written to one or more registers (e.g., mode registers) or other storage within the memory device. Additionally or alternatively, the memory device may be configured via one or more commands for switching different portions (e.g., different memory banks) to one or more low power modes. In some cases, when the memory device receives a command to enter a low power mode, the memory device may access the power mode bitmap to determine which portions of the memory device should be operated in which low power mode. Further, while operating in one or more low power modes, the memory device may switch one or more portions from a low power mode to an active or idle mode, while maintaining other portions of the memory device in the low power mode. In some cases, the memory device may control low power modes at the bank level. Accordingly, the memory device may switch different banks between an active mode, an idle mode and one or more different low power modes.[0024] Operation of the different portions of the memory device in different low power modes may result in an overall decreased current consumption. For example, the memory device may be able to operate some portions of the memory in a low power mode such as a DS mode more often and for longer periods of time as compared to a memory device which switches the entire memory device into and out of a low power mode. Further, the memory device may decrease latency by operating a portion of the memory device in a PD mode so that it can more quickly switch to an idle or active mode to address commands received by the memory device.[0025] In some cases, a memory device may benefit from operating some memory banks in a PD mode and other memory banks in one or more DS modes, including when the same or a limited number of banks are repeatedly accessed. For example, these frequently accessed banks may be operated in a PD mode, which may support more quickly switching them to idle/active modes for performing one or more access operations. Accordingly, these banks may achieve slightly increased power savings by switching between the PD mode and idle/active modes. Further, the memory device may achieve greater decreases in power use by operating other memory banks in one or more DS modes and may not need to switch these DS banks into an idle/active mode because access operations are concentrated at the PD banks. Accordingly, the overall current use (e.g., due to both PD and DS modes) for a
memory device may decrease as compared to systems that switch the entire memory device or die between active and low power modes. In some cases, operating additional memory banks in DS modes to increase power savings may decrease bandwidth for access operations due to the greater access times associated with portions of the memory device operating in the DS mode. The quantity of memory banks operating in each of the PD and DS modes may be varied to balance power savings and bandwidth. Further, in some cases a memory controller (internal or external to the memory device) may allocate data for one or more applications across one or more memory banks based on power consumption considerations (e.g., by concentrating associated data within a relatively small number of memory banks, to support increased use of DS modes for other memory banks). These and other benefits may be appreciated by one of ordinary skill in the art.[0026] Features of the disclosure are initially described in the context of a memory system and memory die as described with reference to FIGs. 1-2. Features of the disclosure are described in the context of memory device state diagrams, process flows, power mode bitmaps, and power level consumption diagrams as described with reference to FIGs. 3-10. These and other features of the disclosure are further illustrated by and described with reference to an apparatus diagram and flowcharts that relate to bank-configurable power modes as described with reference to FIGs. 11-15.[0027] FIG. 1 illustrates an example of a system 100 that utilizes one or more memory devices in accordance with examples as disclosed herein. The system 100 may include an external memory controller 105, a memory device 110, and a plurality of channels 115 coupling the external memory controller 105 with the memory device 110. The system 100 may include one or more memory devices, but for ease of description the one or more memory devices may be described as a single memory device 110.[0028] The system 100 may include portions of an electronic device, such as a computing device, a mobile computing device, a wireless device, or a graphics processing device. The system 100 may be an example of a portable electronic device. The system 100 may be an example of a computer, a laptop computer, a tablet computer, a smartphone, a cellular phone, a wearable device, an internet-connected device, or the like. The memory device 110 may be a component of the system 100 configured to store data for one or more other components of the system 100. In some examples, the system 100 is capable of machine-type communication
(MTC), machine-to-machine (M2M) communication, or device-to-device (D2D) communication.[0029] At least portions of the system 100 may be examples of a host device. Such a host device may be an example of a device that uses memory to execute processes such as a computing device, a mobile computing device, a wireless device, a graphics processing device, a computer, a laptop computer, a tablet computer, a smartphone, a cellular phone, a wearable device, an internet-connected device, some other stationary or portable electronic device, or the like. In some cases, the host device may refer to the hardware, firmware, software, or a combination thereof that implements the functions of the external memory controller 105. In some cases, the external memory controller 105 may be referred to as a host or host device. In some examples, system 100 is a graphics card.[0030] In some cases, a memory device 110 may be an independent device or component that is configured to be in communication with other components of the system 100 and provide physical memory addresses/space to potentially be used or referenced by the system 100. In some examples, a memory device 110 may be configurable to work with at least one or a plurality of different types of systems 100. Signaling between the components of the system 100 and the memory device 110 may be operable to support modulation schemes to modulate the signals, different pin designs for communicating the signals, distinct packaging of the system 100 and the memory device 110, clock signaling and synchronization between the system 100 and the memory device 110, timing conventions, and/or other factors.[0031] The memory device 110 may be configured to store data for the components of the system 100. In some cases, the memory device 110 may act as a slave-type device to the system 100 (e.g., responding to and executing commands provided by the system 100 through the external memory controller 105). Such commands may include an access command for an access operation, such as a write command for a write operation, a read command for a read operation, a refresh command for a refresh operation, or other commands. The memory device 110 may include two or more memory dice 160 (e.g., memory chips) to support a desired or specified capacity for data storage. The memory device 110 including two or more memory dice may be referred to as a multi-die memory or package (also referred to as a multi-chip memory or package).[0032] The system 100 may further include a processor 120, a basic input/output system (BIOS) component 125, one or more peripheral components 130, and an input/output (I/O)
controller 135. The components of system 100 may be in electronic communication with one another using a bus 140.[0033] The processor 120 may be configured to control at least portions of the system 100. The processor 120 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or it may be a combination of these types of components. In some cases, the processor 120 may be an example of a central processing unit (CPU), a graphics processing unit (GPU), a general purpose graphic processing unit (GPGPU), or a system on a chip (SoC), among other examples.[0034] The BIOS component 125 may be a software component that includes a BIOS operated as firmware, which may initialize and run various hardware components of the system 100. The BIOS component 125 may also manage data flow between the processor 120 and the various components of the system 100, e.g., the peripheral components 130, the I/O controller 135, etc. The BIOS component 125 may include a program or software stored in read-only memory (ROM), flash memory, or any other non-volatile memory.[0035] The peripheral component(s) 130 may be any input device or output device, or an interface for such devices, that may be integrated into or with the system 100. Examples may include disk controllers, sound controllers, graphics controllers, Ethernet controllers, modems, universal serial bus (USB) controllers, serial or parallel ports, or peripheral card slots, such as peripheral component interconnect (PCI) or specialized graphics ports. The peripheral component(s) 130 may be other components understood by those skilled in the art as peripherals.[0036] The I/O controller 135 may manage data communication between the processor 120 and the peripheral component(s) 130, input devices 145, or output devices 150. The I/O controller 135 may manage peripherals that are not integrated into or with the system 100. In some cases, the I/O controller 135 may represent a physical connection or port to external peripheral components.[0037] The input 145 may represent a device or signal external to the system 100 that provides information, signals, or data to the system 100 or its components. This may include a user interface or interface with or between other devices. In some cases, the input 145 may
be a peripheral that interfaces with system 100 via one or more peripheral components 130 or may be managed by the I/O controller 135.[0038] The output 150 may represent a device or signal external to the system 100 configured to receive an output from the system 100 or any of its components. Examples of the output 150 may include a display, audio speakers, a printing device, or another processor on a printed circuit board, and so forth. In some cases, the output 150 may be a peripheral that interfaces with the system 100 via one or more peripheral components 130 or may be managed by the I/O controller 135.[0039] The memory device 110 may include a device memory controller 155 and one or more memory dice 160. Each memory die 160 may include a local memory controller 165 (e.g., local memory controller 165-a, local memory controller 165-b, and/or local memory controller 165-N) and a memory array 170 (e.g., memory array 170-a, memory array 170-b, and/or memory array 170-N). A memory array 170 may be a collection (e.g., a grid) of memory cells, with each memory cell being configured to store at least one bit of digital data. Features of memory arrays 170 and/or memory cells are described in more detail with reference to FIG. 2.[0040] The memory device 110 may be an example of a two-dimensional (2D) array of memory cells or may be an example of a three-dimensional (3D) array of memory cells. For example, a 2D memory device may include a single memory die 160. A 3D memory device may include two or more memory dice 160 (e.g., memory die 160-a, memory die 160-b, and/or any quantity of memory dice 160-N). In a 3D memory device, a plurality of memory dice 160-N may be stacked on top of one another or next to one another. In some cases, memory dice 160-N in a 3D memory device may be referred to as decks, levels, layers, or dies. A 3D memory device may include any quantity of stacked memory dice 160-N (e.g., two high, three high, four high, five high, six high, seven high, eight high). This may increase the quantity of memory cells that may be positioned on a substrate as compared with a single 2D memory device, which in turn may reduce production costs or increase the performance of the memory array, or both. In some 3D memory devices, different decks may share at least one common access line such that some decks may share at least one of a word line, a digit line, and/or a plate line.[0041] The device memory controller 155 may include circuits or components configured to control operation of the memory device 110. As such, the device memory controller 155
may include the hardware, firmware, and software that enables the memory device 110 to perform commands and may be configured to receive, transmit, or execute commands, data, or control information related to the memory device 110. The device memory controller 155 may be configured to communicate with the external memory controller 105, the one or more memory dice 160, or the processor 120. In some cases, the memory device 110 may receive data and/or commands from the external memory controller 105. For example, the memory device 110 may receive a write command indicating that the memory device 110 is to store certain data on behalf of a component of the system 100 (e.g., the processor 120) or a read command indicating that the memory device 110 is to provide certain data stored in a memory die 160 to a component of the system 100 (e.g., the processor 120). In some cases, the device memory controller 155 may control operation of the memory device 110 described herein in conjunction with the local memory controller 165 of the memory die 160. Examples of the components included in the device memory controller 155 and/or the local memory controllers 165 may include receivers for demodulating signals received from the external memory controller 105, decoders for modulating and transmitting signals to the external memory controller 105, logic, decoders, amplifiers, filters, or the like.[0042] The local memory controller 165 (e.g., local to a memory die 160) may be configured to control operations of the memory die 160. Also, the local memory controller 165 may be configured to communicate (e.g., receive and transmit data and/or commands) with the device memory controller 155. The local memory controller 165 may support the device memory controller 155 to control operation of the memory device 110 as described herein. In some cases, the memory device 110 does not include the device memory controller 155, and the local memory controller 165 or the external memory controller 105 may perform the various functions described herein. As such, the local memory controller 165 may be configured to communicate with the device memory controller 155, with other local memory controllers 165, or directly with the external memory controller 105 or the processor 120.[0043] The external memory controller 105 may be configured to enable communication of information, data, and/or commands between components of the system 100 (e.g., the processor 120) and the memory device 110. The external memory controller 105 may act as a liaison between the components of the system 100 and the memory device 110 so that the components of the system 100 may not need to know the details of the memory device’s operation. The components of the system 100 may present requests to the external memory controller 105 (e.g., read commands or write commands) that the external memory controller
105 satisfies. The external memory controller 105 may convert or translate communications exchanged between the components of the system 100 and the memory device 110. In some cases, the external memory controller 105 may include a system clock that generates a common (source) system clock signal. In some cases, the external memory controller 105 may include a common data clock that generates a common (source) data clock signal.[0044] In some cases, the external memory controller 105 or other component of the system 100, or its functions described herein, may be implemented by the processor 120. For example, the external memory controller 105 may be hardware, firmware, or software, or some combination thereof implemented by the processor 120 or other component of the system 100. While the external memory controller 105 is depicted as being external to the memory device 110, in some cases, the external memory controller 105, or its functions described herein, may be implemented by a memory device 110. For example, the external memory controller 105 may be hardware, firmware, or software, or some combination thereof implemented by the device memory controller 155 or one or more local memory controllers 165. In some cases, the external memory controller 105 may be distributed across the processor 120 and the memory device 110 such that portions of the external memory controller 105 are implemented by the processor 120 and other portions are implemented by a device memory controller 155 or a local memory controller 165. Likewise, one or more functions ascribed herein to the device memory controller 155 or local memory controller 165 may in some cases be performed by the external memory controller 105 (either separate from or as included in the processor 120).[0045] The components of the system 100 may exchange information with the memory device 110 using a plurality of channels 115. In some examples, the channels 115 may enable communications between the external memory controller 105 and the memory device 110. Each channel 115 may include one or more signal paths or transmission mediums (e.g., conductors) between terminals associated with the components of system 100. For example, a channel 115 may include a first terminal including one or more pins or pads at external memory controller 105 and one or more pins or pads at the memory device 110. A pin may be an example of a conductive input or output point of a device of the system 100, and a pin may be configured to act as part of a channel. In some cases, a pin or pad of a terminal may be part of a signal path of the channel 115. Additional signal paths may be coupled with a terminal of a channel for routing signals within a component of the system 100. For example, the memory device 110 may include signal paths (e.g., signal paths internal to the memory
device 110 or its components, such as internal to a memory die 160) that route a signal from a terminal of a channel 115 to the various components of the memory device 110 (e.g., a device memory controller 155, memory dice 160, local memory controllers 165, memory arrays 170).[0046] Channels 115 (and associated signal paths and terminals) may be dedicated to communicating specific types of information. In some cases, a channel 115 may be an aggregated channel and thus may include multiple individual channels. For example, a data channel 190 may be x4 (e.g., including four signal paths), x8 (e.g., including eight signal paths), x16 (e.g., including sixteen signal paths), and so forth. Signals communicated over the channels may use a double data rate (DDR) timing scheme. For example, some symbols of a signal may be registered on a rising edge of a clock signal and other symbols of the signal may be registered on a falling edge of the clock signal. Signals communicated over channels may use single data rate (SDR) signaling. For example, one symbol of the signal may be registered for each clock cycle.[0047] In some cases, the channels 115 may include one or more command and address (CA) channels 186. The CA channels 186 may be configured to communicate commands between the external memory controller 105 and the memory device 110 including control information associated with the commands (e.g., address information). For example, the CA channel 186 may include a read command with an address of the desired data. In some cases, the CA channels 186 may be registered on a rising clock signal edge and/or a falling clock signal edge. In some cases, a CA channel 186 may include any quantity of signal paths to decode address and command data (e.g., eight or nine signal paths).[0048] In some cases, the channels 115 may include one or more clock signal (CK) channels 188. The CK channels 188 may be configured to communicate one or more common clock signals between the external memory controller 105 and the memory device 110. Each clock signal may be configured to oscillate between a high state and a low state and coordinate the actions of the external memory controller 105 and the memory device 110. In some cases, the clock signal may be a differential output (e.g., a CK_t signal and a CK_c signal) and the signal paths of the CK channels 188 may be configured accordingly. In some cases, the clock signal may be single ended. A CK channel 188 may include any quantity of signal paths. In some cases, the clock signal CK (e.g., a CK_t signal and a CK_c signal) may provide a timing reference for command and addressing operations for the memory device
110, or other system-wide operations for the memory device 110. The clock signal CK may therefore be variously referred to as a control clock signal CK, a command clock signal CK, or a system clock signal CK. The system clock signal CK may be generated by a system clock, which may include one or more hardware components (e.g., oscillators, crystals, logic gates, transistors, or the like).[0049] In some cases, the channels 115 may include one or more data (DQ) channels 190. The data channels 190 may be configured to communicate data and/or control information between the external memory controller 105 and the memory device 110. For example, the data channels 190 may communicate information (e.g., bi-directional) to be written to the memory device 110 or information read from the memory device 110.[0050] In some cases, the channels 115 may include one or more other channels 192 that may be dedicated to other purposes. These other channels 192 may include any quantity of signal paths.[0051] The channels 115 may couple the external memory controller 105 with the memory device 110 using a variety of different architectures. Examples of the various architectures may include a bus, a point-to-point connection, a crossbar, a high-density interposer such as a silicon interposer, or channels formed in an organic substrate or some combination thereof. For example, in some cases, the signal paths may at least partially include a high-density interposer, such as a silicon interposer or a glass interposer.[0052] Signals communicated over the channels 115 may be modulated using a variety of different modulation schemes. In some cases, a binary-symbol (or binary-level) modulation scheme may be used to modulate signals communicated between the external memory controller 105 and the memory device 110. A binary-symbol modulation scheme may be an example of an M-ary modulation scheme where M is equal to two. Each symbol of a binary-symbol modulation scheme may be configured to represent one bit of digital data (e.g., a symbol may represent a logic 1 or a logic 0). Examples of binary-symbol modulation schemes include, but are not limited to, non-return-to-zero (NRZ), unipolar encoding, bipolar encoding, Manchester encoding, pulse amplitude modulation (PAM) having two symbols (e.g., PAM2), and/or others.[0053] The memory device 110 may receive one or more commands to operate one or more portions of the memory device 110 in a low power mode. For example, the memory device 110 may receive a command to write power bitmap data (e.g., via a CA channel 186)
to one or more mode registers of the memory device 110. The power bitmap data may indicate a first portion of the memory device 110 (e.g., one or more memory banks) to be operated in a first low power mode such as a PD mode. The power bitmap data may also indicate a second portion of the memory device 110 to be operated in a second low power mode that is associated with a lower power consumption level than the first mode such as a DS mode. The memory device 110 may receive the power bitmap data and write it to one or more mode registers.[0054] The memory device 110 may receive a command to enter a low power mode and access the power bitmap data stored on the mode registers. Additionally or alternatively, the memory device 110 may receive one or more commands specifying information that may be otherwise written to mode registers. The memory device 110 may switch the first portion of the memory device 110 to the PD mode and switch the second portion of the memory device 110 to the DS mode. While operating the first and second portions in their respective low power modes, the memory device 110 may receive a command to switch the first portion from the PD mode, for example, to an idle or active mode (e.g., the memory device 110 may receive a command to selectively switch only portions in the PD mode to an idle or active mode, leaving portions in a DS mode in the DS mode). The memory device 110 may cause the first portion of the memory device 110 to exit from the PD mode while continuing to operate the second portion in the DS mode. The memory device 110 may perform one or more operations at the first portion of the memory device 110. For example, the memory device 110 may perform read or write operations on banks associated with the first portion of the memory device 110. In some cases, the memory device 110 may receive a command to switch the first portion back to the low power mode and cause the first portion to enter the PD mode.[0055] FIG. 2 illustrates an example of a memory die 200 in accordance with examples as disclosed herein. The memory die 200 may be an example of the memory dice 160 described with reference to FIG. 1. In some cases, the memory die 200 may be referred to as a memory chip, a memory device, or an electronic memory apparatus. The memory die 200 may include one or more memory cells 205 that are programmable to store different logic states. Each memory cell 205 may be programmable to store two or more states. For example, the memory cell 205 may be configured to store one bit of information at a time (e.g., a logic 0 or a logic 1). In some cases, a single memory cell 205 (e.g., a multi-level memory cell) may
be configured to store more than one bit of information at a time (e.g., a logic 00, logic 01, logic 10, or a logic 11).[0056] A memory cell 205 may store a state (e.g., polarization state or dielectric charge) that represents digital data. In FeRAM architectures, the memory cell 205 may include a capacitor that includes a ferroelectric material to store a charge and/or a polarization representative of the programmable state. In DRAM architectures, the memory cell 205 may include a capacitor that includes a dielectric material to store a charge representative of the programmable state.[0057] Operations such as reading and writing may be performed on memory cells 205 by activating or selecting access lines such as a word line 210, a digit line 215, and/or a plate line 220. In some cases, digit lines 215 may also be referred to as bit lines. References to access lines, word lines, digit lines, plate lines, or their analogues are interchangeable without loss of understanding or operation. Activating or selecting a word line 210, a digit line 215, or a plate line 220 may include applying a voltage to the respective line.[0058] The memory die 200 may include the access lines (e.g., the word lines 210, the digit lines 215, and the plate lines 220) arranged in a grid-like pattern. Memory cells 205 may be positioned at intersections of the word lines 210, the digit lines 215, and/or the plate lines 220. By biasing a word line 210, a digit line 215, and a plate line 220 (e.g., applying a voltage to the word line 210, digit line 215, or plate line 220), a single memory cell 205 may be accessed at their intersection.[0059] Accessing the memory cells 205 may be controlled through a row decoder 225, a column decoder 230, and a plate driver 235. For example, a row decoder 225 may receive a row address from the local memory controller 265 and activate a word line 210 based on the received row address. A column decoder 230 may receive a column address from the local memory controller 265 and activate a digit line 215 based on the received column address. A plate driver 235 may receive a plate address from the local memory controller 265 and activate a plate line 220 based on the received plate address. For example, the memory die 200 may include multiple word lines 210, labeled WL_1 through WL_M, multiple digit lines 215, labeled DL_1 through DL_N, and multiple plate lines, labeled PL_1 through PL_P, where M, N, and P depend on the size of the memory array. Thus, by activating a word line 210, a digit line 215, and a plate line 220, e.g., WL_1, DL_3, and PL_1, the memory cell 205 at their intersection may be accessed. The intersection of a word line 210 and a digit line 215,
in either a two-dimensional or three-dimensional configuration, may be referred to as an address of a memory cell 205. In some cases, the intersection of a word line 210, a digit line 215, and a plate line 220 may be referred to as an address of the memory cell 205.[0060] The memory cell 205 may include a logic storage component, such as capacitor 240, and a switching component 245. The capacitor 240 may be an example of a ferroelectric capacitor. A first node of the capacitor 240 may be coupled with the switching component 245 and a second node of the capacitor 240 may be coupled with a plate line 220. The switching component 245 may be an example of a transistor or any other type of switch device that selectively establishes or de-establishes electronic communication between two components.[0061] Selecting or deselecting the memory cell 205 may be accomplished by activating or deactivating the switching component 245. The capacitor 240 may be in electronic communication with the digit line 215 using the switching component 245. For example, the capacitor 240 may be isolated from digit line 215 when the switching component 245 is deactivated, and the capacitor 240 may be coupled with digit line 215 when the switching component 245 is activated. In some cases, the switching component 245 is a transistor and its operation is controlled by applying a voltage to a transistor gate, where the voltage differential between the transistor gate and transistor source is greater than or less than a threshold voltage of the transistor. In some cases, the switching component 245 may be a p-type transistor or an n-type transistor. The word line 210 may be in electronic communication with the gate of the switching component 245 and may activate/deactivate the switching component 245 based on a voltage being applied to word line 210.[0062] A word line 210 may be a conductive line in electronic communication with a memory cell 205 that is used to perform access operations on the memory cell 205. In some architectures, the word line 210 may be in electronic communication with a gate of a switching component 245 of a memory cell 205 and may be configured to control the switching component 245 of the memory cell. In some architectures, the word line 210 may be in electronic communication with a node of the capacitor of the memory cell 205 and the memory cell 205 may not include a switching component.[0063] A digit line 215 may be a conductive line that connects the memory cell 205 with a sense component 250. In some architectures, the memory cell 205 may be selectively coupled with the digit line 215 during portions of an access operation. For example, the word
line 210 and the switching component 245 of the memory cell 205 may be configured to selectively couple and/or isolate the capacitor 240 of the memory cell 205 and the digit line 215. In some architectures, the memory cell 205 may be in constant electronic communication with the digit line 215.

[0064] A plate line 220 may be a conductive line in electronic communication with a memory cell 205 that is used to perform access operations on the memory cell 205. The plate line 220 may be in electronic communication with a node (e.g., the cell bottom) of the capacitor 240. The plate line 220 may be configured to cooperate with the digit line 215 to bias the capacitor 240 during access operations of the memory cell 205.

[0065] The sense component 250 may be configured to determine a state (e.g., a polarization state or a charge) stored on the capacitor 240 of the memory cell 205 and determine a logic state of the memory cell 205 based on the detected state. The charge stored by a memory cell 205 may be extremely small, in some cases. As such, the sense component 250 may include one or more sense amplifiers to amplify the signal output of the memory cell 205. The sense amplifiers may detect minute changes in the charge of a digit line 215 during a read operation and may produce signals corresponding to either a logic 0 or a logic 1 based on the detected charge. During a read operation, the capacitor 240 of memory cell 205 may output a signal (e.g., discharge a charge) to its corresponding digit line 215. The signal may cause a voltage of the digit line 215 to change. The sense component 250 may be configured to compare the signal received from the memory cell 205 across the digit line 215 to a reference signal 255 (e.g., a reference voltage). The sense component 250 may determine the stored state of the memory cell 205 based on the comparison. For example, in binary signaling, if digit line 215 has a higher voltage than the reference signal 255, the sense component 250 may determine that the stored state of memory cell 205 is a logic 1, and, if the digit line 215 has a lower voltage than the reference signal 255, the sense component 250 may determine that the stored state of the memory cell 205 is a logic 0. The sense component 250 may include various transistors or amplifiers to detect and amplify a difference in the signals. The detected logic state of the memory cell 205 may be provided as an output of the sense component 250 (e.g., to an input/output 260), and may indicate the detected logic state to another component of a memory device 110 that includes the memory die 200, such as a device memory controller 155 (e.g., directly or using the local memory controller 265). In some cases, the sense component 250 may be in electronic communication with the row decoder 225, the column decoder 230, and/or the plate driver 235.
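The binary sensing decision described above can be sketched in a few lines of Python; the threshold-style comparison and the names below are illustrative assumptions, not the sense amplifier circuitry itself.

```python
# Hypothetical sketch: compare the digit-line signal to the reference
# signal 255 and latch a logic state, as in the binary signaling example.

def sense(digit_line_voltage: float, reference_voltage: float) -> int:
    """Return logic 1 if the digit line is above the reference, else logic 0."""
    return 1 if digit_line_voltage > reference_voltage else 0

assert sense(0.9, 0.5) == 1  # higher voltage than the reference -> logic 1
assert sense(0.2, 0.5) == 0  # lower voltage than the reference -> logic 0
```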
[0066] The local memory controller 265 may control the operation of memory cells 205 through the various components (e.g., row decoder 225, column decoder 230, plate driver 235, and sense component 250). The local memory controller 265 may be an example of the local memory controller 165 described with reference to FIG. 1. In some cases, one or more of the row decoder 225, column decoder 230, plate driver 235, and sense component 250 may be co-located with the local memory controller 265. The local memory controller 265 may be configured to receive one or more commands and/or data from an external memory controller 105 (or a device memory controller 155 described with reference to FIG. 1), translate the commands and/or data into information that can be used by the memory die 200, perform one or more operations on the memory die 200, and communicate data from the memory die 200 to the external memory controller 105 (or the device memory controller 155) in response to performing the one or more operations. The local memory controller 265 may generate row, column, and/or plate line address signals to activate the target word line 210, the target digit line 215, and the target plate line 220. The local memory controller 265 may also generate and control various voltages or currents used during the operation of the memory die 200. In general, the amplitude, shape, or duration of an applied voltage or current discussed herein may be adjusted or varied and may be different for the various operations discussed in operating the memory die 200.

[0067] In some cases, the local memory controller 265 may be configured to perform a write operation (e.g., a programming operation) on one or more memory cells 205 of the memory die 200. During a write operation, a memory cell 205 of the memory die 200 may be programmed to store a desired logic state. In some cases, a plurality of memory cells 205 may be programmed during a single write operation. The local memory controller 265 may identify a target memory cell 205 on which to perform the write operation. The local memory controller 265 may identify a target word line 210, a target digit line 215, and/or a target plate line 220 in electronic communication with the target memory cell 205 (e.g., the address of the target memory cell 205). The local memory controller 265 may activate the target word line 210, the target digit line 215, and/or the target plate line 220 (e.g., applying a voltage to the word line 210, digit line 215, or the plate line 220), to access the target memory cell 205. The local memory controller 265 may apply a specific signal (e.g., voltage) to the digit line 215
and a specific signal (e.g., voltage) to the plate line 220 during the write operation to store a specific state in the capacitor 240 of the memory cell 205, the specific state being indicative of a desired logic state.

[0068] In some cases, the local memory controller 265 may be configured to perform a read operation (e.g., a sense operation) on one or more memory cells 205 of the memory die 200. During a read operation, the logic state stored in a memory cell 205 of the memory die 200 may be determined. In some cases, a plurality of memory cells 205 may be sensed during a single read operation. The local memory controller 265 may identify a target memory cell 205 on which to perform the read operation. The local memory controller 265 may identify a target word line 210, a target digit line 215, and/or a target plate line 220 in electronic communication with the target memory cell 205 (e.g., the address of the target memory cell 205). The local memory controller 265 may activate the target word line 210, the target digit line 215, and/or a target plate line 220 (e.g., applying a voltage to the word line 210, the digit line 215, or the plate line 220), to access the target memory cell 205. The target memory cell 205 may transfer a signal to the sense component 250 in response to biasing the access lines. The sense component 250 may amplify the signal. The local memory controller 265 may fire the sense component 250 (e.g., latch the sense component) and thereby compare the signal received from the memory cell 205 to the reference signal 255. Based on that comparison, the sense component 250 may determine a logic state that is stored on the memory cell 205. The local memory controller 265 may communicate the logic state stored on the memory cell 205 to the external memory controller 105 (or the device memory controller) as part of the read operation.

[0069] In some memory architectures, accessing the memory cell 205 may degrade or destroy the logic state stored in a memory cell 205. For example, a read operation performed on a ferroelectric memory cell may destroy the logic state stored in the ferroelectric capacitor. In another example, a read operation performed in DRAM architectures may partially or completely discharge the capacitor of the target memory cell. The local memory controller 265 may perform a re-write operation or a refresh operation to return the memory cell to its original logic state. The local memory controller 265 may re-write the logic state to the target memory cell after a read operation. In some cases, the re-write operation may be considered part of the read operation. Additionally, activating a single access line, such as a word line 210, may disturb the state stored in some memory cells in electronic communication with that
access line. Thus, a re-write operation or refresh operation may be performed on one or more memory cells that may not have been accessed.

[0070] FIG. 3 illustrates an example of a memory device state diagram 300 that supports bank-configurable power modes in accordance with examples as disclosed herein. The features of the memory device state diagram 300 may be performed by a memory device (e.g., the memory device 110, the memory dice 160 or the memory die 200 described with reference to FIGs. 1-2) or one or more components of a memory device such as the memory device controller 155, the local memory controllers 165 or the local memory controller 265 described with reference to FIGs. 1-2. The memory device state diagram 300 may illustrate different states or operating modes that one or more portions (e.g., one or more banks) of the memory device may transition between. In some cases, certain functions of the memory device (e.g., read operations, write operations, refresh operations, etc.) may only be executed when a relevant portion (e.g., bank) of the memory device is operating in a specific state or mode (e.g., an active mode).

[0071] Among other states, a memory device (e.g., at the direction of one or more controllers for the memory device, such as an external memory controller 105, a device memory controller 155, or a local memory controller 165, or a combination thereof) may operate in an idle state 305 or an active state 310. In the idle state 305, no memory cells may be available for access. In the active state 310, at least one portion (e.g., one bank, one row within one bank) of the memory device may be activated and available for access operations (e.g., read operations, write operations, or refresh operations). The memory device may switch between the idle mode 305 and the active mode 310 based on receiving one or more commands. For example, a memory device may be operating in the idle mode 305 and switch to the active mode 310 based on receiving a command (e.g., an Activate command for a memory bank) from a host device. The Activate command may transition the device into an activating mode 306, which may power up one or more components of the memory device. Once components of the memory device have powered up, the memory device may automatically transition from the activating mode 306 to the active mode 310. In some cases, a memory device may be operating in the active mode 310 and switch to the idle mode 305. For example, the memory device, while operating in the active mode 310, may receive a command to switch to an idle mode and perform one or more procedures to transition to the idle mode 305. In some cases (e.g., DRAM memory devices), a memory device may receive
a precharge command and transition to a precharging mode 312. From the precharging mode 312 the memory device may automatically transition to the idle mode 305, for example, after completing one or more precharging operations. The memory device may switch from the idle state 305 to any of a first set of operating modes, and from the active state 310 to any of a second set of modes.

[0072] In some cases, from the idle mode 305, a controller may switch the memory device to operate in one of multiple low power modes 315, 320. For example, a memory device may enter a first low power mode 315 (e.g., Power Down-0), which may be referred to as a power down (PD) mode. When operating in the PD mode 315, a memory device may consume less current than when operating in the idle mode 305 or the active mode 310. In some examples, the PD mode 315 may be associated with the highest amount of current consumption out of the low power modes 315, 320 and have the shortest exit time back to the idle mode 305. In other examples, a memory device may enter a second set of low power modes 320 (e.g., Power Down-1, Power Down-2, Power Down-3), which may be referred to as a deep sleep (DS) mode 320. When operating in the DS mode 320, a memory device may consume less current (and deactivate more components) than when operating in the PD mode 315. A first DS mode 320-a may have the highest current consumption of the DS modes 320 and be referred to as a first DS level. A second DS mode 320-b may have a lower current consumption (and deactivate more components) than the first DS mode 320-a and be referred to as a second DS level. A third DS mode 320-c may have the lowest current consumption (and the most deactivated components) of the DS modes 320 and be referred to as a third DS level. In some examples, the first DS mode 320-a may have a slower exit time (e.g., time to switch to the idle state 305 or active state 310) than the PD mode 315, but the fastest exit time of the DS modes 320. The second DS mode 320-b may have a slower exit time than the first DS mode 320-a and a faster exit time than the third DS mode 320-c. It is to be understood that any number of PD or DS modes is possible.

[0073] In some examples, when a memory device is exiting a DS mode 320, the device may first transition to a different low power mode such as the PD mode 315 or a refresh mode before switching to the idle mode 305. For example, when a memory device is operating in the first DS mode 320-a and receives a command to exit the low power mode, the memory device may first switch to the PD mode 315 before switching to the idle mode 305. In examples including a DRAM memory device, the PD mode 315 may include
performing self-refresh operations before switching to the idle mode 305. In some examples, when a memory device is exiting a DS mode 320 or a PD mode 315, it may transition directly to an idle mode, an active mode, or a different low power mode (e.g., from one DS mode 320 to another DS mode 320, from one PD mode 315 to another PD mode 315, from a DS mode 320 to a PD mode 315, or from a PD mode 315 to a DS mode 320).

[0074] In some cases, from the active mode 310, a controller may switch a memory device to operate in the active power down mode 325 or the access mode 330. In the active power down mode 325, current consumption may be higher than in low power modes 315, 320. For example, at least some circuitry within the memory device that is deactivated while the memory device is in one of the low power modes 315, 320 may remain active while the memory device is in the active power down mode 325. When the memory bank is in the access mode 330, a memory device may perform one or more access operations (e.g., read, write, etc.) on memory cells of an activated portion of the memory device (e.g., an activated memory bank).

[0075] In some cases, as described herein, different portions of a memory device may be operated in different modes. For example, a memory device may be operating in a first mode such as an idle mode 305. A controller may switch a first portion of the memory device to operate in a first low power mode such as the PD mode 315 and a second portion of the memory device to operate in a second low power mode such as the DS mode 320. In some cases, the controller may subsequently switch the first portion from the PD mode 315 to the idle mode 305 or active mode 310 while maintaining the second portion in the DS mode. The controller may also subsequently switch the first portion from the idle mode 305 or active mode 310 back to a low power mode, such as the PD mode 315, while maintaining the second portion in the DS mode.

[0076] In some cases, a controller may switch operating modes of the memory device at a bank level. For example, one or more banks of the memory device may be independently switched between different operating modes. In some examples, a first bank or set of banks may be switched (e.g., from an idle mode 305) to operate in the PD mode 315, while a second bank or set of banks are switched (e.g., from the idle mode 305) to operate in the DS mode 320. In other examples, additional sets of banks may be independently operated in other or the same modes. For example, a first set of banks may be switched to the PD mode 315 and a second set of banks may also be switched to the PD mode 315. In some cases, the first set of
banks may be switched out of the PD mode 315 while maintaining the second set of banks in the PD mode 315 (or in a DS mode 320). In further examples, a third set of banks may be operated in a different mode, such as the DS mode 320. Accordingly, a memory device may dynamically and independently switch different banks or bank groups between different operating modes. It is to be understood that concepts may in some cases be described with reference to one or more memory banks as an example and for clarity, but that a memory device may switch other portions of a memory device or die between different operating modes in a similar manner as described herein for memory banks.

[0077] FIG. 4A illustrates an example of a command mode state diagram 401 that supports bank-configurable power modes in accordance with examples as disclosed herein. The features of the command mode state diagram 401 may be performed by a memory device (e.g., the memory device 110, the memory dice 160 or the memory die 200 described with reference to FIGs. 1-2) or one or more components of a memory device such as the memory device controller 155, the local memory controllers 165 or the local memory controller 265 described with reference to FIGs. 1-2. The command mode state diagram 401 may illustrate one or more commands 425, 426 used to switch a memory device between an idle mode 405, which may be an example of the idle mode 305 described with reference to FIG. 3, and a low power mode 415, which may be an example of the low power modes 320 (e.g., DS modes) described with reference to FIG. 3. In some cases, a mode register write (MRW) command 430 may be used to write data to one or more mode registers of a memory device.

[0078] A memory device may be configured via one or more commands for switching different portions of the memory device between different modes. In some examples, data indicating an assignment of different portions of the memory device (e.g., memory banks) to different low power modes may be stored in a mode register. A mode register write (MRW) command 430 may be used to switch a memory device into an MRW mode 420. For example, a memory device may be operating in the idle mode 405 and receive the MRW command 430. In response, the memory device may switch from the idle mode 405 to the MRW mode 420, and while in the MRW mode 420, the memory device may write data to the mode register indicating an assignment of different memory banks to different low power modes. Upon completion of writing the mode register data, the memory device may switch from the MRW mode 420 back to the idle mode 405. In some examples, the switch from the
MRW mode 420 to the idle mode may be automatic (e.g., upon completion of writing the mode register data).

[0079] A power down enter (PDE) command may be used to switch a memory device into the low power mode 415. For example, a memory device may be operating in the idle mode 405 and receive the PDE command 425-a. In response, the memory device may switch from the idle mode 405 to the low power mode 415 (e.g., DS mode), which may include different portions of the memory device entering different low power modes (e.g., a first set of banks entering a first DS level and a second set of banks entering a second DS level). In some cases, the assignment of a first portion of the memory device to a first low power mode 415 (e.g., a first DS level) and a second portion of the memory device to a second low power mode 415 (e.g., a second DS level) may be based on data stored at the mode register. Different portions of the memory device may be operated in their respective low power modes until receiving one or more power down exit (PDX) commands.

[0080] In some cases, the memory device may receive an exit command 426 (e.g., PDX_ALL) to switch all memory banks out of the low power mode 415. For example, the exit all command 426 may be configured to indicate that all memory banks operating in the low power mode 415 (e.g., DS mode) are to be switched to the idle mode 405. The memory banks operating in the DS mode may switch to the idle mode 405 in the DS exit time, which may be greater than the PD exit time. In some cases, an exit time may alternatively be referred to as a wakeup time.

[0081] FIG. 4B illustrates an example of a command mode state diagram 402 that supports bank-configurable power modes in accordance with examples as disclosed herein. The features of the command mode state diagram 402 may be performed by a memory device (e.g., the memory device 110, the memory dice 160 or the memory die 200 described with reference to FIGs. 1-2) or one or more components of a memory device such as the memory device controller 155, the local memory controllers 165 or the local memory controller 265 described with reference to FIGs. 1-2. The command mode state diagram 402 may illustrate one or more commands 425, 427 used to switch a memory device between an idle mode 405, which may be an example of the idle mode 305 described with reference to FIG. 3, and a low power mode 417, which may be an example of the low power modes 315 (e.g., PD mode) described with reference to FIG. 3. In some cases, the MRW command 430 may be used to write data to one or more mode registers of a memory device as described herein.
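A hedged Python sketch of the command handling in FIGs. 4A-4B follows: a PDE command moves idle banks into whatever low power mode the mode register assigns them, PDX_SEL wakes only the PD banks, and PDX_ALL wakes everything. The dictionary representation and function name are assumptions made for illustration only.

```python
# Hypothetical sketch of per-bank mode transitions driven by PDE/PDX commands.
bank_modes = {0: "idle", 1: "idle", 2: "idle", 3: "idle"}
mode_register = {0: "PD", 1: "DS", 2: "PD", 3: "DS"}  # assigned low power modes

def handle_command(command: str) -> None:
    for bank, mode in bank_modes.items():
        if command == "PDE" and mode == "idle":
            bank_modes[bank] = mode_register[bank]  # enter the assigned mode
        elif command == "PDX_SEL" and mode == "PD":
            bank_modes[bank] = "idle"  # PD banks exit within the PD exit time
        elif command == "PDX_ALL" and mode in ("PD", "DS"):
            bank_modes[bank] = "idle"  # DS banks take the longer DS exit time

handle_command("PDE")      # banks 0 and 2 -> PD; banks 1 and 3 -> DS
handle_command("PDX_SEL")  # banks 0 and 2 -> idle; banks 1 and 3 remain in DS
print(bank_modes)          # {0: 'idle', 1: 'DS', 2: 'idle', 3: 'DS'}
```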
[0082] A power down enter (PDE) command may be used to switch a memory device into the low power mode 417 (e.g., PD mode). For example, a memory device may be operating in the idle mode 405 and receive the PDE command 425. In response, the memory device may switch from the idle mode 405 to the low power mode 417 (e.g., PD mode). In some cases, the PDE command 425 may include aspects of the PDE command 425 described with reference to FIG. 4A. For example, the PDE command 425 may switch the memory device from an idle mode 405 to one or more low power modes 415, 417, which may include different portions of the memory device entering different low power modes (e.g., a first set of banks entering a PD mode and a second set of banks entering a DS mode). In some cases, the assignment of a first portion of the memory device to a first low power mode 415 (e.g., a DS mode) and a second portion of the memory device to a second low power mode 417 (e.g., a PD mode) may be based on data stored at the mode register. Different portions of the memory device may be operated in their respective low power modes until receiving one or more power down exit (PDX) commands.

[0083] In some cases, the memory device may receive a first selective exit command 427 (e.g., PDX_SEL) instructing the memory device to switch a portion of the memory device out of the low power mode 417. For example, the selective exit command 427 may be configured to indicate that all memory banks operating in the PD mode 417 are to be switched to the idle mode 405, while maintaining memory banks operating in the DS mode 415. The memory banks operating in the PD mode may switch to the idle mode 405 within the exit time associated with the PD mode.

[0084] In some cases, the memory device may receive a second exit command 427 (e.g., PDX_ALL) to switch all memory banks out of the low power mode 417. For example, the exit all command 427 may be configured to indicate that all memory banks operating in a low power mode 415, 417 (e.g., PD mode, DS mode, etc.) are to be switched to the idle mode 405. The memory banks operating in the PD mode may switch to the idle mode 405 in the PD exit time and the memory banks operating in the DS mode may switch to the idle mode 405 in the DS exit time, which may be greater than the PD exit time. In some cases, an exit time may alternatively be referred to as a wakeup time.

[0085] FIG. 4C illustrates an example of a command mode state diagram 403 that supports bank-configurable power modes in accordance with examples as disclosed herein. The features of the command mode state diagram 403 may be performed by a memory device
(e.g., the memory device 110, the memory dice 160 or the memory die 200 described with reference to FIGs. 1-2) or one or more components of a memory device such as the memory device controller 155, the local memory controllers 165 or the local memory controller 265 described with reference to FIGs. 1-2. The command mode state diagram 403 may illustrate one or more commands 425, 427 used to switch a memory device between an active mode 410, which may be an example of the active mode 310 described with reference to FIG. 3, and a low power mode 419, which may be an example of the active power down mode 325 described with reference to FIG. 3. In some cases, the MRW command 430 may be used to write data to one or more mode registers of a memory device as described herein.

[0086] In some cases, a memory device may transition directly between the active mode 410 and the low power mode 419. In cases where a memory device switches directly between the active mode 410 and the low power mode 419, the memory device may enter an active PD mode (e.g., active power down mode 325 described with reference to FIG. 3) and may not be able to enter a DS mode. For example, when a memory device is operating in the active mode 410 and receives the PDE command 425, the memory device may be configured to switch one or more memory banks to the active PD mode.

[0087] FIG. 5 illustrates an example of a process flow 500 that supports bank-configurable power modes in accordance with examples as disclosed herein. The process flow 500 may be performed by a memory device (e.g., the memory device 110, the memory dice 160 or the memory die 200 described with reference to FIGs. 1-2) or one or more components of a memory device such as the memory device controller 155, the local memory controllers 165 or the local memory controller 265 described with reference to FIGs. 1-2. The process flow 500 may illustrate command signals transmitted over one or more channels such as command and address (CA) bus 505 and data (DQ) bus 510, which may be examples of the CA channel 186, the DQ channel 190, or other channels 115 described with reference to FIG. 1. The process flow 500 may also illustrate a PD mode 515 and a DS mode 520, which may indicate when subsets of memory banks are operating in the PD mode and/or DS mode as described herein. The process flow 500 may illustrate a command sequence for switching different subsets of memory banks between different modes (e.g., an idle mode, an active mode, a PD mode, or a DS mode).

[0088] At a first time, the host device or memory device may determine to transition one or more memory banks to a low power mode. This may be triggered by a variety of factors
including an anticipated inactivity period at the memory banks, an inactivity time threshold, a host device command, power constraints/thresholds, or the like. In some cases, the memory device may store data on one or more mode registers that indicates an assignment of different memory banks to different low power modes. In some cases, the memory device may store this data at the mode registers prior to determining to enter a low power mode. For example, a memory device may write the data at start-up or based on receiving a command from a host device. In other cases, the memory device may store this data at the mode registers after determining to enter a low power mode, for example, in response to receiving a command to enter a low power mode or in response to another command from a host device.

[0089] A memory device may receive an MRW command 525 over the CA bus 505, which may be an example of the MRW command 430 described with reference to FIG. 4. In response to receiving the MRW command 525, the memory device may write, to one or more mode registers, power mode data (PMD) 530 indicating the assignment of different memory banks to different low power modes. In some cases, the PMD 530 may include one or more power mode bitmaps, which are described further in relation to FIGs. 6-8. In some examples, the memory device may receive the PMD 530 via the DQ bus 510 or other channels (e.g., CA bus 505) and write the PMD 530 to one or more mode registers of the memory device.

[0090] A determination may be made to enter a low power mode and the memory device may receive a PDE command 535 over the CA bus 505, which may be an example of the PDE command 425-a described with reference to FIG. 4. The memory device may access the mode register to determine what low power mode the first set of memory banks should be switched to. That is, the memory device may use data stored in the mode register to determine whether the first set of memory banks should be switched to the PD mode or the DS mode. In response to receiving the PDE command 535 and determining that the first set of memory banks should be switched to the PD mode, the memory device may switch a first set of memory banks to a PD mode 540 and a second set of memory banks to a DS mode 545, which may be examples of the PD and DS modes described herein. The first and second sets of memory banks may continue to operate in the respective PD and DS modes until the memory device receives one or more additional commands.

[0091] While operating in the low power mode, a host device (or memory device) may determine to perform one or more operations on a portion (subset) of the memory banks in the low power mode. The memory device may receive a PDX_SEL command 550 over the
CA bus 505, which may be an example of the selective exit command 427 described with reference to FIG. 4B. In response to receiving the PDX_SEL command 550, the memory device may switch the first set of memory banks from the PD mode 540 to an idle or active mode. The first set of memory banks may exit the PD mode in a first duration, which may be referred to as the PD exit time. In some examples, the PD exit time may be faster than a DS exit time. In some examples, the memory device may perform one or more operations at the first set of memory banks such as one or more access operations (e.g., read, write, etc.) while maintaining the second set of the memory banks in the DS mode. Accordingly, the second set of memory banks may continue to operate in a lower power consumption mode, while the first set of memory banks are operating in a higher power consumption mode.

[0092] After performing the operations on the first set of memory banks, a host device (or memory device) may determine to switch the first set of memory banks back to a low power mode. In some cases, the first set of banks will be switched to the PD mode to be able to access these banks in a shorter exit time (PD exit time) as compared to DS banks. In other cases, the first set of banks may be switched to the DS mode, which may decrease their power consumption compared to the PD mode but increase their exit time. The memory device may receive a PDE command 555 over the CA bus 505 and access the mode register to determine what low power mode the first set of memory banks should be switched to. That is, the memory device may use data stored in the mode register to determine whether the first set of memory banks should be switched to the PD mode or the DS mode. In response to receiving the PDE command 555 and determining that the first set of memory banks should be switched to the PD mode, the memory device may switch a first set of memory banks back to the PD mode 560.

[0093] At a later time, another determination may be made to switch the memory banks out of the low power mode. In some cases, the first set of memory banks may be switched out of the PD mode independently of the second set of memory banks in the DS mode as described above. In some cases, the second set of memory banks may independently be switched out of the DS mode. In some cases, both the first and second sets of memory banks may be switched out of the PD and DS modes using a single command (e.g., PDX_ALL). In one example, the memory device may receive a PDX_SEL command 565 and switch the first set of memory banks from the PD mode. The first set of memory banks may transition to an idle mode or active mode in the PD exit time. Additionally or alternatively, the memory
device may receive a PDX_ALL command 570 and switch the second set of memory banks from the DS mode. The second set of memory banks may transition to an idle or active mode in the DS exit time, which may be greater than the PD exit time. Accordingly, the first set of memory banks may be accessed within a quicker time than the second set of memory banks, even when a single PDX_ALL command is received, and the memory device initiates the switching procedure for both sets of memory banks at the same time.

[0094] The foregoing description of the process flow 500 in the context of a first set of memory banks and a second set of memory banks is presented to illustrate general concepts related to transitioning portions of a memory device to and from different low power modes. Accordingly, this description is not intended to be limiting, as these concepts apply to greater numbers of memory banks, different groups of memory banks, other memory device hierarchies such as memory dies, memory arrays, or other groupings of memory cells, or the like, or a combination thereof.

[0095] FIGs. 6A-6C illustrate an example of power mode data that supports bank-configurable power modes in accordance with examples as disclosed herein. The power mode data may include a set of bank mask variables 605 (which may collectively comprise a bank mask) and a set of bank group mask variables 610 (which may collectively comprise a bank group mask), and which in some cases may be written to one or more mode registers of a memory device. A memory device may determine (e.g., in response to a PDE command) the low power mode in which different memory banks 615 are to be operated based on the bank mask variables 605 and the bank group mask variables 610. For example, different memory banks 615 may be switched into different low power modes (e.g., the PD mode and DS modes of varying DS levels as described herein) based on the corresponding power mode data. The bank mask variables 605 and bank group mask variables 610 may be correlated with specific memory banks 615 of a memory device based on a mapping between the fields of the one or more mode registers and the memory banks 615.

[0096] FIG. 6A illustrates an example of power mode assignments 601 for a set of memory banks 615 based on corresponding bank mask variables 605 and bank group mask variables 610. FIG. 6B illustrates an example of bitmap formats 602 for writing the corresponding bank mask and bank group mask to one or more mode registers. FIG. 6C illustrates an example of bitmap data 603 that comprises the corresponding bank mask variables 605 and bank group mask variables 610 (indicating the power mode assignments
601 illustrated in FIG. 6A) as written to mode registers in accordance with the bitmap formats 602 illustrated in FIG. 6B. The power mode data illustrated in FIGs. 6A-6C may be utilized in accordance with the techniques described herein by a memory device (e.g., the memory device 110, the memory dice 160 or the memory die 200 described with reference to FIGs. 1-2) or one or more components of a memory device, such as the memory device controller 155, the local memory controllers 165, or the local memory controller 265 described with reference to FIGs. 1-2.

[0097] FIG. 6A illustrates an example of power mode assignments 601 for a set of memory banks 615 based on corresponding bank mask variables 605 and bank group mask variables 610. For example, a low power mode assigned to each memory bank 615 may be indicated by (and thus determined based on) one or more of a corresponding bank mask variable 605 and a corresponding bank group mask variable 610. In some cases, for a given memory bank 615, a corresponding bank group mask variable 610 may indicate whether (i) the memory bank 615 is to be operated in a first low power mode (e.g., if the corresponding bank group mask variable 610 is a first logic value, such as “0,” the memory bank may be operated in the DS mode) or (ii) the memory bank is to be operated in a power mode specified by a second corresponding variable (e.g., if the corresponding bank group mask variable 610 is a second logic value, such as “1,” the corresponding bank mask variable 605 may be evaluated to determine whether the memory bank is to be operated in the PD mode or DS mode).

[0098] In FIG. 6A, each column of memory banks 615 may correspond to a group of memory banks associated with a same bank group mask variable 610, and each row of memory banks 615 may be associated with a bank number (index) within a group of memory banks and thus a corresponding bank mask variable 605. Thus, each bank group mask variable 610 may be associated with a corresponding column of memory banks 615, and each bank mask variable 605 may be associated with a corresponding row of memory banks 615. It is to be understood that any number of groups of memory banks 615, each including any number of memory banks 615, may be used, and that the memory banks 615 and groups thereof need not be arranged in physical columns and rows as depicted in FIG. 6A.

[0099] As shown in the example of FIG. 6A, a first group of memory banks 615 may be associated with bank group mask variable 610-a (BG0), a second group of memory banks 615 may be associated with bank group mask variable 610-b (BG1), a third group of memory
banks 615 may be associated with bank group mask variable 610-c (BG2), and a fourth group of memory banks 615 may be associated with bank group mask variable 610-d (BG3). The second group of memory banks 615 may all be assigned the DS mode based on setting BG1 to “0.” The first, third, and fourth groups of memory banks 615 may be assigned low power modes based on corresponding bank mask variables 605 based on setting BG0, BG2, and BG3 to “1.” Within each of the first, third, and fourth groups of memory banks 615, memory banks 615 in a row for which the corresponding bank mask variable 605 is set to “0” may be assigned the DS mode, and memory banks 615 in a row for which the corresponding bank mask variable 605 is “1” may be assigned the PD mode.

[0100] The bank mask variables 605 and bank group mask variables 610 may alternatively be considered, explained, or evaluated as indicating, for each memory bank 615, a respective two-variable sequence in which the two variables collectively indicate the assigned low power mode (e.g., based on the combination of variables in the two-variable sequence). For example, a first variable of the two-variable sequence may be the corresponding bank group mask variable 610, and a second variable of the two-variable sequence may be the corresponding bank mask variable 605. Thus, in some cases, memory banks 615 associated with a 00, 01, or 10 sequence may be assigned to a first low power mode (e.g., a DS mode) and memory banks 615 associated with a 11 sequence may be assigned to a second low power mode (e.g., a PD mode). Each memory bank 615 may be associated (e.g., based on a mapping to a field of a mode register) with a corresponding bank mask variable 605 and a corresponding bank group mask variable 610.

[0101] By way of illustrative example, a first memory bank 615-a may be assigned a value for the first variable according to a fourth bank group mask variable 610-d (e.g., BG3=1) and a value for the second variable according to the first bank mask variable 605-a (e.g., B0=1). Accordingly, the two-variable sequence for the first memory bank 615-a is 11, which may indicate that the first memory bank 615-a is assigned the PD mode (e.g., is to be switched into the PD mode in response to a PDE command). A second memory bank 615-b may be assigned a value for the first variable according to the fourth bank group mask variable 610-d (e.g., BG3=1) and a value for the second variable according to a fifth bank mask variable 605-e (e.g., B4=0). Accordingly, the two-variable sequence for the second memory bank 615-b is 10, which may indicate that the second memory bank 615-b is assigned the DS mode (e.g., is to be switched into the DS mode in response to a PDE command).
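The evaluation just described reduces to a short rule, sketched below in Python under the FIG. 6A encoding (the 11 sequence selects the PD mode and every other sequence selects the DS mode); the function name is a hypothetical illustration.

```python
# Hypothetical sketch of the two-variable sequence rule from FIG. 6A.
def low_power_mode(bank_group_mask: int, bank_mask: int) -> str:
    """Sequences 00, 01, and 10 select the DS mode; 11 selects the PD mode."""
    return "PD" if bank_group_mask == 1 and bank_mask == 1 else "DS"

assert low_power_mode(1, 1) == "PD"  # first memory bank 615-a (BG3=1, B0=1)
assert low_power_mode(1, 0) == "DS"  # second memory bank 615-b (BG3=1, B4=0)
```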
[0102] FIG. 6B illustrates an example of bitmap formats 602 for writing the bank mask variables 605 (e.g., as a bank mask) and bank group mask variables 610 (e.g., as a bank group mask) to respective mode registers. The bank mask variables 605 and bank group mask variables 610 may be associated with specific addresses (fields, bit locations) in the mode register such that a memory device may correlate values stored in the mode register to specific memory banks 615. In some cases, the bitmap formats 602 may also include a format for storing data that indicates the DS level (out of multiple possible DS levels) into which the memory device is to switch the memory banks 615 assigned the DS mode. For example, the data that indicates the DS level may indicate whether memory banks assigned the DS mode are to be switched to a first, second, or third DS level (e.g., DS level 320-a, DS level 320-b, or DS level 320-c as described with reference to FIG. 3).

[0103] In some cases, the PMD may include a first register entry 620 (e.g., PMD[0]), which may include a first set of values (B0-B7) each written to a specific register field (e.g., one of register fields 0-7). A memory device may be configured to identify that the values in the first register entry 620 correspond to the bank mask variables 605. The memory device may also be configured to identify that each register field (0-7) within the first register entry corresponds to a specific bank mask variable 605 value. For example, register field 0 includes the B0 value, register field 1 includes the B1 value, and so on. Accordingly, the memory device may access PMD data including the first register entry 620 and determine a value of a bank mask variable 605 (or additionally or alternatively, a value of a second variable in a two-variable sequence) for each memory bank 615.

[0104] The PMD may also include a second register entry 625 (e.g., PMD[1]), which may include a second set of values (BG0-BG3) each written to a specific register field (e.g., 0-7). A memory device may be configured to identify that the values in the second register entry 625 correspond to the bank group mask variables 610. The memory device may also be configured to identify that register fields (0-3) within the second register entry correspond to a specific bank group mask variable 610 value. For example, register field 0 includes the BG0 value, register field 1 includes the BG1 value, and so on. Accordingly, the memory device may access PMD including the second register entry 625 and determine a value of a bank group mask variable 610 (or additionally or alternatively, a value of a first variable in a two-variable sequence) for each memory bank 615.
[0105] The memory device may identify a low power mode (e.g., PD mode or DS mode) that each memory bank 615 should be switched into based on the corresponding bank group mask variable 610 and the corresponding bank mask variable 605 (e.g., first and second variables) stored in the PMD and the configured mapping between those variables and the memory banks 615 (e.g., as discussed in relation to FIG. 6A).

[0106] In some cases, a third mode register may contain a DS sequence, which may comprise an indication of the DS level into which the memory device is to switch the DS memory banks 615. This may be an option when a memory device supports multiple DS levels, such as DS levels 320 described with reference to FIG. 3. For example, a first DS level (e.g., 320-a) may be associated with a first DS sequence (e.g., 01), a second DS level (e.g., 320-b) may be associated with a second DS sequence (e.g., 10), and a third DS level may be associated with a third DS sequence (e.g., 11). To identify the DS level, a third register entry 630 may contain a set of values corresponding to one of the DS sequences. Accordingly, the PMD may include the third register entry 630 (e.g., PMD[2]), which may include a third set of values (DS_Level[1] and DS_Level[0]) each written to a specific register field (e.g., 0-1). A memory device may be configured to identify the values in the third register entry 630 as corresponding to the different DS sequences. For example, the first value of the DS sequence may be associated with register field 0 and the second value of the DS sequence may be associated with register field 1. Accordingly, the memory device may access PMD including a specific DS level associated with the DS memory banks.

[0107] FIG. 6C illustrates an example of bitmap values 603 stored to mode registers in accordance with the example bitmap formats 602 illustrated in FIG. 6B and that indicate the example power mode assignments 601 illustrated in FIG. 6A. For example, the first register entry 620 (1, 1, 1, 1, 0, 0, 0, 0) corresponds to the example bank mask variable 605 values (B0=1, B1=1, B2=1, B3=1, B4=0, B5=0, B6=0, B7=0), and the second register entry 625 (1, 0, 1, 1) corresponds to the example bank group mask 610 values (BG0=1, BG1=0, BG2=1, BG3=1). Accordingly, a memory device may be configured to access the mode registers, identify the stored bitmap values, and thereby determine which memory banks 615 should be switched into which low power mode (along with which DS level to use for the memory banks 615 assigned the DS mode).
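Applying the FIG. 6B register layout to the FIG. 6C values can be sketched as follows. The list-of-lists register model is an assumption for illustration, and the PMD[2] DS sequence value is assumed (the text does not give FIG. 6C's DS level bits).

```python
# Hypothetical sketch: decode the FIG. 6C bitmap values into per-bank modes.
PMD = [
    [1, 1, 1, 1, 0, 0, 0, 0],  # PMD[0]: bank mask values B0..B7
    [1, 0, 1, 1],              # PMD[1]: bank group mask values BG0..BG3
    [0, 1],                    # PMD[2]: assumed DS sequence (e.g., a first DS level)
]

assignments = {}
for bg_index, bg_value in enumerate(PMD[1]):    # one bank group per column
    for b_index, b_value in enumerate(PMD[0]):  # one bank (row) per group
        mode = "PD" if bg_value == 1 and b_value == 1 else "DS"
        assignments[f"BG{bg_index}_B{b_index}"] = mode

print(assignments["BG1_B0"])  # 'DS' -- BG1=0 places the whole group in DS mode
print(assignments["BG3_B0"])  # 'PD' -- BG3=1 and B0=1
```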
[0108] The example presented in FIG. 6 provides an example of a mapping that groups multiple memory banks together for the purpose of assigning a variable sequence. Such a method may allow a fewer number of variables (e.g., 12 mode register values) to assign low power modes to a greater number of banks (e.g., 32 banks). In some cases, such as larger memory arrays, this method may provide a solution for storing a relatively small number of variables in mode registers, while still allowing flexibility in terms of assigning different low power modes to different subsets of memory banks 615. The examples presented in FIGs. 7 and 8 respectively illustrate methods for individually assigning each memory bank 615 to a different low power mode (e.g., either a PD mode or a DS mode) and methods for further individually assigning each memory bank 615 that is assigned a DS mode to a specific DS level (e.g., DS levels 320). Accordingly, these methods may provide a greater granularity of control over the low power mode assigned to each memory bank 615 but may store a greater number of variables at one or more mode registers. In some cases, the methods of FIGs. 6-8 may be combined, modified, or otherwise adapted to provide different ways of assigning different low power modes to different memory banks. It is to be understood that different amounts of data (e.g., different numbers of bits within a mode register) may be dedicated to indicating low power mode assignments to memory banks or other portions of a memory device with tradeoffs between granularity and flexibility of control and associated overhead. It is further to be understood that register entries as described herein may be stored in any number of mode registers.

[0109] FIGs. 7A-7C illustrate an example of a power mode bitmap 703 that supports bank-configurable power modes in accordance with examples as disclosed herein. The power mode bitmap 703 may comprise PMD written to one or more mode registers of a memory device, where the PMD indicates an assignment of different memory banks to different low power modes. In the example of FIG. 7, the power mode bitmap 703 may include a unique value for each memory bank of a memory device. A first value (e.g., 0) may be associated with a first low power mode (e.g., a DS mode) and a second value (e.g., 1) may be associated with a second low power mode (e.g., a PD mode). Accordingly, each memory bank can either be assigned to a PD mode or a DS mode based on a mode register value associated with each memory bank. In some cases, the power mode bitmap 703 may include DS level data for assigning one of multiple different DS levels to the DS mode memory banks.

[0110] FIG. 7A illustrates an example of memory bank assignments 701 that associate each memory bank 715 with a low power mode. Each memory bank 715 may be associated with a bank mask address 705 and a bank group mask address 710. For example, a first
memory bank 715-a may have a unique address corresponding to a first bank mask address 705-a (B0) and a fourth bank group mask address 710-d (BG3). That is, the first memory bank 715-a may be associated with the unique address BG3_B0. By way of another example, a second memory bank 715-b may have a second unique address corresponding to a fifth bank mask address 705-e (B4) and the fourth bank group mask address 710-d (BG3). Accordingly, the second memory bank 715-b may be associated with the unique address BG3_B4. Each memory bank of a memory device may be associated with a unique address. The unique address may be used to associate each memory bank with a different mode register value that is used to indicate a low power mode for each memory bank.

[0111] FIG. 7B illustrates a memory bank association 702 that correlates each unique memory bank address to a specific field in the mode register that may be used to store a value indicating a low power mode for the corresponding memory bank. For example, a first register entry 720 (PMD[0]) may include a unique register field for each memory bank in a first memory bank group 710-a (BG0). Further, a first register field (0) may be associated with unique bank address BG0_B0, and a second register field (1) may be associated with unique bank address BG0_B1, such that each memory bank in the first bank group 710-a (BG0) is assigned to a different register field. In some examples, each register entry 725, 730, and 735 (PMD[1], PMD[2], and PMD[3]) may include a unique register field for each memory bank in their respective groups. Accordingly, a memory device may be configured to associate different mode register locations with a different memory bank.

[0112] In some cases, the memory bank association 702 may include a fifth register entry 740 (PMD[4]), which may be used to store values that indicate a DS level for memory banks assigned to the DS mode. In some cases, a single DS level may be specified by the values stored in the fifth register entry 740, which may be an example of the DS levels discussed in relation to FIG. 6.

[0113] FIG. 7C illustrates an example of a power mode bitmap 703 stored at the mode register that corresponds to the assignment of memory banks to low power modes illustrated in FIG. 7A. For example, the values in the first register entry 720 (e.g., 0, 1, 0, 0, 1, 1, 1, 1) each correspond to a different memory bank in the first bank group 710-a (BG0). The values in the second register entry 725 (e.g., 0, 1, 0, 1, 0, 0, 1, 0) each correspond to a different memory bank in the second bank group 710-b (BG1). The third register entry 730 and fourth register entry 735 may contain values that each correspond to different memory banks in the third bank group 710-c (BG2) and fourth bank group 710-d (BG3), respectively. A memory device may be configured to associate a first mode register value (e.g., 0) with a first low power mode and a second register value (e.g., 1) with a second low power mode. In the illustrated example, the first register value 0 is associated with the DS mode and the second register value 1 is associated with the PD mode. In this regard, a memory device may be configured to access the power mode bitmap 703 and determine a low power mode for each memory bank. In some cases, the memory device may access the fifth register entry 740 to determine which DS power mode (e.g., DS level) memory banks in the DS mode should be operated in. For example, a first set of values (e.g., 0, 1) may correspond to a first DS level, a second set of values (e.g., 1, 0) may correspond to a second DS level, and a third set of values may correspond to a third DS level.
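A minimal Python sketch of the FIG. 7 scheme follows: one register bit per bank, where (per the illustrated example) 0 selects the DS mode and 1 selects the PD mode, with a separate entry picking a single DS level for all DS banks. The dictionary model and the PMD[4] value are assumptions.

```python
# Hypothetical sketch: one mode-register bit per bank, per the FIG. 7C values.
PMD = {
    "PMD[0]": [0, 1, 0, 0, 1, 1, 1, 1],  # one bit per bank in group BG0
    "PMD[4]": [0, 1],                    # assumed DS level selector for DS banks
}

bg0_modes = ["DS" if bit == 0 else "PD" for bit in PMD["PMD[0]"]]
print(bg0_modes)  # ['DS', 'PD', 'DS', 'DS', 'PD', 'PD', 'PD', 'PD']
```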
[0114] FIGs. 8A-8C illustrate an example of a power mode bitmap 803 that supports bank-configurable power modes in accordance with examples as disclosed herein. The power mode bitmap 803 may comprise PMD written to one or more mode registers of a memory device, where the PMD indicates an assignment of different memory banks to different low power modes. In some cases, the power mode bitmap may also indicate a different DS level for each memory bank that is assigned a DS mode. In the example of FIG. 8, the power mode bitmap 803 may include two mode register fields for each memory bank of a memory device. The values stored in the two mode register fields may uniquely indicate one of multiple different low power modes. For example, (i) if the two fields associated with a memory bank store a 00 sequence, the memory bank may be assigned to a PD mode, (ii) if the two fields store a 01, the memory bank may be assigned to a DS mode at DS level 1, (iii) if the two fields store a 10, the memory bank may be assigned to a DS mode at DS level 2, and (iv) if the two fields store a 11, the memory bank may be assigned to a DS mode at DS level 3. Accordingly, each memory bank may be individually assigned a low power mode and a DS level based on the two fields in the mode register that are associated with each memory bank.

[0115] FIG. 8A illustrates an example of memory bank assignments 801 that associate each memory bank 815 with a low power mode and a DS level for DS memory banks. Each memory bank may be associated with a unique bank address such as described in relation to FIG. 7. Additionally, each memory bank may be associated with a DS level, which may be used for memory banks assigned to the DS mode. For example, a first memory bank 815-a may have a first unique bank address that corresponds to the PD mode. By way of another example, a
second memory bank 815-b may have a second unique bank address corresponding to a DS mode and a DS level 2.

[0116] FIG. 8B illustrates a memory bank association 802 that correlates each unique bank address to a specific low power mode field and a DS level field by correlating each memory bank to two fields in the mode register. For example, a first register entry (PMD[0]) may include a first register field (e.g., BG0_B0_0) and a second register field (e.g., BG0_B0_1) correlating to each memory bank. The combination of the first register field and the second register field may be used to differentiate between multiple different low power modes. For example, two register fields may be able to indicate four different low power states using binary variables (e.g., a different power mode indicated by each unique variable combination: 00, 01, 10, 11).

[0117] FIG. 8C illustrates an example of a power mode bitmap 803 stored at the mode register that corresponds to the assignment of memory banks to low power modes and DS levels illustrated in FIG. 8A. For example, the first register entry (e.g., 0, 0, 0, 0, 0, 0, 0, 0) has a first value (BG0_B0_0 = 0) and a second value (BG0_B0_1 = 0) that correspond to a low power mode for a memory bank (the BG0_B0 memory bank). Accordingly, a memory device may be configured to associate a sequence of mode register fields with one of a multitude of low power modes, which may include different DS levels for memory banks operating in a DS mode.
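The two-field decode of FIG. 8 can be sketched as a four-entry lookup; the field-to-digit ordering and the tuple-keyed dictionary are illustrative assumptions.

```python
# Hypothetical sketch: two register fields per bank jointly select among
# four options, per the sequence assignments described for FIG. 8.
TWO_FIELD_MODES = {
    (0, 0): "PD",
    (0, 1): "DS level 1",
    (1, 0): "DS level 2",
    (1, 1): "DS level 3",
}

def decode_bank(field_0: int, field_1: int) -> str:
    """Map a bank's two mode-register fields to its low power mode."""
    return TWO_FIELD_MODES[(field_0, field_1)]

print(decode_bank(0, 0))  # 'PD' -- e.g., the BG0_B0 memory bank in FIG. 8C
print(decode_bank(1, 0))  # 'DS level 2'
```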
[0118] FIG. 9 illustrates an example of a command mode state diagram 900 that supports bank-configurable power modes in accordance with examples as disclosed herein. The features of the command mode state diagram 900 may be performed by a memory device (e.g., the memory device 110, the memory dice 160 or the memory die 200 described with reference to FIGs. 1-2) or one or more components of a memory device such as the memory device controller 155, the local memory controllers 165 or the local memory controller 265 described with reference to FIGs. 1-2. The command mode state diagram 900 may illustrate one or more commands 915, 920 used to switch a memory device between an idle mode 905, which may be an example of the idle mode 305 described with reference to FIG. 3, and a low power mode 910, which may be an example of the low power modes 315, 320 (e.g., PD mode or DS modes) described with reference to FIG. 3.

[0119] In some cases, a memory device may be configured via a power down mode (PDM) command 915 that switches one or more banks, bank groups, bank ranges, or the like to one or more low power modes. In some examples, the PDM command 915 may switch memory banks to the low power mode without accessing PMD (e.g., a power mode bitmap) stored in a mode register. That is, the PDM command 915 may include information identifying the one or more memory banks to be switched to the low power mode (e.g., while other memory banks may be maintained in whichever mode they were operating at the time the PDM command is received). In some examples, the PDM command 915 may also indicate which low power mode (e.g., PD mode, DS mode, or DS level) the memory banks are to be switched into.

[0120] In some examples, the PDM command 915 may switch a single memory bank into a low power mode by specifying a memory bank identifier (e.g., memory bank address) and a low power mode (e.g., PD mode or DS mode). A memory device that receives this command may be configured to identify the memory bank and low power mode indicated in the command and switch that memory bank to the designated low power mode.

[0121] In some examples, the PDM command 915 may switch a group of memory banks into a low power mode. The PDM command 915 may include a memory bank group address that is associated with a group of memory banks at a memory device and a low power mode. A memory device may be configured to switch memory banks associated with the memory bank group address into the designated low power mode.

[0122] In other examples, the PDM command 915 may switch a range of memory banks into a low power mode. The PDM command 915 may include a first memory bank address that designates a first memory bank in the range, a last memory bank address that designates a last memory bank in the range, and a low power mode. A memory device may identify a range of memory banks that includes the memory bank associated with the first address, the memory bank associated with the last address, and any memory banks with addresses that fall between the first and last addresses. The memory device may switch the range of memory banks into the low power mode designated by the PDM command 915.
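The three PDM addressing variants just described (single bank, bank group, and bank range) can be sketched as one resolver; the command encoding, the dictionary keys, and the four-banks-per-group assumption are all illustrative, not a disclosed command format.

```python
# Hypothetical sketch: resolve a PDM command's target to a list of banks.
def banks_targeted_by_pdm(command: dict) -> list:
    if "bank" in command:        # single-bank variant
        return [command["bank"]]
    if "bank_group" in command:  # bank-group variant (assumes 4 banks per group)
        return [command["bank_group"] * 4 + i for i in range(4)]
    # range variant: first bank, last bank, and every bank in between
    return list(range(command["first_bank"], command["last_bank"] + 1))

print(banks_targeted_by_pdm({"bank": 5, "mode": "DS"}))        # [5]
print(banks_targeted_by_pdm({"bank_group": 1, "mode": "PD"}))  # [4, 5, 6, 7]
print(banks_targeted_by_pdm({"first_bank": 2, "last_bank": 6, "mode": "DS"}))  # [2, 3, 4, 5, 6]
```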
[0123] The memory device may be configured via a PDM exit command 920 that switches one or more portions of the memory device out of the low power mode 910. For example, the PDM exit command 920 may be configured to identify memory banks that are to be switched to the idle mode 905. Memory banks indicated in the PDM exit command 920 that are operating in one or more low power modes may switch to the idle mode 905 within the exit time associated with their low power mode. In some cases, the PDM exit command 920 and the PDM command 915 may be implemented as a single command that includes a variable in which a value of the variable indicates whether the command comprises a PDM command 915 (to enter a low power mode) or a PDM exit command 920 (to exit a low power mode).[0124] In some examples, the PDM exit command 920 may switch a single memory bank out of a low power mode by specifying a memory bank identifier (e.g., memory bank address). A memory device that receives this command may be configured to identify the memory bank and switch that memory bank to the idle mode 905 or other mode.[0125] In some examples, the PDM exit command 920 may include a memory bank group address that is associated with a group of memory banks at a memory device. A memory device may be configured to switch memory banks associated with the memory bank group address out of the one or more low power modes.[0126] In other examples, the PDM exit command 920 may include a first memory bank address that designates a first memory bank in the range and a last memory bank address that designates a last memory bank in the range. A memory device may identify a range of memory banks that includes the memory bank associated with the first address, the memory bank associated with the last address, and any memory banks with addresses that fall between the first and last addresses. The memory device may switch the range of memory banks out of the one or more low power modes designated by the PDM exit command 920. In some cases, the PDM exit command 920 could be configured with a variable that switches all of the memory banks out of a low power mode. For example, the PDM exit command 920 could include an “all” variable to indicate that each memory bank operating in a low power mode is to be switched to a different mode (e.g., an idle mode). Thus, one or more parameters (e.g., variables) included in a command may indicate the action and the address (of one or more memory banks) corresponding to the command.
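The single-command variant and the “all” variable described above might be modeled as in the sketch below. The field names (enter, all_banks) and the idle encoding are illustrative assumptions, not a specified command format.

```c
#include <stdbool.h>

/* Hypothetical combined command: one opcode, a variable selecting
 * enter vs. exit ([0123]) and an "all" variant on exit ([0126]). */
struct pdm_cmd {
    bool enter;      /* true: enter a low power mode; false: exit */
    bool all_banks;  /* exit variant: release every low-power bank */
    /* ... target addressing and mode fields as sketched earlier ... */
};

enum { BANK_IDLE = 0 };  /* assumed encoding: nonzero = a low power mode */

/* Apply an exit command: only the addressed (or, with all_banks set,
 * every) bank currently in a low power mode returns to idle; all
 * other banks keep their current mode. */
static void pdm_apply_exit(const struct pdm_cmd *cmd,
                           const bool selected[], int bank_mode[], int n)
{
    for (int i = 0; i < n; i++)
        if (bank_mode[i] != BANK_IDLE && (cmd->all_banks || selected[i]))
            bank_mode[i] = BANK_IDLE;
}
```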
[0127] FIG. 10 illustrates an example of a power level consumption profile 1000 that supports bank-configurable power modes in accordance with examples as disclosed herein. The power level consumption profile 1000 may provide a relative estimate of current use at a memory device based on the number of banks in a PD mode and the number of banks in a DS mode. The power level consumption profile 1000 provides an example for a memory device with thirty-two banks; however, this is provided to illustrate the concepts, and other quantities of memory banks are possible. The power level consumption profile 1000 may illustrate a relative current use for a memory device operating in one or more low power modes as described herein, such as the memory device 110, the memory dice 160 or the memory die 200 described with reference to FIGs. 1-2, or one or more components of a memory device such as the memory device controller 155, the local memory controllers 165 or the local memory controller 265 described with reference to FIGs. 1-2.[0128] The power level consumption profile 1000 may relate a relative level of current consumption 1005 (y-axis) to the number of memory banks operating in a PD mode 1010 (x-axis). If all of the memory banks (e.g., 32 memory banks) are operating in the PD mode, then the relative current consumption 1005 at the memory device may be classified at 100%, and if none of the memory banks are operating in the PD mode (e.g., all memory banks are in a DS mode), then the relative current consumption may be 40%. In some cases, different ratios of memory banks operating in the PD mode and the DS mode may be desired, for example, to balance latency (exit time from a low power mode) and power consumption. The current consumption indicator 1015 may characterize the relationship between the ratio of banks in the PD and DS modes and the current consumption. For example, a first index point 1020 may relate the relative current consumption 1005 for a ratio that includes nine memory banks operating in the PD mode and twenty-three memory banks operating in the DS mode. Accordingly, if the memory device is operating nine memory banks in the PD mode and twenty-three banks in the DS mode, the relative current consumption may be 60%.[0129] The memory device may be configured with a power level consumption profile 1000 to determine how many memory banks to operate in each of the PD and DS modes. For example, if the memory device determines to operate at a 60% relative current consumption 1005, then the memory device would be able to determine that nine banks should be operated in the PD mode and twenty-three banks should be operated in the DS mode.
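A first-order numeric sketch of such a profile follows, assuming the relative current scales linearly between 40% (all banks in DS) and 100% (all banks in PD). Note that the figure's first index point (nine PD banks at roughly 60%) sits slightly above this linear estimate, so the actual indicator 1015 need not be exactly linear; treat this as an assumed model, not the disclosed curve.

```c
/* Linear estimate of relative current (%) from the PD/DS split. */
static double relative_current_pct(unsigned banks_in_pd, unsigned total_banks)
{
    const double ds_floor = 40.0;    /* all banks in DS mode  */
    const double pd_ceiling = 100.0; /* all banks in PD mode  */
    return ds_floor + (pd_ceiling - ds_floor) * banks_in_pd / total_banks;
}
/* relative_current_pct(9, 32) == 56.875, near the ~60% index point. */
```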
[0130] FIG. 11 shows a block diagram 1100 of a memory device 1105 that supports bank-configurable power modes in accordance with examples as disclosed herein. The memory device 1105 may be an example of aspects of a memory device as described with reference to FIGs. 1-10. The memory device 1105 may include an operating mode manager 1110, a command processing component 1115, and a power mode manager 1120. Each of these modules may communicate, directly or indirectly, with one another (e.g., via one or more buses).[0131] The operating mode manager 1110 may operate a memory device in a first mode, the memory device including a set of memory banks. In some examples, the operating mode manager 1110 may operate, based on receiving the command and the information, the first memory bank in the first low power mode and the second memory bank in the second low power mode. In some examples, the operating mode manager 1110 may operate a set of memory banks in respective first modes, where the set of memory banks are within a memory device. In some examples, the operating mode manager 1110 may perform an access operation on the first memory bank while the second memory bank is in the second low power mode. In some examples, the operating mode manager 1110 may perform one or more access operations on the first subset of memory banks after switching the first subset of memory banks out of the first low power mode.[0132] The command processing component 1115 may receive, while operating the memory device in the first mode, a command for the memory device to enter a second mode corresponding to less power consumption by the memory device than the first mode. In some examples, the command processing component 1115 may receive, at the memory device, a command to reduce a level of power consumption for the memory device. In some examples, the command processing component 1115 may receive, at the memory device while operating the set of memory banks in the respective first modes, signaling that indicates to operate a first memory bank of the set in a second mode corresponding to a lower power consumption level than a respective first mode for the first memory bank. In some examples, the command processing component 1115 may receive a command for a memory device to enter a reduced power mode from a first power mode. In some examples, the command processing component 1115 may receive, while the first memory bank is in the first low power mode and the second memory bank is in the second low power mode, an exit command associated with the first low power mode.[0133] In some examples, the command processing component 1115 may receive, while operating the memory device in the second mode, a second command to switch the first subset of memory banks from the first low power mode to the first mode. In some examples, the command processing component 1115 may receive, while operating the memory device in the second mode, a third command for the memory device to exit the second mode.[0134] In some examples, the command processing component 1115 may access the one or more mode registers based on receiving the command for the memory device to enter the
second mode. In some examples, the command processing component 1115 may receive, at the memory device, second signaling that indicates to operate a third memory bank of the set in a third mode included in the set of low power modes. In some examples, the command processing component 1115 may receive, after switching the first memory bank out of the first low power mode, a second command for the memory device to enter the reduced power mode. In some examples, the command processing component 1115 may receive, while the first memory bank is in the first low power mode and the second memory bank is in the second low power mode, a command for the memory device to exit the reduced power mode. In some examples, the command processing component 1115 may receive, while the first memory bank is not in the first low power mode and the second memory bank is in the second low power mode, a command for the memory device to exit the reduced power mode.[0135] In some cases, the signaling includes an indication of a selected low power mode from the set of low power modes, the selected low power mode being the second mode. In some cases, the signaling includes an identifier specific to the first memory bank. In some cases, the signaling includes an identifier of a group of banks that includes the first memory bank. In some cases, the signaling includes one or more identifiers corresponding to a range of bank addresses that includes a bank address for the first memory bank.[0136] The power mode manager 1120 may switch, based on receiving the command for the memory device to enter the second mode, the memory device into the second mode by switching a first subset of memory banks of the set to a first low power mode corresponding to a first power consumption level and a second subset of memory banks of the set to a second low power mode corresponding to a second power consumption level that is lower than the first power consumption level. In some examples, the power mode manager 1120 may write, to one or more mode registers of a memory device, information that assigns a first low power mode to a first memory bank of the memory device and a second low power mode to a second memory bank of the memory device. In some examples, the power mode manager 1120 may switch, based on receiving the signaling, the first memory bank from the respective first mode for the first memory bank to the second mode while maintaining a second memory bank of the set in a respective first mode for the second memory bank.[0137] In some examples, the power mode manager 1120 may switch a first memory bank of the memory device to a first low power mode based on receiving the command, the first low power mode associated with a first power consumption level. In some examples, the
power mode manager 1120 may switch a second memory bank of the memory device to a second low power mode based on receiving the command, the second low power mode associated with a second power consumption level that is lower than the first power consumption level. In some examples, the power mode manager 1120 may switch, based on receiving the exit command, the first memory bank out of the first low power mode while maintaining the second memory bank in the second low power mode. In some examples, the power mode manager 1120 may switch, based on receiving the second command, the first subset of memory banks out of the first low power mode.[0138] In some examples, the power mode manager 1120 may maintain the second subset of memory banks in the second low power mode while switching the first subset of memory banks out of the first low power mode. In some examples, the power mode manager 1120 may maintain the second subset of memory banks in the second low power mode while performing the one or more access operations on the first subset of memory banks. In some examples, the power mode manager 1120 may switch, based on receiving the second command, the memory device out of the second mode by switching the first subset of memory banks out of the first low power mode and the second subset of memory banks out of the second low power mode. In some examples, the power mode manager 1120 may receive an indication of the second power consumption level, where the second power consumption level corresponds to one of a set of power consumption levels supported by the memory device for the second low power mode.[0139] In some examples, the power mode manager 1120 may receive information indicating an assignment of the first low power mode to the first subset of memory banks and the second low power mode to the second subset of memory banks. In some examples, the power mode manager 1120 may write an indication of the assignment to one or more mode registers. In some examples, the power mode manager 1120 may identify the first low power mode for the first subset of memory banks and the second low power mode for the second subset of memory banks based on the accessing, where the switching the first subset of memory banks to the first low power mode and the second subset of memory banks to the second low power mode is based on the identifying.[0140] In some examples, the power mode manager 1120 may write an indication of the power consumption level associated with the second low power mode to the one or more mode registers. In some examples, the power mode manager 1120 may read, based on
receiving the command, the one or more mode registers. In some examples, the power mode manager 1120 may determine to operate the first memory bank in the first low power mode and the second memory bank in the second low power mode based on reading the one or more mode registers, where the operating is based on the determining. In some examples, the power mode manager 1120 may write a first set of values and a second set of values, where each of a set of memory banks included in the memory device is associated with a corresponding low power mode based on a respective combination of a first value from the first set of values and a second value from the second set of values.[0141] In some examples, the power mode manager 1120 may write an indication of a power consumption level associated with the second low power mode. In some examples, the power mode manager 1120 may write, for each of a set of memory banks included in the memory device, a respective indication of the first low power mode or the second low power mode. In some examples, the power mode manager 1120 may write, for each of a set of memory banks included in the memory device, a respective indication of one of a set of low power modes, the set of low power modes including the first low power mode, the second low power mode with a first power consumption level, and the second low power mode with a second power consumption level.[0142] In some examples, the power mode manager 1120 may switch, based on receiving the second signaling, the third memory bank from a respective first mode for the third memory bank to the third mode while maintaining the first memory bank in the second mode. In some examples, the power mode manager 1120 may switch the first memory bank to the first low power mode based on receiving the second command. In some examples, the power mode manager 1120 may switch the first memory bank out of the first low power mode and the second memory bank out of the second low power mode based on receiving the command for the memory device to exit the reduced power mode.[0143] In some examples, the power mode manager 1120 may, based on the command for the memory device to exit the reduced power mode, make the first memory bank available for access before the second memory bank is available for access. In some examples, the power mode manager 1120 may switch the second memory bank out of the second low power mode based on receiving the command for the memory device to exit the reduced power mode.
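Tying the pieces together, a hedged sketch of the register-driven entry flow the power mode manager implements: on a command to reduce power, each bank is driven to the mode assigned to it in the mode registers. This reuses the hypothetical pmd_get() and mode codes from the earlier sketch (which also supplies the stdint.h include); the bank_state_t names are assumptions.

```c
/* Assumed per-bank operating states for this sketch. */
typedef enum { BANK_IDLE, BANK_PD, BANK_DS1, BANK_DS2, BANK_DS3 } bank_state_t;

/* On a reduce-power command, apply the previously written PMD
 * assignment: each bank enters the mode its two-bit field names. */
static void on_reduce_power_command(const uint32_t *mode_regs,
                                    bank_state_t state[], unsigned n_banks)
{
    for (unsigned b = 0; b < n_banks; b++) {
        switch (pmd_get(mode_regs, b)) {
        case MODE_PD:   state[b] = BANK_PD;  break;
        case MODE_DS_1: state[b] = BANK_DS1; break;
        case MODE_DS_2: state[b] = BANK_DS2; break;
        case MODE_DS_3: state[b] = BANK_DS3; break;
        }
    }
}
```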
[0144] In some cases, the first low power mode corresponds to a quicker wakeup time than the second low power mode. In some cases, the indication of the assignment includes one or more bitmaps that associate the first subset of memory banks with the first low power mode and the second subset of memory banks with the second low power mode. In some cases, the second mode is one of a set of low power modes supported by the memory device for the set of memory banks, each of the set of low power modes corresponding to a respective power consumption level that is lower than a power consumption level corresponding to an idle mode supported by the memory device for the set of memory banks.[0145] FIG. 12 shows a flowchart illustrating a method or methods 1200 that supports bank-configurable power modes in accordance with aspects of the present disclosure. The operations of method 1200 may be implemented by a memory device or its components as described herein. For example, the operations of method 1200 may be performed by a memory device as described with reference to FIG. 11. In some examples, a memory device may execute a set of instructions to control the functional elements of the memory device to perform the described functions. Additionally or alternatively, a memory device may perform aspects of the described functions using special-purpose hardware.[0146] At 1205, the memory device may operate a memory device in a first mode, the memory device including a set of memory banks. The operations of 1205 may be performed according to the methods described herein. In some examples, aspects of the operations of 1205 may be performed by an operating mode manager as described with reference to FIG. 11.[0147] At 1210, the memory device may receive, while operating the memory device in the first mode, a command for the memory device to enter a second mode corresponding to less power consumption by the memory device than the first mode. The operations of 1210 may be performed according to the methods described herein. In some examples, aspects of the operations of 1210 may be performed by a command processing component as described with reference to FIG. 11.[0148] At 1215, the memory device may switch, based on receiving the command for the memory device to enter the second mode, the memory device into the second mode by switching a first subset of memory banks of the set to a first low power mode corresponding to a first power consumption level and a second subset of memory banks of the set to a second low power mode corresponding to a second power consumption level that is lower
than the first power consumption level. The operations of 1215 may be performed according to the methods described herein. In some examples, aspects of the operations of 1215 may be performed by a power mode manager as described with reference to FIG. 11.[0149] In some examples, an apparatus as described herein may perform a method or methods, such as the method 1200. The apparatus may include features, means, or instructions (e.g., a non-transitory computer-readable medium storing instructions executable by a processor) for operating a memory device in a first mode, the memory device including a set of memory banks, receiving, while operating the memory device in the first mode, a command for the memory device to enter a second mode corresponding to less power consumption by the memory device than the first mode, and switching, based on receiving the command for the memory device to enter the second mode, the memory device into the second mode by switching a first subset of memory banks of the set to a first low power mode corresponding to a first power consumption level and a second subset of memory banks of the set to a second low power mode corresponding to a second power consumption level that is lower than the first power consumption level.[0150] Some examples of the method 1200 and the apparatus described herein may further include operations, features, means, or instructions for receiving, while operating the memory device in the second mode, a second command to switch the first subset of memory banks from the first low power mode to the first mode, and switching, based on receiving the second command, the first subset of memory banks out of the first low power mode.[0151] Some examples of the method 1200 and the apparatus described herein may further include operations, features, means, or instructions for maintaining the second subset of memory banks in the second low power mode while switching the first subset of memory banks out of the first low power mode.[0152] Some examples of the method 1200 and the apparatus described herein may further include operations, features, means, or instructions for performing one or more access operations on the first subset of memory banks after switching the first subset of memory banks out of the first low power mode, and maintaining the second subset of memory banks in the second low power mode while performing the one or more access operations on the first subset of memory banks.
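The staged wakeup just described, releasing only the quick-exit subset and accessing it while the deeper-sleep subset stays down, can be sketched as below. bank_read stands in for whatever access path the device exposes and is an assumption for illustration; bank_state_t is from the earlier sketch.

```c
/* Wake only the PD subset (shorter exit time) and access it; DS
 * banks keep their state throughout, per paragraphs [0150]-[0152]. */
static void wake_pd_subset_and_access(bank_state_t state[], unsigned n_banks,
                                      void (*bank_read)(unsigned bank))
{
    /* First subset leaves the first low power mode. */
    for (unsigned b = 0; b < n_banks; b++)
        if (state[b] == BANK_PD)
            state[b] = BANK_IDLE;

    /* Access the released banks while the DS subset remains asleep. */
    for (unsigned b = 0; b < n_banks; b++)
        if (state[b] == BANK_IDLE)
            bank_read(b);
}
```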
[0153] Some examples of the method 1200 and the apparatus described herein may further include operations, features, means, or instructions for receiving, while operating the memory device in the second mode, a third command for the memory device to exit the second mode, and switching, based on receiving the third command, the memory device out of the second mode by switching the first subset of memory banks out of the first low power mode and the second subset of memory banks out of the second low power mode.[0154] In some examples of the method 1200 and the apparatus described herein, the first low power mode corresponds to a quicker wakeup time than the second low power mode.[0155] Some examples of the method 1200 and the apparatus described herein may further include operations, features, means, or instructions for receiving an indication of the second power consumption level, where the second power consumption level corresponds to one of a set of power consumption levels supported by the memory device for the second low power mode.[0156] Some examples of the method 1200 and the apparatus described herein may further include operations, features, means, or instructions for receiving information indicating an assignment of the first low power mode to the first subset of memory banks and the second low power mode to the second subset of memory banks, and writing an indication of the assignment to one or more mode registers.[0157] Some examples of the method 1200 and the apparatus described herein may further include operations, features, means, or instructions for accessing the one or more mode registers based on receiving the command for the memory device to enter the second mode, and identifying the first low power mode for the first subset of memory banks and the second low power mode for the second subset of memory banks based on the accessing, where the switching the first subset of memory banks to the first low power mode and the second subset of memory banks to the second low power mode may be based on the identifying.[0158] In some examples of the method 1200 and the apparatus described herein, the indication of the assignment includes one or more bitmaps that associate the first subset of memory banks with the first low power mode and the second subset of memory banks with the second low power mode.
[0159] Some examples of the method 1200 and the apparatus described herein may further include operations, features, means, or instructions for writing an indication of the power consumption level associated with the second low power mode to the one or more mode registers.[0160] FIG. 13 shows a flowchart illustrating a method or methods 1300 that supports bank-configurable power modes in accordance with aspects of the present disclosure. The operations of method 1300 may be implemented by a memory device or its components as described herein. For example, the operations of method 1300 may be performed by a memory device as described with reference to FIG. 11. In some examples, a memory device may execute a set of instructions to control the functional elements of the memory device to perform the described functions. Additionally or alternatively, a memory device may perform aspects of the described functions using special-purpose hardware.[0161] At 1305, the memory device may write, to one or more mode registers of a memory device, information that assigns a first low power mode to a first memory bank of the memory device and a second low power mode to a second memory bank of the memory device. The operations of 1305 may be performed according to the methods described herein. In some examples, aspects of the operations of 1305 may be performed by a power mode manager as described with reference to FIG. 11.[0162] At 1310, the memory device may receive, at the memory device, a command to reduce a level of power consumption for the memory device. The operations of 1310 may be performed according to the methods described herein. In some examples, aspects of the operations of 1310 may be performed by a command processing component as described with reference to FIG. 11.[0163] At 1315, the memory device may operate, based on receiving the command and the information, the first memory bank in the first low power mode and the second memory bank in the second low power mode. The operations of 1315 may be performed according to the methods described herein. In some examples, aspects of the operations of 1315 may be performed by an operating mode manager as described with reference to FIG. 11.[0164] In some examples, an apparatus as described herein may perform a method or methods, such as the method 1300. The apparatus may include features, means, or instructions (e.g., a non-transitory computer-readable medium storing instructions executable
by a processor) for writing, to one or more mode registers of a memory device, information that assigns a first low power mode to a first memory bank of the memory device and a second low power mode to a second memory bank of the memory device, receiving, at the memory device, a command to reduce a level of power consumption for the memory device, and operating, based on receiving the command and the information, the first memory bank in the first low power mode and the second memory bank in the second low power mode.[0165] Some examples of the method 1300 and the apparatus described herein may further include operations, features, means, or instructions for reading, based on receiving the command, the one or more mode registers, and determining to operate the first memory bank in the first low power mode and the second memory bank in the second low power mode based on reading the one or more mode registers, where the operating may be based on the determining.[0166] In some examples of the method 1300 and the apparatus described herein, writing the information to the one or more mode registers may include operations, features, means, or instructions for writing a first set of values and a second set of values, where each of a set of memory banks included in the memory device may be associated with a corresponding low power mode based on a respective combination of a first value from the first set of values and a second value from the second set of values.[0167] In some examples of the method 1300 and the apparatus described herein, writing the information to the one or more mode registers may include operations, features, means, or instructions for writing an indication of a power consumption level associated with the second low power mode.[0168] In some examples of the method 1300 and the apparatus described herein, writing the information to the one or more mode registers may include operations, features, means, or instructions for writing, for each of a set of memory banks included in the memory device, a respective indication of the first low power mode or the second low power mode.[0169] In some examples of the method 1300 and the apparatus described herein, writing the information to the one or more mode registers may include operations, features, means, or instructions for writing, for each of a set of memory banks included in the memory device, a respective indication of one of a set of low power modes, the set of low power modes
including the first low power mode, the second low power mode with a first power consumption level, and the second low power mode with a second power consumption level.[0170] FIG. 14 shows a flowchart illustrating a method or methods 1400 that supports bank-configurable power modes in accordance with aspects of the present disclosure. The operations of method 1400 may be implemented by a memory device or its components as described herein. For example, the operations of method 1400 may be performed by a memory device as described with reference to FIG. 11. In some examples, a memory device may execute a set of instructions to control the functional elements of the memory device to perform the described functions. Additionally or alternatively, a memory device may perform aspects of the described functions using special-purpose hardware.[0171] At 1405, the memory device may operate a set of memory banks in respective first modes, where the set of memory banks are within a memory device. The operations of 1405 may be performed according to the methods described herein. In some examples, aspects of the operations of 1405 may be performed by an operating mode manager as described with reference to FIG. 11.[0172] At 1410, the memory device may receive, at the memory device while operating the set of memory banks in the respective first modes, signaling that indicates to operate a first memory bank of the set in a second mode corresponding to a lower power consumption level than a respective first mode for the first memory bank. The operations of 1410 may be performed according to the methods described herein. In some examples, aspects of the operations of 1410 may be performed by a command processing component as described with reference to FIG. 11.[0173] At 1415, the memory device may switch, based on receiving the signaling, the first memory bank from the respective first mode for the first memory bank to the second mode while maintaining a second memory bank of the set in a respective first mode for the second memory bank. The operations of 1415 may be performed according to the methods described herein. In some examples, aspects of the operations of 1415 may be performed by a power mode manager as described with reference to FIG. 11.[0174] In some examples, an apparatus as described herein may perform a method or methods, such as the method 1400. The apparatus may include features, means, or instructions (e.g., a non-transitory computer-readable medium storing instructions executable
by a processor) for operating a set of memory banks in respective first modes, where the set of memory banks are within a memory device, receiving, at the memory device while operating the set of memory banks in the respective first modes, signaling that indicates to operate a first memory bank of the set in a second mode corresponding to a lower power consumption level than a respective first mode for the first memory bank, and switching, based on receiving the signaling, the first memory bank from the respective first mode for the first memory bank to the second mode while maintaining a second memory bank of the set in a respective first mode for the second memory bank.[0175] In some examples of the method 1400 and the apparatus described herein, the second mode may be one of a set of low power modes supported by the memory device for the set of memory banks, each of the set of low power modes corresponding to a respective power consumption level that may be lower than a power consumption level corresponding to an idle mode supported by the memory device for the set of memory banks, and the signaling includes an indication of a selected low power mode from the set of low power modes, the selected low power mode being the second mode.[0176] Some examples of the method 1400 and the apparatus described herein may further include operations, features, means, or instructions for receiving, at the memory device, second signaling that indicates to operate a third memory bank of the set in a third mode included in the set of low power modes, and switching, based on receiving the second signaling, the third memory bank from a respective first mode for the third memory bank to the third mode while maintaining the first memory bank in the second mode.[0177] In some examples of the method 1400 and the apparatus described herein, the signaling includes an identifier specific to the first memory bank.[0178] In some examples of the method 1400 and the apparatus described herein, the signaling includes an identifier of a group of banks that includes the first memory bank.[0179] In some examples of the method 1400 and the apparatus described herein, the signaling includes one or more identifiers corresponding to a range of bank addresses that includes a bank address for the first memory bank.[0180] FIG. 15 shows a flowchart illustrating a method or methods 1500 that supports bank-configurable power modes in accordance with aspects of the present disclosure. The operations of method 1500 may be implemented by a memory device or its components as
described herein. For example, the operations of method 1500 may be performed by a memory device as described with reference to FIG. 11. In some examples, a memory device may execute a set of instructions to control the functional elements of the memory device to perform the described functions. Additionally or alternatively, a memory device may perform aspects of the described functions using special-purpose hardware.[0181] At 1505, the memory device may receive a command for a memory device to enter a reduced power mode from a first power mode. The operations of 1505 may be performed according to the methods described herein. In some examples, aspects of the operations of 1505 may be performed by a command processing component as described with reference to FIG. 11.[0182] At 1510, the memory device may switch a first memory bank of the memory device to a first low power mode based on receiving the command, the first low power mode associated with a first power consumption level. The operations of 1510 may be performed according to the methods described herein. In some examples, aspects of the operations of 1510 may be performed by a power mode manager as described with reference to FIG. 11.[0183] At 1515, the memory device may switch a second memory bank of the memory device to a second low power mode based on receiving the command, the second low power mode associated with a second power consumption level that is lower than the first power consumption level. The operations of 1515 may be performed according to the methods described herein. In some examples, aspects of the operations of 1515 may be performed by a power mode manager as described with reference to FIG. 11.[0184] At 1520, the memory device may receive, while the first memory bank is in the first low power mode and the second memory bank is in the second low power mode, an exit command associated with the first low power mode. The operations of 1520 may be performed according to the methods described herein. In some examples, aspects of the operations of 1520 may be performed by a command processing component as described with reference to FIG. 11.[0185] At 1525, the memory device may switch, based on receiving the exit command, the first memory bank out of the first low power mode while maintaining the second memory bank in the second low power mode. The operations of 1525 may be performed according to
the methods described herein. In some examples, aspects of the operations of 1525 may be performed by a power mode manager as described with reference to FIG. 11.[0186] At 1530, the memory device may perform an access operation on the first memory bank while the second memory bank is in the second low power mode. The operations of 1530 may be performed according to the methods described herein. In some examples, aspects of the operations of 1530 may be performed by an operating mode manager as described with reference to FIG. 11.[0187] In some examples, an apparatus as described herein may perform a method or methods, such as the method 1500. The apparatus may include features, means, or instructions (e.g., a non-transitory computer-readable medium storing instructions executable by a processor) for receiving a command for a memory device to enter a reduced power mode from a first power mode, switching a first memory bank of the memory device to a first low power mode based on receiving the command, the first low power mode associated with a first power consumption level, switching a second memory bank of the memory device to a second low power mode based on receiving the command, the second low power mode associated with a second power consumption level that is lower than the first power consumption level, receiving, while the first memory bank is in the first low power mode and the second memory bank is in the second low power mode, an exit command associated with the first low power mode, switching, based on receiving the exit command, the first memory bank out of the first low power mode while maintaining the second memory bank in the second low power mode, and performing an access operation on the first memory bank while the second memory bank is in the second low power mode.[0188] Some examples of the method 1500 and the apparatus described herein may further include operations, features, means, or instructions for receiving, after switching the first memory bank out of the first low power mode, a second command for the memory device to enter the reduced power mode, and switching the first memory bank to the first low power mode based on receiving the second command.[0189] Some examples of the method 1500 and the apparatus described herein may further include operations, features, means, or instructions for receiving, while the first memory bank may be in the first low power mode and the second memory bank may be in the second low power mode, a command for the memory device to exit the reduced power mode, and switching the first memory bank out of the first low power mode and the second
memory bank out of the second low power mode based on receiving the command for the memory device to exit the reduced power mode.[0190] Some examples of the method 1500 and the apparatus described herein may further include operations, features, means, or instructions for making, based on the command for the memory device to exit the reduced power mode, the first memory bank available for access before the second memory bank is available for access.[0191] Some examples of the method 1500 and the apparatus described herein may further include operations, features, means, or instructions for receiving, while the first memory bank may not be in the first low power mode and the second memory bank may be in the second low power mode, a command for the memory device to exit the reduced power mode, and switching the second memory bank out of the second low power mode based on receiving the command for the memory device to exit the reduced power mode.
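For concreteness, a hypothetical end-to-end walk of this enter, partial-exit, access sequence, reusing the illustrative helpers and types from the earlier sketches (pmd_set, on_reduce_power_command, bank_state_t); none of this is a vendor API, and the bank indices are arbitrary.

```c
int main(void)
{
    uint32_t mode_regs[2] = {0, 0};
    bank_state_t state[2] = {BANK_IDLE, BANK_IDLE};

    pmd_set(mode_regs, 0, MODE_PD);    /* bank 0: quick-wakeup PD mode  */
    pmd_set(mode_regs, 1, MODE_DS_2);  /* bank 1: deeper DS, level 2    */

    on_reduce_power_command(mode_regs, state, 2);  /* enter reduced power */

    state[0] = BANK_IDLE;  /* exit command for the first low power mode: */
                           /* bank 0 wakes while bank 1 stays in DS       */
    /* ... access operations on bank 0 go here ...                        */
    state[1] = BANK_IDLE;  /* later full exit releases bank 1 too, with   */
                           /* the longer DS exit latency                  */
    return 0;
}
```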
[0192] It should be noted that the methods described above describe possible implementations, and that the operations and the steps may be rearranged or otherwise modified and that other implementations are possible. Furthermore, portions from two or more of the methods may be combined.[0193] An apparatus is described. The apparatus may include a set of memory banks within a memory device, where each memory bank of the set supports an access mode, a first low power mode corresponding to less power consumption than the access mode, and a second low power mode corresponding to less power consumption than the first low power mode, and a controller coupled with the set of memory banks and configured to cause the apparatus to operate at least one memory bank of the set in a selected mode including one of the access mode, the first low power mode, or the second low power mode independent of whether other memory banks of the set are in the access mode, the first low power mode, or the second low power mode.[0194] Some examples of the apparatus may include one or more mode registers configured to store an assignment of the first low power mode to a first subset of the set of memory banks and the second low power mode to a second subset of the set of memory banks.[0195] Some examples may further include accessing the one or more mode registers based on the memory device receiving a command to reduce an amount of power consumption for the memory device, and operating the first subset of the set of memory banks in the first low power mode and the second subset of the set of memory banks in the second low power mode based on accessing the one or more mode registers.[0196] In some examples, a power consumption level for the second low power mode may be selectable from among a set of power consumption levels, and the one or more mode registers may be further configured to store an indication of a selected power consumption level for the second low power mode.[0197] Some examples may further include switching a first subset of the set of memory banks out of the first low power mode and maintaining a second subset of the set of memory banks in the second low power mode based at least in part on the memory device receiving an exit command for the first low power mode.[0198] In some examples, each of the set of memory banks may be configured to be available for access operations with a first latency when switched out of the first low power mode and available for access operations with a second latency when switched out of the second low power mode, the first latency shorter than the second latency.[0199] Some examples may further include operating a first subset of the set of memory banks in the first low power mode and a second subset of the set of memory banks in the second low power mode based on the memory device receiving one or more commands indicating the first low power mode for the first subset of the set of memory banks and the second low power mode for the second subset of the set of memory banks.[0200] It is to be understood that aspects described herein with reference to mode registers or related commands (e.g., MRW commands) may also be implemented using other types of registers or any other type of storage and related commands (e.g., commands to read or write such other types of registers or storage).[0201] Information and signals described herein may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. Some drawings may illustrate signals as a single signal; however, it will be understood by a person of ordinary
skill in the art that the signal may represent a bus of signals, where the bus may have a variety of bit widths.[0202] The terms “electronic communication,” “conductive contact,” “connected,” and “coupled” may refer to a relationship between components that supports the flow of signals between the components. Components are considered in electronic communication with (or in conductive contact with or connected with or coupled with) one another if there is any conductive path between the components that can, at any time, support the flow of signals between the components. At any given time, the conductive path between components that are in electronic communication with each other (or in conductive contact with or connected with or coupled with) may be an open circuit or a closed circuit based on the operation of the device that includes the connected components. The conductive path between connected components may be a direct conductive path between the components or the conductive path between connected components may be an indirect conductive path that may include intermediate components, such as switches, transistors, or other components. In some cases, the flow of signals between the connected components may be interrupted for a time, for example, using one or more intermediate components such as switches or transistors.[0203] The term “coupling” refers to the condition of moving from an open-circuit relationship between components in which signals are not presently capable of being communicated between the components over a conductive path to a closed-circuit relationship between components in which signals can be communicated between components over the conductive path. When a component, such as a controller, couples other components together, the component initiates a change that allows signals to flow between the other components over a conductive path that previously did not permit signals to flow.[0204] The term “isolated” refers to a relationship between components in which signals are not presently capable of flowing between the components. Components are isolated from each other if there is an open circuit between them. For example, two components separated by a switch that is positioned between the components are isolated from each other when the switch is open. When a controller isolates two components from one another, the controller effects a change that prevents signals from flowing between the components using a conductive path that previously permitted signals to flow.[0205] The term “layer” used herein refers to a stratum or sheet of a geometrical structure. Each layer may have three dimensions (e.g., height, width, and depth) and may
cover at least a portion of a surface. For example, a layer may be a three-dimensional structure where two dimensions are greater than a third, e.g., a thin-film. Layers may include different elements, components, and/or materials. In some cases, one layer may be composed of two or more sublayers. In some of the appended figures, two dimensions of a three-dimensional layer are depicted for purposes of illustration.[0206] As used herein, the term “electrode” may refer to an electrical conductor, and in some cases, may be employed as an electrical contact to a memory cell or other component of a memory array. An electrode may include a trace, wire, conductive line, conductive layer, or the like that provides a conductive path between elements or components of a memory array.[0207] The devices discussed herein, including a memory array, may be formed on a semiconductor substrate, such as silicon, germanium, silicon-germanium alloy, gallium arsenide, gallium nitride, etc. In some cases, the substrate is a semiconductor wafer. In other cases, the substrate may be a silicon-on-insulator (SOI) substrate, such as silicon-on-glass (SOG) or silicon-on-sapphire (SOP), or epitaxial layers of semiconductor materials on another substrate. The conductivity of the substrate, or sub-regions of the substrate, may be controlled through doping using various chemical species including, but not limited to, phosphorous, boron, or arsenic. Doping may be performed during the initial formation or growth of the substrate, by ion-implantation, or by any other doping means.[0208] A switching component or a transistor discussed herein may represent a field-effect transistor (FET) and comprise a three-terminal device including a source, drain, and gate. The terminals may be connected to other electronic elements through conductive materials, e.g., metals. The source and drain may be conductive and may comprise a heavily-doped, e.g., degenerate, semiconductor region. The source and drain may be separated by a lightly-doped semiconductor region or channel. If the channel is n-type (i.e., majority carriers are electrons), then the FET may be referred to as an n-type FET. If the channel is p-type (i.e., majority carriers are holes), then the FET may be referred to as a p-type FET. The channel may be capped by an insulating gate oxide. The channel conductivity may be controlled by applying a voltage to the gate. For example, applying a positive voltage or negative voltage to an n-type FET or a p-type FET, respectively, may result in the channel becoming conductive. A transistor may be “on” or “activated” when a voltage greater than or equal to the transistor’s threshold voltage is applied to the transistor gate. The transistor may be “off” or
“deactivated” when a voltage less than the transistor’s threshold voltage is applied to the transistor gate.[0209] The description set forth herein, in connection with the appended drawings, describes example configurations and does not represent all the examples that may be implemented or that are within the scope of the claims. The term “exemplary” used herein means “serving as an example, instance, or illustration,” and not “preferred” or “advantageous over other examples.” The detailed description includes specific details to provide an understanding of the described techniques. These techniques, however, may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form to avoid obscuring the concepts of the described examples.[0210] Though examples herein may in some cases be described in terms of one or more types of memory devices (e.g., DRAM or FeRAM memory devices), it is to be understood that the teachings herein may be applied to any type of memory device.[0211] In the appended figures, similar components or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label by a dash and a second label that distinguishes among the similar components. If just the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label.[0212] The various illustrative blocks and modules described in connection with the disclosure herein may be implemented or performed with a general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration).[0213] The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions may be stored on or transmitted over as one or more
instructions or code on a computer-readable medium. Other examples and implementations are within the scope of the disclosure and appended claims. For example, due to the nature of software, functions described above can be implemented using software executed by a processor, hardware, firmware, hardwiring, or combinations of any of these. Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations. Also, as used herein, including in the claims, “or” as used in a list of items (for example, a list of items prefaced by a phrase such as “at least one of” or “one or more of”) indicates an inclusive list such that, for example, a list of at least one of A, B, or C means A or B or C or AB or AC or BC or ABC (i.e., A and B and C). Also, as used herein, the phrase “based on” shall not be construed as a reference to a closed set of conditions. For example, an exemplary step that is described as “based on condition A” may be based on both a condition A and a condition B without departing from the scope of the present disclosure. In other words, as used herein, the phrase “based on” shall be construed in the same manner as the phrase “based at least in part on.”[0214] Computer-readable media includes both non-transitory computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A non-transitory storage medium may be any available medium that can be accessed by a general purpose or special purpose computer. By way of example, and not limitation, non-transitory computer-readable media can comprise RAM, ROM, electrically erasable programmable read-only memory (EEPROM), compact disk (CD) ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other non-transitory medium that can be used to carry or store desired program code means in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include CD, laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually
reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of computer-readable media.[0215] The description herein is provided to enable a person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the scope of the disclosure. Thus, the disclosure is not limited to the examples and designs described herein but is to be accorded the broadest scope consistent with the principles and novel features disclosed herein. |
The embodiments of the invention relate to a device having a first substrate comprising a transistor; a second substrate; an insulating layer in between and adjoining the first and second substrates; and an opening within the second substrate, the opening being aligned with the transistor; wherein the transistor is configured to detect an electrical charge change within the opening. Other embodiments relate to a method including providing a substrate comprising a first part, a second part, and an insulating layer in between and adjoining the first and second parts; fabricating a transistor on the first part; and fabricating an opening within the second part, the opening being aligned with the transistor; wherein the transistor is configured to detect an electrical charge change within the opening. |
1. A device comprising: a first substrate comprising a transistor; a second substrate; an insulating layer located between and adjacent to the first substrate and the second substrate; and an opening located in the second substrate, said opening being aligned with the transistor; wherein the transistor is configured to detect a change in charge in the opening. 2. The device of claim 1, wherein the transistor is a field effect transistor (FET). 3. The device of claim 2, wherein the FET is a metal oxide semiconductor FET (MOSFET), a junction FET (JFET), a metal semiconductor FET (MESFET), or a high electron mobility FET (HEMFET). 4. The device of claim 2, wherein the FET comprises nanowires, nanocrystals, nanotubes, nanopillars, nanoslots, or patterned nanostructures. 5. The device of claim 4, wherein the FET comprises single-walled carbon nanotubes. 6. The device of claim 1, wherein the first substrate and the second substrate independently comprise a polymer, silicon, or glass. 7. The device of claim 1, wherein the first substrate or the second substrate comprises a silicon wafer. 8. The device of claim 1, wherein the first substrate and the second substrate independently comprise a microarray, a macroarray, a multiwell plate, a microfluidic device, an integrated circuit, a MEMS, or a combination thereof. 9. The device of claim 1, further comprising a microprocessor capable of processing signals or data generated by the transistor. 10. The device of claim 1, wherein the first substrate is connected to a supporting substrate. 11. The device of claim 10, wherein said connection is made by a solder layer. 12. The device of claim 1, wherein the first substrate is substantially flat and has a thickness of about 10 nm to about 1.0 mm. 13. The device of claim 1, wherein the insulating layer comprises silicon oxide. 14. The device of claim 1, wherein the thickness of the insulating layer is from about 5.0 nm to about 100 nm. 15. The device of claim 1, wherein the second substrate is substantially flat and has a thickness of about 0.5 μm to about 10 mm. 16. The device of claim 15, wherein the opening passes through a thickness direction of the second substrate. 17. The device of claim 15, wherein the space occupied by the opening comprises a cube, a cylinder, a prism, or a frustum. 18. The device of claim 17, wherein the size of the opening is from about 10 nm to about 5 μm. 19. The device of claim 1, wherein the transistor is a FET and the opening is aligned with a channel region of the FET. 20. The device of claim 1, wherein the change in charge comprises electrical disturbance, impedance, current, voltage, or light-induced charge separation. 21. The device of claim 1, wherein the inner surface of the opening is functionalized to facilitate molecular binding. 22. The device of claim 1, wherein the change in charge is caused by a molecular binding event on or near the inner surface of the opening. 23. The device of claim 22, wherein the molecular binding event comprises binding of a first binding partner to the inner surface of the opening and binding of a second binding partner to the first binding partner. 24. The device of claim 23, wherein the first binding partner or the second binding partner comprises a biomolecule. 25. The device of claim 23, wherein the first binding partner comprises an antibody, an antigen, a receptor, a ligand, a protein, a peptide, a virus, a bacterium, a carbohydrate, a lipid, a polynucleotide, a nucleic acid, or a macromolecule. 26. The device of claim 23, wherein the second binding partner comprises an antigen, an antibody, a protein,
26. The device of claim 23, wherein the second binding partner comprises an antigen, an antibody, a protein, a peptide, a virus, a bacterium, a carbohydrate, a lipid, a polynucleotide, a nucleic acid, or a macromolecule.
27. The device of claim 23, wherein the second binding partner comprises an antigen and the first binding partner comprises an antibody against the antigen.
28. The device of claim 23, wherein the second binding partner comprises a peptide and the first binding partner comprises a receptor or ligand for the peptide.
29. The device of claim 23, wherein the second binding partner comprises a first polynucleotide and the first binding partner comprises a polynucleotide complementary to the first polynucleotide.
30. A method comprising: providing a substrate comprising a first portion, a second portion, and an insulating layer located between and adjacent to the first portion and the second portion; fabricating a transistor on the first portion; and fabricating an opening in the second portion, said opening being aligned with the transistor; wherein the transistor is configured to detect a change in charge in the opening.
31. The method of claim 30, wherein said substrate comprises a silicon wafer.
32. The method of claim 30, wherein the providing of the substrate comprises implanting oxygen ions into a predetermined area of the substrate to produce an insulating layer, wherein the insulating layer separates the substrate into a first portion and a second portion.
33. The method of claim 30, wherein the providing of the substrate comprises: providing a first substrate and a second substrate; oxidizing surfaces of the first substrate and the second substrate; and bonding the first substrate and the second substrate through the oxidized surfaces; wherein the first substrate forms the first portion, the second substrate forms the second portion, and the bonded oxidized surfaces form the insulating layer.
34. The method of claim 30, further comprising attaching a support substrate to the first portion.
35. The method of claim 34, wherein said attaching is achieved by soldering.
36. The method of claim 30, further comprising independently fabricating a microarray, a macroarray, a multiwell plate, a microfluidic device, an integrated circuit, a MEMS, or a combination thereof on the first and second portions.
37. The method of claim 30, further comprising manufacturing a microprocessor on the first or second portion, said microprocessor being capable of processing signals or data generated by the transistor.
38. The method of claim 30, further comprising thinning the first portion or the second portion.
39. The method of claim 30, wherein the first portion is substantially flat and has a thickness of about 10 nm to about 1.0 mm.
40. The method of claim 30, wherein the thickness of the insulating layer is from about 5.0 nm to about 100 nm.
41. The method of claim 30, wherein the second portion is substantially flat and has a thickness of about 1.0 μm to about 10 mm.
42. The method of claim 39, wherein the opening passes through a thickness direction of the second portion.
43. The method of claim 30, wherein the transistor is a FET and the opening is aligned with a channel region of the FET.
44. The method of claim 30, further comprising functionalizing the inner surface of the opening to facilitate molecular binding.
45. A method comprising: providing a device comprising a first substrate including a transistor, a second substrate, an insulating layer located between and adjacent to the first substrate and the second substrate, and an opening in the second substrate, the opening being aligned with the transistor; providing an analyte on or near the inner surface of the opening; and using the transistor to detect a change in charge on or near the inner surface of the opening.
46. The method of claim 45, further comprising processing a signal or data generated by the transistor.
47. The method of claim 45, wherein the change in charge comprises electrical disturbance, impedance, current, voltage, or light-induced charge separation.
48. The method of claim 45, wherein the inner surface of the opening is functionalized to facilitate molecular binding.
49. The method of claim 45, wherein the change in charge is caused by a molecular binding event on or near the inner surface of the opening.
50. The method of claim 49, further comprising immobilizing a binding partner on the inner surface of the opening.
51. The method of claim 50, wherein the providing of the analyte comprises binding the analyte to the binding partner.
52. The method of claim 51, wherein the binding partner or the analyte comprises a biomolecule.
53. The method of claim 50, wherein the binding partner comprises an antibody, antigen, receptor, ligand, protein, peptide, virus, bacterium, carbohydrate, lipid, polynucleotide, nucleic acid, or macromolecule.
54. The method of claim 51, wherein the analyte comprises an antigen, an antibody, a protein, a peptide, a virus, a bacterium, a carbohydrate, a lipid, a polynucleotide, a nucleic acid, or a macromolecule.
55. The method of claim 52, wherein the analyte comprises an antigen and the binding partner comprises an antibody against the antigen.
56. The method of claim 52, wherein the analyte comprises a peptide and the binding partner comprises a receptor or ligand for the peptide.
57. The method of claim 52, wherein the analyte comprises a first polynucleotide and the binding partner comprises a polynucleotide complementary to the first polynucleotide.
58. A device comprising: a first substrate, a second substrate, and an insulating layer located between and adjacent to the first substrate and the second substrate; wherein the first substrate contains a transistor array and the second substrate contains an array of openings, each opening of at least a portion of the openings being aligned with one of the transistors.
59. The device of claim 58, wherein each of at least a portion of the transistors is capable of detecting a change in charge in the opening aligned with that transistor.
60. The device of claim 58, wherein at least a portion of the transistors are field effect transistors (FETs).
61. The device of claim 58, wherein at least a portion of the transistors are independently addressable.
62. The device of claim 58, wherein at least a portion of the inner surfaces of the openings are functionalized to facilitate molecular binding.
63. The device of claim 58, wherein the inner surface of each of at least a portion of the openings has more than one binding partner bound thereto.
64. The device of claim 63, wherein the binding partners bound to one opening comprise the same molecule.
65. The device of claim 63, wherein at least two of the binding partners bound to one opening comprise different molecules.
66. The device of claim 63, wherein the binding partners bound to at least two of said openings comprise the same molecule.
67. The device of claim 63, wherein the binding partners bound to at least two of the openings comprise different molecules.
68. A device comprising: a substrate having a front surface and a back surface; an array of sensor nodes on the front surface of the substrate; and via openings passing through a thickness of the substrate, wherein at least some of the sensor nodes are functionalized with probe molecules through the via openings.
69. The device of claim 68, further comprising external logic on the starting wafer for column and row selection to access a particular sensor node.
70. The device of claim 69, further comprising a readout circuit comprising CMOS logic with a sense amplifier to detect a change in current when an analyte molecule is added to the via opening.
Three-dimensional integrated circuit for analyte detection
Related applications
[001] None.
Field of invention
[002] Embodiments of the invention relate to devices and methods for detecting biomolecules such as analytes. Specifically, the embodiments include using a semiconductor device including a transistor as an electrical sensor in biomolecule detection. The invention spans several disciplines, such as biochemistry, physics, microelectronics, immunology, molecular biology, and medical diagnostics.
Background
[003] Rapid and specific detection of biomolecules and biological cells (such as proteins, DNA and RNA, viruses, peptides, antibodies, antigens, erythrocytes, white blood cells, and platelets) has become increasingly important for bioassays that are essential to research in genomics, proteomics, diagnostics, and pathology. For example, rapid and accurate detection of specific antigens and viruses is important in fighting epidemic diseases such as AIDS, influenza, and other infectious diseases. In addition, because methods for isolating and detecting cells and biomolecules are becoming faster and more specific, the molecular-level origins of diseases are being elucidated at a rapid rate, possibly ushering in a new era of personalized medicine in which a specific course of treatment is developed for each patient. To take full advantage of this expanded knowledge of disease phenotypes, new methods for the simultaneous detection of multiple biological molecules such as viruses, DNA, and proteins are increasingly needed. Multiplex biological assays must be fast, sensitive, highly parallel, and ideally capable of diagnosing cell phenotypes in vivo.
[004] One particular type of bioassay that is increasingly used in medical diagnostics, food analysis, and environmental analysis is the immunoassay. An immunoassay is a biochemical test that uses the reaction of an antibody with its antigen to measure the level of a substance in a biological fluid such as serum or urine. The assay exploits the specific binding of an antibody to its antigen. Monoclonal antibodies are often used because they usually bind to only one part of a particular molecule, providing a more specific and accurate test that is less likely to be confounded by the presence of other molecules. The selected antibody must have a high affinity for the antigen (if antigen is present, a significant portion of it must bind to the antibody). In immunoassays, the presence of either an antigen or an antibody can be measured. For example, when detecting an infection, the presence of anti-pathogen antibodies is detected. To measure a hormone such as insulin, the insulin itself is used as the antigen.
[005] Traditionally, in order to obtain a numerical result, the response of the liquid to be measured must be compared to standards of known concentration. This is usually done by constructing a standard curve on a graph and then finding where the response of the unknown falls on the curve, thereby determining the amount of the unknown. Detection of antibody or antigen content can be achieved by a variety of methods. One of the most common is to label the antigen or antibody. The label may consist of an enzyme, a radioisotope, or a fluorophore.
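As a concrete illustration of the standard-curve procedure described in paragraph [005], the following minimal Python sketch fits a calibration curve to hypothetical standards of known concentration and inverts it to quantify an unknown. The data values and the log-linear interpolation model are illustrative assumptions, not values or methods taken from this disclosure.

```python
import numpy as np

# Hypothetical calibration standards (assumed values for illustration):
# known analyte concentrations and the assay responses they produced.
conc_standards = np.array([1.0, 10.0, 100.0, 1000.0])    # e.g., ng/mL
response_standards = np.array([0.12, 0.45, 1.30, 2.10])  # e.g., signal units

# Build a standard curve. Immunoassay responses are often roughly linear
# in log-concentration over the working range, so interpolate there.
log_conc = np.log10(conc_standards)

def concentration_from_response(response):
    """Invert the standard curve: map a measured response back to
    an estimated concentration by interpolating in log-concentration."""
    return 10 ** np.interp(response, response_standards, log_conc)

measured = 0.90  # response of the unknown sample
print(f"Estimated concentration: {concentration_from_response(measured):.1f} ng/mL")
```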
[006] Increasingly, bioassays such as immunoassays and gene sequencing are being performed on microarrays such as DNA microarrays or protein microarrays. A microarray is a collection of microscopic spots (such as DNA spots or protein spots) containing probes attached to a solid surface (such as glass, plastic, or a silicon chip) so as to form an array. Multiple probes can be assembled on a single substrate using techniques well known to those skilled in the art. Probes can bind to an analyte or group of analytes by hybridization. Examples of applications of such arrays include, but are not limited to, studies to determine which genes are active in cancer, studies to determine which genetic differences cause patients to have adverse reactions to drug treatments, studies of infectious diseases, and studies to determine whether patients have genetic mutations.
[007] At present, the detection of chemical reactions or binding is achieved by a multi-step method, as shown in Figure 1. The analyte in the sample is labeled with a fluorescent or other label (e.g., chemiluminescent, radioactive, dye, etc.). The sample is washed over the array, and the analytes bind to their complementary probes on the surface by hybridization. When binding to a probe occurs on the substrate, the label becomes bound at a certain position on the substrate. Illuminating the label with an instrument produces spots that are visible to the reader. Fluorescent labels are typically used and are read out with an instrument that uses laser illumination and a CCD camera to digitize the position and brightness of the bound label.
Brief description of the drawings
[008] FIG. 1 (Prior Art) illustrates a method for detecting an analyte using a fluorescent-type label on a conventional microarray.
[009] Figure 2 shows a biochemical sensor according to an embodiment of the invention. Figure 2a shows a plan view of an SOI FET of the type used for logic transistors in microprocessor applications. Figure 2b shows that an SOI device can be used as a sensor by exposing biochemical reagents to the channel region of the transistor. Figure 2c illustrates an SOI FET sensor manufactured using a three-dimensional wafer stacking technique.
[010] FIG. 3 shows a cross-sectional view of a stacked wafer device that can be used as a biosensor in an embodiment of the present invention.
[011] Figure 4 shows the effect of a thin-body SOI device and back substrate bias on channel conduction for a device containing p-type transistors.
[012] Figure 5 shows a microarray with biochemical sensors according to an embodiment of the invention.
[013] Figure 6 shows a schematic of an assay that can be directly detected on-chip and digitally read and analyzed.
Detailed Description
[014] As used in the description and claims, the singular forms "a", "an" and "the" include plural forms unless the context clearly dictates otherwise. For example, the term "an array" may include multiple arrays unless the context clearly indicates otherwise.
[015] "Electrical sensor", "biochemical sensor" or "sensor" means a substance or device that detects or senses an electrical signal (including, but not limited to, resistance, current, voltage, or power) caused by electron movement.
[016] "Field effect transistor" or FET means a transistor that relies on an electric field to control the channel conductance in a semiconductor material. FETs have three terminals, commonly referred to as the gate, drain, and source. The voltage applied between the gate terminal and the source terminal regulates the current between the source terminal and the drain terminal. Small gate voltage changes can cause large changes in the current from source to drain, thus enabling the FET to amplify a signal. FETs can be used to amplify weak signals, analog or digital. They can also be used as voltage-controlled resistors and as sensors in chemical and biological detection.
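To make the gate-control relationship in the FET definition of paragraph [016] concrete, the following Python sketch evaluates a textbook long-channel MOSFET current model (exponential in the subthreshold region, square-law above threshold). All parameter values are illustrative assumptions rather than values from this disclosure.

```python
import numpy as np

# Illustrative long-channel NMOS parameters (assumed, not from the disclosure).
VTH = 0.4           # threshold voltage, V
N = 1.5             # subthreshold ideality factor
VT_THERMAL = 0.026  # thermal voltage kT/q at room temperature, V
K = 2e-4            # square-law transconductance parameter, A/V^2
I0 = 1e-7           # subthreshold current prefactor, A

def drain_current(vgs):
    """Textbook drain-current model (saturation assumed):
    exponential below threshold, square-law above it."""
    if vgs < VTH:
        return I0 * np.exp((vgs - VTH) / (N * VT_THERMAL))
    return K * (vgs - VTH) ** 2 + I0

# A small gate-voltage change near threshold produces a large current change,
# which is what lets the FET amplify and act as a charge sensor.
for vgs in (0.30, 0.35, 0.40, 0.50):
    print(f"Vgs = {vgs:.2f} V -> Id = {drain_current(vgs):.3e} A")
```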
[017] "Array", "macroarray" or "microarray" means a collection of substances (such as molecules), openings, microcoils, detectors, and/or sensors intentionally created to be attached to or fabricated on a substrate or solid surface (such as glass, plastic, a silicon chip, or another material) so as to form an array. These arrays can be used to simultaneously measure the expression levels of a large number (e.g., tens, thousands, or millions) of reactions or combinations. An array may also contain a small number of substances, such as a few or a dozen. The substances in the array may be the same as or different from each other. Arrays can take a variety of formats, such as a library of soluble molecules, or a library of compounds bound to resin beads, silica chips, or other solid supports. An array is either a macroarray or a microarray, depending on the size of the pads on the array. The pad size of a macroarray is generally about 300 micrometers or more, and a macroarray can be easily imaged by a gel or blot scanner. The pad size of a microarray is typically less than 300 micrometers.
[018] "Substrate", "support" and "solid support" mean a material or group of materials having one or more rigid or semi-rigid surfaces. In some aspects, at least one surface of the solid support is substantially flat, although in some aspects it may be desirable to physically separate synthesis regions for different molecules with, for example, wells, bumps, pins, etched trenches, or the like. In some aspects, the solid support takes the form of microbeads, resins, gels, microspheres, or other geometric structures.
[019] The term "analyte", "target" or "target molecule" means a molecule of interest to be detected and/or analyzed, such as a nucleotide, oligonucleotide, polynucleotide, peptide, or protein. Analytes, targets, or target molecules can be small molecules, biomolecules, or nanomaterials, such as, but not necessarily limited to, small molecules with biological activity, nucleic acid sequences, peptides and polypeptides, and nanostructured materials modified with biomolecules or capable of binding molecular probes, such as chemically modified carbon nanotubes, carbon nanotube bundles, nanowires, nanoclusters, or nanoparticles. The target molecule may be a fluorescently labeled antigen, antibody, DNA, or RNA. "Biological analyte" means an analyte that is a biomolecule.
[020] The term "capture molecule" means a molecule that is immobilized on a surface. Capture molecules typically (but not necessarily) bind to a target or target molecule.
Capture molecules are usually antibodies, nucleotides, oligonucleotides, polynucleotides, peptides, or proteins, but can also be small molecules, biomolecules, or nanomaterials, such as, but not necessarily limited to, small molecules with biological activity, nucleic acid sequences, peptides and polypeptides, and nanostructured materials chemically modified with biomolecules or small molecules, which can bind a target molecule that in turn binds a probe molecule to form a capture molecule/target molecule/probe molecule complex. For solid-phase immunoassays, the capture molecules are immobilized on the surface of the substrate and are antibodies specific for the target, i.e., the antigen to be tested. A capture molecule may be a fluorescently labeled antibody, protein, DNA, or RNA. A capture molecule may or may not bind only the target molecule or only the probe molecule.
[021] The term "probe" or "probe molecule" means a molecule that is bound to a target molecule for use in the analysis of the target. Probes or probe molecules generally (but not necessarily) have a known molecular structure or sequence. The probe or probe molecule may or may not be attached to the array substrate. Probes or probe molecules are typically antibodies, nucleotides, oligonucleotides, polynucleotides, peptides, or proteins, including, for example, monoclonal antibodies, cDNA, or presynthesized polynucleotides deposited on an array. Probe molecules are biomolecules capable of binding to a target molecule or of participating in molecular recognition events. (In some literature, the definitions of the terms "target" and "probe" are the reverse of the definitions provided herein.) In an immunoassay, a probe molecule may be a labeled antibody that is specific for the target, that is, the test antigen. In this case, the capture molecule, target molecule, and probe molecule form a "sandwich." Polynucleotide probes require only the sequence information of a gene, and thus can exploit the genome sequence of an organism. In cDNA arrays, cross-hybridization may occur due to sequence homology among members of a gene family. Polynucleotide arrays can be designed to identify highly homologous members of a gene family as well as splice forms (exon-specific) of the same gene. Polynucleotide arrays of embodiments of the invention can also be used to detect mutations and single nucleotide polymorphisms. A probe or probe molecule may also serve as a capture molecule.
[022] "Binding partner" means a molecule or aggregate that has binding affinity for one or more analytes, targets, or other molecules. In this sense, a binding partner is either a "capture molecule" or a "probe molecule". Within the scope of embodiments of the present invention, virtually any molecule that has binding affinity for an analyte or target of interest can be a binding partner, including but not limited to polyclonal antibodies, monoclonal antibodies, single-chain antibodies, chimeric antibodies, humanized antibodies, antibody fragments, oligonucleotides, polynucleotides, nucleic acids, aptamers, nucleic acid ligands, and any other known ligands that can bind to at least one target molecule.
Although in some embodiments the binding partner specifically binds a single target, in other embodiments the binding partner can bind multiple targets that have similar binding sites or binding domains.
[023] "Binding" refers to the formation of a sufficiently stable complex between two or more substances to allow detection of the interaction as a bound molecular complex, such as the interaction between a target and a capture or probe molecule. In certain embodiments of the invention, binding can also refer to the interaction between a second molecule and the target.
[024] "Associated with" or "association" means a direct or indirect interaction between two or more substances that produces a sufficiently stable complex, such as the interaction between a target and a capture or probe molecule. For example, a molecule or molecular complex is "connected" to a substrate surface when it is bound to the surface directly, or indirectly through another molecule or substance. In other words, substances are "connected" to each other when any member is bound directly to at least one other member. In addition, components in an integrated device are "connected" to the device; for example, a transistor in an integrated circuit is "connected" to the circuit.
[025] The terms "label", "marker" and "sensing compound" are used interchangeably to mean a label or indicator that can be discerned by an observer, though not necessarily by the system used to identify an analyte or target. Labeling can also be achieved through a pre-designed, detectable process. Labels are often used in bioassays by conjugating or linking them to substances that would otherwise be difficult to detect. At the same time, labels usually do not change or affect the underlying assay process. Labels or markers used in bioassays include, but are not limited to, radioactive materials, magnetic materials, quantum dots, enzymes, liposome-based labels, chromophores, fluorophores, dyes, nanoparticles, quantum dots or quantum wells, composite organic-inorganic nanoclusters, colloidal metal particles, or a combination thereof.
[026] The terms "die", "polymer array chip", "array", "array chip" or "biochip" are used interchangeably and mean a collection of a large number of capture molecules arranged on a shared substrate, where the shared substrate can be part of a silicon wafer, a nylon strip, or a glass slide. When an array chip is used to analyze nucleotides, the term "DNA array" or "DNA array chip" is used. When an array chip is used to analyze proteins, the term "protein chip" is used.
[027] The term "chip" or "microchip" means a microelectronic device manufactured from a semiconductor material and having one or more integrated circuits or one or more devices. A "chip" or "microchip" is typically a slice of a wafer, manufactured by dicing the wafer. A "chip" or "microchip" can include many miniature transistors and other electronic components on a single thin rectangular piece of silicon, sapphire, germanium, silicon nitride, silicon germanium, or any other semiconductor material. A microchip can contain dozens, hundreds, or millions of electronic components.
[028] "Micro-electro-mechanical systems (MEMS)" is the integration of mechanical components, sensors, actuators, and electronics on a common silicon substrate through microfabrication technology.
While the electronics are fabricated using integrated circuit (IC) process steps (such as CMOS, bipolar, or BiCMOS processes), the micromechanical components are fabricated using compatible "microfabrication" processes that selectively etch away parts of the silicon wafer or add new structural layers to form mechanical and electromechanical devices. Microelectronic integrated circuits can be thought of as the "brain" of the system, while MEMS augment this decision-making ability with "eyes" and "arms," allowing microsystems to sense and control the environment. Sensors gather information from the environment by measuring mechanical, thermal, biological, chemical, optical, and magnetic phenomena. The electronics then process the information from the sensors and, through certain decision-making capabilities, direct the actuators to respond by moving, positioning, regulating, pumping, and filtering, thereby controlling the environment to achieve a desired result or purpose. Because MEMS devices are manufactured using batch fabrication techniques similar to those used for integrated circuits, small silicon chips can be given unprecedented levels of functionality, reliability, and complexity at relatively low cost.
[029] A "microprocessor" is a processor on an integrated circuit (IC) chip. The processor may be one or more processors on one or more IC chips. The chip is typically a silicon chip with thousands of electronic components that serves as the central processing unit (CPU) of a computer or computing device.
[030] A "macromolecule" or "polymer" comprises two or more monomers covalently linked. The monomers can be linked one at a time, or can be linked in series of multiple monomers, commonly referred to as "oligomers." Thus, for example, one monomer and a series of 5 monomers can be linked to form a macromolecule or polymer of 6 monomers. Similarly, a series of 50 monomers can be linked to a series of 100 monomers to form a macromolecule or polymer of 150 monomers. The term polymer as used herein includes, for example, linear and cyclic polymers of nucleic acids, polynucleotides, polysaccharides, oligosaccharides, proteins, polypeptides, peptides, phospholipids, and peptide nucleic acids (PNAs). Peptides include those having alpha-amino acids, beta-amino acids, or omega-amino acids. In addition, polymers include heteropolymers in which a known drug is covalently linked to any of the above, as well as polyurethanes, polyesters, polycarbonates, polyureas, polyamides, polyethyleneimines, polyarylene sulfides, polysiloxanes, polyimides, polyacetates, and other polymers that will be apparent upon reference to the present disclosure.
[031] As used herein, "nanomaterial" means a structure, device, or system having a size in the range of about 1-100 nanometers at the atomic, molecular, or macromolecular level. Preferably, nanomaterials have properties and functions that arise from their size and can be manipulated and controlled at the atomic level.
[032] The term "biomolecule" means any organic molecule that is part of a living organism. Biomolecules include nucleotides, polynucleotides, oligonucleotides, peptides, proteins, ligands, and receptors. A "biomolecular complex" means a structure composed of two or more types of biomolecules. Examples of biomolecular complexes include cells and virions. The cells may include, for example, bacteria, fungi, and mammalian cells.
[033] The term "nucleotide" includes deoxynucleotides and their analogs.
These analogs are molecules that have some of the same structural characteristics as natural nucleotides, such that when incorporated into a polynucleotide sequence they can hybridize with complementary polynucleotides in solution. Generally, these analogs are derived from natural nucleotides by replacing and/or modifying the base, the ribose, or the phosphodiester bond. These changes can be tailored to stabilize or destabilize hybrid formation, to increase the specificity of hybridization to a desired complementary polynucleotide sequence, or to enhance polynucleotide stability.
[034] The term "polynucleotide" as used herein means a polymerized form of nucleotides (ribonucleotides or deoxyribonucleotides) of any length that includes purine and pyrimidine bases, or other natural, chemically or biochemically modified, non-natural, or derivatized nucleotide bases. Polynucleotides according to embodiments of the present invention include deoxyribonucleic acid (DNA), ribonucleic acid (RNA), and DNA copies of ribonucleic acid (cDNA), which can be isolated from natural sources, recombinantly produced, or artificially synthesized. Other examples of polynucleotides according to embodiments of the present invention include peptide nucleic acids (PNA). Polynucleotides and nucleic acids can exist in single-stranded or double-stranded form. The backbone of a polynucleotide may contain the sugar and phosphate groups typically found in RNA or DNA, or it may contain modified or substituted sugar or phosphate groups. Polynucleotides can include modified nucleotides, such as methylated nucleotides and nucleotide analogs. The nucleotide sequence can be interrupted by non-nucleotide components. Polymers composed of nucleotides (e.g., nucleic acids and polynucleotides) may also be referred to herein as "nucleotide polymers."
[035] An "oligonucleotide" is a polynucleotide having 2-20 nucleotides. Analogs also include protected and/or modified monomers conventionally used in polynucleotide synthesis. As is well known to those skilled in the art, polynucleotide synthesis employs a variety of base-protected nucleoside derivatives in which one or more nitrogen atoms of the purine and pyrimidine moieties are protected by groups such as tert-butyl, isobutyl, and others.
[036] For example, structural groups are optionally added to the ribose or the nucleoside base of a nucleotide for incorporation into a polynucleotide, such as a methyl, propyl, or allyl group at the 2'-O position of the ribose, a fluoro group substituting for the 2'-O group, or a bromo group on the ribonucleoside base. 2'-O-methyl oligoribonucleotides (2'-O-MeORNs) have a stronger affinity for complementary polynucleotides, especially RNA, than their unmodified counterparts. Alternatively, deazapurines and deazapyrimidines, in which one or more nitrogen atoms of the purine or pyrimidine are replaced by carbon atoms, can also be used.
[037] The phosphodiester bonds, or "sugar-phosphate backbone," of a polynucleotide may also be substituted or modified, for example with methylphosphonates, O-methylphosphates, or phosphorothioates. Another example of a polynucleotide comprising such modified bonds for the purposes of the present disclosure is a "peptide polynucleotide," in which a polyamide backbone is linked to polynucleotide bases or modified polynucleotide bases. Peptide polynucleotides comprising a polyamide backbone and the bases found in natural nucleotides are commercially available.
[038] Nucleotides with modified bases can also be used in embodiments of the invention.
Examples of certain base modifications include 2-aminoadenine, 5-methylcytosine, 5-(propyn-1-yl)cytosine, 5-(propyn-1-yl)uracil, 5-bromouracil, 5-bromocytosine, hydroxymethylcytosine, methyluracil, hydroxymethyluracil, and dihydroxypentyluracil, which can be incorporated into polynucleotides to alter the binding affinity for complementary polynucleotides.
[039] Groups can also be attached to various positions on the nucleoside sugar ring or on the purine or pyrimidine ring, where they can stabilize the duplex through electrostatic interactions with the negatively charged phosphate backbone or through interactions in the major or minor grooves. For example, adenine and guanine nucleotides may be substituted at the N2 position with an imidazolylpropyl group to improve duplex stability. Universal base analogs such as 3-nitropyrrole and 5-nitroindole can also be included. A wide variety of modified polynucleotides suitable for use in embodiments of the invention are described in the literature.
[040] When the target macromolecule is a peptide, the amino acids can be any amino acids, including alpha-amino acids, beta-amino acids, and omega-amino acids. When the amino acid is an alpha-amino acid, either the L-optical isomer or the D-optical isomer may be used. In addition, embodiments of the present invention also include unnatural amino acids such as beta-alanine, phenylglycine, and homoarginine. These amino acids are well known in the art.
[041] A "peptide" is a polymer in which the monomers are amino acids linked together through amide bonds; it is alternatively called a polypeptide. In the context of this specification, it should be recognized that the amino acids may be L-optical isomers or D-optical isomers. Peptides are two or more amino acid monomers long, and often more than 20 amino acid monomers long.
[042] A "protein" is a long polymer of amino acids linked by peptide bonds and may be composed of two or more polypeptide chains. More specifically, the term "protein" means a molecule composed of one or more chains of amino acids in a specific order, the order being determined by the nucleotide sequence of the gene encoding the protein. Proteins are essential for the structure, function, and regulation of the cells, tissues, and organs of an organism, and each protein has a unique function. Examples are hormones, enzymes, and antibodies.
[043] The term "sequence" means the specific order of monomers in a macromolecule, and it may be referred to herein as the sequence of the macromolecule.
[044] The term "hybridization" means the process by which two single-stranded polynucleotides combine non-covalently to form a stable double-stranded polynucleotide; triple-stranded hybridization is also theoretically feasible. The (usually) double-stranded polynucleotide produced is a "hybrid." The proportion of the population of nucleotides that forms stable hybrids is referred to herein as the "degree of hybridization." For example, hybridization means the formation of a hybrid between a probe polynucleotide (e.g., a polynucleotide of the invention that may include substitutions, deletions, and/or additions) and a particular target polynucleotide (e.g., an analyte polynucleotide), wherein the probe preferentially hybridizes to the specific target polynucleotide and does not substantially hybridize to polynucleotides consisting of sequences that are not substantially complementary to the target polynucleotide.
However, those skilled in the art will recognize that the minimum length of polynucleotide required for specific hybridization to a target polynucleotide will depend on several factors, among them: the G/C content, the position of any mismatched bases, the degree of uniqueness of the sequence relative to the population of target polynucleotides, and the chemical nature of the polynucleotide (e.g., methylphosphonate backbone, phosphorothioate, etc.).
[045] Methods for performing polynucleotide hybridization assays are well developed in the art. Hybridization assay procedures and conditions vary from application to application and are selected according to the general binding methods known in the art.
[046] It should be recognized that the ability of two single-stranded polynucleotides to hybridize will depend on factors such as their degree of complementarity and the stringency of the hybridization reaction conditions.
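To illustrate how sequence composition feeds into the hybridization considerations of paragraphs [044]-[046], the Python sketch below applies the classical Wallace rule (Tm of roughly 2 °C per A/T pair plus 4 °C per G/C pair, a rough estimate valid only for short oligonucleotides). The rule and the example sequences are illustrative assumptions, not part of this disclosure.

```python
def wallace_tm(sequence: str) -> float:
    """Rough melting-temperature estimate for a short oligonucleotide
    using the Wallace rule: Tm = 2*(A+T) + 4*(G+C) degrees Celsius."""
    seq = sequence.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2.0 * at + 4.0 * gc

# Higher G/C content raises the estimated Tm, so a GC-rich probe can be
# shorter (or hybridized under more stringent conditions) than an AT-rich one.
for probe in ("ATATATATATAT", "GCGCGCGCGCGC"):
    print(f"{probe}: Tm ~ {wallace_tm(probe):.0f} C")
```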
[047] A "ligand" is a molecule or part of a molecule that is recognized by a particular receptor. Examples of ligands that can be studied with the present invention include, but are not limited to: agonists and antagonists of cell membrane receptors, toxins and poisons, viral epitopes, hormones, hormone receptors, peptides, enzymes, enzyme substrates, cofactors, drugs (such as opioids, steroids, etc.), lectins, sugars, polynucleotides, nucleic acids, oligosaccharides, proteins, and monoclonal antibodies.
[048] A "receptor" is a molecule that has an affinity for a given ligand. Receptors can be natural or artificial molecules, and they can be used in their unaltered state or as aggregates with other substances. Receptors may be attached to a binding member covalently or non-covalently, directly or through a specific binding substance. Examples of receptors useful in the present invention include, but are not limited to: antibodies, cell membrane receptors, monoclonal antibodies and antisera that react with specific epitopes (e.g., on a virus, cell, or other substance), drugs, polynucleotides, nucleic acids, peptides, cofactors, lectins, sugars, polysaccharides, cells, cell membranes, and organelles. Receptors are sometimes referred to in the art as anti-ligands; the term "receptor" is used herein with no difference in meaning intended. A "ligand-receptor pair" is formed when two macromolecules combine through molecular recognition to form a complex. Other examples of receptors that can be studied with the present invention include, but are not limited to:
a) Microbial receptors: Determining ligands that bind to receptors such as specific transport proteins or enzymes essential for microbial survival can be used to develop a new class of antibiotics. Antibiotics against opportunistic pathogenic fungi and protozoa, and replacements for the antibacterial antibiotics currently in use, would be particularly valuable.
b) Enzymes: For example, one type of receptor is the binding site of an enzyme, such as an enzyme responsible for cleaving a neurotransmitter; determining ligands that bind to certain receptors and thereby regulate the action of the enzymes that cleave different neurotransmitters can be used to develop drugs for treating disorders of neurotransmission.
c) Antibodies: For example, the present invention can be used to study the ligand binding sites of antibodies that bind to epitopes of a target antigen. Determining sequences that mimic an epitope can lead to the development of vaccines whose immunogens are based on one or more such sequences, and may also lead to the development of related diagnostics or of compounds useful in therapeutic treatments, such as for autoimmune diseases (for example, by blocking the binding of "anti-self" antibodies).
d) Nucleic acids: Nucleic acid sequences can be synthesized to establish DNA or RNA binding sequences.
e) Catalytic polypeptides: Polymers, preferably polypeptides, that can promote a chemical reaction involving the conversion of one or more reactants into one or more products. Such polypeptides generally include a binding site specific for at least one reactant or reaction intermediate, and an active functional group near the binding site that can chemically modify the bound reactant.
f) Hormone receptors: Examples include the receptors for insulin and growth hormone. Identifying ligands that bind a receptor with high affinity is useful for developing oral substitutes for daily injections, for example those taken by diabetic patients to alleviate the symptoms of diabetes. Other examples are the vasoconstrictive hormone receptors; identifying ligands that bind these receptors can lead to the development of drugs that control blood pressure.
g) Opioid receptors: Identifying ligands that bind the opioid receptors in the brain is useful for developing less-addictive replacements for morphine and related drugs.
[049] A "fluorophore" or "fluorescent compound" may include, but is not limited to, dyes, intrinsically fluorescent proteins, lanthanide-based phosphors, and the like. Dyes include, for example: rhodamine and derivatives such as Texas Red, ROX (6-carboxy-X-rhodamine), rhodamine-NHS, and TAMRA (5/6-carboxytetramethylrhodamine NHS); fluorescein and derivatives such as 5-bromomethylfluorescein and FAM (5'-carboxyfluorescein NHS); Lucifer Yellow; IAEDANS; 7-Me2N-coumarin-4-acetate; 7-OH-4-CH3-coumarin-3-acetate; 7-NH2-4-CH3-coumarin-3-acetate (AMCA); monobromobimane; pyrene trisulfonates such as Cascade Blue; and monobromotrimethyl-ammoniobimane.
[050] The term "complementary" means that the topologies of the interacting surfaces of a ligand molecule and its receptor are complementary to, or match, each other. Thus, the receptor and its ligand can be described as complementary; moreover, their contact surface features are complementary to each other.
[051] The term "wafer" refers to a semiconductor substrate. Wafers can be made in various sizes and shapes, and a wafer can serve as the substrate for a microchip. The substrate can be covered with or embed circuitry, such as pads, vias, interconnects, or scribe lines.
The chip's circuitry can be used for several purposes, for example as a microprocessor, as memory, and/or for communication capabilities. The circuitry can be controlled by a microprocessor on the chip itself or by a device external to the chip.
[052] Embodiments of the present invention relate to devices and methods using a semiconductor device in which a transistor serves as an electrical sensor to detect whether a biomolecule is present near the sensor. The semiconductor device's response to the presence of biochemical substances is non-linear. Semiconductor devices according to embodiments of the present invention can be fabricated using well-developed three-dimensional integrated circuit process technology and can be made quite sensitive to the presence of different biochemical substances or biomolecules. The device includes a transistor whose characteristics, such as the sub-threshold slope, change when biomolecules are directed near the semiconductor device.
[053] In the described embodiments, the change in the characteristics of the transistor when a biomolecule is brought near the electrical sensor reflects a specific chemical and/or biological interaction and is detected by the electrical sensor, which makes possible on-chip integrated devices for chemical analysis and medical diagnostics. In a specific embodiment, the electrical sensor may be a field effect transistor.
[054] Embodiments of the invention also relate to devices and methods in which a substrate includes an array of electrical sensors. The signals from the electrical sensors are detected and collected by circuitry on the substrate or in a separate device. One application of the electrical sensor array of the present invention is in protein or DNA arrays for the simultaneous analysis of multiple proteins or DNA sequences. The substrate of an embodiment of the present invention may be part of an integrated device that also functions as a microarray or macroarray, an integrated circuit, a microfluidic device, a MEMS, or a combination thereof. Therefore, a sample contained or processed by the device can also be analyzed by the integrated device, and the signal can be processed and analyzed.
[055] Biological samples often contain thousands or more types of biomolecules, and clinical diagnosis requires the measurement of multiple analytes. Currently, each analyte is measured separately, which requires multiple samples from the patient. The sensors of embodiments of the present invention can be used in a multiplex assay in which multiple analytes are measured simultaneously.
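As a sketch of how signals from such a sensor array might be collected and reduced to binding calls (paragraphs [054]-[055]), the following Python fragment scans an individually addressable row/column array, compares each sensor's current with its pre-binding baseline, and flags the sites whose current shifted. The array size, threshold, and read_current function are hypothetical stand-ins for the on-chip readout circuitry, not elements of this disclosure.

```python
import numpy as np

ROWS, COLS = 4, 4   # assumed array dimensions
THRESHOLD = 0.20    # flag a site if its current shifts by more than 20%

def read_current(row, col, binding=False):
    """Hypothetical stand-in for the row/column-addressed readout circuit.
    Returns a drain current in amperes; binding suppresses the current here."""
    baseline = 1.0e-6
    return baseline * (0.5 if binding and (row + col) % 3 == 0 else 1.0)

# Baseline scan before the sample is introduced.
baseline = np.array([[read_current(r, c) for c in range(COLS)]
                     for r in range(ROWS)])

# Scan after incubation with the sample; each site is addressed in turn.
after = np.array([[read_current(r, c, binding=True) for c in range(COLS)]
                  for r in range(ROWS)])

# A multiplex assay: every site reports independently in a single pass.
hits = np.abs(after - baseline) / baseline > THRESHOLD
for r, c in zip(*np.where(hits)):
    print(f"Binding detected at sensor ({r}, {c})")
```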
[056] In embodiments of the invention, the analytes that can be detected include all types of antigens, such as proteins, polysaccharides, and small molecules coupled to proteins. The specific binding between an antigen and its corresponding antibody forms the basis of the immunoassay. Antibodies suitable for use in embodiments of the invention include monoclonal antibodies, polyclonal antibodies, recombinant antibodies, random peptides, and aptamers. Immunoassays suitable for use in embodiments of the present invention include solid-phase immunoassays based on the sandwich and competitive principles. Specific types of immunoassays are also included, such as the enzyme-linked immunosorbent assay (ELISA) and electrochemiluminescence (ECL).
[057] The analytes in embodiments of the present invention also include nucleic acids (DNA and RNA), which can form double-stranded molecules by hybridization, that is, by complementary base pairing. The specificity of nucleic acid hybridization allows binding events to be read out electrically: charged target molecules (e.g., DNA, RNA, proteins) interact with chemically modified nanomaterials (e.g., carbon nanotubes, nanowires, or nanoparticles functionalized with DNA) that carry complementary probes (such as DNA, RNA, or antibodies) and are connected to electrodes (such as Au or Pt), and the resulting polarization changes are used to detect molecular and/or nanomaterial binding events. This specificity of complementary base pairing also makes it possible to perform thousands of hybridizations simultaneously in the same experiment on a DNA chip (also known as a DNA array).
[058] Molecular probes or capture molecules are immobilized on the surface of an individually addressable electrical sensor array by surface functionalization techniques. The array according to embodiments of the present invention may be a DNA array (a collection of DNA probes on a shared base region) comprising a dense grid of dots (also referred to as elements or pads) arranged on a miniature support. Each dot can represent a different gene.
[059] Capture molecules or probes in a DNA chip typically hybridize with a complex RNA or cDNA target produced by making DNA copies of a complex mixture of RNA molecules derived from a particular cell type (the source). The composition of such a target reflects the levels of the individual RNA molecules in the source. The intensities of the signals generated by binding events at the DNA spots of the chip after hybridization between probe and target represent the relative expression levels of the genes of the source.
[060] The DNA chip can be used to analyze differential gene expression between samples (e.g., healthy tissue versus diseased tissue), to search for various specific genes (e.g., those related to infectious agents), or for gene polymorphism and expression analysis. In particular, the DNA chip can be used to study the expression of various genes associated with various diseases in order to find the causes of those diseases and enable accurate treatment.
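Paragraphs [059]-[060] tie spot signal intensity to relative gene expression. A common way to quantify this in microarray practice (offered here as an illustrative assumption, not the method prescribed by this disclosure) is the log2 ratio of each spot's intensity between two samples, as in the Python sketch below; the gene names and intensities are hypothetical.

```python
import numpy as np

# Hypothetical background-corrected spot intensities for the same genes
# measured in two sources (e.g., healthy vs. diseased tissue).
genes = ["geneA", "geneB", "geneC"]
healthy = np.array([1500.0, 300.0, 800.0])
diseased = np.array([700.0, 2400.0, 820.0])

# Relative expression is commonly summarized as a log2 ratio:
# +1 means two-fold up in diseased tissue, -1 means two-fold down.
log2_ratio = np.log2(diseased / healthy)

for name, ratio in zip(genes, log2_ratio):
    call = "up" if ratio > 1 else "down" if ratio < -1 else "unchanged"
    print(f"{name}: log2 ratio = {ratio:+.2f} ({call})")
```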
[061] With embodiments of the present invention, one can look for a specific nucleic acid segment of a gene, that is, a site with a specific base sequence in the gene being examined. This detection can be performed using a diagnostic polynucleotide: a synthetically assembled short single-stranded polynucleotide with a complementary base sequence that binds (hybridizes) to the specific segment of the nucleic acid through A-T or G-C bonds.
[062] Unless otherwise stated, embodiments of the present invention can be implemented using conventional techniques of microelectronics, nanotechnology, organic chemistry, polymer technology, molecular biology (including recombinant techniques), cell biology, biochemistry, and immunology, all of which are within the skill of the art. Such conventional techniques include polymer array synthesis, immunoassay, hybridization, ligation, detection of molecules such as antibodies, and detection of hybridization using labels. For specific descriptions of suitable techniques, refer to the following examples. However, other equivalent conventional procedures can of course also be used.
[063] One embodiment of the invention relates to a device for detection and analysis. The device includes an electrical sensor that changes one of its characteristics when a biomolecule is introduced nearby. According to embodiments of the present invention, any sensor whose characteristics, such as the sub-threshold slope, change in response to a change in the charge of the environment near the sensor can be used as the electrical sensor.
[064] In specific embodiments, the electrical sensor includes an electromagnetic sensor, a transistor, a resistance sensor, an electric power sensor, a magnetic sensor, a voltage sensor, or a current sensor. Certain detection systems can be used as part or all of the electrical sensor, including ohmmeters, multimeters, galvanometers, electricity meters, leaf electroscopes, voltmeters, watt-hour meters, magnetic compasses, fluxgate compasses, and magnetometers.
[065] In a further embodiment, the electrical sensor is a field effect transistor (FET). As discussed herein, FETs are transistors that rely on an electric field to control the conductance of a "channel" in a semiconductor material. FETs typically have three terminals, called the gate, drain, and source. The voltage applied between the gate terminal and the source terminal regulates the current between the source terminal and the drain terminal. In other words, in a FET, current flows along a semiconductor path called the channel. At one end of the channel is an electrode called the source; at the other end is an electrode called the drain. Although the physical diameter of the channel is fixed for a given FET, its effective electrical diameter can be varied by applying a voltage to a control electrode called the gate.
[066] In one embodiment, biomolecules can be directed near the gate region of the transistor, so that the charge of the biomolecules generates a voltage between the gate and source of the FET, thus producing a current between the source and drain of the FET. As described herein, the mode of operation and the current strength of the FET can be measured, making it possible to detect both the presence of the biomolecule and the strength of its charge.
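A back-of-the-envelope version of the detection mechanism in paragraph [066]: bound charge shifts the effective gate voltage by dV = Q/C, and in the subthreshold regime the drain current responds exponentially, changing by a factor of 10^(dV/SS) for a subthreshold swing SS. The Python sketch below works through one such estimate; all parameter values are illustrative assumptions, not figures from this disclosure.

```python
E_CHARGE = 1.602e-19   # elementary charge, C

# Illustrative assumptions: 100 molecules, each carrying ~5 elementary
# charges, over a gate with 1 fF effective capacitance.
n_molecules = 100
charges_per_molecule = 5
gate_capacitance = 1e-15   # F

# Bound charge acts like an added gate bias: dV = Q / C.
delta_v = n_molecules * charges_per_molecule * E_CHARGE / gate_capacitance

# In subthreshold operation the current changes by one decade per SS volts
# of gate swing; 80 mV/decade is a plausible value for such a device.
subthreshold_swing = 0.080   # V/decade
current_ratio = 10 ** (delta_v / subthreshold_swing)

print(f"Effective gate-voltage shift: {delta_v*1e3:.1f} mV")
print(f"Subthreshold current changes by a factor of {current_ratio:.1f}")
```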
[067] Many types of field effect transistors can be used in embodiments of the present invention, including junction FETs (JFETs) and metal oxide semiconductor FETs (MOSFETs), with N-type semiconductor (N-channel) or P-type semiconductor (P-channel) operation. In specific embodiments, the FET is a MOSFET, a JFET, a metal semiconductor FET (MESFET), or a high electron mobility FET (HEMFET).
[068] In another embodiment of the present invention, various nanomaterials can be used in the field effect transistor, especially as the channel between source and drain, to enhance sensitivity and selectivity. In specific embodiments, the FET includes nanowires, nanocrystals, or nanotubes, such as single-walled or multi-walled nanotubes. The FET may also include nanopillars, nanogaps, and patterned nanostructures.
[069] In the described embodiments of the invention, the analyte includes any compound, molecule, or aggregate of interest for detection or analysis. Non-limiting examples of analytes include: antibodies, proteins, peptides, receptors, antigens, DNA, RNA, polynucleotides, nucleic acids, sugars, lipids, bacteria, macromolecules, allergens, polysaccharides, glycoproteins, growth factors, cytokines, hormones, metabolites, cofactors, inhibitors, drugs, agents, poisons, explosives, pesticides, nutrients, toxins, chemical warfare agents, biological warfare agents, biohazardous agents, pathogens, prions, radioisotopes, vitamins, carcinogens, mutagens, narcotics, heterocyclic aromatics, amphetamines, barbiturates, psychedelics, waste products, and pollutants.
[070] In one embodiment of the invention, the analyte comprises a biomolecule. More specifically, analytes include antigens, antibodies, proteins, peptides, viruses, DNA, RNA, polynucleotides, nucleic acids, sugars, lipids, bacteria, and macromolecules.
[071] In the described embodiments of the invention, the change in charge generated by introducing a biomolecule near the sensor includes electrical disturbance, impedance, current, voltage, or light-induced charge separation caused by the change in charge. The disturbance can be sensed by the electrical sensor in the form of a current, a potential, an impedance, or a field effect.
[072] In this regard, embodiments of the invention may enable real-time detection of changes in charge caused by molecular events, such as the biomolecular interactions described below. In certain embodiments of the invention, the detection of electrical disturbance, impedance, current, voltage, or light-induced charge separation by the FET is distance-dependent. Specifically, when the distance between the biomolecule and the surface of the FET is in the nanometer (nm) range, the sensitivity of detection may depend on that distance. In some cases, biomolecules far from the sensor may not be detected. Therefore, in specific embodiments of the invention, the distance between the biomolecule and the sensor may be less than 10 microns, preferably less than 1 micron, more preferably less than 1000 nanometers (nm), and most preferably less than 100 nm.
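One physically motivated way to see why the detection described in paragraph [072] is strongest within nanometers of the surface (a standard consideration for FET biosensors, offered here as an assumption rather than a statement from this disclosure) is ionic screening: in an electrolyte, a bound charge is screened over roughly the Debye length, lambda_D = sqrt(eps * kB * T / (2 * NA * e^2 * I)). The Python sketch below evaluates lambda_D for a few assumed ionic strengths.

```python
import math

# Physical constants
KB = 1.380649e-23        # Boltzmann constant, J/K
E = 1.602176634e-19      # elementary charge, C
NA = 6.02214076e23       # Avogadro's number, 1/mol
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m
EPS_R_WATER = 78.5       # relative permittivity of water (approx.)

def debye_length_nm(ionic_strength_molar, temp_k=298.0):
    """Debye screening length in water for a 1:1 electrolyte, in nm.
    lambda_D = sqrt(eps*kB*T / (2*NA*e^2*I)), with I converted to mol/m^3."""
    ionic = ionic_strength_molar * 1000.0  # mol/L -> mol/m^3
    lam = math.sqrt(EPS_R_WATER * EPS0 * KB * temp_k / (2 * NA * E**2 * ionic))
    return lam * 1e9

# Charges bound farther from the gate than a few Debye lengths are largely
# screened, so dilute buffers extend the sensing range of an FET biosensor.
for conc in (0.001, 0.01, 0.15):   # mol/L; 0.15 M ~ physiological saline
    print(f"I = {conc:g} M -> Debye length ~ {debye_length_nm(conc):.2f} nm")
```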
[073] According to another embodiment of the invention, the device or electrical sensor is part of another device, such as an integrated circuit. Thus, in one embodiment, the electrical sensor is connected to a substrate, which may include a polymer, silicon, or glass. In another embodiment, the substrate includes a microarray, a macroarray, a multiwell plate, a microfluidic device, an integrated circuit, a MEMS, or a combination thereof. The substrate may also include a microprocessor capable of processing the signals or data detected by the electrical sensors.
[074] In embodiments of the present invention, specific materials that can be used as the substrate include, but are not limited to, polystyrene, polydimethylsiloxane (PDMS), silicon, glass, chemically functionalized glass, polymer-coated glass, nitrocellulose-coated glass, uncoated glass, quartz, natural hydrogels, synthetic hydrogels, plastics, metals, and ceramics. The substrate may include any platform or device currently used to perform immunoassays or DNA or protein microarray analysis. Thus, the substrate may include a microarray or macroarray, a multiwell plate, a microfluidic device, an integrated circuit, a MEMS, or a combination thereof. In addition, the substrate need not be flat and may include beads, particles, or other shaped objects.
[075] In another embodiment of the invention, the substrate includes a microprocessor that contains software or hardware to process the signals or data from the device or electrical sensor. For example, phase/intensity information generated by a sensor as an electrical signal can be read into the microprocessor and converted into data, such as the type and amount of a particular analyte detected.
[076] In another embodiment of the invention, the substrate includes a platform or device on which a chemical or biological analysis is performed. Specifically, the substrate may include a device for performing an immunoassay, such as an ELISA assay, in which an antibody/antigen/antibody sandwich-type binding has been formed. The substrate may also include a DNA microarray assay in which a sandwich-type capture molecule/target DNA/probe binding has been formed. Thus, devices and assays according to embodiments of the present invention may be part of a larger device or method in which multiple sequential procedures are performed.
[077] In embodiments of the present invention, one or more microfluidic channels may be used as part of the substrate, and they may belong to integrated devices such as integrated circuits, microfluidic devices, or MEMS. Microfluidic channels and their integrated devices can be manufactured using techniques known to those skilled in the art or the methods disclosed herein. For example, microfluidic channels can be fabricated in polydimethylsiloxane using soft lithography. With these techniques, patterns with critical dimensions as small as 30 nm can be produced. These techniques use a transparent, elastomeric polydimethylsiloxane (PDMS) "stamp" with a patterned relief on its surface to create features. The stamp can be made by casting a prepolymer on a master patterned using conventional photolithographic techniques, or on other suitable masters. Several different technologies of this kind are collectively referred to as soft lithography.
[078] The technologies used also include the microfabrication of silicon for microelectromechanical systems (MEMS) and the patterning of quartz and of thermoplastics. Unlike conventional photolithography, these techniques can produce features on curved and reflective substrates and can rapidly pattern large areas. The above techniques can be used to pattern a wide variety of materials, including metals and polymers. These methods complement and expand existing nanolithography technology and provide a new approach to high-quality patterns and structures with feature sizes of about 30 nm.
[079] Standard lithography on silicon wafers or silica glass can also be used to make devices according to embodiments of the present invention. Chambers or channels can be used in the device, and fluid flow can be controlled by pressure gradients, electric field gradients, gravity, thermal gradients, and the like. Labels or label-conjugated molecules can be separated in single- or multi-chamber planar devices whose surfaces are modified with a polymer, such as a polyethylene glycol (PEG)-based compound, that minimizes non-specific binding.
[080] Embodiments of the invention also include a device for detecting an analyte, which includes an array of electrical sensors and a connected complex.
Specifically, the device includes an array of electrical sensors and a complex connected to a surface of at least a portion of each of the electrical sensors. In this embodiment, the complex contains a label that can produce a change in charge when exposed to radiation, and the connected electrical sensor can detect the change in charge.[081]Therefore, according to this embodiment, the device includes an array of pre-designed electrical sensors (such as field effect transistors). In a specific embodiment, at least a portion of the electrical sensors are individually addressable. In other words, the type, location, and electrical connection of individual sensors are known and controlled as needed. This embodiment enables simultaneous multiplexed detection and analysis of analytes.[082]An embodiment of the present invention relates to a device including: a first substrate including a transistor; a second substrate; an insulating layer located between and adjacent to the first substrate and the second substrate; and an opening located in the second substrate, the opening aligned with the transistor; wherein the transistor is configured to detect a change in charge in the opening. The preferred transistor is a field effect transistor (FET). Preferably, the FET is a metal oxide semiconductor FET (MOSFET), a junction FET (JFET), a metal semiconductor FET (MESFET), or a high electron mobility transistor (HEMT). Preferably, the FET contains nanowires, nanocrystals, nanotubes, nanopillars, nanoslots, or patterned nanostructures. Preferably, the FET contains single-walled carbon nanotubes. Preferably, the first substrate and the second substrate independently comprise a polymer, silicon, or glass. Preferably, the first substrate and the second substrate independently include a microarray, a macroarray, a multiwell plate, a microfluidic device, an integrated circuit, a MEMS, or a combination thereof.[083]The device may further include a microprocessor capable of processing signals or data generated by the transistor. Preferably the first substrate is connected to a support substrate, said connection being through an adhesive layer. Preferably, the first substrate is substantially flat and has a thickness of about 10 nm to about 1.0 mm. Preferably, the insulating layer contains silicon oxide. The thickness of the insulating layer is preferably about 5.0 nm to about 100 nm. Preferably, the second substrate is substantially flat and has a thickness of about 0.5 μm to about 10 mm. Preferably, the opening passes through the thickness of the second substrate. Preferably, the space occupied by the opening is a cube, a cylinder, a prism, or a frustum. The size of the opening is preferably about 10 nm to about 5 μm. Preferably the transistor is a FET and the opening is aligned with the channel region of the FET. Preferred charge changes include electrical disturbances, impedance, current, voltage, or light-induced charge separation. Preferably the inner surface of the opening is functionalized to facilitate molecular binding. Preferably, the change in charge is caused by a molecular binding event on or near the interior surface of the opening. Preferably, the molecular binding event includes the binding of a first binding partner to the inner surface of the opening and the binding of a second binding partner to the first binding partner. Preferably the first or second binding partner comprises a biomolecule.
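The 5-100 nm insulating-layer thickness given above directly sets how strongly charge in the opening couples to the FET channel. Below is a minimal Python sketch using a parallel-plate capacitance model; treating the layer as ideal silicon oxide, and the per-area formulation itself, are simplifying assumptions for illustration only.

EPS0 = 8.854e-12   # vacuum permittivity, F/m
EPS_SIO2 = 3.9     # relative permittivity of silicon oxide

def oxide_cap_per_area(thickness_nm):
    # Parallel-plate capacitance per unit area (F/m^2) of the insulating layer:
    # thinner oxide -> larger capacitance -> stronger charge-to-channel coupling.
    return EPS0 * EPS_SIO2 / (thickness_nm * 1e-9)

for t_nm in (5.0, 20.0, 50.0, 100.0):   # the preferred thickness range above
    print(f"t = {t_nm:>5.0f} nm -> C = {oxide_cap_per_area(t_nm) * 1e3:.2f} mF/m^2")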
Preferably the first binding partner comprises an antibody, antigen, receptor, ligand, protein, peptide, virus, bacterium, carbohydrate, lipid, polynucleotide, nucleic acid, or macromolecule. Preferably the second binding partner comprises an antigen, antibody, protein, peptide, virus, bacterium, carbohydrate, lipid, polynucleotide, nucleic acid, or macromolecule. Preferably the second binding partner comprises an antigen and the first binding partner comprises an antibody against the antigen. Preferably the second binding partner comprises a peptide and the first binding partner comprises a receptor or ligand for the peptide. Preferably the second binding partner comprises a first polynucleotide and the first binding partner comprises a polynucleotide complementary to the first polynucleotide.[084]Another embodiment of the present invention is directed to a method including: providing a substrate including a first portion, a second portion, and an insulating layer positioned between and adjacent to the first and second portions; fabricating a transistor on the first portion; and fabricating an opening in the second portion, the opening being aligned with the transistor; wherein the transistor is configured to detect a change in charge within the opening. Preferably, the substrate comprises a silicon wafer. Preferably, the providing of the substrate includes implanting oxygen ions into a predetermined region of the substrate to create an insulating layer that separates the substrate into a first portion and a second portion. Alternatively, the providing of the substrate includes providing a first substrate and a second substrate; oxidizing the surfaces of the first substrate and the second substrate; and bonding the first substrate and the second substrate through the oxidized surfaces; wherein the first substrate forms the first portion, the second substrate forms the second portion, and the bonded oxidized surfaces form the insulating layer.[085]In a variant, the method described above may further include attaching a support substrate to the first substrate. Preferably, the attachment is performed by bonding. The above method may further include independently fabricating a microarray, a macroarray, a multiwell plate, a microfluidic device, an integrated circuit, a MEMS, or a combination thereof on the first portion and the second portion. The above method may further include fabricating a microprocessor on the first or second portion, the microprocessor being capable of processing signals or data generated by the transistor. The above method may further include thinning the first portion or the second portion. Preferably the first portion is substantially flat and has a thickness of about 10 nm to about 1.0 mm. The thickness of the insulating layer is preferably about 5.0 nm to about 100 nm. Preferably the second portion is substantially flat and has a thickness of about 1.0 μm to about 10 mm. Preferably, the opening passes through the thickness of the second portion. Preferably the transistor is a FET and the opening is aligned with the channel region of the FET.
In one embodiment, the above method may further include functionalizing the inner surface of the opening to facilitate molecular binding.[086]A further embodiment of the invention relates to a method comprising: providing a device comprising a first substrate including a transistor, a second substrate, an insulating layer located between and adjacent to the first and second substrates, and an opening in the second substrate, the opening being aligned with the transistor; providing an analyte on or near the inner surface of the opening; and using the transistor to detect a change in charge on or near the inner surface of the opening. The method may further include processing signals or data generated by the transistor. Preferred charge changes include electrical disturbances, impedance, current, voltage, or light-induced charge separation. The inner surface of the opening is preferably functionalized to facilitate molecular binding. Preferably, the change in charge is caused by a molecular binding event on or near the inner surface of the opening. The method may further include fixing a binding partner to the inner surface of the opening. Preferably, the providing of the analyte includes binding the analyte to the binding partner. Preferably the binding partner or analyte comprises a biomolecule. Preferably, the binding partner comprises an antibody, antigen, receptor, ligand, protein, peptide, virus, bacterium, carbohydrate, lipid, polynucleotide, nucleic acid, or macromolecule. Preferably, the analyte comprises an antigen, antibody, protein, peptide, virus, bacterium, carbohydrate, lipid, polynucleotide, nucleic acid, or macromolecule. Preferably the analyte comprises an antigen and the binding partner comprises an antibody against the antigen. Preferably the analyte comprises a peptide and the binding partner comprises a receptor or ligand for the peptide. Preferably the analyte comprises a first polynucleotide and the binding partner comprises a polynucleotide complementary to the first polynucleotide.[087]Other embodiments of the present invention relate to a device including a first substrate, a second substrate, and an insulating layer located between and adjacent to the first substrate and the second substrate; wherein the first substrate contains an array of transistors, and the second substrate contains an array of openings, each opening in at least a portion of the openings being aligned with one of the transistors. It is preferred that each of at least a portion of the transistors is capable of detecting a change in charge in the opening aligned with that transistor. Preferably at least a portion of the transistors are field effect transistors (FETs). Preferably at least a portion of the transistors are individually addressable. Preferably, at least a portion of the inner surfaces of the openings are functionalized to facilitate molecular binding. Preferably, the inner surface of each of at least a portion of the openings has more than one binding partner bound thereto. Preferably, the binding partners that bind to one opening comprise the same molecule. It is preferred that at least two of the binding partners that bind to one opening comprise different molecules. It is preferred that the binding partners that bind to at least two of the openings comprise the same molecule.
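The detection flow of paragraph [086] (functionalize the inner surface, bind the analyte, detect the charge change) can be sketched as a small program. This is a toy model only: the class, the string-based capture rule, and the linear charge-to-current transduction are illustrative assumptions, not the disclosed device.

class OpeningSensor:
    """Toy model of one opening/transistor pair from the described device."""

    def __init__(self, baseline_current=1.0, gain=0.05):
        self.baseline_current = baseline_current  # arbitrary units
        self.gain = gain                          # current change per unit bound charge
        self.binding_partner = None
        self.bound_charge = 0.0

    def functionalize(self, binding_partner):
        # Fix a binding partner to the inner surface of the opening.
        self.binding_partner = binding_partner

    def add_analyte(self, analyte, charge):
        # Only an analyte recognized by the binding partner stays near the surface.
        if self.binding_partner and self.binding_partner in analyte:
            self.bound_charge += charge

    def read_current(self):
        # The transistor senses the charge change as a shift in current.
        return self.baseline_current + self.gain * self.bound_charge

sensor = OpeningSensor()
sensor.functionalize("anti-X antibody")
sensor.add_analyte("antigen X captured by anti-X antibody", charge=-3.0)
print(f"I_read = {sensor.read_current():.3f} (baseline 1.000)")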
Preferably, the binding partners that bind to at least two of the openings comprise different molecules.[088]Other embodiments of the present invention are directed to a device comprising a substrate having a front surface and a back surface, a sensor node array on a first side of the substrate, and via openings through a thickness of the substrate, at least some of the sensor nodes being functionalized with probe molecules from the back of the substrate through the via openings. The device may also include peripheral logic on the starting wafer for column and row selection to access specific sensor nodes. The device may further include a readout circuit including CMOS logic, the CMOS logic having a sense amplifier to detect a change in current when an analyte molecule is added to the via opening.[089]Embodiments of the present invention will now be explained by the following examples.[090]Examples[091]Example 1: A sensor according to an embodiment of the present invention[092]A sensor according to an embodiment of the present invention is shown in FIG. 2, which shows an SOI FET (silicon-on-insulator field effect transistor) used as a biochemical sensor. Figure 2a shows a plan view of a logic transistor, illustrating SOI FETs used in microprocessor applications. SOI FETs feature an ultra-thin silicon film placed on top of a buried oxide; the thickness of the Si film is scaled with the gate length of the transistor: the smaller the gate length, the thinner the Si film required to maintain electrostatic control of the channel. Figure 2b shows that SOI devices can be used as sensors by bringing biochemicals into contact with the channel region of the transistor. Changes in the biochemicals change the transport characteristics of the transistor device, which can therefore be used as a sensor. Figure 2c illustrates an SOI FET sensor manufactured using three-dimensional wafer stacking technology.[093]Figure 2c shows the final device used as an SOI FET sensor. The processing of the MOSFET transistor is first completed on an SOI substrate. Thereafter, it is flip-chip mounted and bonded to a handle wafer for mechanical support. The SOI wafer substrate is then thinned from the top to a thickness ranging from 0.5 microns to 5 microns. After a patterning step, via openings are made in the thinned substrate and etched down to the buried oxide to produce through-silicon vias. In some cases, a timed wet etch can be used to further thin the buried oxide layer.[094]The wafer with the SOI device can be connected to the handle wafer with a bonding layer, as shown in FIG. 3. The device wafer is then thinned to a few microns, and through-silicon vias (openings) can then be fabricated. Biomolecules can be added to the vias, where they can change the characteristics of the device. Therefore, the semiconductor device of embodiments of the present invention can be used as a sensor.[095]Referring to FIG. 3, transistors and other devices are fabricated on a first silicon-containing substrate, which is located on an insulating layer (such as an oxide layer), the insulating layer being located on a second silicon-containing substrate. A dummy bonding layer made of copper blocks is fabricated on top of the insulating layer or the first silicon-containing substrate. Bonding occurs at the dielectric passivation layers of both the active SOI wafer and the handle wafer.
A handle wafer with a similar bonding layer (a copper layer) is precisely aligned and bonded (for example, by thermocompression bonding) to the first silicon-containing substrate; the handle wafer provides mechanical support during grinding and deep etch processing. After bonding, the second silicon-containing substrate, which is the SOI substrate wafer containing the active transistors, is thinned to a few microns by mechanical grinding or etching. Through-silicon vias corresponding to the transistor devices are fabricated by etching. The silicon is selectively etched until it reaches the buried oxide layer. At this point, the buried oxide layer can be further thinned to a thickness of 50-500 Angstroms. This layer serves as the back gate oxide for the transistor channel. The biomolecule to be measured can be added to the via, and the charge associated with the biomolecule will change the transport characteristics of the sensor.[096]Example 2: Effect of backside substrate bias on channel transmission for a sensor according to an embodiment of the present invention[097]Figure 4 (top) shows a schematic view of a thin-body SOI device that can be used in a sensor according to an embodiment of the present invention. Figure 4 (bottom) shows the effect of backside substrate bias on the channel transmission of a P-type transistor device. Figure 4 shows that the substrate bias voltage can couple to the channel and change the device's threshold voltage, which affects the drive current.[098]Example 3: Microarray with sensors according to an embodiment of the present invention[099]FIG. 5 shows steps in a method of manufacturing a microarray with sensors according to an embodiment of the present invention. The sensor of Figure 3 constitutes a node in the node array. At least some (preferably each) of these sensor nodes can be functionalized with probe molecules having unique characteristics through the backside through-silicon via openings. The density of the array is determined by the backside via size and spacing, and by alignment tolerances. In the SOI starting wafer, peripheral logic for row and column selection formed by CMOS logic technology is used to access specific sensor nodes; the readout circuit can also be implemented in CMOS logic, using a sense amplifier to detect the change in current (I_read) over time when analyte molecules are added to the via openings.[0100]Example 4: Method for detecting and analyzing biomolecules according to embodiments of the present invention[0101]Figure 6 shows a method for detecting and analyzing biomolecules using a microarray with sensors according to an embodiment of the present invention. Methods for sample preparation, hybridization, direct on-chip detection, and digital readout and analysis are standard techniques well known to those skilled in the art.[0102]This application discloses several numerical ranges that support any range within the disclosed numerical range, even if a precise range limitation is not stated verbatim in the specification, as embodiments of the present invention can be practiced throughout the disclosed numerical ranges. In addition, the entire disclosures of patents and publications, if any, cited in this application are incorporated herein by reference in their entirety. |
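The back-gate coupling of Example 2 can be approximated with a first-order capacitor-divider model: the front threshold voltage shifts by roughly -(C_back/C_front) per volt of back bias. The Python sketch below is illustrative only; the 2 nm front-oxide thickness and the neglect of body and interface capacitances are assumptions, not values from this disclosure.

EPS0, EPS_SIO2 = 8.854e-12, 3.9

def cap_per_area(t_nm):
    # Parallel-plate oxide capacitance per unit area, F/m^2.
    return EPS0 * EPS_SIO2 / (t_nm * 1e-9)

def front_vt_shift(dv_back, t_front_nm=2.0, t_back_nm=20.0):
    # First-order thin-body SOI coupling: dVt ~= -(C_back / C_front) * dV_back.
    return -(cap_per_area(t_back_nm) / cap_per_area(t_front_nm)) * dv_back

# Thinning the buried oxide toward the 50-500 Angstrom (5-50 nm) range in the
# text strengthens how much a backside bias (or bound charge) moves Vt.
for t_box_nm in (50.0, 20.0, 5.0):
    print(f"t_box = {t_box_nm:>4.0f} nm -> dVt = {front_vt_shift(1.0, t_back_nm=t_box_nm):+.3f} V per volt of back bias")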
Described herein are microelectronics packages and methods for manufacturing the same. The microelectronics package may include a transmitter, a receiver, and a package stiffening element. The package stiffening element may be in electrical communication with the transmitter and the receiver. The package stiffening element may be configured to act as an antenna for both the transmitter and the receiver. |
1. A microelectronic package comprising: a transmitter; a receiver; and a package stiffening element in electrical communication with the transmitter and the receiver, the package stiffening element being configured to act as an antenna for both the transmitter and the receiver.2. The microelectronic package of claim 1, wherein a conventional antenna is not included in the microelectronic package.3. The microelectronic package of any of claims 1 and 2, wherein the package stiffening element supports a plurality of communication types.4. The microelectronic package of any of claims 1-3, wherein the package stiffening element forms a dipole antenna, a monopole antenna, an array loop antenna, a patch antenna, or a hybrid antenna.5. The microelectronic package of any of claims 1-4, wherein the package stiffening element is placed along a perimeter of the microelectronic package.6. The microelectronic package of any of claims 1-5, wherein the package stiffening element at least partially surrounds the transmitter, the receiver, and one or more dies forming the microelectronic package.7. The microelectronic package of any of claims 1-6, wherein the package stiffening element is bonded to the microelectronic package via a non-conductive adhesive.8. The microelectronic package of any of claims 1-7, further comprising an electromagnetic interference shield surrounding the transmitter and the receiver, the package stiffening element surrounding at least a portion of the electromagnetic interference shield.9. The microelectronic package of any of claims 1-8, wherein the package stiffening element has a resonant frequency of approximately 1000 MHz.10. The microelectronic package of any of claims 1-9, wherein the package stiffening element has a peak realized gain of about 10 dB.11. A microelectronic package comprising: a ground plane; a plurality of dies electrically coupled to the ground plane; a transceiver; and a package stiffening element located proximate the ground plane and in electrical communication with the transceiver, the package stiffening element being configured to function as an antenna for the transceiver.12. The microelectronic package of claim 11, wherein a conventional antenna is not included in the microelectronic package.13. The microelectronic package of any of claims 11 and 12, wherein the package stiffening element supports a plurality of communication types.14. The microelectronic package of any of claims 11-13, further comprising an electromagnetic interference shield surrounding the transceiver and the plurality of dies, the package stiffening element surrounding at least part of the electromagnetic interference shield.15. The microelectronic package of any of claims 11-14, wherein the package stiffening element comprises a plurality of sections, each of the plurality of sections supporting a different type of communication.16. The microelectronic package of any of claims 11-15, wherein the package stiffening element extends beyond a perimeter of the ground plane.17. A method of fabricating a microelectronic package, the method comprising: attaching a plurality of dies to a substrate; tuning a package stiffening element to have a predetermined resonant frequency; attaching the package stiffening element to the substrate; and electrically coupling the package stiffening element to at least one of the plurality of dies such that the package stiffening element is an antenna for both a transmitter and a receiver.18. The method of claim 17 further comprising forming
a ground plane proximate the plurality of dies, wherein the package stiffening element is adjacent a perimeter of the ground plane.19. The method of claim 17 further comprising forming a ground plane proximate the plurality of dies, wherein the package stiffening element extends beyond a perimeter of the ground plane.20. The method of any of claims 17-19, wherein tuning the package stiffening element to have the predetermined resonant frequency comprises tuning the package stiffening element to have a resonant frequency of approximately 1000 MHz.21. The method of any of claims 17-20, wherein tuning the package stiffening element to have the predetermined resonant frequency comprises tuning the package stiffening element to have a peak realized gain of about 10 dB. |
Integrated stiffener antenna on the packageTechnical FieldThe embodiments described herein relate generally to microelectronic packaging and microelectronic package fabrication. Some embodiments relate to the use of package stiffening elements as antennas.BackgroundA microelectronic package can include a transmitter, a receiver, or a transceiver. The transmitter, receiver, or transceiver can transmit and receive electromagnetic waves during operation. Electromagnetic waves may allow a device in which the microelectronic package is contained to communicate wirelessly with other devices.DRAWINGSIn the drawings, which are not necessarily to scale, like reference numerals may describe similar components in different views. Like reference numerals with different letter suffixes may indicate different examples of similar components. The drawings illustrate various embodiments discussed herein by way of example and not limitation.FIG. 1 illustrates a microelectronic package in accordance with embodiments disclosed herein.FIG. 2A illustrates a stiffening element in accordance with embodiments disclosed herein.Figure 2B shows return loss versus frequency for the stiffening element of Figure 2A.Figure 2C shows peak gain versus frequency for the stiffening element of Figure 2A.FIG. 3A illustrates a stiffening element in accordance with embodiments disclosed herein.Figure 3B shows return loss versus frequency for the stiffening element of Figure 3A.Figure 3C shows peak gain versus frequency for the stiffening element of Figure 3A.FIG. 4A illustrates a stiffening element in accordance with embodiments disclosed herein.Figure 4B shows return loss versus frequency for the stiffening element of Figure 4A.Figure 4C shows peak gain versus frequency for the stiffening element of Figure 4A.FIGS. 5A and 5B illustrate a microelectronic package in accordance with embodiments disclosed herein.FIGS. 6A and 6B illustrate a microelectronic package in accordance with embodiments disclosed herein.FIG. 7 illustrates an exemplary method for fabricating a microelectronic package in accordance with embodiments disclosed herein.FIG. 8 illustrates an exemplary process flow for fabricating a microelectronic package in accordance with embodiments disclosed herein.Figure 9A shows a radiation pattern of a stiffening element in accordance with embodiments disclosed herein.Figure 9B illustrates a return loss profile of a stiffening element in accordance with embodiments disclosed herein.FIG. 10A illustrates a single magnetic loop antenna in accordance with embodiments disclosed herein.FIG. 10B illustrates a radiation pattern of the single magnetic loop antenna of FIG. 10A in accordance with embodiments disclosed herein.FIG. 10C illustrates a return loss profile of the single magnetic loop antenna of FIG. 10A in accordance with embodiments disclosed herein.FIG. 11 shows an exemplary schematic diagram of a computing device in accordance with embodiments disclosed herein.Detailed DescriptionAs disclosed herein, picture-frame stiffening elements or stiffeners can be used in microelectronic packages. The stiffening element can be made of metal, at a cost of roughly 15-20 cents per stiffener.
The stiffener can be used to strengthen the package to prevent structural warpage during use and during surface mount technology (SMT) reflow.As disclosed herein, an on-package integrated stiffener (oPiS) antenna can be used to add antenna properties to the frame stiffener for use as an integrated wireless antenna. For example, the stiffener can be formed into monopole, array, loop, patch, hybrid, and other on-package antennas. The oPiS antenna can be used as a solution for wireless applications such as, but not limited to, 5G, Intel WiGig, Intel WiDi, wireless charging, and the like.Turning now to the drawings, Figure 1 illustrates a microelectronic package 100 in accordance with embodiments disclosed herein. Microelectronic package 100 may include a radio frequency transmitter and/or receiver (collectively referred to as transceiver 102), a platform controller hub (PCH) 104, and a central processing unit (CPU) 106. Other components such as memory dies, additional processing dies, and the like may also be included in the microelectronic package 100. Transceiver 102, PCH 104, CPU 106, and other components of package 100 may be surrounded by an electromagnetic interference (EMI) shield 110 that shields them from the stiffening element 108. The EMI shield 110 can be joined to various components of the package 100.The stiffening element 108 can act as an antenna for the transceiver 102. For example, as disclosed herein, the stiffening element 108 can be configured to function as an antenna, and conventional antennas can be excluded from the package 100. As used herein, the term conventional antenna refers to the separate antenna elements that those skilled in the art will understand to be generally included with a package for wireless communication.As disclosed herein, the stiffening element 108 can be fabricated from a conductive material such as a metal. The stiffening element 108 can be configured or tuned to have a predetermined resonant frequency and gain. For example, the stiffening element 108 can be trimmed and shaped to have an electrical length that produces a resonant frequency of approximately 1000 MHz and a peak realized gain of approximately 10 dB.The stiffening element 108 can include a slit 112 that acts as an antenna feed. For example, the slit 112 may be part of a radiating slit forming the magnetic loop antenna shown in FIG. 2A, the electric dipole design shown in FIG. 3A, or the magnetic loop design shown in FIG. 4A.As disclosed herein, the stiffening element 108 can be optimized or tuned to achieve a desired resonant frequency, loss, and gain. For example, Figures 2B, 3B, and 4B illustrate the return loss of the antenna designs shown in Figures 2A, 3A, and 4A, respectively. In addition to being tuned to achieve the desired resonant frequency, loss, and gain, the stiffening element 108 can also be fabricated to include antenna components.For example, by using the full perimeter length of the stiffener element 200 shown in FIG. 2A, the resonant frequency and peak realized gain of the stiffener element 200 can be tuned as low as 1.00 GHz, as shown in Figures 2B and 2C, respectively. As shown in FIG. 2A, the stiffening element 200 can include a single feed point 202.The stiffener element 300 shown in FIG. 3A can include a first unit 302 and a second unit 304 having feed points 306 and 308, discussed further after the following sketch.
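As a rough cross-check of the full-perimeter tuning just described, a resonant loop's perimeter is commonly approximated as one guided wavelength. The Python sketch below uses that rule of thumb; the rule itself and the sample effective permittivities are assumptions for illustration, not figures from this disclosure.

C0 = 299_792_458.0   # speed of light in vacuum, m/s

def loop_perimeter_mm(f_hz, eps_eff=1.0):
    # Rule of thumb: a full-wave loop resonates when its perimeter is about
    # one guided wavelength, lambda_g = c / (f * sqrt(eps_eff)).
    return C0 / (f_hz * eps_eff ** 0.5) * 1e3

for eps_eff in (1.0, 3.5):   # air vs. an assumed package dielectric
    p = loop_perimeter_mm(1.00e9, eps_eff)
    print(f"eps_eff = {eps_eff}: ~{p:.0f} mm perimeter for a 1.00 GHz loop")

Either way, a low-GHz resonance plausibly requires most of a package-scale frame perimeter, which is consistent with using the full stiffener loop.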
By having two cells, each with half the perimeter length of the stiffener element 300 shown in Figure 3A, the resonant frequency of the stiffener element 300 can be tuned to a higher frequency. In the example shown in Figure 3A, the resonant frequency and appreciable peak realized gain begin at 4.20 GHz, as shown in Figures 3B and 3C. Although FIG. 3A shows the stiffening element 300 with units 302 and 304 each having a half-perimeter length (equal lengths), the perimeter lengths of unit 302 and unit 304 may be different. For example, the first unit 302 can have a quarter-perimeter length and the second unit 304 can have a three-quarter-perimeter length.To further optimize antenna radiation with better input impedance matching, the stiffening element 400 shown in Figure 4A can have a magnetic loop antenna design. Feed points 402 and 404 of stiffening element 400 are located on outer rings 406 and 408 of first section 410 and second section 412. The configuration shown in Figure 4A can widen the bandwidth of the resonant frequency and peak realized gain, as shown in Figures 4B and 4C.FIGS. 5A and 5B show cross-sectional views of microelectronic packages 500 and 550 of embodiments disclosed herein. Microelectronic packages 500 and 550 can include a substrate 502. Substrate 502 can be attached to a printed circuit board (PCB) (not shown) via one or more solder joints 504. Substrate 502 may also include one or more vias containing one or more feed lines 506. Feed lines 506 can be used to electrically couple stiffening element 508 to silicon component 510 as disclosed herein. Silicon component 510 can be a component of transceiver 102 or can be electrically coupled to transceiver 102.As disclosed herein, the stiffening element 508 can include a positive stiffening element 508+ and a negative stiffening element 508-. As disclosed herein, positive stiffening element 508+ and negative stiffening element 508- can be tuned. A conductive adhesive 512 can be used to secure the stiffening element 508 to the feed lines 506. Conductive adhesive 512 may land on solder resist layer 514 and contact the feed lines 506 through one or more solder resist openings 516. Silicon component 510 can be coupled to feed lines 506 via one or more solder bumps 518. An underfill 520 can be used to further secure the silicon component 510 to the solder resist layer 514.As shown in FIG. 5A, the stiffening elements 508 can be tuned and trimmed such that the sidewalls 522 and 524 are flush with the sidewalls 526 and 528 of the microelectronic package 500. As shown in FIG. 5B, the stiffening elements 508 can be tuned and trimmed such that the sidewalls 522 and 524 extend beyond the sidewalls 526 and 528 of the microelectronic package 550.Stiffening elements 508 that extend beyond the sidewalls 526 and 528 provide a larger metal area and thus allow further configuration of the electrical length of the antenna formed by the stiffening element 508. For example, a larger metal area may allow the stiffening element 508 to act as an antenna designed to operate at a lower frequency (i.e., a longer wavelength, requiring a longer electrical length to resonate). Moreover, the stiffening element 508 can be constructed using only the extensions (i.e., the portions that extend beyond the sidewalls 526 and 528) to resonate at a higher frequency for the same stiffener length.
This is because the propagation medium below the extension is air rather than the package substrate; air exhibits a higher propagation velocity than the package substrate (i.e., a longer wavelength at a given frequency, so that a given physical length represents a shorter electrical length and resonates at a higher frequency). In addition, other dielectric materials may be used to fill the region beneath the extension as another variable for configuring the antenna electrical characteristics of the stiffening element 508; a numerical sketch of this air-versus-substrate effect appears after the description of FIG. 6A below.FIGS. 6A and 6B show cross-sectional views of a microelectronic package 600 of embodiments disclosed herein. Microelectronic package 600 can include a substrate 602. Substrate 602 can be attached to a printed circuit board (PCB) (not shown) via one or more solder joints 604. Substrate 602 may also include one or more vias including one or more feed lines 606. Feed lines 606 can be used to electrically couple stiffening element 608 to silicon component 610 as disclosed herein. Silicon component 610 can be a component of transceiver 102 or can be electrically coupled to transceiver 102.As disclosed herein, the stiffening element 608 can include a positive stiffening element 608+ and a negative stiffening element 608-. As disclosed herein, positive stiffening element 608+ and negative stiffening element 608- can be tuned. Conductive adhesive 612 can be used to secure stiffening element 608 to feed lines 606. Conductive adhesive 612 may land on solder resist layer 614 and contact the feed lines 606 through one or more solder resist openings 616. Silicon component 610 can be coupled to feed lines 606 via one or more solder bumps 618. An underfill 620 can be used to further secure the silicon component 610 to the solder resist layer 614.Silicon component 610 can be surrounded by an EMI shield 630 on one or more sides. The EMI shield 630 can shield the silicon component 610 from electromagnetic interference caused by electromagnetic waves generated or received by the stiffening element 608. Additionally, the EMI shield 630 can shield the stiffening element 608 from electromagnetic interference from electromagnetic fields generated by the silicon component 610.To further shield the silicon component 610 and the stiffening element 608 from EMI, a metal layer 632 can be formed between the stiffening element 608 and the substrate 602. The stiffening element 608 can be secured to the metal layer 632 via a non-conductive adhesive 634. Metal layer 632 can be secured to solder resist layer 614 via conductive adhesive 636. Conductive adhesive 636 can extend through one or more vias 638 and connect metal layer 632 to ground plane 640. Thus, the metal layer 632 can be grounded. Metal layer 632 can isolate stiffening element 608 from surface routing interference for EMC mitigation.The EMI shield 630 can be used to prevent electromagnetic interference from the silicon dies coupled to the stiffening elements from causing unintentional radiation. The EMI shield 630 can be grounded, either directly to the ground plane 640 or to any other ground reference.As shown in FIG. 6A, the stiffening element 608 can be tuned and trimmed such that the sidewalls 622 and 624 are flush with the sidewalls 626 and 628 of the microelectronic package 600.
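Here is the air-versus-substrate sketch promised above, using the guided-wavelength relation lambda_g = c/(f*sqrt(eps_eff)). In this Python snippet the 75 mm section length and the substrate permittivity of 3.5 are illustrative assumptions, not values from this disclosure.

C0 = 299_792_458.0   # speed of light in vacuum, m/s

def resonant_freq_ghz(length_mm, eps_eff, fraction=1.0):
    # Frequency at which a fixed physical length spans `fraction` of a guided
    # wavelength; a lower eps_eff (air) pushes resonance to a higher frequency.
    return C0 / ((length_mm * 1e-3 / fraction) * eps_eff ** 0.5) / 1e9

length_mm = 75.0   # assumed stiffener section length
for medium, eps_eff in (("package substrate", 3.5), ("air-backed extension", 1.0)):
    print(f"{medium:>20}: {length_mm} mm resonates near {resonant_freq_ghz(length_mm, eps_eff):.2f} GHz")

The same physical length resonates nearly twice as high when backed by air, which is the behavior the extension-only construction exploits.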
As shown in FIG. 6B, the stiffening element 608 can be tuned and trimmed such that the sidewalls 622 and 624 extend beyond the sidewalls 626 and 628 of the microelectronic package 600.As disclosed herein, stiffening elements such as stiffening elements 108, 508, and 608 can be formed as antennas for use on microelectronic packages such as microelectronic packages 100, 500, and 600. In other words, the stiffening elements disclosed herein can replace conventional antennas, allowing microelectronic packages to be constructed that enable wireless communication without the need for separate antenna elements.FIG. 7 illustrates an exemplary method 700 for fabricating a microelectronic package. Method 700 can begin at stage 702, where one or more dies can be attached to a substrate. The dies can be attached to one or more routing layers as is known in the art.From stage 702, method 700 can proceed to stage 704, where a metal layer can be formed. As disclosed herein, the metal layer can be formed such that the package stiffening element will be located proximate to the metal layer, so that the metal layer acts as a shield against electromagnetic interference. Moreover, as disclosed herein, forming the metal layer can include electrically coupling the metal layer to a ground plane.From stage 704, method 700 can proceed to stage 706, at which stage the package stiffening element can be formed. As disclosed herein, forming the package stiffening element can include forming a slit in the metal frame. Forming the package stiffening element can further include forming any number of antenna types. For example, the package stiffening element can be formed as a monopole antenna, an array loop antenna, a patch antenna, a magnetic loop antenna, or a hybrid antenna.From stage 706, method 700 can proceed to stage 708, at which stage the package stiffening element can be tuned. For example, as disclosed herein, the package stiffening element can be trimmed or tuned to have a resonant frequency of approximately 1000 MHz. The package stiffening element can also be tuned to have a peak realized gain of approximately 10 dB. Additionally, the package stiffening element can be tuned to have a perimeter that is less than about 10% of the wavelength of the received electromagnetic wave (a quick check of this criterion is sketched below).From stage 708, method 700 can proceed to stage 710, at which stage the EMI shield can be attached. As disclosed herein, an EMI shield can cover a die, a transmitter, a receiver, a transceiver, and the like. Additionally, the EMI shield can be electrically coupled to the metal layer or ground plane. The EMI shield can be formed via a sputtering technique. Alternatively, the EMI shield can be pre-formed and then bonded to the die or substrate.From stage 710, method 700 can proceed to stage 712, where the package stiffening element can be attached. For example, the package stiffening element can be attached to the microelectronic package and coupled to one or more of the dies such that the package stiffening element is configured to act as an antenna for a transmitter, receiver, or transceiver. As disclosed herein, the package stiffening element can be used as an antenna such that a conventional antenna does not need to be attached to the microelectronic package.Although the various stages of method 700 have been described in a given order in this disclosure, the various stages of method 700 are not necessarily performed in the order described herein.
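The stage-708 criterion referenced above (perimeter below about 10% of the received wavelength) can be verified with a few lines of Python. A minimal sketch; the free-space wavelength formula is standard, but the sample perimeter values are illustrative assumptions.

C0 = 299_792_458.0   # speed of light in vacuum, m/s

def is_electrically_small(perimeter_mm, f_hz, max_fraction=0.10):
    # Stage 708 criterion: perimeter < ~10% of the free-space wavelength.
    wavelength_mm = C0 / f_hz * 1e3
    return perimeter_mm < max_fraction * wavelength_mm

# At 1000 MHz the free-space wavelength is ~300 mm, so the limit is ~30 mm.
print(is_electrically_small(25.0, 1.00e9))   # True
print(is_electrically_small(40.0, 1.00e9))   # False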
For example, the package stiffening element can be attached to the microelectronic package prior to attachment of the EMI shield. Additionally, the package stiffening element can be attached before the dies are attached to the substrate.FIG. 8 shows a process flow 800 for fabricating a microelectronic package. Process flow 800 can begin with a post-assembly package 802, and a package stiffening element 804 can be received from a manufacturer. The assembled package 802 can be fabricated using known techniques and includes routing layers, ground planes, metal layers, substrates, dies, etc., as disclosed herein. The package stiffening element 804 can be received from a stiffener component manufacturer. The stiffener component manufacturer can fabricate the package stiffening element 804 as disclosed herein such that the package stiffening element is tuned to act as an antenna.Once the assembled package 802 and package stiffening element 804 are received, the package stiffening element 804 can be attached to the assembled package 802. For example, as disclosed herein, a conductive or non-conductive adhesive can be used to attach portions of the package stiffening element 804 to the assembled package 802.Once the package stiffening element 804 is attached to the assembled package 802, the EMI shield 806 can be attached to the assembled package 802. The EMI shield 806 can also be attached by the manufacturer of the assembled package 802.FIGS. 9A and 9B show a radiation pattern 902 and a graph 904 showing the return loss of a single package having a single stiffener element (i.e., a magnetic loop antenna formed by the stiffening element 200) that has not been placed on a printed circuit board. In contrast, as shown in FIG. 10A, even when a single package 1002 having a single stiffener element 1004 (i.e., a magnetic loop antenna formed by the stiffening element 200) is soldered to a large printed circuit board 1006, the radiation pattern 1008 remains in the upward-facing direction and the resonant frequencies remain the same despite the large ground plane of the printed circuit board, as shown in Figures 10B and 10C.Figure 11 shows a system level diagram in accordance with one embodiment of the present invention. For example, Figure 11 depicts an example of an electronic device (e.g., a system) including an electronic package disclosed herein. Figure 11 is included to illustrate an example of a higher-level device application of the present invention. In one embodiment, system 1100 includes, but is not limited to, a desktop computer, a laptop, a netbook, a tablet, a notebook computer, a personal digital assistant (PDA), a server, a workstation, a cellular telephone, a mobile computing device, a smart phone, an Internet appliance, or any other type of computing device. In some embodiments, system 1100 is a system on a chip ("SoC") system.In one embodiment, processor 1110 has one or more processing cores 1112 and 1112N, where 1112N represents the Nth processor core within processor 1110, where N is a positive integer. In one embodiment, system 1100 includes a plurality of processors, including processors 1110 and 1105, wherein processor 1105 has logic similar or equivalent to that of processor 1110. In some embodiments, processing core 1112 includes, but is not limited to, prefetch logic for fetching instructions, decode logic for decoding instructions, execution logic for executing instructions, and the like.
In some embodiments, processor 1110 has a cache 1116 to cache instructions and/or data of system 1100. The cache 1116 can be organized into a hierarchy that includes one or more levels of cache.In some embodiments, the processor 1110 includes a memory controller 1114 operable to perform functions that enable the processor 1110 to access and communicate with a memory 1130, which includes a volatile memory 1132 and/or a non-volatile memory 1134. In some embodiments, processor 1110 is coupled to memory 1130 and chipset 1120. The processor 1110 can also be coupled to a wireless antenna 1178 to communicate with any device configured to transmit and/or receive wireless signals. In one embodiment, the wireless antenna interface 1178 operates according to, but is not limited to, the IEEE 802.11 standard and its related family, Home Plug AV (HPAV), Ultra Wide Band (UWB), Bluetooth, WiMAX, or any form of wireless communication protocol. Moreover, as disclosed herein, the wireless antenna interface 1178 can be a stiffening element as disclosed herein.In some embodiments, volatile memory 1132 includes, but is not limited to, synchronous dynamic random access memory (SDRAM), dynamic random access memory (DRAM), RAMBUS dynamic random access memory (RDRAM), and/or any other type of random access memory. Non-volatile memory 1134 includes, but is not limited to, flash memory, phase change memory (PCM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), or any other type of non-volatile memory device.Memory 1130 stores information and instructions to be executed by processor 1110. In one embodiment, the memory 1130 may also store temporary variables or other intermediate information while the processor 1110 executes instructions. In the illustrated embodiment, chipset 1120 is coupled to processor 1110 via point-to-point (PtP or P-P) interfaces 1117 and 1122. Chipset 1120 enables processor 1110 to connect to other components in system 1100. In some embodiments of the invention, interfaces 1117 and 1122 operate in accordance with a PtP communication protocol (e.g., QuickPath Interconnect (QPI), etc.). In other embodiments, different interconnects can be used.In some embodiments, chipset 1120 is operable to communicate with processors 1110 and 1105, display device 1140, and other devices 1172, 1176, 1174, 1160, 1162, 1164, 1166, 1177, and the like. Chipset 1120 can also be coupled to wireless antenna 1178 to communicate with any device configured to transmit and/or receive wireless signals. Moreover, as disclosed herein, the wireless antenna interface 1178 can be a stiffening element as disclosed herein.Chipset 1120 is coupled to display device 1140 via interface 1126. Display 1140 can be, for example, a liquid crystal display (LCD), a plasma display, a cathode ray tube (CRT) display, or any other form of visual display device. In some embodiments of the invention, processor 1110 and chipset 1120 are incorporated into a single SoC. In addition, chipset 1120 is coupled to one or more buses 1150 and 1155 that interconnect various components 1174, 1160, 1162, 1164, and 1166. Buses 1150 and 1155 can be interconnected via bus bridge 1172.
In one embodiment, chipset 1120 is coupled to non-volatile memory 1160, mass storage device 1162, keyboard/mouse 1164, and network interface 1166 via interfaces 1124 and/or 1104, as well as to smart television 1176, consumer electronics 1177, and the like.In one embodiment, mass storage device 1162 includes, but is not limited to, a solid state drive, a hard drive, a universal serial bus flash drive, or any other form of computer data storage medium. In one embodiment, network interface 1166 is implemented by any type of well-known network interface standard including, but not limited to, an Ethernet interface, a universal serial bus (USB) interface, a peripheral component interconnect (PCI) Express interface, a wireless interface, and/or any other suitable type of interface. In one embodiment, the wireless interface operates according to, but is not limited to, the IEEE 802.11 standard and its related family, Home Plug AV (HPAV), Ultra Wide Band (UWB), Bluetooth, WiMAX, or any form of wireless communication protocol. Moreover, as disclosed herein, the wireless interface can be a stiffening element as disclosed herein.Although the modules shown in FIG. 11 are depicted as separate blocks within system 1100, the functions performed by some of these blocks may be integrated into a single semiconductor circuit, or may be implemented using two or more separate integrated circuits. For example, although cache 1116 is depicted as a separate block within processor 1110, cache 1116 (or selected aspects of 1116) may be incorporated into processor core 1112.Additional notes and examples:Example 1 is a microelectronic package comprising: a transmitter; a receiver; and a package stiffening element in electrical communication with the transmitter and the receiver, the package stiffening element configured to act as an antenna for both the transmitter and the receiver.In Example 2, the subject matter of Example 1 optionally includes, wherein a conventional antenna is not included in the microelectronic package.In Example 3, the subject matter of any one or more of Examples 1-2 optionally includes, wherein the package stiffening element supports a plurality of communication types.In Example 4, the subject matter of any one or more of Examples 1-3 optionally includes, wherein the transmitter and the receiver are components of a transceiver.In Example 5, the subject matter of any one or more of Examples 1-4 optionally includes, wherein the package stiffening element forms a magnetic loop antenna.In Example 6, the subject matter of Example 5 optionally includes, wherein the package stiffening element has a perimeter that is less than about 10% of the wavelength of the received electromagnetic wave.In Example 7, the subject matter of any one or more of Examples 1-6 optionally includes, wherein the package stiffening element forms a dipole antenna, a monopole antenna, an array loop antenna, a patch antenna, or a hybrid antenna.In Example 8, the subject matter of any one or more of Examples 1-7 optionally includes, wherein the package stiffening element is placed along a perimeter of the microelectronic package.In Example 9, the subject matter of any one or more of Examples 1-8 optionally includes, wherein the package stiffening element at least partially surrounds the transmitter, the receiver, and one or more dies forming the microelectronic package.In Example 10, the subject matter of any one or more of Examples 1-9 optionally includes, wherein the package stiffening element is bonded to the microelectronic package via a non-conductive adhesive.In Example 11, the subject matter of any one or more of
Examples 1-10 optionally includes an electromagnetic interference shield surrounding the transmitter and the receiver, the package stiffening element surrounding at least part of the electromagnetic interference shield.In Example 12, the subject matter of any one or more of Examples 1-11 optionally includes, wherein the package stiffening element has a resonant frequency of approximately 1000 MHz.In Example 13, the subject matter of any one or more of Examples 1-12 optionally includes, wherein the package stiffening element has a peak realized gain of about 10 dB.In Example 14, the subject matter of any one or more of Examples 1-13 optionally includes, wherein the package stiffening element comprises a first portion and a second portion, the first portion being in electrical communication with the transmitter and the second portion being in electrical communication with the receiver.In Example 15, the subject matter of any one or more of Examples 1-14 optionally includes, wherein the package stiffening element comprises a plurality of portions, each of the plurality of portions supporting a different communication type.In Example 16, the subject matter of any one or more of Examples 1-15 optionally includes, wherein the package stiffening element extends beyond a ground plane of the microelectronic package.In Example 17, the subject matter of any one or more of Examples 1-16 optionally includes, wherein the package stiffening element forms a Wi-Fi antenna.In Example 18, the subject matter of any one or more of Examples 1-17 optionally includes, wherein the package stiffening element forms a Bluetooth antenna.In Example 19, the subject matter of any one or more of Examples 1-18 optionally includes, wherein the package stiffening element forms a near field communication antenna.In Example 20, the subject matter of any one or more of Examples 1-19 optionally includes, wherein the package stiffening element forms a cellular antenna.Example 21 is a microelectronic package comprising: a ground plane; a plurality of dies electrically coupled to the ground plane; a transceiver; and a package stiffening element located proximate the ground plane and in electrical communication with the transceiver, the package stiffening element being configured to act as an antenna of the transceiver.In Example 22, the subject matter of any one or more of Examples 1-21 optionally includes, wherein a conventional antenna is not included in the microelectronic package.In Example 23, the subject matter of any one or more of Examples 21-22 optionally includes, wherein the package stiffening element supports a plurality of communication types.In Example 24, the subject matter of any one or more of Examples 21-23 optionally includes, wherein the package stiffening element forms a magnetic loop antenna.In Example 25, the subject matter of Example 24 optionally includes, wherein the package stiffening element has a perimeter that is less than about 10% of the wavelength of the received electromagnetic wave.In Example 26, the subject matter of any one or more of Examples 21-25 optionally includes, wherein the package stiffening element forms a dipole antenna, a monopole antenna, an array loop antenna, a patch antenna, or a hybrid antenna.In Example 27, the subject matter of any one or more of Examples 21-26 optionally includes, wherein the package stiffening element is placed along a perimeter of the ground plane.In Example 28, the subject matter of any one or more of Examples 21-27 optionally includes, wherein the package stiffening element at least partially surrounds the transceiver and the plurality of dies.In Example 29, the subject matter of any one or more of Examples 21-28 optionally includes, wherein the package stiffening element is bonded to the microelectronic package
via a non-conductive adhesive.In Example 30, the subject matter of any one or more of Examples 21-29 optionally includes an electromagnetic interference shield surrounding the transceiver and the plurality of dies, the package stiffening element surrounding at least part of the electromagnetic interference shield.In Example 31, the subject matter of any one or more of Examples 21-30 optionally includes, wherein the package stiffening element has a resonant frequency of approximately 1000 MHz.In Example 32, the subject matter of any one or more of Examples 21-31 optionally includes, wherein the package stiffening element has a peak realized gain of about 10 dB.In Example 33, the subject matter of any one or more of Examples 21-32 optionally includes, wherein the package stiffening element comprises a first portion and a second portion, the first portion being in electrical communication with the transmitter and the second portion being in electrical communication with the receiver.In Example 34, the subject matter of any one or more of Examples 21-33 optionally includes, wherein the package stiffening element comprises a plurality of portions, each of the plurality of portions supporting a different communication type.In Example 35, the subject matter of any one or more of Examples 21-34 optionally includes, wherein the package stiffening element extends beyond a perimeter of the ground plane.In Example 36, the subject matter of any one or more of Examples 21-35 optionally includes, wherein the package stiffening element forms a Wi-Fi antenna.In Example 37, the subject matter of any one or more of Examples 21-36 optionally includes, wherein the package stiffening element forms a Bluetooth antenna.In Example 38, the subject matter of any one or more of Examples 21-37 optionally includes, wherein the package stiffening element forms a near field communication antenna.In Example 39, the subject matter of any one or more of Examples 21-38 optionally includes, wherein the package stiffening element forms a cellular antenna.Example 40 is a method of fabricating a microelectronic package, the method comprising: attaching a plurality of dies to a substrate; tuning a package stiffening element to have a predetermined resonant frequency; attaching the package stiffening element to the substrate; and electrically coupling the package stiffening element to at least one of the plurality of dies such that the package stiffening element is an antenna for both a transmitter and a receiver.In Example 41, the subject matter of Example 40 optionally includes forming a ground plane proximate the plurality of dies, wherein the package stiffening element is proximate a perimeter of the ground plane.In Example 42, the subject matter of any one or more of Examples 40-41 optionally includes forming a ground plane proximate the plurality of dies, wherein the package stiffening element extends beyond a perimeter of the ground plane.In Example 43, the subject matter of any one or more of Examples 40-42 optionally includes, wherein electrically coupling the transmitter to at least one of the plurality of dies comprises positioning a feed point proximate to the transmitter.In Example 44, the subject matter of any one or more of Examples 40-43 optionally includes, wherein tuning the package stiffening element comprises tuning the package stiffening element to have a resonant frequency of approximately 1000 MHz.In Example 45, the subject matter of any one or more of Examples 40-44 optionally includes, wherein tuning the package stiffening element comprises tuning the package stiffening element to have a peak realized gain of about 10 dB.In Example 46, the subject matter of any one or more of Examples 40-45 optionally includes forming an electromagnetic interference shield that at least partially surrounds the plurality of dies.In Example 47, the subject matter of Example 46 optionally includes, wherein attaching the package stiffening element
comprises attaching the package stiffening element such that the package stiffening element at least partially surrounds the electromagnetic interference shield.In Example 48, the subject matter of any one or more of Examples 40-47 optionally includes forming the package stiffening element to create a monopole antenna.In Example 49, the subject matter of any one or more of Examples 40-48 optionally includes forming the package stiffening element to create an array loop antenna.In Example 50, the subject matter of any one or more of Examples 40-49 optionally includes forming the package stiffening element to create a patch antenna.In Example 51, the subject matter of any one or more of Examples 40-50 optionally includes forming the package stiffening element to create a hybrid antenna.In Example 52, the subject matter of any one or more of Examples 40-51 optionally includes forming the package stiffening element to create a magnetic loop antenna.In Example 53, the subject matter of Example 52 optionally includes, wherein the package stiffening element has a perimeter that is less than about 10% of the wavelength of the received electromagnetic wave.Example 54 is a microelectronic package comprising: a module for transmitting a first electromagnetic wave, the transmitting module being attached to a substrate; a module for receiving a second electromagnetic wave, the receiving module being attached to the substrate; and a stiffening module in electrical communication with both the transmitting module and the receiving module, the stiffening module forming an antenna for the transmitting module and the receiving module.In Example 55, the subject matter of Example 54 optionally includes, wherein the microelectronic package does not include a conventional antenna.In Example 56, the subject matter of any one or more of Examples 54-55 optionally includes a module for shielding the stiffening module from electromagnetic radiation emitted from a die located proximate the stiffening module.In Example 57, the subject matter of any one or more of Examples 54-56 optionally includes a grounding module located proximate to the stiffening module.In Example 58, the subject matter of any one or more of Examples 54-57 optionally includes a grounding module, wherein the stiffening module extends beyond a perimeter of the grounding module.In Example 59, the subject matter of any one or more of Examples 54-58 optionally includes, wherein the stiffening module supports a plurality of communication types.In Example 60, the subject matter of any one or more of Examples 54-59 optionally includes, wherein the stiffening module forms a magnetic loop antenna, a dipole antenna, a monopole antenna, an array loop antenna, a patch antenna, or a hybrid antenna.In Example 61, the subject matter of any one or more of Examples 54-60 optionally includes, wherein the stiffening module is placed along a perimeter of the microelectronic package.In Example 62, the subject matter of any one or more of Examples 54-61 optionally includes, wherein the stiffening module at least partially surrounds the transmitting module, the receiving module, and one or more dies forming the microelectronic package.In Example 63, the subject matter of any one or more of Examples 54-62 optionally includes, wherein the stiffening module is bonded to the microelectronic package via a non-conductive adhesive.In Example 64, the subject matter of any one or more of Examples 54-63 optionally includes, wherein the stiffening module has a resonant frequency of approximately 1000 MHz.In Example 65, the subject matter of any one or more of Examples 54-64 optionally includes, wherein the stiffening module has a peak realized gain of about 10 dB.In Example
66, the subject matter of any one or more of Examples 54-65 optionally includes, wherein the reinforcement module includes a first portion and a second portion, the first portion being in electrical communication with the transmitting module and the second portion being in electrical communication with the receiving module.In Example 67, the subject matter of any one or more of Examples 54-66 optionally includes, wherein the reinforcement module includes a plurality of portions, each of the plurality of portions supporting a different communication type.The above detailed description includes references to the accompanying drawings, which form a part of the detailed description. The drawings show, by way of illustration, specific embodiments of the invention. These embodiments are referred to herein as "examples." Such examples may include elements in addition to those shown or described. However, examples including only the elements shown or described are also contemplated. Moreover, examples using any combination or permutation of those elements shown or described (or one or more aspects thereof), either with respect to a particular example (or one or more aspects thereof) or with respect to other examples (or one or more aspects thereof) shown or described herein, are also contemplated.The publications, patents, and patent documents referred to herein are incorporated by reference in their entirety, as though individually incorporated by reference. In the event of inconsistent usage between this document and those documents so incorporated by reference, the usage in the incorporated reference(s) is supplementary to that of this document; for irreconcilable inconsistencies, the usage in this document controls.In this document, the terms "a" or "an" are used to include one or more than one, independent of any other instances or usages of "at least one" or "one or more." In this document, the term "or" is used to refer to a nonexclusive or, such that "A or B" includes "A but not B," "B but not A," and "A and B," unless otherwise indicated. In the appended claims, the terms "including" and "in which" are used as the plain-English equivalents of the respective terms "comprising" and "wherein." Furthermore, in the following claims, the terms "including" and "comprising" are open-ended; that is, a system, device, article, or process that includes elements in addition to those listed after such a term in a claim is still deemed to fall within the scope of that claim. Moreover, in the following claims, the terms "first," "second," and "third," etc. are used merely as labels and are not intended to imply a numerical order of their objects.The above description is intended to be illustrative, and not restrictive. For example, the examples described above (or one or more aspects thereof) may be used in combination with other examples. Other embodiments may be utilized, such as by one of ordinary skill in the art after reviewing the above description. The Abstract allows the reader to quickly ascertain the nature of the technical disclosure; it is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. Moreover, in the above Detailed Description, various features may be grouped together to streamline the disclosure. However, the claims may not set forth every feature disclosed herein, as embodiments may feature a subset of said features. Further, embodiments may include fewer features than those disclosed in a particular example. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment. The scope of the embodiments disclosed herein is to be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.
An anti-malware approach uses a storage drive with the capability to lock selected memory areas. Platform assets such as OS objects are stored in the locked areas and thus cannot be changed by unauthorized entities; changes may be made only by a qualified user, such as an authorized anti-malware entity.
CLAIMS What is claimed is: 1. A computing platform, comprising: a storage drive having lock logic to lock a group of storage drive memory blocks containing platform assets to be protected, wherein the protected assets are not modifiable except by a qualified user having secret information to unlock the locked blocks through the lock logic. 2. The computing platform of claim 1, in which the platform assets include operating system (OS) modules. 3. The computing platform of claim 1, comprising a pre-boot authentication agent to authenticate a first user to access the storage drive for platform operation but not to be able to unlock the locked memory blocks. 4. The computing platform of claim 3, comprising a storage management agent to enable a second, qualified user to unlock the memory blocks and modify the platform assets. 5. The computing platform of claim 1, comprising a quarantine module to identify platform asset change requests from an unauthorized user and store the change request in a quarantine region of storage drive memory. 6. The computing platform of claim 5, in which the quarantine module returns a palatable message to the OS even though a change request is not to be implemented. 7. The computing platform of claim 1, in which the lock logic is Opal compliant. 8. A storage drive system, comprising: a storage drive with lock logic to lock portions of memory as read-only except to a user with a qualified password; and a memory storage medium having stored instructions for a driver for the storage drive, the driver including a pre-boot authentication agent to implement a shadow master boot record (MBR) operation and a storage management agent to enable a qualified user with the qualified password to make changes to OS assets stored in the locked memory portion. 9. The system of claim 8, in which the lock logic includes firmware in a controller for the storage drive. 10. The system of claim 8, in which the storage drive comprises a quarantine module to identify unauthorized changes to the locked assets and to return a response to an entity making the request, the response to not impede operation of a platform using the storage drive. 11. The system of claim 8, in which the storage drive is a solid state drive using flash memory as its storage medium. 12. A platform comprising: a chip having a processor to execute an operating system (OS), the OS having a storage driver including a storage management module to enable a qualified user to access locked sectors of memory in a storage drive, the locked sectors to store OS assets, the qualified user being able to make changes to the assets. 13. The platform of claim 12, in which the qualified user is a third-party anti-malware service. 14. The platform of claim 12, in which unauthorized change requests cause a message to be returned to the requesting entity, the message to not impair operation of the platform.
STORAGE DRIVE BASED ANTIMALWARE METHODS AND APPARATUSES BACKGROUND Malware (or malicious code) is a catch-all term used to refer to various types of software that can cause problems or damage a computer. It encompasses viruses, worms, Trojan horses, macro viruses, rootkit malware, and backdoors. Malware has evolved to be increasingly stealthy and targeted, in some cases hiding deep inside the core operating system by infecting kernel modules (e.g., rootkits). Rootkits, especially ones executing with Ring 0 privileges, are very difficult or impossible for current anti-virus solutions (AVS) to detect. For example, Ring 0 rootkits may feed incorrect information to anti-virus solutions and thereby disrupt their normal functioning. Accordingly, new approaches for protecting platforms against malware may be desired. BRIEF DESCRIPTION OF THE DRAWINGS Embodiments of the invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which like reference numerals refer to similar elements. Figure 1 is a block diagram of a storage drive with an anti-malware lock capability in accordance with some embodiments. Figure 2 is a diagram showing a platform with a lock capable storage drive based anti-malware solution in accordance with some embodiments. Figure 3 is a diagram of a portion of a more detailed embodiment of a lock capable, storage drive anti-malware approach for a computing platform in accordance with some embodiments. Figure 4 is a diagram showing different stages for implementing an anti-malware scheme for a computing platform in accordance with some embodiments. DETAILED DESCRIPTION Malware may attack the storage subsystem, including storage drive locations where core operating system modules such as registry hives, dynamic link library (DLL) modules, kernel modules, master boot record modules, etc. are stored. In some embodiments, certain storage drive locations (e.g., sectors, blocks, and/or ranges of the same) containing such modules are protected by the storage drive's control logic so that unauthorized updates are prohibited. Figure 1 is a block diagram showing a storage drive device 102 in accordance with some embodiments. It may be of any suitable non-volatile writeable technology such as a solid-state drive (SSD), magnetic hard disk drive (HDD), or thumb drive, to mention just a few. Storage drive 102 generally comprises read/writeable memory units (blocks or sectors) 110 and a storage drive controller 105 coupled between the memory and a host to write data from the host into the memory blocks and to read data from the memory blocks back to the host. The host typically corresponds to a platform operating system, whether for a computer with one or more processors or for a portable device with a system-on-chip processor. The storage drive controller 105 has lock logic 106 (e.g., implemented within firmware with available writeable memory) for providing locked memory blocks 112 that are only modifiable with an appropriate key or password to be evaluated by the lock logic 106. There are storage drives currently available that provide such a capability for selectively "locking" portions of the memory in this manner. For example, Opal is a standard defined by the Trusted Computing Group for providing storage drives with various security features, including the just described memory lock capabilities.
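The following is a minimal, purely illustrative sketch of the gating behavior just described; all names and structures are hypothetical, and this is not vendor firmware or the Opal command set. Lock logic such as lock logic 106 can be thought of as a check applied to every write against a table of locked block ranges, with the ranges re-locking automatically on a power cycle:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define MAX_LOCKED_RANGES 16

    struct locked_range {
        uint64_t first_lba;   /* first locked logical block address */
        uint64_t last_lba;    /* last locked logical block address  */
        bool     unlocked;    /* true only after a qualified user authenticates */
    };

    static struct locked_range ranges[MAX_LOCKED_RANGES];
    static size_t range_count;

    /* Returns true if a write touching [lba, lba + count) may proceed. */
    static bool write_permitted(uint64_t lba, uint32_t count)
    {
        uint64_t last = lba + count - 1;
        for (size_t i = 0; i < range_count; i++) {
            bool overlaps = lba <= ranges[i].last_lba &&
                            last >= ranges[i].first_lba;
            if (overlaps && !ranges[i].unlocked)
                return false;  /* deny: range is locked, caller not qualified */
        }
        return true;
    }

    /* On a power cycle, all ranges re-lock automatically. */
    static void on_power_cycle(void)
    {
        for (size_t i = 0; i < range_count; i++)
            ranges[i].unlocked = false;
    }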
With an anti-malware scheme in accordance with some embodiments, at least some OS modules are stored in locked memory blocks and are not allowed to be modified except by a valid user, such as an authorized trusted third-party entity with the appropriate password. Figure 2 is a block diagram of an exemplary platform having a storage drive with locked memory blocks for holding OS components and preventing them from being modified by unauthorized entities. The platform has a CPU chip 202 and a platform I/O chip 222 coupled together via a direct media interconnect (DMI) link via DMI interfaces 213/223. The platform also includes a storage drive 102 (e.g., a solid state drive in accordance with a drive of Figure 1) coupled to a storage drive host controller 226. The CPU chip 202 comprises one or more processor cores 203, a graphics processor 204, low level cache (LLC) 205, a memory controller 207, a display interface controller 209, and a PCI Express interface controller 211. (Cooperating devices such as memory, a display, network interfaces, and other peripheral devices that may be part of the depicted platform when in operation are not depicted for convenience, but nonetheless may be part of various different embodiments.) One or more of the cores 203 execute operating system software (OS space) 240. The OS software includes a storage driver 242 to facilitate data transfers between the platform and the storage drive 102. The storage driver 242 includes anti-malware block lock modules 244 for implementing an anti-malware scheme. These modules work in cooperation with the lock logic 106 to enable configuration and implementation of locked storage drive blocks that protect OS assets while, at the same time, allowing the system to run without appreciable encumbrance. The PIO chip 222 includes various peripheral device interfaces such as a USB2 interface 232, an audio interface 234, a PCIe interface 230, and a USB3 interface 225. It also includes a power management controller (PMC) 228 to manage power allocation and some of the power management policies for the platform. The PIO chip also includes a storage drive host controller 226 for controlling data transfers between the storage drive and the other parts of the platform. For example, the host controller 226 could utilize an AHCI or a SATA compliant controller. (The Advanced Host Controller Interface (AHCI) is a programming specification that defines the operation of Serial ATA host controllers (also known as host bus adapters) in a non-implementation-specific manner. The specification describes a system memory structure for computer hardware vendors to exchange data between host system memory and attached storage devices.) Figure 3 is a diagram of a more detailed implementation of a platform 300 with an anti-malware scheme in accordance with some embodiments. This depiction focuses on the host OS 240 and storage drive 102. Also shown is a trusted remote service 302 for configuring and updating the locked sector system. The remote service could correspond to a third-party anti-malware service or an information technology entity connected to the platform via a network link. The trusted remote service would have the appropriate password or key for configuring and updating locked sectors in the drive. (Note that the use of a remote third party is not necessary for implementing the invention. For example, the platform's user could have the information and be permitted to configure the storage drive and update locked sector information.)
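To make the runtime division of labor concrete before turning to Figure 3, the following is a minimal host-side sketch of the filtering and quarantine behavior described below. The function and type names are hypothetical stand-ins for real driver plumbing, not an actual driver API: writes to unprotected ranges pass through, while an unauthorized write to a protected range is recorded for later review and answered with a benign status so the OS is not left hanging.

    #include <stdbool.h>
    #include <stdint.h>

    enum io_status { IO_OK, IO_QUARANTINED };

    struct write_req {
        uint64_t    lba;
        uint32_t    block_count;
        const void *data;
    };

    /* Hypothetical stand-ins for real driver plumbing. */
    static bool lba_range_is_protected(uint64_t lba, uint32_t count)
    {
        (void)lba; (void)count;
        return false; /* a real driver would consult the table built by the SCA */
    }
    static void forward_to_drive(const struct write_req *req)   { (void)req; }
    static void copy_to_quarantine(const struct write_req *req) { (void)req; }

    static enum io_status filter_write(const struct write_req *req,
                                       bool caller_is_qualified)
    {
        if (!lba_range_is_protected(req->lba, req->block_count) ||
            caller_is_qualified) {
            forward_to_drive(req);   /* normal path */
            return IO_OK;
        }
        /* Unauthorized change to a locked asset: record it for later review,
         * then report a benign status so platform operation continues. */
        copy_to_quarantine(req);
        return IO_QUARANTINED;       /* mapped to an OS-palatable reply */
    }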
The OS space (or host OS) 240 has a pre-boot authentication agent (PBAA) 343, a storage configuration agent (SCA) 345, a storage management agent (SMA) 347, and a storage filter driver 349. The storage drive 102 has a drive controller 105 and non-volatile, writeable memory (e.g., magnetic, flash, etc.) 110. The storage drive controller 105 has lock logic 106 and a quarantine module 307. The memory, among other things, comprises a set of locked platform assets blocks 312 and quarantine data blocks 322 that are also locked. (The vast remaining portion of memory will typically not be locked but used in other platform capacities, such as facilitating application code and data space, file storage, etc.) The pre-boot authentication agent 343 is used to authenticate a user for normal platform operation. For example, the storage drive may use a self-encryption technique, and the pre-boot authentication agent allows the user to enable storage drive decryption, so that the drive can be used with the platform, by presenting to the controller 105 an appropriate storage drive password. (This would normally not be the same as the password or key used for the locking techniques described herein.) The storage drive controller (e.g., an Opal compliant drive controller) 105 has the ability to implement a shadow MBR (master boot record) to perform pre-boot authentication in cooperation with the pre-boot authentication agent 343. Traditionally (e.g., with non-self-encrypting drives), the master boot record is typically contained in the first sector of the platform's primary drive. The MBR identifies where the active partition is and then starts the boot program for the boot sector of that partition. The boot sector identifies where the operating system is located and enables the boot information to be loaded into the computer's main memory. On the other hand, in some embodiments with a shadow MBR implementation, after booting, when the BIOS (or equivalent) attempts to read the master boot record, the drive redirects the read to a shadow MBR, typically a Linux or MS-DOS kernel (depending on the implemented platform). Then, the user or administrator authenticates against an encrypted key hidden on the drive (e.g., in a memory sector or in writeable, non-volatile memory such as flash within the storage drive controller/firmware domain). This method (e.g., with an Opal compliant drive) may be used for encrypting data stored on the drive, as well as for locking selected storage drive blocks. The storage configuration agent (SCA) 345, through the storage filter driver 349, facilitates initial configuration of the lock parameters for the storage drive when the platform is known to be in a clean state, e.g., after it is initially manufactured. It may establish an initial lock/unlock password, and it also may define the memory blocks (312) to be locked for storing key platform assets (DLLs, registry values, etc.). The storage management agent (SMA) 347 allows the lock parameters to be modified over time by an appropriate user, e.g., a trusted third-party entity or the computer user itself, having the lock password. Through the SMA, the protected OS assets and the memory blocks where they are stored and locked may be changed. In some embodiments, these changes are stored in the quarantine section 322, and the changes are then actually implemented the next time the system boots, when it is assumed/known to be clean.
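The following is a minimal sketch of that boot-time step; the types, the digest-based approval check, and the helper functions are hypothetical illustrations of the flow rather than a required implementation. An agent in the role of the SMA replays only those quarantined changes that match an approved list and then re-locks the protected ranges:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    struct quarantined_change {
        uint64_t target_lba;
        uint8_t  digest[32];   /* e.g., a hash of the proposed content */
    };

    static bool change_is_approved(const struct quarantined_change *c,
                                   const struct quarantined_change *approved,
                                   size_t n_approved)
    {
        for (size_t i = 0; i < n_approved; i++)
            if (memcmp(c->digest, approved[i].digest, sizeof c->digest) == 0)
                return true;
        return false;
    }

    /* Hypothetical helpers standing in for the actual asset update and
     * lock-logic calls. */
    static void apply_change(const struct quarantined_change *c) { (void)c; }
    static void relock_protected_ranges(void) { }

    static void sma_process_quarantine(const struct quarantined_change *pending,
                                       size_t n_pending,
                                       const struct quarantined_change *approved,
                                       size_t n_approved)
    {
        for (size_t i = 0; i < n_pending; i++) {
            if (change_is_approved(&pending[i], approved, n_approved))
                apply_change(&pending[i]); /* legitimate update to a locked asset */
            /* disapproved entries are simply discarded */
        }
        relock_protected_ranges();         /* ranges return to read-only */
    }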
The quarantine module 307 receives the normal storage drive data transfer requests from the OS (by way of the storage filter driver 349) and forwards them to the storage drive controller if they are not write attempts implicating a locked section of memory. If a request to write or otherwise change data in a locked memory section comes into the drive (through the quarantine filter module 307), then it is diverted to the quarantine memory section 322 and evaluated later, to be effectuated at the next boot-up if deemed acceptable. At the same time, the quarantine module 307 generates responses and sends them back to the OS space so that it is not left hanging. The quarantine module generates appropriate responses to satisfy the OS and not unreasonably impede platform operation. Figure 4 shows different stages of an anti-malware scheme in accordance with some embodiments. At 402, the storage drive is provisioned. Next, at 404, the storage drive is configured for its specific platform host. At 406, pursuant to runtime operation, lock and quarantine strategies are implemented. At 408, the configuration is updated. Note that, at least in some cases, these are general stages that may be performed out of order, and they may not all be performed in the same session. The provisioning stage involves initially setting up a storage drive when the platform is known to be clean. In this stage, the lock logic is configured on the storage drive by authenticating to the administrative service provider (a trusted third-party entity or an actual platform user). Ideally, this will be performed by an enterprise IT, OEM, or point-of-sale person, e.g., offering an anti-virus-solution bundle; alternatively, it may be done by an experienced user. Initially, the locking functionality, e.g., in the lock logic, is enabled. This may be done, depending on vendor specific procedures, using special operation commands (e.g., negative logical block address writes, etc.) or other suitable procedures. Next, two lock-users (e.g., two Opal users) are created and authentication is set for these users. One user may be used to authenticate using a shadow MBR stage via the pre-boot authentication agent 343, and the other user may be qualified to get access to the locked memory. Either user may be used to then install the OS on the drive. Note that at this stage, the OS will be clean and have no infections. The configuration stage will now be discussed. The storage configuration agent (SCA) is executed to determine the sectors or logical block addresses (LBAs) associated with the OS objects and master boot record. At the end of the execution, a list of sectors/LBAs which need to be protected will have been generated. The locking range(s), which correspond to the sectors/LBAs discovered by the SCA, are then created in the lock logic 106. Next, another set of ranges is created for the quarantine data section. These ranges may be configurable and capacity dependent. The ranges are made "read-only" except to the qualified user, e.g., one of the users created in the provisioning stage. Next, the locks are set for power events; that is, in case of a power cycle, the lock logic (e.g., Opal firmware) should automatically lock the ranges. At this point, the drive is ready for use. The runtime stage will now be described. Assume that a malicious agent tries to override an OS object, which has been locked in a locked block area (312).
Since the user associated with the protected ranges has not authenticated the change to the object, the change is denied. (Note that this change request denial may even occur for the user of the platform, e.g., if he/she is trying to install an application. The user may not have the clearance and necessary password/key to make such changes via a storage management agent, although this user would likely have the storage drive encryption/decryption key to be able to otherwise use the drive with the platform, via a shadow MBR through the pre-boot authentication agent.) From here, the storage filter driver "wraps" the error (e.g., an ATA error) as an OS-palatable error. In some implementations, this means that it returns an error message or other message to the OS that will not cause it to hang or otherwise be unreasonably impaired. (Note that the identification of an unauthorized change to locked area 312 and the subsequent return of a palatable message may take place in the storage filter driver, the lock logic, the quarantine module, or some combination of some or all of the same.) In some embodiments, the quarantine module 307 detects the change request for a protected asset in locked area 312, writes the change request content (as if it would have been granted) into the locked quarantine area 322, and returns the palatable error message to the OS. The update stage will now be discussed. A qualified user reboots the system and authenticates through the pre-boot authentication agent 343. The OS (including the protected platform assets) is then loaded. These assets are clean because they have been protected as described above. The storage management agent (SMA) is launched. It is also clean and protected because it was, in most cases, stored as a protected module in the locked assets area 312. The SMA may then contact a trusted remote service (usually the same qualified user that was accepted to make the updates in the first place) to process approved updates. Alternatively, the SMA may authenticate the content (e.g., requested OS module changes) in the quarantined area 322 for legitimacy. (For example, it could compare the changes against a log or script of changes approved by the trusted third party, e.g., an IT administrator.) The SMA could also authenticate against the lock logic (e.g., Opal firmware) per the credentials from, e.g., an anti-malware remote server (ARS). The SMA can further receive the payload from the ARS or update the protected OS assets with contents from the quarantine area. The SMA logs out the qualified user and again confirms that the protected ranges are "read-only". From here, the SMA may exit, and the OS boot process continues. Note at this point that malicious code could execute on the platform, but the key OS assets are locked in protected sectors. In the preceding description and following claims, the following terms should be construed as follows: The terms "coupled" and "connected," along with their derivatives, may be used. It should be understood that these terms are not intended as synonyms for each other. Rather, in particular embodiments, "connected" is used to indicate that two or more elements are in direct physical or electrical contact with each other. "Coupled" is used to indicate that two or more elements co-operate or interact with each other, but they may or may not be in direct physical or electrical contact.
The invention is not limited to the embodiments described, but can be practiced with modification and alteration within the spirit and scope of the appended claims. For example, it should be appreciated that the present invention is applicable for use with all types of semiconductor integrated circuit ("IC") chips. Examples of these IC chips include but are not limited to processors, controllers, chip set components, programmable logic arrays (PLA), memory chips, network chips, and the like. It should also be appreciated that in some of the drawings, signal conductor lines are represented with lines. Some may be thicker, to indicate more constituent signal paths, have a number label, to indicate a number of constituent signal paths, and/or have arrows at one or more ends, to indicate primary information flow direction. This, however, should not be construed in a limiting manner. Rather, such added detail may be used in connection with one or more exemplary embodiments to facilitate easier understanding of a circuit. Any represented signal lines, whether or not having additional information, may actually comprise one or more signals that may travel in multiple directions and may be implemented with any suitable type of signal scheme, e.g., digital or analog lines implemented with differential pairs, optical fiber lines, and/or single-ended lines. It should be appreciated that example sizes/models/values/ranges may have been given, although the present invention is not limited to the same. As manufacturing techniques (e.g., photolithography) mature over time, it is expected that devices of smaller size could be manufactured. In addition, well-known power/ground connections to IC chips and other components may or may not be shown within the FIGS., for simplicity of illustration and discussion, and so as not to obscure the invention. Further, arrangements may be shown in block diagram form in order to avoid obscuring the invention, and also in view of the fact that specifics with respect to implementation of such block diagram arrangements are highly dependent upon the platform within which the present invention is to be implemented, i.e., such specifics should be well within purview of one skilled in the art. Where specific details (e.g., circuits) are set forth in order to describe example embodiments of the invention, it should be apparent to one skilled in the art that the invention can be practiced without, or with variation of, these specific details. The description is thus to be regarded as illustrative instead of limiting.
Apparatuses, methods, and storage medium associated with integrated packaging for a stack of semiconductor dice of different sizes are disclosed. In embodiments, an apparatus including dice of different sizes may include a first die having a first side and a second side opposite the first side, and a second, smaller die having a first side and a second side opposite the first side. The second side of the first die may be smaller than the first side of the second die and may be coupled thereto such that a portion of the first side of the second die is exposed. The apparatus may include wires coupled with and extending from the portion of the first side of the second die through a casing to a redistribution layer coupled with a side of the casing, to electrically couple the dice. Other embodiments may be disclosed and/or claimed.
1. An apparatus, comprising: a first die having a first side and a second side opposite the first side; a second die having a first side and a second side opposite the first side, the second side of the first die being smaller than the first side of the second die and coupled with the first side of the second die such that a portion of the first side of the second die is exposed; a first plurality of leads coupled to the portion of the first side of the second die and extending from the portion of the first side of the second die; a second plurality of leads coupled to the first side of the first die and extending from the first side of the first die; an outer casing surrounding the first plurality of leads and the second plurality of leads and covering the first side of the first die and the portion of the first side of the second die, wherein ends of the first plurality of leads and the second plurality of leads are exposed at a first side of the outer casing opposite a second side of the outer casing directly adjacent the first die and the second die; and a redistribution layer (RDL) coupled to the first side of the outer casing and electrically coupled to the first plurality of leads and the second plurality of leads. 2. The apparatus of claim 1, wherein the RDL electrically connects at least one of the first plurality of leads and at least one of the second plurality of leads. 3. The apparatus of claim 1, further comprising an external interface electrically coupled to the RDL. 4. The apparatus of claim 1, wherein the first side of the outer casing is smooth and the exposed ends of the first and second pluralities of leads are scored or polished. 5. The apparatus of claim 1, wherein the first plurality of leads are bonded to pads on the portion of the first side of the second die. 6. The apparatus of claim 1, wherein the second plurality of leads are bonded to pads on the first side of the first die. 7. The apparatus of claim 1, wherein the first plurality of leads are longer than the second plurality of leads. 8. The apparatus of claim 1, wherein end portions of the first plurality of leads and the second plurality of leads, including the ends of the first plurality of leads and the second plurality of leads, are perpendicular to the RDL. 9. The apparatus of claim 8, wherein end portions at the other ends of the first plurality of leads and the second plurality of leads are perpendicular to the first side of the first die and the first side of the second die, respectively. 10. The apparatus of any of claims 1-8, wherein the ends of the first plurality of leads and the second plurality of leads are coplanar with the first side of the outer casing. 11. A method, comprising: coupling a first die having a first side and a second side opposite the first side with a second die having a first side and a second side opposite the first side, wherein the second side of the first die is smaller than the first side of the second die and is coupled to the first side of the second die such that a portion of the first side of the second die is exposed; forming an outer casing surrounding a first plurality of leads extending from the portion of the first side of the second die and a second plurality of leads extending from the first side of the first die, the outer casing covering the first side of the first die and the portion of the first side of the second die, wherein ends of the first plurality of leads and the second plurality of leads are exposed at a first side of the outer casing opposite a second side of the outer casing directly adjacent the first die and the second die; and forming a redistribution layer (RDL) coupled to the
first side of the outer casing and electrically coupled to the first plurality of leads and the second plurality of leads. 12. The method of claim 11, further comprising: bonding the first plurality of leads to pads on the portion of the first side of the second die; and bonding the second plurality of leads to pads on the first side of the first die. 13. The method of claim 11, wherein forming the outer casing further comprises: covering the first side of the first die and the portion of the first side of the second die with a dielectric material; and removing a portion of the dielectric material to expose the ends of the first plurality of leads and the second plurality of leads. 14. The method of claim 13, further comprising grinding a surface of the dielectric material to remove the portion of the dielectric material. 15. The method of any of claims 11-13, further comprising forming solder balls on pads of the RDL. 16. A system, comprising: a first device having a single semiconductor package for a stack of different sized semiconductor dies of the first device, wherein a dimension of a first semiconductor die of the stack is greater than a dimension of a second semiconductor die of the stack adjacent the first semiconductor die in the stack, such that the second semiconductor die exposes a surface of the first semiconductor die, and wherein the single semiconductor package includes a housing surrounding leads extending from at least the exposed surface of the first semiconductor die to a plane coplanar with an end of the stack; and a second device coupled to at least one of the leads via an external interface of the semiconductor package. 17. The system of claim 16, wherein the external interface secures the first device to a circuit board of the second device. 18. The system of claim 16, wherein at least one of the leads provides an electrical connection between the first semiconductor die and the second semiconductor die. 19. The system of claim 16, wherein at least one of the leads provides an electrical connection between the external interface and at least one of the first semiconductor die or the second semiconductor die. 20. The system of any of claims 16-19, wherein the housing encloses a lead extending from a surface of the second semiconductor die. 21. An apparatus, comprising: a module for encapsulating sides of leads extending from a surface of a first semiconductor die of a stack of semiconductor dies of different sizes, wherein the surface is exposed by a second semiconductor die that is smaller than the first semiconductor die and adjacent the first semiconductor die in the stack; and a module for connecting an exposed end of at least one of the leads to an exposed end of at least one lead extending from an exposed surface of the second semiconductor die. 22. The apparatus of claim 21, further comprising a module for attaching an external electrical interface of the apparatus to a circuit board. 23. The apparatus of claim 21, wherein the encapsulation module exposes ends of the leads. 24. The apparatus of claim 23, wherein the ends of the leads and the encapsulation module are coplanar. 25. The apparatus of any of claims 21-24, further comprising a module for attaching the leads to the exposed surfaces of the first semiconductor die and the second semiconductor die.
MULTI-CHIP PACKAGE FOR DIFFERENT SIZED DIE RELATED APPLICATION This application claims priority to U.S. Application Serial No. 15/197,494, filed on Jun. 29. TECHNICAL FIELD The present disclosure relates to techniques for multi-chip packaging for different sized die. BACKGROUND The background description provided herein is intended to generally present the context of the present disclosure. The materials described in this section are not prior art to the claims in this application and are not admitted to be prior art by inclusion in this section, unless otherwise indicated herein. Flip chip and/or TSV (through silicon via) technology can be used in multi-chip packages (MCP). However, to avoid design complexity and/or costs associated with flip chip and/or TSV technology, or for other reasons, wire bonding may be preferred. In a typical wire bonding example, a multi-chip package (MCP) can utilize a printed circuit board (PCB). Bonding fingers on the PCB can be placed around one or more chips on the PCB to provide contact points for connections to external devices. The leads may be suspended from the surface of the chip, which is raised relative to the surface of the PCB, down toward the bonding fingers. As the bond density (the number of bond wires) increases, the required headroom between the suspended leads (to avoid short circuits) may become infeasible, and/or the size of the bonding fingers of the PCB may need to be large so that the leads can be attached to the substrate, which can affect the x-y dimensions of the package. Moreover, in a multi-level case (i.e., a stack of dies), it may not be possible to bond pads on the level furthest from the PCB before bonding all of the pads on the level near the PCB. BRIEF DESCRIPTION OF THE DRAWINGS The embodiments will be readily understood by the following detailed description in conjunction with the drawings. To facilitate this description, like reference numerals indicate like structural elements. Embodiments are illustrated by way of example and not limitation in the drawings. FIG. 1 illustrates dies of a multi-chip package for different sized die in accordance with various embodiments. FIG. 2 illustrates a cross-sectional view of an example of the multi-chip package of FIG. 1 including a housing surrounding leads extending from surfaces of the dies, in accordance with various embodiments. FIG. 3 shows an isometric view of the example of FIG. 2 showing the ends of the leads exposed by the housing, in accordance with various embodiments. FIG. 4 illustrates a cross-sectional view of the example of FIG. 3 including a redistribution layer (RDL) coupled to the ends of the leads, in accordance with various embodiments. FIG. 5 illustrates a process for forming an integrated circuit package for a stack of semiconductor dies of different sizes. FIG. 6 illustrates an exemplary computing device that can employ the devices and/or methods described herein in accordance with various embodiments. DETAILED DESCRIPTION Disclosed herein are apparatus, methods, and storage media associated with integrated packages for stacks of semiconductor dies of different sizes. In an embodiment, an apparatus including dies of different sizes may include a first die having a first side and a second side opposite the first side, and a second, smaller die having a first side and a second side opposite the first side. The second side of the first die may be smaller than the first side of the second die and may be coupled thereto such that a portion of the first side of the second die is exposed.
The apparatus can include a lead coupled to the portion of the first side of the second die and extending from the portion through the housing to a redistribution layer (RDL) coupled to one side of the housing, to electrically couple the dies. In some embodiments, the apparatus can include a lead coupled to the first side of the first die and extending therefrom through the housing to the RDL. Other embodiments may be disclosed and/or claimed. In the following detailed description, reference is made to the accompanying drawings, which form a part hereof. It is understood that other embodiments may be utilized and structural or logical changes may be made without departing from the scope of the disclosure. Therefore, the following detailed description is not to be taken in a limiting sense. Aspects of the disclosure are disclosed in the accompanying specification. Alternative embodiments of the present disclosure and equivalents thereof may be devised without departing from the spirit or scope of the disclosure. It should be noted that like elements disclosed below are indicated by like reference numerals in the drawings. Various operations may be described as a plurality of discrete acts or operations in sequence, in a manner that is most helpful in understanding the claimed subject matter. However, the order of description should not be construed as implying that these operations are necessarily order dependent. In particular, these operations may not be performed in the order presented. The described operations may be performed in a different order than the described embodiments. In additional embodiments, various additional operations may be performed and/or the operations described may be omitted. For the purposes of the present disclosure, the phrase "A and/or B" means (A), (B), or (A and B). For the purposes of the present disclosure, the phrase "A, B, and/or C" means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B, and C). The description may use the phrase "in an embodiment," which may refer to one or more of the same or different embodiments. Furthermore, the terms "comprising," "including," "having," and the like, as used in connection with the embodiments of the present disclosure, are synonymous. As used herein, the term "circuitry" may refer to, be part of, or include an application specific integrated circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and/or memory (shared, dedicated, or group) executing one or more software or firmware programs, combinational logic, and/or other suitable components that provide the described functionality. FIG. 1 illustrates a die 110 and a die 120 of a multi-chip package 100 for different sized die in accordance with various embodiments. One side of the first die 110 can be coupled to the larger side of the second die 120 such that a portion 126 of one side of the second die 120 can be exposed. Leads 125 may be coupled to and extend from the exposed portion 126 of one side of the second die 120; for example, the leads 125 may be bonded to pads 121 on the exposed portion 126 of one side of the second die 120. The shorter leads 115 may extend from a side of the first die 110 opposite the coupling side; for example, the leads 115 may be bonded to the pads 111 of the first die 110. Only some of the leads 115 and 125 are shown for the sake of brevity; however, it should be understood that leads 115 and 125 may be formed on some (e.g., all) of the pads 111 and 121. Referring now to FIG. 2, a cross-sectional view of an example of the multi-chip package 100 of FIG. 1 is shown.
The outer casing 130 surrounds the leads 115 and 125 and may cover the exposed portion of one side of the second die 120 and the side of the first die 110 from which the leads 115 extend. In the illustrated embodiment, the outer casing 130 does not cover the side of the second die 120 that is opposite the side with the exposed portion of the second die 120. However, in other embodiments, the outer casing may cover the sides corresponding to both ends of the stack. Referring now to FIG. 3, an isometric view of the example of FIG. 2 is shown, showing the ends of the leads 115 and 125 that are exposed by the outer casing 130. In some embodiments, a portion of the outer casing 130 can be removed to expose the ends of the leads 115 and 125. As in FIG. 1, only some of the leads 115 and 125, and thus only some of their exposed ends, are shown for the sake of brevity. In some embodiments, a grinding process can be applied to the outer casing 130 such that the outer casing is coplanar with the ends of the leads 115 and 125. However, in other embodiments, any removal process such as chemical etching, plasma etching, grinding, or the like, or a combination thereof, may be used to remove portions of the outer casing 130. Referring now to FIG. 4, a cross-sectional view of the example of FIG. 3 including a redistribution layer (RDL) 150 coupled to the ends of the leads 115 and 125 is shown. In redistribution layer technology, additional circuitry can be formed on a wafer after wafer fabrication, which allows electrical signals to be routed from one location at the top of the wafer to another. In some examples, the redistribution layer technique can utilize additional metal layers (typically one to two layers) in combination with dielectric layers. The metal and dielectric layers can be interleaved to keep the circuitry on the top metal layer electrically isolated from the bottom metal layer. Some redistribution layers include additional metal layers on the integrated circuit that make contacts of the integrated circuit, such as I/O (input/output) pads, available at other locations. The RDL 150 can be formed using any redistribution layer technique (e.g., currently available or later developed). The RDL 150 can provide chip-to-chip interconnects to couple selected connections of the first die 110 to selected connections of the second die 120. The RDL 150 can also provide device-to-device interconnections via an external interface of the multi-chip package. For example, a first device having a multi-chip package can be coupled to an external second device via an external interface of the first device. The external interface of the first device may be the RDL 150 and/or a structure formed on the RDL 150, such as solder balls 155 or some other structure used to bring connections of the RDL 150 out of the multi-chip package for interfacing to external devices. In some embodiments, the RDL 150 can include conductive regions 152 that are coplanar with a dielectric layer 151, and the solder balls 155 can be on these conductive regions 152. FIG. 5 illustrates a process 400 for forming an integrated circuit package for a stack of semiconductor dies of different sizes. In some embodiments, one of the dies may be similar to die 110 (FIG. 1), and the other of the dies may be similar to die 120 (FIG. 1).
Block 401 of process 400 can include coupling a first die having a first side and a second side opposite the first side to a second die having a first side and a second side opposite the first side, where the second side of the first die is smaller than the first side of the second die and is coupled to the first side of the second die such that a portion of the first side of the second die is exposed. Block 402 can include bonding leads to pads on the portion of the first side of the second die, and bonding leads to pads on the first side of the first die. In block 403, process 400 can include covering the first side of the first die and the portion of the first side of the second die with a dielectric material. In block 404, a portion of the dielectric material can be removed to expose the ends of the leads. In some embodiments, the removing may include grinding a surface of the dielectric material to remove portions of the dielectric material, to expose the ends of the leads at a first side of the outer casing opposite the second side directly adjacent the first die and the second die. The grinding can proceed down to the ends of the leads; for example, the ends of the leads can be polished and/or scored (e.g., partially polished or scored) by the grinding. In block 405, a conductive layer, such as a redistribution layer (RDL), may be formed over the remainder of the dielectric and the ends of the leads. The RDL can couple selected connections of the first die to selected connections of the second die. A first side of the RDL can be directly adjacent the ends of the leads, and a second side opposite the first side can include conductive areas. These conductive regions may be referred to herein as "pads" on the second side of the RDL. In some examples, the pads may be coplanar with dielectric regions of the second side of the RDL. In block 406, process 400 can include forming a conductive structure (e.g., a solder ball or other portion of an external interface) on the RDL (e.g., on a pad of the RDL) to provide interconnection between at least one of the dies and an external device. In some embodiments, solder balls may be formed on pads on the second side of the RDL to couple the multi-chip package to a circuit board, such as a printed circuit board (PCB) of an external device. In contrast to a wire bonding approach in which bonding fingers surround the chip stack and leads are suspended from the chips down to the PCB, a multi-chip package formed using process 400 may have an x-y dimension, i.e., a lateral dimension measured in a plane parallel to one of the above-described first or second sides of the dies, that corresponds to the surface area of the side of the die that is furthest from the RDL. In some examples, the x-y dimension of the package may be the same as the surface area of the side of the die that is furthest from the RDL. Moreover, in contrast to wire bonding methods using PCBs with bonding fingers, a multi-chip package formed using process 400 may not require a substrate such as a PCB. Referring again to FIG. 1, in some embodiments, at least one of the first die 110 or the second die 120 is a field programmable gate array (FPGA) including logic silicon, which may have a different (e.g., larger) bond density than other examples (e.g., a memory die stack of identical memory dies).
Bond pads 111 and 121 may be disposed on more than one side of the first die 110 and the second die 120, using bond densities associated with logic silicon. In this particular example, a 4-sided bond pad arrangement is shown for each of the first die 110 and the second die 120. In some embodiments, one of the dies of the multi-chip package may include an N-side bond pad arrangement, and another die of the multi-chip package may include an X-side bond pad arrangement, where X and N may be different values and at least one of X or N may be greater than one. In the illustrated example, the first die 110 and the second die 120 have the same geometry but have different dimensions (e.g., different areas of the coupling sides). In some embodiments, different sized dies in the stack can have different geometries; for example, the square side of one die can be coupled to the rectangular side of another die. In some embodiments, a rectangular-sided die having a long side that is equal in length to the edge of a square-sided die can be coupled to one side of the square-sided die to expose a portion of that side of the square-sided die. The rectangular-sided die can be centered on the square-sided die to expose two separate portions of the square side. In any embodiment, the thickness of one die of the stack may be different than the thickness of another die of the stack. In the illustrated example, the stack includes two dies. However, in other examples, the stack can include any number of dies, such as 2-4 or more dies. The die sizing in the stack can be completely heterogeneous (in a stack of square-sided dies with fully heterogeneous sizing, the largest die can be at one end of the stack, followed in turn by successively smaller dies, down to the smallest die at the other end of the stack) or partially heterogeneous (a subset of two or more dies of the stack have the same size). Moreover, the functionality of the dies in the stack can be completely heterogeneous (e.g., each die in the stack has a unique function) or partially heterogeneous (where a subset of two or more dies has the same, non-unique function, such as memory). In a partially heterogeneously sized stack, one or more of the dies at one end of the stack may be logic silicon (if more than one, they may have different sizes), and the dies at the other end of the stack may include two or more memory dies of the same type and the same size, each memory die having bond pads on only one or two sides. Two or more memory dies each having bond pads on only one or two sides may be arranged in a shingled fashion (with overhangs) to expose the bond pads so that leads may be attached thereto. Two or more memory dies may have non-unique functions, for example, may all be memories. The stack can include any combination of logic/memory dies, such as NAND flash, DRAM (dynamic random access memory), 3D XPoint (three-dimensional crosspoint), SoC (system on a chip), FPGA, etc., and combinations thereof. Although the examples described herein utilize leads, other examples may utilize any elongated conductive structure that can remain vertical (e.g., fully vertical) for a period of time after formation or attachment, where the time period is sufficient to apply the molding that fixes the conductive structure in position relative to the surface of the die.
Such elongated conductive structures can include conductive posts, conductive rods, and the like, on (e.g., attached to or formed on) a surface, or a combination thereof. The length of the longest lead in a multi-chip package formed according to process 400 (FIG. 5) may be approximately 500 μm, compared to approximately 2-3 mm for the leads used in the suspended-lead approach. The reduced length improves signal integrity compared to the suspended-lead approach. In some examples, the leads may extend from the exposed portion of the larger die to the RDL, and other conductive structures (e.g., pads or the like, which are not necessarily elongated) may be located between one side of the smaller die and the RDL. Referring again to the leads, the length of the leads in a dual-die stack can be the thickness of the smaller die (this thickness refers to the measurement of the distance between the aforementioned sides of such a die) plus the additional thickness of the thin portion of the molding between the RDL and the corresponding side of the smaller die (this thickness refers to the measurement of the distance between the side of the outer casing directly adjacent the smaller die and the opposite side of the outer casing). The length of the conductive structures between one side of the smaller die and the RDL can be equal to that additional thickness. In some embodiments, these conductive structures can be part of the RDL. For example, the RDL can include pads similar to the pads between the dielectric 151 (FIG. 4) on both sides of the RDL. The pads on the RDL side closest to the smaller die can directly contact selected pads of the smaller die. In some examples, the count of interconnects provided by the RDL (including chip-to-chip and/or device-to-device interconnects) may be at least a few hundred or several thousand, and the count of leads extending from the surfaces of the dies may be at least approximately a few hundred or a few thousand. The RDL and the leads can route signals as follows: 1) from chip to chip; 2) from a chip to an external interface on the side of the RDL opposite the stack; and/or 3) from a chip to an external interface on the same side of the RDL as the stack (in this case, there can be more than one multi-chip stack connected to the same RDL). FIG. 6 illustrates an exemplary computing device 500 that can employ the devices and/or methods described herein in accordance with various embodiments. In various embodiments, any component, for example storage 520 (e.g., a non-volatile storage, such as a solid state drive, having a two-or-more-chip field programmable gate array architecture), may be a first device of the multi-chip packaging technology previously described with reference to FIGS. 1-5. The first device can have a single semiconductor package for a stack of different sized semiconductor dies of the first device. In various embodiments, computing device 500 can include a printed circuit board (PCB) 502; however, in alternative embodiments, various components can be coupled without the need to employ PCB 502. In various embodiments, any component, such as storage 520, can be physically and electrically coupled to the printed circuit board (PCB) 502 and electrically coupled to some other device via the PCB 502, or, in embodiments without PCB 502, the external interface of one device may be directly physically and electrically coupled to the other device.
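As a worked numerical illustration of the lead-length relation described above (the individual values here are hypothetical, chosen only to be consistent with the approximately 500 μm figure given earlier):

    \[
    L_{\text{lead}} = t_{\text{die}} + t_{\text{mold}}
    \]
    \[
    \text{e.g.,}\quad t_{\text{die}} = 450~\mu\text{m},\quad
    t_{\text{mold}} = 50~\mu\text{m}
    \;\Rightarrow\;
    L_{\text{lead}} = 500~\mu\text{m},
    \]

where \(t_{\text{die}}\) is the thickness of the smaller die and \(t_{\text{mold}}\) is the thickness of the thin molding portion between the smaller die and the RDL.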
Computing device 500 may include other components that may or may not be physically and electrically coupled to the PCB 502, depending on its application. These other components may include one or more processors 504 (one shown) and at least one communication chip 506. In other embodiments, the communication chip 506 can be part of the one or more processors 504. For these embodiments, the one or more processors 504 and the communication chip 506 can be disposed thereon. These components include, but are not limited to, a memory controller (not shown), volatile memory (e.g., dynamic random access memory (DRAM), not shown), additional non-volatile memory such as read only memory (ROM) (not shown), flash memory (not shown), an I/O controller (not shown), a digital signal processor (not shown), a cryptographic processor (not shown), a graphics processor 530, one or more antennas 528, a display (not shown), a touch screen display 532, a touch screen controller 546, a battery 536, an audio codec (not shown), a video codec (not shown), a global positioning system (GPS) device 540, a compass 542, an accelerometer (not shown), a gyroscope (not shown), a speaker 550, a camera 552, and a mass storage device (e.g., hard disk drive, solid state drive, compact disk (CD), digital versatile disk (DVD)) (not shown), and so forth. As previously mentioned, any of these components may be a first device of the multi-chip packaging technology previously described with reference to FIGS. 1-5. The communication chip 506 can enable wired and/or wireless communication of data to and from the computing device 500. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communication channels, and the like that may communicate data through the use of modulated electromagnetic radiation through a non-solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not. The communication chip 506 may implement any of a number of wireless standards or protocols, including but not limited to IEEE 802.20, Long Term Evolution (LTE), LTE-Advanced (LTE-A), General Packet Radio Service (GPRS), Evolution-Data Optimized (Ev-DO), Evolved High Speed Packet Access (HSPA+), Evolved High Speed Downlink Packet Access (HSDPA+), Evolved High Speed Uplink Packet Access (HSUPA+), Global System for Mobile Communications (GSM), Enhanced Data Rates for GSM Evolution (EDGE), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Digital Enhanced Cordless Telecommunications (DECT), Worldwide Interoperability for Microwave Access (WiMAX), Bluetooth, derivatives thereof, and any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. The computing device 500 can include a plurality of communication chips 506. For example, a first communication chip 506 can be dedicated to shorter-range wireless communications such as Wi-Fi and Bluetooth, and a second communication chip 506 can be dedicated to longer-range wireless communications such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, and others.
longer-range wireless communication such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, and others. In various implementations, the computing device 500 can be a laptop, a netbook, a notebook, an ultrabook, a smart phone, a tablet computer, a personal digital assistant (PDA), an ultra mobile PC, a mobile phone, a desktop computer, a server, a printer, a scanner, a monitor, a set-top box, an entertainment control unit (e.g., a game console or a car entertainment unit), a digital camera, a home appliance, a portable music player, or a digital video recorder. In other embodiments, the computing device 500 can be any other electronic device that processes data. Any combination of one or more computer usable or computer readable media may be utilized. The computer usable or computer readable medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. More specific examples (a non-exhaustive list) of computer readable media would include the following: an electrical connection having one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM or flash memory), an optical fiber, a portable compact disk read only memory (CD-ROM), an optical storage device, transmission media such as those supporting the Internet or an intranet, or a magnetic storage medium. It is noted that the computer usable or computer readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured via, for example, optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and then stored in a computer memory. In the context of this document, a computer usable or computer readable medium can be any medium that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. The computer usable medium can include a propagated data signal with the computer usable program code embodied therewith, either in baseband or as part of a carrier wave. The computer usable program code can be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, and the like. Computer program code for carrying out the operations of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, or C++, and a conventional procedural programming language such as the "C" programming language or similar programming languages. The program code can execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
In the latter scenario, the remote computer can be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection can be made to an external computer (e.g., through the Internet using an Internet service provider). Examples: Example 1 is an apparatus comprising a multi-chip package, the apparatus comprising: a first die having a first side and a second side opposite the first side; a second die having a first side and a second side opposite the first side, the second side of the first die being smaller than the first side of the second die and coupled to the first side of the second die such that a portion of the first side of the second die is exposed; a first plurality of leads coupled to and extending from the first side of the second die; a second plurality of leads coupled to the first side of the first die and extending from the first side of the first die; an outer casing surrounding the first and second pluralities of leads and covering the first side of the first die and the portion of the first side of the second die, wherein ends of the first and second pluralities of leads are exposed on a first side of the outer casing opposite a second side of the outer casing directly adjacent the first and second dies; and a redistribution layer (RDL) coupled to the first side of the outer casing and electrically coupled to the first and second pluralities of leads. Example 2 includes the subject matter of Example 1, wherein the RDL electrically connects at least one of the first plurality of leads and at least one of the second plurality of leads. Example 3 includes the subject matter of any of Examples 1-2, further comprising an external interface electrically connected to the RDL. Example 4 includes the subject matter of any of Examples 1-3, wherein the first side of the outer casing is smooth and the exposed ends of the first and second pluralities of leads are scored or polished. Example 5 includes the subject matter of any of Examples 1-4, wherein the first plurality of leads are bonded to pads on the portion of the first side of the second die. Example 6 includes the subject matter of any of Examples 1-5, wherein the second plurality of leads are bonded to pads on the first side of the first die. Example 7 includes the subject matter of any of Examples 1-6, wherein the first plurality of leads are longer than the second plurality of leads. Example 8 includes the subject matter of any of Examples 1-7, wherein end portions of the first and second pluralities of leads that include the ends of the first and second pluralities of leads are perpendicular to the RDL. Example 9 includes the subject matter of any of Examples 1-8, wherein end portions at the other ends of the first and second pluralities of leads are perpendicular to the first side of the first die and the first side of the second die, respectively. Example 10 includes the subject matter of any of Examples 1-9, wherein the ends of the first and second pluralities of leads are coplanar with the first side of the outer casing. Example 11 is a method of multi-chip packaging, the method comprising: coupling a first die having a first side and a second side opposite the first side to a second die having a first side and a second side opposite the first side, wherein the second side of the first die is smaller than the first side of the second die and is coupled to the first side of the second die such that a portion of the first side of the second die is exposed; forming a first plurality of leads extending from the portion of the first side of the second die
and a second plurality of leads extending from the first side of the first die; forming an outer casing covering the first side of the first die and the portion of the first side of the second die, wherein ends of the first and second pluralities of leads are exposed on a first side of the outer casing opposite a second side of the outer casing directly adjacent the first and second dies; and forming a redistribution layer (RDL) coupled to the first side of the outer casing and electrically coupled to the first and second pluralities of leads. Example 12 includes the subject matter of Example 11, further comprising: bonding the first plurality of leads to pads on the portion of the first side of the second die; and bonding the second plurality of leads to pads on the first side of the first die. Example 13 includes the subject matter of any of Examples 11-12, wherein forming the outer casing further comprises: covering the first side of the first die and the portion of the first side of the second die with a dielectric material; and removing portions of the dielectric material to expose the ends of the first and second pluralities of leads. Example 14 includes the subject matter of any of Examples 11-13, further comprising grinding a surface of the dielectric material to remove the portions of the dielectric material. Example 15 includes the subject matter of any of Examples 11-14, further comprising forming solder balls on pads of the RDL. Example 16 is a system including a multi-chip package, the system comprising: a first device having a single semiconductor package for a stack of different-sized semiconductor dies of the first device, wherein a dimension of a first semiconductor die of the stack is greater than a dimension of a second semiconductor die of the stack adjacent the first semiconductor die such that the second semiconductor die leaves a surface of the first semiconductor die exposed, and wherein the single semiconductor package includes an outer casing surrounding leads extending from at least the exposed surface of the first semiconductor die to a plane coplanar with an end of the stack; and a second device coupled to at least one of the leads via an external interface of the semiconductor package. Example 17 includes the subject matter of Example 16, wherein the external interface secures the first device to a circuit board of the second device. Example 18 includes the subject matter of any of Examples 16-17, wherein at least one of the leads provides an electrical connection between the first semiconductor die and the second semiconductor die. Example 19 includes the subject matter of any of Examples 16-18, wherein at least one of the leads provides an electrical connection between the external interface and at least one of the first semiconductor die or the second semiconductor die. Example 20 includes the subject matter of any of Examples 16-19, wherein the outer casing encloses leads extending from a surface of the second semiconductor die. Example 21 is an apparatus comprising a multi-chip package, the apparatus comprising: a module for encapsulating sides of leads extending from a surface of a first semiconductor die of a stack of different-sized semiconductor dies, the surface being left exposed by a second semiconductor die that is smaller than the first semiconductor die and adjacent to the first semiconductor die in the stack; and a module for connecting an exposed end of at least one of the leads to an exposed end of at least one lead extending from an exposed surface of the second semiconductor die.
Example 22 includes the subject matter of Example 21, further comprising a module for attaching an external electrical interface of the apparatus to a circuit board. Example 23 includes the subject matter of any of Examples 21-22, wherein the encapsulation module exposes the ends of the leads. Example 24 includes the subject matter of any of Examples 21-23, wherein the ends of the leads and the encapsulation module are coplanar. Example 25 includes the subject matter of any of Examples 21-24, further comprising a module for attaching the leads to the exposed surfaces of the first and second semiconductor dies. |
Embodiments of the invention provide a single loading mechanism that both pushes a semiconductor package against a socket and pushes a cooling solution against the semiconductor package. This loading mechanism may take up less motherboard real estate than if two separate attachment and loading mechanisms were used. |
1. A computer device comprising: a socket; a loading mechanism; and a cooling device connected to the loading mechanism; wherein the loading mechanism is movable between a first open position and a second closed position, and in the second closed position simultaneously: applies a force to an integrated circuit package between the loading mechanism and the socket to press the integrated circuit package onto the socket; and applies a force to the cooling device to press the cooling device onto the integrated circuit package. 2. The device of claim 1, wherein the loading mechanism is hingedly coupled to the socket. 3. The device of claim 2, wherein the loading mechanism is movable along an arc of motion centered on a hinge that hingedly couples the loading mechanism to the socket, and the second closed position is substantially at one end of the arc of motion. 4. The device of claim 2, further comprising a lever with an arm that moves along an arc of motion and has a compression tab positioned to compress the loading mechanism as the loading mechanism approaches the second closed position, the lever arm enabling the compression tab to apply a force to the loading mechanism, which in turn applies force to the integrated circuit package and the cooling device. 5. The device of claim 4, wherein the loading mechanism comprises an outer frame, and the cooling device is located in a central region of the outer frame. 6. The device of claim 1, wherein the cooling device is attached to the loading mechanism. 7. The device of claim 6, wherein the cooling device comprises a heat pipe connected to a heat exchanger. 8. A computer device comprising: a socket disposed on a printed circuit board; an integrated circuit connected to the socket; means adjacent to the integrated circuit for cooling the integrated circuit; and a force applying device for applying force to both the integrated circuit and the means for cooling the integrated circuit. 9. The device of claim 8, wherein the means for cooling the integrated circuit comprises a thermally conductive material that conducts heat generated by the integrated circuit, and a heat dissipation part coupled to the thermally conductive material to dissipate the heat received from the integrated circuit to the surrounding environment. 10. The device of claim 9, wherein the means for cooling the integrated circuit comprises a heat pipe that transfers heat away from the integrated circuit. 11. The device of claim 8, wherein the integrated circuit is a microprocessor. 12. The device of claim 8, wherein the means for cooling the integrated circuit is coupled to the force applying device and is not independently connected to the printed circuit board. 13. The device of claim 8, wherein the force applying device applies a force to the means for cooling the integrated circuit, and wherein the means for cooling the integrated circuit in turn transmits at least a portion of the applied force to the integrated circuit. 14. The device of claim 13, wherein the means for cooling the integrated circuit comprises a heat pipe. 15. The device of claim 13, wherein the means for cooling the integrated circuit comprises a heat sink. 16. The device of claim 8, wherein the force applying device comprises a rod. 17. The device of claim 8, wherein the force applying device comprises a screw. 18. An apparatus for receiving an integrated circuit, comprising: an LGA socket that accepts an integrated circuit; and a loading device that applies a force to the integrated circuit within the LGA socket to electrically
connect the integrated circuit to the LGA socket substrate; wherein the apparatus has no loading device that applies force to one of a cooling device or the integrated circuit without applying force to the other of the cooling device or the integrated circuit. 19. The apparatus of claim 18, wherein the loading device indirectly applies force to the integrated circuit through the cooling device without directly applying force to the integrated circuit. 20. The apparatus of claim 19, wherein the cooling device comprises a heat pipe having a stiffener on its outer perimeter. |
Single loading mechanism that applies force to both the cooling device and the integrated circuit package. Technical Field: The present invention relates generally to computer devices, and more particularly to a loading mechanism for loading integrated circuit packages and cooling devices. Background: Semiconductor devices, such as microprocessor chips, are typically mounted in a package and attached to a printed circuit board (PCB), such as a motherboard, through a socket. The socket interfaces with the wiring on the package to distribute power and signals between the package (and semiconductor device) and other devices. There are several schemes for forming a connection between a socket and a package, including a pin grid array (PGA), a ball grid array (BGA), and a land grid array (LGA). The LGA socket includes spring-loaded contacts that interface with conductive pads on the packaged semiconductor device. The socket can be soldered to the motherboard with BGA contacts (e.g., solder balls) under the socket. When the package is inserted into the socket and a force is applied to the package, the spring-loaded contacts are pressed against the pads of the package. This pressure ensures a reliable electrical connection between the motherboard and the package. The available area on a motherboard is limited, especially in small form factor devices such as laptop computers and the like. A portion of this area is used to attach the loading device that presses the contacts of the package onto the socket. Another portion of the area is used to attach a cooling solution that prevents overheating of the semiconductor device. The cooling device can also have a second loading device that presses the cooling device onto the semiconductor device. Summary of the Invention: According to an aspect of the invention, there is provided a computer apparatus comprising: a socket; a loading mechanism; and a cooling device coupled to the loading mechanism; wherein the loading mechanism is movable between a first open position and a second closed position, and in the second closed position simultaneously: applies force to an integrated circuit package between the loading mechanism and the socket to press the integrated circuit package onto the socket; and applies force to the cooling device to press the cooling device onto the integrated circuit package. According to another aspect of the present invention, there is provided a computer apparatus comprising: a socket disposed on a printed circuit board; an integrated circuit connected to the socket; means adjacent to the integrated circuit for cooling the integrated circuit; and means for applying a force to both the integrated circuit and the means for cooling the integrated circuit. In accordance with still another aspect of the present invention, there is provided an apparatus for receiving an integrated circuit, comprising: an LGA socket for receiving an integrated circuit; and a loading device that applies a force to the integrated circuit within the LGA socket to electrically connect the integrated circuit to the LGA socket substrate; wherein the apparatus has no loading device that applies force to one of a cooling device or the integrated circuit without applying force to the other of the cooling device or the integrated circuit. Brief Description of the Drawings: FIG. 1a is a cross-sectional side view illustrating one embodiment of an apparatus having a loading mechanism that applies a force to couple an integrated circuit package to a socket and also presses a cooling device onto the integrated circuit package.
FIG. 1b is a top plan view further illustrating an embodiment of the apparatus described with respect to FIG. 1a. FIG. 1c is a cross-sectional side view illustrating an embodiment of the apparatus depicted with respect to FIG. 1a, when the load member is in the open position. FIG. 2 is a top plan view illustrating an embodiment in which a cooling device is coupled to additional cooling components. FIG. 3 is a top plan view illustrating an embodiment of a load member. FIG. 4 is a top plan view illustrating another embodiment of a load member. FIG. 5 is a top plan view illustrating an embodiment in which force is applied to the integrated circuit package by the cooling device without a separate load member. FIG. 6 is a cross-sectional side view illustrating another embodiment without a separate cooling device and load member. Detailed Description: In various embodiments, a device that uses a single loading mechanism to apply force to both a semiconductor device and a cooling device is described. In the following description, various embodiments will be described. One skilled in the relevant art will recognize, however, that the various embodiments may be practiced without one or more of the specific details, or with alternative and/or additional methods, materials, or components. In other instances, well-known structures, materials, or operations are not shown or described in detail in order to avoid obscuring aspects of the various embodiments of the invention. Similarly, specific figures, materials, and configurations are set forth in order to provide a thorough understanding of the invention. Nevertheless, the invention may be practiced without these specific details. Furthermore, it is understood that the various embodiments shown in the figures are illustrative representations and are not necessarily drawn to scale. The words "one embodiment" or "an embodiment" as used throughout the specification mean that a particular feature, structure, material, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention, but not necessarily in every embodiment. Thus, appearances of the phrases "in one embodiment" or "in an embodiment" throughout the specification are not necessarily referring to the same embodiment. In addition, the particular features, structures, materials, or characteristics may be combined in any suitable manner in one or more embodiments. In other embodiments, various additional layers and/or structures may be included, and/or described features may be omitted. FIG. 1a is a cross-sectional side view illustrating one embodiment of an apparatus 100 having a loading mechanism that applies a force to couple an integrated circuit package to a socket and also presses a cooling device onto the integrated circuit package. In the illustrated embodiment, the socket 104 is coupled to the motherboard 102, which may be, for example, in a personal computer such as a laptop or desktop computer. While the socket 104 is depicted as being coupled to a "motherboard" 102, in other embodiments the socket 104 can be coupled to any type of printed circuit board 102 or other suitable support structure. In one embodiment, the socket 104 is a land grid array (LGA) socket having spring-loaded contacts that interface with conductive pads on the integrated circuit package 106. In other embodiments, the socket 104 can be a different type of socket for which a force pressing the integrated circuit package 106 onto the socket 104 is suitable.
The socket 104 is a structure by which the integrated circuit package 106 is electrically connected or otherwise communicatively coupled to other components of the device 100. The integrated circuit package 106 can contain any type of integrated circuit. In an embodiment, the integrated circuit package 106 can be a microchip. In other embodiments, other types of integrated circuit packages 106 can be used. A force suitable to press the integrated circuit package 106 onto the socket 104 can be applied in the direction 118 to help provide good contact between the integrated circuit package 106 and the socket 104. A cooling device 112 may be included in the device 100 to remove heat from the integrated circuit package 106 during operation. Any suitable cooling device 112 can be used, such as a heat pipe, a heat sink, or another type of cooling device 112. A force suitable to press the cooling device 112 onto the integrated circuit package 106 can be applied in the direction 118 to help the cooling device 112 and the integrated circuit package 106 make good contact for thermal conduction between them. In one embodiment, there is a single loading mechanism that provides a force 118 to both press the integrated circuit package 106 onto the socket 104 and press the cooling device 112 onto the integrated circuit package 106. In the embodiment shown in FIG. 1a, the single loading mechanism can be considered to be or include a load member 108 hingedly coupled to the motherboard 102 by a hinge 110. When in the closed position, the load member 108 is pushed down onto the integrated circuit package 106 to press the integrated circuit package 106 onto the socket 104. In the illustrated embodiment, the cooling device 112 is attached to the load member 108 such that, when the load member 108 is in the closed position, the cooling device 112 is pressed onto the integrated circuit package 106. In some embodiments, the cooling device 112 is coupled to the load member 108 and is not independently coupled to the motherboard 102. In an embodiment, when the load member 108 is in the closed position, the cooling device 112 can be pressed onto the integrated circuit package 106, and the cooling device 112 can transfer force from the load member 108 to the integrated circuit package 106 to press the integrated circuit package 106 onto the socket 104. Such an embodiment may have no direct contact between the load member 108 and the integrated circuit package 106. In such an embodiment, other types of loading mechanisms than the illustrated load member 108 can be used to apply such a force to the cooling device 112. In the illustrated embodiment, there is a projection 114 that is coupled to a shaft 116. The projection 114 rotates about the shaft 116 to force the load member 108 down and cause a force 118 on the cooling device 112 and the integrated circuit package 106. The projection 114 also holds the load member 108 in place. In other embodiments, other structures may be used to cause the force 118 on the cooling device 112 and the integrated circuit package 106. FIG. 1b is a top plan view further illustrating an embodiment of the apparatus 100 described above with respect to FIG. 1a. In the embodiment illustrated in FIG. 1b, the load member 108 is a piece of metal or another rigid material suitable for applying the appropriate force. The load member 108 includes an outer frame and a central opening
(which is not apparent in FIG. 1b due to the presence of the cooling device 112) through which the cooling device 112 can contact the integrated circuit package 106 so that heat can be drawn away from the integrated circuit package 106. In other embodiments, the load member 108 can have other shapes. In the illustrated embodiment, the load member 108 is attached to the hinge 110 on one side. On the other side, the projection 114 presses down onto the load member 108, causing the load member 108 to apply force to the integrated circuit package 106 and the cooling device 112. The projection 114 is coupled to a shaft 116 that is coupled to the rod 120. The rod 120 is used to rotate the projection 114 from the open position to the closed position to press down on the load member 108. The rod 120 can also be locked in the closed position such that, after the projection 114 is moved to the closed position, the force 118 is continuously applied. Since the projection 114 in the closed position presses down onto the load member 108, and the integrated circuit package 106 is positioned between the load member 108 and the socket 104, the load member 108 presses the integrated circuit package 106 onto the socket 104. Since the cooling device 112 is coupled to the load member 108, the cooling device 112 is pressed onto the integrated circuit package 106 by the load member 108. Thus, a single mechanism (the load member 108) applies force 118 to both the integrated circuit package 106 and the cooling device. In other embodiments, the loading mechanism can press the cooling device 112 onto the integrated circuit package 106, which in turn presses the integrated circuit package 106 onto the socket 104. In embodiments where there is no direct contact between the load member 108 and the integrated circuit package 106, a single mechanism can still apply force 118 to the integrated circuit package 106 and the cooling device simultaneously. FIG. 1c is a cross-sectional side view illustrating an embodiment of the apparatus 100, described above with respect to FIG. 1a, when the load member 108 is in the open position. As can be seen in the embodiment shown in FIG. 1c, the cooling device 112 is coupled to the load member 108. The cooling device 112 can be coupled to the load member 108 using any suitable method. Both the load member 108 and the rod 120 can move in an arc. As seen in FIG. 1c, the load member 108 has an arc of motion A, whereby the load member 108 can move between an open position (as seen in FIG. 1c) and a closed position (as seen in FIGS. 1a and 1b). Likewise, the rod 120 has an arc of motion B, whereby the rod 120 can move between an open position (as seen in FIG. 1c) and a closed position (as seen in FIGS. 1a and 1b). As the load member 108 approaches the closed position, the rod 120, moving to its own closed position, can compress the load member 108. When the rod 120 is fully moved to its closed position, it can cause the projection 114 to apply force to the load member 108 to move the load member 108 to its closed position, in which position the load member applies force to the integrated circuit package 106 and the cooling device 112. FIG. 2 is a top plan view illustrating an embodiment in which the cooling device 112 is coupled to additional cooling components.
For example, the cooling device 112 can include one or more heat pipes 202 that transfer heat from the integrated circuit package 106 to a heat exchanger 204, such as a heat sink, which helps transfer heat from the integrated circuit package 106 to the surrounding environment. In embodiments where the load member 108 is hingedly coupled to the motherboard 102, one or more of the additional cooling components can move together with the cooling device 112 and the load member 108. In other embodiments, there may be a flexible connection between the cooling device 112 and the other cooling components, or the cooling device 112 may not be connected to the additional cooling components until the load member 108 is in the closed position. In other embodiments, other configurations may exist. The additional cooling components can be placed in any suitable location. FIG. 3 is a top plan view illustrating an embodiment of the load member 108. In this embodiment, the load member 108 has an outer frame 304 with a central opening 302 through which the cooling device 112 can contact the integrated circuit package 106. While the load member 108 is so illustrated, it may take other forms in other embodiments. For example, the load member 108 may not have a continuous frame that completely surrounds its perimeter. FIG. 4 is a top plan view illustrating another embodiment of the load member 108. In this embodiment, the load member 108 includes an interface board 402. In the closed position, the interface board 402 can be in contact with, or in close proximity to (e.g., separated by a thin layer of thermal interface material), the integrated circuit package 106 to efficiently transfer heat from the integrated circuit package 106 to the cooling device 112. In some embodiments, the load member 108 can be part of the cooling device 112 rather than a separate component. Thus, the interface board 402 can be a portion of the cooling device 112 that transfers heat from the integrated circuit package 106 to the rest of the cooling device 112, and the outer frame 304 can be a different portion of the cooling device 112. Either or both of the outer frame 304 and the interface board 402 can apply force to the integrated circuit package 106. FIG. 5 is a top plan view illustrating an embodiment in which force 118 is applied to the integrated circuit package 106 by the cooling device 112 without a separate load member 108. In such an embodiment, the cooling device 112 can include a stiffener 502 that would not be present in a similar cooling device 112 that does not also apply force to the integrated circuit package 106. In the embodiment shown in FIG. 5, the cooling device 112 includes a heat pipe 202. A loading mechanism (not shown) applies a force to the heat pipe 202, which in turn applies a force to the underlying integrated circuit package 106 (the position of the integrated circuit package 106 is illustrated by outline 504). The cooling device 112 includes a stiffener 502 that allows the cooling device 112 to apply force to all appropriate regions of the integrated circuit package 106. As shown, the stiffener 502 is on the perimeter. In other embodiments, stiffeners may be present at other locations in place of, or in addition to, the perimeter stiffener 502. For example, stiffeners 502 can be placed at locations where force is applied to the cooling device 112 to allow the cooling device 112 to distribute a relatively uniform force to the integrated circuit package 106, rather than bending and applying a significantly uneven force to the integrated circuit package 106.
FIG. 6 is a cross-sectional side view illustrating another embodiment without the separate cooling device 112 and frame-type load member 108 shown in FIGS. 1a-1c. More specifically, the cooling device 112 serves as the load member, and a loading mechanism 602 applies a force to the cooling device 112 so that force is applied to the integrated circuit package 106 through the cooling device 112. For example, the cooling device 112 can be a heat sink with a fan, and the loading mechanism 602 can be one or more screws that connect the cooling device 112 to the motherboard 102. When tightened, the screws 602 can press down the cooling device 112 to cause the force 118 that presses the cooling device 112 onto the integrated circuit package 106 and presses the integrated circuit package 106 onto the socket 104. While the screws 602 are referred to as a loading mechanism, any other suitable type of loading mechanism can be utilized, such as a spring-loaded clip or other mechanism. The screws 602 or other loading mechanism 602 may also be considered a "load member" 108 because they apply force to both the cooling device 112 and the integrated circuit package 106. As shown in FIG. 6, some embodiments do not have separate devices that connect the cooling device 112 and the load member 108 to the motherboard 102. More specifically, a single device or group of devices (the loading mechanism 602) connects and loads both the cooling device 112 and the load member 108 (which are the same item in the embodiment of FIG. 6). Such a configuration saves space on the motherboard 102 relative to a device in which separate connections are used (one to attach the cooling device 112 and another to load the integrated circuit package 106 onto the socket 104). The foregoing description of the embodiments of the invention has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise forms disclosed. The description and the claims that follow include terms such as left, right, top, bottom, over, under, upper, lower, first, second, etc. that are used for descriptive purposes only and are not to be construed as limiting. For example, terms designating relative vertical position refer to a situation in which the device side (or active surface) of a substrate or integrated circuit is the "top" surface of that substrate; with a standard terrestrial frame of reference, the substrate may actually be in any orientation, so that the "top" side of the substrate can be lower than the "bottom" side and still fall within the meaning of the term "top." The term "above" as used herein (including in the claims) does not necessarily mean that a first layer "above" a second layer is directly above and in immediate contact with the second layer; there may be a third layer or other structure between the first layer and the second layer. Embodiments of the devices or articles described herein can be manufactured, used, or shipped in a number of positions and orientations. Many modifications and variations of the present invention will be apparent to those skilled in the art in light of the above teachings. Those skilled in the art will recognize various equivalent combinations and substitutions for the various components shown in the figures. It is intended that the scope of the invention be limited not by this detailed description, but rather by the claims. |
A method for reducing power in a system is provided according to aspects of the present disclosure. The system includes a chip and a volatile memory. The method includes entering a sleep state and exiting the sleep state. Entering the sleep state includes placing the volatile memory in a self-refresh mode, wherein the volatile memory stores one or more binary images and the volatile memory is powered in the sleep state, and collapsing multiple power supply rails on the chip. Exiting the sleep state includes restoring power to the multiple power supply rails on the chip, taking the volatile memory out of the self-refresh mode, and running the one or more binary images on one or more sub-systems on the chip. |
CLAIMS 1. A method for reducing power in a system, the system including a chip and a volatile memory, the method comprising: entering a sleep state, wherein entering the sleep state comprises: placing the volatile memory in a self-refresh mode, wherein the volatile memory stores one or more binary images and the volatile memory is powered in the sleep state; and collapsing multiple power supply rails on the chip; and exiting the sleep state, wherein exiting the sleep state comprises: restoring power to the multiple power supply rails on the chip; taking the volatile memory out of the self-refresh mode; and running the one or more binary images on one or more sub-systems on the chip. 2. The method of claim 1, wherein the one or more binary images include at least one of an operating system image, a modem image, a power resource image, or a software application image. 3. The method of claim 2, wherein the one or more sub-systems include a central processing unit, a modem, or a resource power manager. 4. The method of claim 1, wherein the multiple power supply rails include a core power supply rail for powering core logic on the chip, and a memory power supply rail for powering internal memory on the chip. 5. The method of claim 1, further comprising: performing a cold boot when the system is initially powered on, wherein performing the cold boot comprises: loading the one or more binary images from a non-volatile memory into the volatile memory; and authenticating the one or more binary images. 6. The method of claim 5, wherein the one or more binary images are not reloaded from the non-volatile memory into the volatile memory when the system exits the sleep state. 7. The method of claim 6, wherein the one or more binary images include a read-only (RO) portion and a read-writable (RW) portion, a compressed copy of the RW portion of the one or more binary images is stored on the volatile memory, and exiting the sleep state further comprises uncompressing the compressed copy of the RW portion of the one or more binary images. 8. The method of claim 5, wherein the one or more binary images are not re-authenticated when the system exits the sleep state. 9. The method of claim 5, wherein the one or more binary images include a read-only (RO) portion and a read-writable (RW) portion, and exiting the sleep state further comprises reloading the RW portion of the one or more binary images from the non-volatile memory into the volatile memory, wherein the RO portion of the one or more binary images is not reloaded from the non-volatile memory into the volatile memory when the system exits the sleep state. 10. The method of claim 1, wherein entering the sleep state comprises transferring instructions from an internal memory on the chip to the volatile memory, and exiting the sleep state comprises transferring the instructions from the volatile memory to the internal memory on the chip. 11. The method of claim 1, wherein exiting the sleep state is performed upon expiration of a wakeup timer. 12.
A system, comprising: a volatile memory; a power controller configured to control power to multiple power supply rails; a sleep state controller configured to place the system in a sleep state by placing the volatile memory in a self-refresh mode, wherein the volatile memory stores one or more binary images, and outputting a sleep-state enter signal to the power controller, wherein the power controller is further configured to collapse the multiple power supply rails in response to the sleep-state enter signal, to power the volatile memory in the sleep state, and to restore power to the multiple power supply rails in response to a wakeup event; and a boot loader configured to take the volatile memory out of the self-refresh mode when the power to the multiple power supply rails is restored, and to cause one or more sub-systems to run the one or more binary images on the volatile memory. 13. The system of claim 12, wherein the one or more binary images include at least one of an operating system image, a modem image, a power resource image, or a software application image. 14. The system of claim 13, wherein the one or more sub-systems include a central processing unit, a modem, or a resource power manager. 15. The system of claim 12, wherein the multiple power supply rails include a core power supply rail for powering core logic on a chip, and a memory power supply rail for powering internal memory on the chip. 16. The system of claim 12, wherein the boot loader is further configured to perform a cold boot when the system is initially powered on, wherein performing the cold boot comprises: loading the one or more binary images from a non-volatile memory into the volatile memory; and authenticating the one or more binary images. 17. The system of claim 16, wherein the boot loader is configured to skip reloading the one or more binary images from the non-volatile memory into the volatile memory when the system exits the sleep state. 18. The system of claim 17, wherein the one or more binary images include a read-only (RO) portion and a read-writable (RW) portion, a compressed copy of the RW portion of the one or more binary images is stored on the volatile memory, and the boot loader is configured to uncompress the compressed copy of the RW portion of the one or more binary images when the system exits the sleep state. 19. The system of claim 16, wherein the boot loader is configured to skip authenticating the one or more binary images when the system exits the sleep state. 20. The system of claim 16, wherein the one or more binary images include a read-only (RO) portion and a read-writable (RW) portion, and wherein the boot loader is further configured to reload the RW portion of the one or more binary images from the non-volatile memory into the volatile memory when the system exits the sleep state, and to skip reloading the RO portion of the one or more binary images from the non-volatile memory into the volatile memory when the system exits the sleep state. |
QUICK ENERGY EFFICIENT REBOOT FROM ULTRA-LOW POWER MODE FOR A SYSTEM ON A CHIP. CROSS-REFERENCE TO RELATED APPLICATIONS[0001] This application claims priority to and the benefit of Non-Provisional Application No. 15/458,843 filed in the U.S. Patent and Trademark Office on March 14, 2017, the entire content of which is incorporated herein by reference. BACKGROUND Field[0002] Aspects of the present disclosure relate to power management, and more particularly, to reducing power in a sleep state. Background[0003] It is desirable to conserve power in a mobile device in order to extend the battery life of the mobile device. In this regard, a system on a chip (SoC) in the mobile device may employ various power-saving techniques to conserve power. One technique is to place the SoC in a sleep state (low-power state) when certain features of the SoC are not in use (e.g., certain applications are not active). SUMMARY[0004] The following presents a simplified summary of one or more embodiments in order to provide a basic understanding of such embodiments. This summary is not an extensive overview of all contemplated embodiments, and is intended to neither identify key or critical elements of all embodiments nor delineate the scope of any or all embodiments. Its sole purpose is to present some concepts of one or more embodiments in a simplified form as a prelude to the more detailed description that is presented later.[0005] A first aspect relates to a method for reducing power in a system. The system includes a chip and a volatile memory. The method includes entering a sleep state and exiting the sleep state. Entering the sleep state includes placing the volatile memory in a self-refresh mode, wherein the volatile memory stores one or more binary images and the volatile memory is powered in the sleep state, and collapsing multiple power supply rails on the chip. Exiting the sleep state includes restoring power to the multiple power supply rails on the chip, taking the volatile memory out of the self-refresh mode, and running the one or more binary images on one or more sub-systems on the chip.[0006] A second aspect relates to a system. The system includes a volatile memory, a power controller, a sleep state controller, and a boot loader. The power controller is configured to control power to multiple power rails, and the sleep state controller is configured to place the system in a sleep state by placing the volatile memory in a self-refresh mode, wherein the volatile memory stores one or more binary images, and outputting a sleep-state enter signal to the power controller. The power controller is further configured to collapse the multiple power supply rails in response to the sleep-state enter signal, to power the volatile memory in the sleep state, and to restore power to the multiple power supply rails in response to a wakeup event. The boot loader is configured to take the volatile memory out of the self-refresh mode when the power to the multiple power supply rails is restored, and to cause one or more sub-systems to run the one or more binary images on the volatile memory.[0007] To the accomplishment of the foregoing and related ends, the one or more embodiments include the features hereinafter fully described and particularly pointed out in the claims. The following description and the annexed drawings set forth in detail certain illustrative aspects of the one or more embodiments.
These aspects are indicative, however, of but a few of the various ways in which the principles of various embodiments may be employed, and the described embodiments are intended to include all such aspects and their equivalents. BRIEF DESCRIPTION OF THE DRAWINGS[0008] FIG. 1 shows an example of a system for implementing a sleep state according to aspects of the present disclosure.[0009] FIG. 2 shows an example of power rails that are collapsed in the sleep state according to aspects of the present disclosure.[0010] FIG. 3 is a flowchart illustrating an example of a cold boot flow according to aspects of the present disclosure.[0011] FIG. 4 is a flowchart illustrating an example of a procedure for entering the sleep state according to aspects of the present disclosure.[0012] FIG. 5 is a flowchart illustrating an example of a procedure for exiting the sleep state according to aspects of the present disclosure.[0013] FIG. 6 illustrates a flow for performing a cold boot or exiting the sleep state according to certain aspects of the present disclosure.[0014] FIG. 7 shows an example of a memory interface for interfacing with a double data rate (DDR) memory according to certain aspects of the present disclosure.[0015] FIG. 8 shows a block diagram of a system for entering and exiting the sleep state according to aspects of the present disclosure.[0016] FIG. 9 is a flowchart illustrating an exemplary method for reducing power in a system according to certain aspects of the present disclosure. DETAILED DESCRIPTION[0017] The detailed description set forth below, in connection with the appended drawings, is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described herein may be practiced. The detailed description includes specific details for the purpose of providing a thorough understanding of the various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. In some instances, well-known structures and components are shown in block diagram form in order to avoid obscuring such concepts.[0018] It is desirable to conserve power in a mobile device in order to extend the battery life of the mobile device. In this regard, a system on a chip (SoC) in the mobile device may employ various power-saving techniques to conserve power. One technique is to place the SoC in a sleep state (low-power state) when certain features of the SoC are not in use (e.g., certain applications are not active).[0019] In the current lowest-power sleep state (deepest sleep state) of an SoC, power to core logic on the SoC is reduced by collapsing a core (CX) power supply rail that provides power to the core logic. In the sleep state, the logic state of the SoC is stored (retained) in internal memory (on-chip memory), which typically includes registers. This allows the SoC to quickly exit from the sleep state without having to reload the logic state into the SoC. Power to the internal memory is provided by a memory (MX) power supply rail that is separate from the CX power supply rail. A drawback of the current lowest-power sleep state is that retention of the logic state of the SoC in the sleep state requires maintaining power to the internal memory, which reduces power savings. Thus, the current lowest-power sleep state still incurs a power penalty at the SoC.
[0020] To further reduce power, the SoC may be shut down, and later booted up using a cold boot procedure (flow) when the SoC is needed. The cold boot includes loading images (e.g., boot image, operating system image, etc.) from a non-volatile memory (e.g., flash memory) into a dynamic random access memory (DRAM) (e.g., double data rate (DDR) memory) that is external to the SoC. The cold boot also includes one or more authentication procedures to verify that the images have not been altered (e.g., by a malicious program). Sub-systems (e.g., CPU, modem, etc.) on the SoC run the images on the external DRAM memory to place the SoC in a working state, at which point the SoC is ready for use.[0021] Shutting down the SoC reduces power compared with the sleep state by collapsing all of the power rails in the SoC (e.g., the CX power supply rail and the MX power supply rail). However, this approach requires booting up the SoC with a cold boot, which is time consuming and consumes power. For example, it may take several seconds to load images from the non-volatile memory into the DRAM memory during a cold boot. This is because the access speed of the non-volatile memory (e.g., flash memory) is relatively slow. The long boot time makes this approach undesirable for many users who desire quick access to the SoC.[0022] Embodiments of the present disclosure provide a new sleep state that is lower power than the current sleep state discussed above while having an exit latency that is shorter than a cold boot, as discussed further below.[0023] FIG. 1 shows an example of a system 100 in a mobile device according to certain aspects of the present disclosure. The system 100 includes an SoC 110, a power management integrated circuit (PMIC) 120, a flash memory 130, and a DDR memory 135. In this example, the SoC 110 includes one or more central processing units (CPUs) 140, a modem 142, infrastructure 144, and other sub-systems 146. The SoC 110 also includes multi-media processors 148, a resource power manager 150, and instruction memory (IMEM) 152.[0024] The PMIC 120 is configured to provide power to the SoC 110 via multiple power supply rails. In this regard, the PMIC 120 includes multiple voltage regulators (labeled Rail-1 to Rail-5), in which each voltage regulator converts an input voltage (e.g., from a battery or another power source) into a respective supply voltage, and provides the supply voltage to one or more respective power supply rails. In this example, the PMIC 120 also includes a first low-dropout (LDO) regulator 122, a second LDO regulator 124, and a third LDO regulator 126. Each of the LDO regulators 122, 124 and 126 is configured to regulate the supply voltage from voltage regulator Rail-1 to produce a respective regulated supply voltage, and provide the respective regulated supply voltage to a respective component in the system 100, as shown in FIG. 1. The PMIC 120 also includes a PMIC controller 128 configured to control the voltage regulators Rail-1 to Rail-5, and the LDO regulators 122, 124 and 126, as discussed further below. In FIG. 1, power distribution from the PMIC 120 to the flash memory 130, the DDR memory 135 and the SoC 110 is shown by solid arrows.[0025] The flash memory 130 is configured to store binary images for the SoC 110. In the example in FIG. 1, the images are labeled Binary#1 to Binary#n. Since the flash memory 130 is non-volatile, the flash memory 130 is able to store the images when powered off.
The flash memory 130 may be implemented with a NAND flash memory or another type of flash memory.[0026] The DDR memory 135 is configured to store the images and data for the SoC 110, and typically provides much faster access speeds than the flash memory 130. Unlike the flash memory, the DDR memory 135 is volatile and requires power to maintain the contents stored in the DDR memory 135.[0027] When the SoC 110 is first powered on by the PMIC 120, the SoC 110 boots up according to a cold boot procedure (flow). In one example, the SoC 110 may boot up in response to a power reset signal ("RESIN") from the PMIC 120. The cold boot flow may include the following: power on the power supply rails to the SoC 110; initialize the flash memory 130; initialize the external DDR memory 135; load the images from the flash memory 130 into the DDR memory 135; authenticate the images; and load the initial state from the images in the DDR memory 135 into the SoC 110 to place the SoC 110 in a working state.[0028] In one example, a boot loader loads the images from the flash memory 130 into the DDR memory 135 and authenticates the images, as discussed further below. The images may include images for initializing and operating the CPUs 140, the modem 142, the infrastructure 144, and the other sub-systems 146.[0029] As discussed above, the boot time of the cold boot is relatively long. The long boot time is due to loading images from the flash memory 130 into the DDR memory 135 (e.g., which may take several seconds) and authenticating the images.
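To make the cold boot flow above concrete, the following is a minimal C sketch of the load-and-authenticate loop that a boot loader might execute. It is an illustration only, since the disclosure supplies no code, and every identifier here (image_desc, flash_read, verify_image, and so on) is a hypothetical stand-in for board-specific primitives.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical descriptor for one binary image (Binary#1..Binary#n). */
    struct image_desc {
        uint32_t flash_offset;  /* source location in the flash memory 130 */
        uint32_t ddr_addr;      /* destination in the DDR memory 135       */
        uint32_t size;          /* image size in bytes                     */
    };

    /* Assumed platform primitives; real implementations are board-specific. */
    extern void flash_read(uint32_t offset, void *dst, uint32_t size);
    extern bool verify_image(const void *img, uint32_t size);
    extern void record_in_partition_table(const struct image_desc *d);
    extern void transfer_control_to_os(void);

    /* Cold boot: copy each image from flash into DDR, authenticate it, and
     * record where it landed; abort if any image fails authentication. */
    static bool cold_boot(const struct image_desc *imgs, size_t n)
    {
        for (size_t i = 0; i < n; i++) {
            void *dst = (void *)(uintptr_t)imgs[i].ddr_addr;
            flash_read(imgs[i].flash_offset, dst, imgs[i].size);
            if (!verify_image(dst, imgs[i].size))
                return false;          /* altered image: abort the boot */
            record_in_partition_table(&imgs[i]);
        }
        transfer_control_to_os();
        return true;
    }

The per-image loop also shows why the cold boot is slow: every image costs both a flash read and an authentication pass.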
[0030] After initial power up, the SoC 110 may later be placed in the new sleep state to conserve power. The new sleep state is referred to as a rock bottom sleep plus (RBSp) state in the present disclosure. However, it is to be understood that the new sleep state is not limited to this term. In certain aspects, the SoC 110 includes power-control logic 155 configured to signal to the PMIC controller 128 to enter the RBSp state and exit the RBSp state, as discussed further below.[0031] In one example, the SoC 110 may be placed in the RBSp state when one or more of the CPUs 140 completes a task and is not scheduled to start the next task for a certain time duration. In this example, one of the CPUs 140 may initiate an RBSp entry flow to place the SoC 110 in the RBSp state. The RBSp entry flow may include setting an RBSp valid value to true and storing the RBSp valid value in the DDR memory 135, the flash memory 130, the power-control logic 155, or the PMIC controller 128. In general, the RBSp valid value can be stored in any non-collapsed or non-volatile memory/entity that can hold the required value(s) in the RBSp state. As discussed further below, the RBSp valid value allows the boot loader to later determine whether the SoC 110 is exiting from the RBSp state. The RBSp entry flow may also include setting a wakeup time value in the power-control logic 155 to indicate when the SoC 110 is to exit the RBSp state. The wakeup time value may be based on a time that one or more of the CPUs 140 are scheduled to start the next task. The RBSp entry flow may also include placing the DDR memory 135 in a self-refresh mode to retain the contents of the DDR memory 135 in the RBSp state.[0032] The power-control logic 155 may then input a RBSp enter signal to the PMIC controller 128 signaling to the PMIC controller 128 to enter the RBSp state. In response, the PMIC controller 128 shuts down the SoC 110 except for the power-control logic 155 by collapsing power supply rails to the SoC 110 (e.g., the CX and MX power supply rails). If the power-control logic 155 is located off the SoC (e.g., the power-control logic 155 is located on the PMIC 120 or a separate chip), then the SoC may be completely shut off. FIG. 2 indicates the power supply rails that are collapsed in the RBSp state by an "X" next to each power rail that is collapsed. In this example, all of the power supply rails are collapsed except for the power supply rails to the DDR memory 135 and the power-control logic 155. Thus, the PMIC 120 maintains power to the DDR memory 135, allowing the DDR memory 135 to retain its contents in the RBSp state.[0033] The RBSp state provides greater power reduction than the current sleep state discussed above. This is because the RBSp state shuts down the SoC 110 except for the power-control logic 155. If the power-control logic 155 is located off the SoC, then the SoC may be completely shut off. The PMIC 120 maintains power to the DDR memory 135 to retain the images in the DDR memory 135. In contrast, the current sleep state maintains power to internal memory (on-chip memory) of the SoC to retain the logic state of the SoC 110 (e.g., by maintaining power to the MX power supply rails). The RBSp state also reduces power in the PMIC 120 compared with the current sleep state. This is because the PMIC 120 does not maintain power to the internal memory of the SoC 110 in the RBSp state, and can therefore power off circuitry (e.g., voltage regulators) in the PMIC 120 used to power the internal memory of the SoC. Thus, the RBSp state provides much greater power savings than the current sleep state.[0034] In one example, a wakeup timer in the power-control logic 155 tracks the amount of time that the SoC 110 is in the RBSp state. The wakeup timer expires when the amount of time tracked by the timer reaches the wakeup time value set in the power-control logic 155. Upon expiration of the wakeup timer, the power-control logic 155 initiates RBSp exit. In this regard, the power-control logic 155 inputs a wakeup signal to the PMIC controller 128 signaling the PMIC controller 128 to exit the RBSp state.[0035] In response, the PMIC controller 128 restores power to the SoC 110 (i.e., restores the power rails to the SoC 110). The PMIC controller 128 may also input the power reset signal RESIN to the SoC 110, which may initiate the boot up flow.[0036] During boot up, the boot loader reads the RBSp valid value, which indicates the SoC 110 is exiting the RBSp state (i.e., the RBSp valid value is true). In response, the boot loader may skip loading the above images (e.g., Binary#1 to Binary#n) from the flash memory 130 into the DDR memory 135, which was performed during the cold boot. This is because the images are already stored in the DDR memory 135 (which remained powered on in the RBSp state), and therefore do not need to be reloaded from the flash memory 130. Thus, the sub-systems (e.g., CPUs, modem, etc.) on the SoC 110 run the images already stored on the DDR memory 135 to restore the SoC 110 to a working state.[0037] The boot loader may also skip the authentication process for the above images since the authentication process was already performed during the initial cold boot of the SoC.[0038] Thus, the RBSp exit flow may skip loading and authenticating the above images, which contribute the most to the latency of a cold boot. As a result, the amount of time needed to exit the RBSp state (referred to as exit latency) is shorter than the latency of the cold boot.
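The shorter exit path can likewise be illustrated with a hedged C sketch of the branch a boot loader might take after the PMIC reasserts RESIN; the function names are assumptions for illustration, not APIs from the disclosure.

    #include <stdbool.h>

    /* Assumed primitives; names are illustrative only. */
    extern bool rbsp_valid(void);                /* flag set during RBSp entry */
    extern void ddr_exit_self_refresh(void);
    extern void restore_imem_from_ddr(void);
    extern bool cold_boot_load_and_authenticate(void);
    extern void run_images_from_ddr(void);       /* sub-systems run in place */

    /* Boot entry point: pick the full cold boot or the short RBSp exit path. */
    void boot(void)
    {
        if (rbsp_valid()) {
            /* RBSp exit: DDR kept its contents, so skip reload and re-auth. */
            ddr_exit_self_refresh();
            restore_imem_from_ddr();
        } else {
            /* First power-on: full cold boot with load and authentication. */
            if (!cold_boot_load_and_authenticate())
                return;  /* authentication failure: abort */
        }
        run_images_from_ddr();
    }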
[0039] Therefore, the new RBSp state according to certain aspects of the present disclosure provides much better power savings than the current sleep state discussed above while having an exit latency that is shorter than that of a cold boot.

[0040] In FIG. 1, communication between the DDR memory 135 and the sub-systems of the SoC 110 is shown in dashed lines. The sub-systems typically communicate with the DDR memory 135 via a memory controller (shown in FIG. 7), which may be on the SoC 110 or external to the SoC 110. The sub-systems may communicate with one another via buses (not shown in FIG. 1). For example, one of the CPUs 140 may perform some of the RBSp entry operations (e.g., place the DDR memory in self-refresh mode) discussed above, and may send a signal to the power-control logic 155 indicating when the system is ready to enter the RBSp state. In response, the power-control logic 155 may input the RBSp enter signal to the PMIC 120, as discussed above.

[0041] FIG. 3 is a flowchart illustrating an example of a cold boot flow 300 according to aspects of the present disclosure. In the present disclosure, the term "flow" may refer to a procedure or method. The cold boot flow 300 may be initiated when the system 100 is first powered on.

[0042] At step 310, the flash memory 130 and DDR memory 135 are initialized.

[0043] At step 320, binary images (e.g., Binary# 1 to Binary# n) are loaded from the flash memory 130 into the DDR memory 135. In one example, the image loading is performed by a secondary boot loader. In this example, a primary boot loader is executed from read-only memory (ROM) at the start of boot up. The primary boot loader loads the secondary boot loader from the flash memory 130 into the DDR memory or another memory, and authenticates the secondary boot loader. The primary boot loader authenticates the secondary boot loader to verify that the secondary boot loader has not been altered (e.g., by a malicious program). If the primary boot loader successfully authenticates the secondary boot loader, then the primary boot loader passes control of the boot process to the secondary boot loader. The secondary boot loader may be executed at a boot processor. If the primary boot loader fails to authenticate the secondary boot loader, then the boot may be aborted.

[0044] At step 330, the images are authenticated. For example, each image may be signed with a respective digital signature, in which the digital signature was generated using a cryptographic algorithm based on the entire image or a portion of the image, and a private key of a private-public key pair. In this example, the secondary boot loader may authenticate an image by verifying the respective digital signature using the public key of the respective private-public key pair. This allows the secondary boot loader to verify that the image read from the flash memory 130 matches the image that was used to generate the digital signature, and therefore verify that the read image has not been altered. If the secondary boot loader is unable to verify a digital signature, then the secondary boot loader may abort the boot.

[0045] In another example, each image may be accompanied by a respective checksum, in which the checksum was computed using a checksum algorithm based on the entire image or a portion of the image. In this example, the secondary boot loader authenticates an image by computing a checksum on the entire image or a portion of the image read from the flash memory 130, and comparing the computed checksum with the checksum accompanying the image. If there is a match, then the secondary boot loader successfully authenticates the image. If there is no match, then the secondary boot loader may abort the boot.
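A minimal, self-contained sketch of the checksum variant of step 330 follows; a simple additive checksum stands in here for whatever checksum algorithm is actually used, and the function names are illustrative only:

    #include <stddef.h>
    #include <stdint.h>

    /* Illustrative only: a trivial additive checksum, not the algorithm
     * used by any particular boot chain. */
    static uint32_t compute_checksum(const uint8_t *image, size_t len)
    {
        uint32_t sum = 0;
        for (size_t i = 0; i < len; i++)
            sum += image[i];
        return sum;
    }

    /* Returns nonzero if the checksum computed over the image read from
     * flash matches the checksum accompanying the image; on a mismatch
     * the secondary boot loader may abort the boot. */
    int image_checksum_ok(const uint8_t *image, size_t len, uint32_t expected)
    {
        return compute_checksum(image, len) == expected;
    }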
[0046] It is to be appreciated that the present disclosure is not limited to the exemplary authentication techniques discussed above, and that other authentication techniques may be used instead of or in addition to the exemplary authentication techniques discussed above.

[0047] At step 340, an RBSp DDR partition table is updated. For example, the secondary boot loader may update the RBSp DDR partition table. The RBSp DDR partition table may indicate the location (e.g., start address) and size of each image loaded into the DDR memory 135.

[0048] At step 350, the secondary boot loader may transfer control, for example, to an operating system running on one or more of the CPUs 140 and/or another processor.

[0049] It is to be understood that the cold boot flow 300 is not limited to the specific order disclosed above. For example, the cold boot flow 300 does not require that all of the images be loaded in step 320 before the start of authentication in step 330. For instance, a binary image may be authenticated before another one of the binary images is loaded. Accordingly, steps 320 and 330 may overlap. Also, it is to be appreciated that the cold boot flow 300 may include additional steps.

[0050] FIG. 4 is a flowchart illustrating an example of an RBSp entry flow 400 for entering the RBSp state according to aspects of the present disclosure.

[0051] At step 410, instructions in the IMEM 152 are transferred (written) to an IMEM partition in the DDR memory 135. The instructions may include instructions that are executed by one or more sub-systems (e.g., CPU) on the SoC 110. The IMEM 152 may be used, for example, to provide the sub-systems with fast access to the instructions. Storing the instructions in the DDR memory 135 allows the instructions to be restored to the IMEM 152 when the SoC exits the RBSp state.

[0052] At step 420, the RBSp valid value is set to true. The RBSp valid value allows the secondary boot loader to later determine whether the SoC 110 is exiting from the RBSp state, as discussed further below. The RBSp valid value may be stored in the flash memory 130 and/or the DDR memory 135.

[0053] At step 430, the DDR memory 135 is placed in the self-refresh mode. In the self-refresh mode, the DDR memory 135 automatically executes refresh operations using a refresh counter.

[0054] At step 440, the SoC 110 is powered off. For example, the power-control logic 155 may input the RBSp enter signal to the PMIC controller 128 signaling to the PMIC controller 128 to enter the RBSp state. In response, the PMIC controller 128 may shut down the SoC 110 except for the power-control logic 155 by collapsing power supply rails to the SoC 110 (e.g., the CX and MX power supply rails). If the power-control logic 155 is located off chip, then the entire SoC may be shut off. In one example, the PMIC controller 128 may power collapse the power rails indicated in FIG. 2.

[0055] It is to be understood that the RBSp entry flow 400 is not limited to the specific order of steps disclosed above. Also, it is to be appreciated that the RBSp entry flow 400 may include additional steps. For example, the RBSp entry flow may also include setting the wakeup time value discussed above, which specifies when the system is to exit the RBSp state.
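By way of illustration, the RBSp entry flow 400 (steps 410-440) might be pictured as the following C sequence; all names are hypothetical stand-ins for the disclosed operations:

    #include <stdbool.h>

    extern void copy_imem_to_ddr_partition(void); /* step 410 */
    extern void store_rbsp_valid(bool v);         /* step 420: non-collapsed or
                                                     non-volatile store */
    extern void ddr_enter_self_refresh(void);     /* step 430 */
    extern void signal_rbsp_enter_to_pmic(void);  /* step 440: PMIC then
                                                     collapses the rails */

    /* Sketch of RBSp entry flow 400; the actual flow may reorder steps or
     * add others, such as programming the wakeup time value. */
    void rbsp_enter(void)
    {
        copy_imem_to_ddr_partition();
        store_rbsp_valid(true);
        ddr_enter_self_refresh();
        signal_rbsp_enter_to_pmic();
    }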
[0056] FIG. 5 is a flowchart illustrating an example of a flow 500 for exiting the RBSp state according to aspects of the present disclosure. The RBSp exit flow may be triggered by a wakeup event (e.g., a wakeup signal from the power-control logic 155, as discussed above).

[0057] At step 510, the SoC 110 is powered up. For example, the PMIC controller 128 may turn on the power supply rails that were collapsed in the RBSp state. The PMIC 120 turns on these rails without disturbing the rails that were kept on in the RBSp state. The PMIC controller 128 may also input a power reset signal RESIN to the SoC. In response, the SoC may start boot up by executing the primary boot loader from ROM. The primary boot loader may then load and authenticate the secondary boot loader, as discussed above. The primary boot loader may authenticate the secondary boot loader using any of the authentication techniques discussed above, or another authentication technique.

[0058] At step 520, the flash memory 130 is initialized.

[0059] At step 530, a determination is made whether the RBSp valid value is true. For example, the secondary boot loader may read the RBSp valid value (e.g., from the flash memory) to determine whether the RBSp valid value is true. If the RBSp valid value is true (which indicates that the system 100 is exiting the RBSp state), then the flow proceeds to step 540.

[0060] At step 540, the DDR memory 135 is taken out of the self-refresh mode. This allows sub-systems (e.g., CPUs, modem, etc.) to access the DDR memory 135 to run the images on the DDR memory 135, and restore the system to a working state.

[0061] At step 550, the secondary boot loader may transfer control, for example, to an operating system running on one or more of the CPUs 140 and/or another processor.

[0062] When the RBSp valid value is true (i.e., the system 100 is exiting the RBSp state), the secondary boot loader may skip loading and authenticating the binary images (e.g., Binary# 1 to Binary# n), as discussed above.

[0063] If the RBSp valid value is false, then the flow 500 proceeds to step 560. In this case, the flow 500 may perform a cold boot.

[0064] At step 560, the binary images (e.g., Binary# 1 to Binary# n) are loaded from the flash memory 130 into the DDR memory 135. The secondary boot loader may load the images, as discussed above.

[0065] At step 570, the images are authenticated. For example, the secondary boot loader may authenticate the images by verifying digital signatures and/or checksums attached to the images, as discussed above.

[0066] At step 580, the secondary boot loader may transfer control, for example, to an operating system running on one or more of the CPUs 140 and/or another processor.

[0067] As demonstrated above, the exemplary flow 500 in FIG. 5 is not limited to exiting the RBSp state, and may also be used during initial power up of the system to perform a cold boot. When the system 100 is initially powering up (i.e., not exiting an RBSp state), the RBSp valid value is false. In this case, the flow 500 proceeds to step 560 and performs the cold boot operations (i.e., loading and authenticating the binary images).

[0068] It is to be understood that the flow 500 is not limited to the specific order disclosed above. Also, it is to be understood that the flow 500 may include additional steps. For example, the flow 500 may include updating the RBSp DDR partition table discussed above when the RBSp valid value is false. In another example, the flow 500 may include transferring the instructions in the IMEM partition of the DDR memory 135 back to the IMEM 152 when the RBSp valid value is true to restore the instructions in the IMEM 152.
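A compact sketch of the branch at the heart of flow 500 follows: the warm path is taken when the RBSp valid value is true, and the cold boot path otherwise. The helper names are hypothetical:

    #include <stdbool.h>

    extern void flash_init(void);                  /* step 520 */
    extern bool read_rbsp_valid(void);             /* step 530 */
    extern void ddr_exit_self_refresh(void);       /* step 540 */
    extern void load_and_authenticate_images(void);/* steps 560-570 */
    extern void transfer_control_to_os(void);      /* steps 550/580 */

    /* Sketch of flow 500: images already in DDR on the warm path, full
     * cold boot otherwise. */
    void boot_after_reset(void)
    {
        flash_init();
        if (read_rbsp_valid()) {
            ddr_exit_self_refresh();        /* contents retained in RBSp state */
        } else {
            load_and_authenticate_images(); /* cold boot */
        }
        transfer_control_to_os();
    }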
[0069] Each of the images (e.g., Binary# 1 to Binary# n) may include a read only (RO) portion and a read-write (RW) portion. The RO portion of the image may include program code and constant values. The RW portion may include data including values of variables, which may be changed during runtime. The RO portion is typically much larger than the RW portion (e.g., at least five times larger).

[0070] In some embodiments, when exiting the RBSp state, the secondary boot loader skips reloading both the RO and RW portions of the binary images from the flash memory 130.

[0071] In other embodiments, when exiting the RBSp state, the secondary boot loader skips reloading the RO portion of the binary images from the flash memory 130, but reloads the RW portion of the binary images from the flash memory 130 into the DDR memory 135. Although the RW portion of the images is reloaded from the flash memory 130 in these embodiments, the exit latency for the RBSp state is still shorter than the boot time of a cold boot. This is because the RW portion of the images typically makes up a small portion of the total size of the images. For the exemplary flow 500 shown in FIG. 5, the RW portion of the images may be reloaded from the flash memory 130 into the DDR memory 135 when the RBSp valid value is true.

[0072] In still other embodiments, the RW portion of the images may be compressed using a compression algorithm to produce a compressed copy of the RW portion of the images. For example, the compressed copy of the RW portion may be in a zip format. The compressed copy of the RW portion may be stored in the DDR memory 135. During RBSp exit, the compressed copy of the RW portion in the DDR memory 135 is uncompressed (unzipped) instead of reloading the RW portion of the images from the flash memory 130.

[0073] FIG. 6 illustrates a flow 600 for performing a cold boot or exiting the RBSp state (e.g., depending on whether the RBSp valid value is true) according to certain aspects of the present disclosure. The flow 600 is described first below for the case of a cold boot.

[0074] In this example, the flow 600 may be initiated by a power reset ("Reset" in FIG. 6). At the start of boot up, the primary boot loader is executed from ROM. In FIG. 6, the primary boot loader is referred to as "Boot ROM". The primary boot loader loads the secondary boot loader from the flash memory 130 into the DDR memory or another memory. The primary boot loader may also perform zero initialization ("ZI"), in which certain variables of the secondary boot loader are initialized to zero. The primary boot loader also authenticates the secondary boot loader to verify that the secondary boot loader has not been altered (e.g., by a malicious program). The primary boot loader may authenticate the secondary boot loader using any of the techniques discussed above.

[0075] The secondary boot loader ("Boot Loader") then loads the binary images, including the RO and RW portions of the images, from the flash memory 130 into the DDR memory 135. The secondary boot loader may also perform zero initialization ("ZI"), in which certain variables of the images are initialized to zero. The secondary boot loader also authenticates the images to verify that the images have not been altered (e.g., by a malicious program). The secondary boot loader may authenticate the images using any of the techniques discussed above. In FIG. 6, the box next to the "Boot Loader" lists operations performed for a cold boot. Operations that may be skipped during an RBSp exit are shown in parentheses.
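By way of illustration only, the zero initialization ("ZI") operation mentioned above can be pictured as clearing a region of memory; the function and parameter names below are hypothetical. Paragraph [0085] further below notes that direct memory access may be used in place of a CPU loop to speed this up:

    #include <stddef.h>
    #include <string.h>

    /* Sketch of ZI: the region holding an image's zero-initialized
     * variables is simply cleared; the region bounds are hypothetical. */
    void zero_initialize(void *zi_start, size_t zi_len)
    {
        memset(zi_start, 0, zi_len);
    }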
[0076] In the example in FIG. 6, the images include: (1) secure execution environment (SEE) software, (2) modem software, (3) resource power software, and (4) application software. The solid arrows in FIG. 6 pointing to these images represent the loading of these images into the DDR memory 135 by the secondary boot loader.

[0077] The secondary boot loader may then initiate execution of the SEE software (5) on one or more of the CPUs 140 and/or another processor. When executed, the SEE software creates a secure execution environment that is protected from attack by a malicious program running in another execution environment (e.g., running on a high-level operating system). The SEE software may include a secure operating system and/or one or more trusted applications responsible for setting access controls (e.g., to system resources and sensitive data), providing services to other execution environments, and/or bringing other execution environments to life.

[0078] In the example in FIG. 6, the SEE software initiates the modem execution environment (6) to get the modem 142 operational. The modem 142 is configured to connect the system 100 to a wireless network (e.g., a cellular network, a WiFi network, etc.), and includes one or more digital signal processors (DSPs) and one or more transceivers.

[0079] In the example in FIG. 6, the SEE software also initiates execution of the resource power software (7) to get the resource power manager 150 operational. The resource power manager 150 is configured to manage power for the SoC 110. For example, the resource power manager 150 may perform dynamic clock voltage scaling (DCVS), in which the supply voltages and/or clock frequencies in the SoC are dynamically scaled (e.g., based on performance requirements of sub-systems on the SoC).

[0080] In the example in FIG. 6, the SEE software also initiates execution of application software (8). For example, the application software may include feature-rich applications with a high-level operating system (e.g., Linux, Windows, ThreadX, etc.).

[0081] The flow 600 will now be described for the case of an RBSp exit. In this example, the flow 600 may be initiated by a power reset ("Reset"). The primary boot loader ("Boot ROM") loads the secondary boot loader from the flash memory 130 into the DDR memory or another memory. The primary boot loader may also perform zero initialization ("ZI"), in which certain variables of the secondary boot loader are initialized to zero. The primary boot loader also authenticates the secondary boot loader to verify that the secondary boot loader has not been altered (e.g., by a malicious program). If the primary boot loader successfully authenticates the secondary boot loader, the primary boot loader passes control to the secondary boot loader.

[0082] For RBSp exit, the secondary boot loader ("Boot Loader") may skip one or more of the operations that were performed for the cold boot discussed above. As discussed above, the secondary boot loader may recognize that the system is exiting the RBSp state when the RBSp valid value is true.
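One way to picture the staged hand-off (5)-(8) of FIG. 6 described above is as a flat sequence; in the disclosure the SEE software itself brings up items (6)-(8), which this sketch only approximates, and the names are hypothetical:

    extern void run_see_software(void);     /* (5) secure execution environment */
    extern void start_modem_env(void);      /* (6) modem execution environment */
    extern void start_resource_power(void); /* (7) resource power software */
    extern void start_applications(void);   /* (8) application software */

    /* Rough sketch of the subsystem bring-up order of FIG. 6. */
    void bring_up_subsystems(void)
    {
        run_see_software();
        start_modem_env();
        start_resource_power();
        start_applications();
    }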
[0083] The secondary boot loader may skip loading the RO portion (program code and constants) of the images from the flash memory 130 into the DDR memory 135.

[0084] In one example, the secondary boot loader may also skip loading the RW portion of the images from the flash memory 130 into the DDR memory 135. In another example, the secondary boot loader may uncompress (e.g., unzip) a compressed copy of the RW portion of the images in the DDR memory 135. In still another example, the secondary boot loader may reload the RW portion of the images from the flash memory 130 into the DDR memory 135.

[0085] In one example, the secondary boot loader may perform zero initialization ("ZI"), in which certain variables of the images are initialized to zero. The secondary boot loader may perform zero initialization using direct memory access to speed the process. In another example, the secondary boot loader skips zero initialization.

[0086] Finally, the secondary boot loader may skip authenticating the images.

[0087] The secondary boot loader may then initiate execution of the SEE software (5) on one or more of the CPUs 140 and/or another processor. As discussed above, the SEE software initiates the modem execution environment (6) to get the modem 142 operational. The SEE software also initiates execution of the resource power software (7) and the application software (8), as discussed above.

[0088] It is to be understood that the flow 600 is not limited to the specific order disclosed above. Also, it is to be understood that the flow 600 may include additional steps.

[0089] FIG. 7 shows an example of a memory interface 705 for transferring images and data between sub-systems on the SoC 110 and the DDR memory 135. The memory interface 705 includes a memory controller 720 and a PHY block 710. In one example, the memory controller 720 is integrated on the SoC 110. In this example, the memory controller 720 may be referred to as an integrated memory controller (IMC), a bus-integrated memory controller (BIMC), or by other terminology. The memory controller 720 is responsible for buffering and serving memory requests from sub-systems on the SoC that need to access images and/or data in the DDR memory 135.

[0090] The memory controller 720 communicates with the DDR memory 135 via a physical (PHY) block 710. The PHY block 710 may be coupled to the DDR memory 135 via one or more channels. The PHY block 710 may include one or more transceivers (not shown) for transmitting signals to and receiving signals from the DDR memory 135 over the one or more channels. In the example in FIG. 7, the PHY block 710 is coupled to the DDR memory 135 via channels CH0 and CH1, in which each channel may include multiple lines for transferring multiple bits across the channel in parallel.

[0091] When the system 100 enters the RBSp state, the PHY block 710 may be placed in a freeze input/output (I/O) state to stop access to the contents of the DDR memory 135. The DDR memory 135 may also be placed in the self-refresh mode to retain the contents of the DDR memory 135 in the RBSp state, as discussed above. The PHY block 710 is unfrozen when the system 100 exits the RBSp state.
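A minimal sketch pairing the PHY freeze I/O state of paragraph [0091] with DDR self-refresh on entry, and reversing both on exit; the function names are hypothetical:

    extern void phy_freeze_io(void);          /* stop access to DDR contents */
    extern void phy_unfreeze_io(void);
    extern void ddr_enter_self_refresh(void); /* retain contents in the RBSp state */
    extern void ddr_exit_self_refresh(void);

    /* Sketch: freeze before self-refresh on entry; unfreeze after exiting
     * self-refresh on the way out. */
    void ddr_path_enter_rbsp(void) { phy_freeze_io(); ddr_enter_self_refresh(); }
    void ddr_path_exit_rbsp(void)  { ddr_exit_self_refresh(); phy_unfreeze_io(); }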
[0092] FIG. 8 shows a block diagram of a system 800 for entering and exiting the RBSp state according to certain aspects. The system 800 includes the flash memory 130, the DDR memory 135, the primary boot loader (PBL) 810, the secondary boot loader (SBL) 815, and a sleep state controller 820.

[0093] The primary boot loader 810 performs the operations of the primary boot loader discussed above including loading and authenticating the secondary boot loader 815, and may be implemented by a boot processor executing a boot image (e.g., boot code) from ROM. As discussed above, the primary boot loader 810 may be activated by the power reset RESIN from the PMIC 120. The secondary boot loader 815 performs the operations of the secondary boot loader discussed above including loading and authenticating the images (e.g., Binary# 1 to Binary# n) in a cold boot, and may be implemented by a boot processor executing a boot image (e.g., from the flash memory 130). In one example, one or more of the CPUs 140 may function as the boot processor.

[0094] The sleep state controller 820 is configured to perform one or more of the operations for entering the RBSp state discussed above. For example, the sleep state controller 820 may perform the RBSp entry flow 400 discussed above. The sleep state controller 820 may be implemented by one or more of the CPUs 140 and the power-control logic 155. In this example, the one or more CPUs 140 may execute code for performing the operations discussed above.

[0095] As shown in FIG. 8, the secondary boot loader 815 and the sleep state controller 820 access the flash memory 130 and the DDR memory 135. Access to the flash memory 130 may be provided by a flash memory interface (not shown), which may be integrated on the SoC 110. Access to the DDR memory 135 may be provided by the DDR memory interface 705 shown in FIG. 7 and discussed above.

[0096] FIG. 9 is a flowchart illustrating a method 900 for reducing power in a system. The system (e.g., system 100) includes a chip (e.g., SoC 110) and a volatile memory (e.g., DDR memory 135).

[0097] At step 910, the system enters a sleep state. Entering the sleep state includes, at sub-step 912, placing the volatile memory in a self-refresh mode, wherein the volatile memory stores one or more binary images and the volatile memory is powered in the sleep state. This may be done so that the volatile memory (e.g., DDR memory 135) retains the one or more binary images in the sleep state (e.g., the RBSp state). Entering the sleep state also includes, at sub-step 914, collapsing multiple power supply rails on the chip. The multiple power supply rails may include both CX power supply rails and MX power supply rails. Collapsing the multiple power supply rails may be accomplished by shutting off power to the multiple power supply rails.

[0098] At step 920, the system exits the sleep state. Exiting the sleep state includes, at sub-step 922, restoring power to the multiple power supply rails on the chip. Exiting the sleep state also includes, at sub-step 924, taking the volatile memory out of the self-refresh mode. Exiting the sleep state further includes, at sub-step 926, running the one or more binary images on one or more sub-systems on the chip. For example, the one or more binary images may include at least one of an operating system image, a modem image, a power resource image, or a software application image, and the one or more sub-systems may include at least one of a central processing unit, a modem, or a resource power manager. In certain aspects, the system exits the sleep state (e.g., the RBSp state) without reloading the one or more images from a non-volatile memory (e.g., flash memory 130) into the volatile memory or reauthenticating the one or more images, both of which may be performed in a cold boot.
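Method 900 can be summarized, purely for illustration, as two short sequences mirroring steps 910 and 920; the names are hypothetical:

    extern void ddr_self_refresh_on(void);  /* sub-step 912 */
    extern void collapse_rails(void);       /* sub-step 914 */
    extern void restore_rails(void);        /* sub-step 922 */
    extern void ddr_self_refresh_off(void); /* sub-step 924 */
    extern void run_images(void);           /* sub-step 926 */

    /* Sketch of method 900 of FIG. 9. */
    void enter_sleep(void) { ddr_self_refresh_on(); collapse_rails(); }
    void exit_sleep(void)  { restore_rails(); ddr_self_refresh_off(); run_images(); }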
[0099] Within the present disclosure, the word "exemplary" is used to mean "serving as an example, instance, or illustration." Any implementation or aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects of the disclosure. Likewise, the term "aspects" does not require that all aspects of the disclosure include the discussed feature, advantage, or mode of operation. The term "coupled" is used herein to refer to the direct or indirect coupling between two components. The term "sub-system" is used broadly, and is intended to cover hardware implementations of electrical devices and conductors that, when connected and configured, enable the performance of the functions described in the present disclosure. The term "sub-system" is also intended to cover software implementations, in which a processor performs the functions described herein by executing software comprising code for performing the functions. The software may be stored on a computer-readable storage medium, such as the DDR memory 135, on-chip memory, and/or another type of memory.

[0100] It is to be understood that the present disclosure is not limited to the specific order or hierarchy of steps in the methods disclosed herein. Based upon design preferences, it is understood that the specific order or hierarchy of steps in the methods may be rearranged. The accompanying method claims present elements of the various steps in a sample order, and are not meant to be limited to the specific order or hierarchy presented unless specifically recited therein.

[0101] The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein. |
An improved manufacturing process, and an improved device made by the process, for forming via interconnects between metal layers in a multilevel metallization structure substantially eliminate trench formation during via overetch and exploding vias during via fill. An insulating multilayer structure comprising a conformal oxide, a spin-on layer, and an etch stop layer for the via etch locally planarizes the region adjacent to metal lines before the ILD is deposited and vias are patterned and etched. Using this process, metal borders around vias can be reduced or eliminated, thereby increasing circuit packing density. |
I claim:

1. An integrated circuit manufacturing process for fabricating a borderless via for interconnecting a first bottom metal line to a second top metal line in a multilevel metallization structure on a semiconductor substrate, said first bottom metal line having a top conducting surface, comprising the steps of:
providing a semiconductor substrate having devices therein to be connected, having an insulating layer on said substrate, and having a bottom metal layer deposited onto said insulating layer on said substrate;
patterning and etching said bottom metal layer to provide said first bottom metal line having an exposed top conducting surface and an edge surface, and to expose portions of said insulating layer;
forming a dielectric layer surrounding said bottom metal line, said dielectric layer having a top dielectric surface substantially locally planar with said top conducting surface of said bottom metal line near said bottom metal line;
depositing a non-conducting via etch stop layer onto said top conducting surface of said bottom metal line and said top dielectric surface;
depositing an ILD layer onto said via etch stop layer;
etching a via having a first and a second portion through said ILD layer to expose the region of said via etch stop layer under said via, said via etch stop layer being substantially unetched by said via etch, said first via portion being atop said via etch stop layer atop said bottom metal line, and said second via portion being atop said via etch stop layer atop said dielectric layer, said via having a bottom surface and a side surface;
removing said exposed region of said via etch stop layer to expose the region of said top conducting surface of said bottom metal line and the region of said top dielectric surface under said via;
depositing a barrier/nucleation/adhesion layer onto said via side surface and onto said exposed portion on said top conducting surface of said bottom metal line and said top dielectric surface; and
filling said via with a conductive material wherein during said via filling step said top surface and said edge surface of said bottom metal line are protected by intervening layers from exposure to reactants and reaction products of said via filling step.

2. The process of claim 1, further comprising the steps of:
depositing a top layer of metal atop said ILD and said filled via; and
patterning and etching said top metal layer to form said top metal line.

3. The process of claim 1, wherein the step of forming said dielectric layer comprises the steps of:
depositing a conformal dielectric layer onto said top conducting surface of said bottom metal line and said edge surface of said bottom metal line, and onto said exposed portions of said insulating layer;
forming a layer of spin-on dielectric on said conformal dielectric layer;
etching back said spin-on and said conformal dielectric layers to expose said top conducting surface of said bottom metal line, thereby forming a top surface of said etched back dielectric layers substantially locally planar with said top conducting surface near said bottom metal line.

4. The process of claim 3, wherein said conformal dielectric is selected from the group consisting of: PECVD of TEOS, PECVD of SiH4/O2, LPCVD of TEOS, LPCVD of SiH4/O2, subatmospheric CVD of TEOS, and subatmospheric CVD of SiH4/O2.

5. The process of claim 4, wherein said spin-on dielectric is selected from the group consisting of: SOG, SOS, HSQ, Flowable Oxide, polyimide, and parylene.
6. The process of claim 5, wherein said via etch stop layer is selected from the group consisting of: silicon nitride, silicon oxynitride, Al2O3, and polysilicon.

7. The process of claim 6, wherein said via etching step comprises a first reactive ion etch having etch rate selectivity for said ILD layer over said via etch stop layer of 4:1 or greater.

8. The process of claim 7, wherein said first reactive ion etch utilizes an etch chemistry selected from the group consisting of: C4F8 and C4F8/CH3F.

9. The process of claim 8, wherein said step of removing said via etch-stop layer comprises a second reactive ion etch having etch rate selectivity for said via etch-stop layer over said dielectric layer of 1:1 or greater.

10. The process of claim 9, wherein said second reactive ion etch utilizes CH3F/O2 chemistry.

11. The process of claim 10, wherein said metal line is comprised of Al or an aluminum alloy.

12. The process of claim 11, wherein said metal line has a hard mask thereon, said hard mask being selected from the group consisting of: SiOxNy, SiO2, and Si3N4.

13. The process of claim 11, wherein said metal line has an ARC layer with a top surface thereon, said top surface of said ARC layer thereby being said top conducting surface of said bottom metal line.

14. The process of claim 13, wherein said ARC layer is selected from the group consisting of: TiN, TiW, and Ti.

15. The process of claim 13, further comprising the steps of:
depositing a barrier/nucleation/adhesion layer onto said via sides and onto said exposed portion of said top conducting surface of said bottom metal line;
filling said via with a conductive material;
depositing a top layer of metal atop said ILD; and
patterning and etching said top metal layer to form said top metal line.

16. The process of claim 15, wherein said barrier/nucleation/adhesion layer is selected from the group consisting of TiN and TiW. |
CROSS REFERENCE TO RELATED APPLICATION

This application is a Rule 1.53(b) divisional of application Ser. No. 08/754,564, filed Nov. 21, 1996, now U.S. Pat. No. 6,362,527. Application Ser. No. 08/754,564 is hereby incorporated by reference in its entirety into the subject application.

FIELD OF THE INVENTION

This invention relates to processes for formation of vias used for interconnecting metal layers of a multilevel metallization structure employed in integrated circuits.

BACKGROUND OF THE INVENTION

Integrated circuits are becoming increasingly fast, and correspondingly, devices and feature sizes are shrinking. This allows for much higher device packing density on chips, and consequently lower cost per device.

When devices were relatively large, one layer of metal was adequate to provide all of the metal interconnections and other wiring needed to build a complete integrated circuit, without wiring requirements limiting device packing density. To avoid such a limitation as device dimensions have shrunk, it has become necessary to develop multilevel metallization schemes and to reduce certain metal dimensions.

In a single level metallization system, contact is made to the underlying silicon devices through contact holes etched through the dielectric separating the silicon from the interconnect metal. Multilevel metallization systems are comprised of alternating layers of dielectric and metal materials. The metal interconnects on the metal layer closest to the silicon surface (M1) make contact to the underlying silicon devices through contact holes, just as in single level systems. The successive metal layers, designated M1 to M(n), where n is the number of metal layers, are electrically connected to each other as required by appropriately located holes, referred to as vias, through the interlevel dielectric layers (ILD's). The dielectric layer between the silicon surface and the first metal layer closest to the silicon is designated ILD0. Vias are typically filled with a conductor such as aluminum or tungsten. The conducting material filling the via is called a via plug.

Interconnect lines on each metal layer are separated by spaces. These spaces are filled with dielectric when the next dielectric layer is deposited. The width of one metal line plus one space is referred to as pitch. Many factors, including transistor size, circuit layout, and the number of metal layers that can be used, enter into the choice of the pitch for the different metal layers. The minimum pitch for M1 is usually set by the minimum transistor size and by lithography tolerances to insure that adjacent lines, at the minimum pitch, completely cover contacts without shorting to each other. After pitch is determined, the line and space dimensions are defined by circuit performance requirements such as RC time constants and reliability, as well as by the capability of the process to provide lines of minimum width. The minimum pitch for the M2 and M3 metal layers is generally successively larger than for the M1 level, being determined by factors other than transistor size.
If, however, vias are stacked one over another between successive metal layers, as is sometimes done to enhance performance and increase packing density, the pitches of all the layers contacted by the stacked via are generally maintained the same to facilitate layout.

Via dimensions are typically determined by the design current expected to flow through the via plug and by the resistance of the plug itself, as well as by variances and limitations imposed by lithography, etch, and via-fill processes. As device dimensions shrink and the line widths at the lower metallization levels such as M1 and M2 become correspondingly smaller, via cross sectional area decreases, and the via aspect ratio (AR), defined as via height/via width, tends to increase. By way of illustration, a via 1.0 micron deep and 0.35 micron wide has an aspect ratio of about 2.9. The via aspect ratio is critical to the determination of how, and with what metal, the via is filled.

The generally preferred manufacturing method of filling vias having AR>1 is Chemical Vapor Deposition of tungsten (CVD tungsten process). Generally, the CVD tungsten process inherently provides better step coverage than competing processes such as sputtering of aluminum. It therefore is a better choice for uniformly coating the sides and bottoms of holes with high aspect ratio, thus yielding substantially void free plugs. Additionally, the CVD tungsten process is a manufacturing-proven process for filling high aspect ratio vias.

Two somewhat different CVD tungsten processes are in common use:
1. Selective Tungsten CVD, and
2. Blanket Tungsten CVD with Etchback or Chemical Mechanical Polishing (CMP).

Both are based on the chemical reduction of tungsten hexafluoride (WF6), a highly reactive gas. The process used for via fill between two metal layers is Blanket Tungsten CVD. In this process, tungsten hexafluoride is reduced by hydrogen in accordance with the reaction:

WF6 + 3H2 + (heat) -> W + 6HF

The blanket tungsten process results in deposition of tungsten over the entire surface of the interlevel dielectric layer, and in filling of the vias over the underlying metal. The underlying metal is usually aluminum or an aluminum alloy, the preferred interconnect metal in most applications. In some applications, the entire tungsten layer deposited on the dielectric surface is subsequently etched back or polished using CMP, leaving only the plug in the via. In other applications the tungsten on the dielectric surface is patterned and used as interconnect metal. This may be accomplished by directly patterning the tungsten, or the ILD may have trenches patterned and etched before tungsten deposition. In this case, when excess tungsten is etched or polished off the surface, metal interconnect lines remain.

Before depositing the CVD tungsten, a thin barrier/nucleation/adhesion film is deposited on the dielectric surface and into the vias, coating the underlying aluminum with a protective barrier. This barrier prevents damaging interaction between the aluminum and the reactants and reaction products of the tungsten deposition. Preferred materials for the barrier/nucleation/adhesion film are TiN and TiW, with TiN being the most frequently used. A serious yield problem arises if, for any reason such as worst case tolerance buildup, misalignment of vias and the underlying metal results in vias not mating properly and extending outside of the underlying metal. This results in formation of trenches in the dielectric adjacent the metal lines during via overetch.
The portion of the via extending beyond the metal can etch downward to the next lower metal layer or to the silicon in extreme cases, causing an interlevel short. Additionally, the trenches have high aspect ratio, and are difficult to completely fill with tungsten. Low density metal or actual metal voids in the trench regions can result, trapping gases therein and causing reliability problems. Finally, there is a high probability that the edge of the underlying aluminum interconnect metal, exposed due to the misalignment, will not be adequately protected by the barrier layer. This would result in a violent chemical reaction between the exposed aluminum and the WF6 and/or HF during deposition of the tungsten plug, causing severe damage to the structure. This phenomenon has been termed "exploding vias".

To insure that interconnect metal and via plug make contact over the entire end surface of the plug and to reduce the occurrence of trench formation, exploding vias, and interlevel shorts, it has been common practice to provide for a minimum required border of metal around the via. This border or overlap is intended to account for any variations in metal and via dimensions and also for any misalignment tolerance of the lithography tool used. Borders are made sufficiently large to assure that vias do not extend beyond the underlying metal under worst case conditions of misalignment and/or dimensional tolerance buildup. If the metal line width is not adequate to provide the minimum required border, it is increased where it encounters a via, as shown in FIG. 2d. Since the minimum space cannot decrease where the line width increases, the minimum pitch in this contacted case is greater than the non-contacted pitch previously described. This practice has the disadvantage of limiting the device packing density due to the increase in contacted metal pitch.

Design rules establishing the minimum size of borders around vias can be tightened, allowing smaller borders around vias if:
1. Tolerances associated with line, space, and pitch dimensions are reduced, and/or
2. Tolerances associated with misalignment of vias caused by lithography are reduced.

While these steps will reduce the loss in device packing density caused by an increase in metal pitch due to widening of metal lines at vias, they do not fully compensate for that loss, and they also introduce added cost to the manufacturing process.

The use of borderless vias is attractive from a packing-density and manufacturing cost viewpoint, but all of the aforementioned problems associated with misalignment are magnified in this case. A method for solving the exploding via problem for borderless vias is described by the inventor in an earlier U.S. patent application Ser. No. 08/595,150 (B279), which is hereby incorporated by reference. According to this earlier method, a conformal protective insulating cap layer, silicon nitride or silicon oxynitride by way of example, is deposited over the metal line before ILD deposition. A two step directional via etch is utilized. The first portion of the etch has high selectivity, with high oxide etch rate compared with nitride etch rate. The nitride cap layer thereby acts as an etch stop on the top metal surface. Although for misaligned vias, a trench forms in the dielectric adjacent the metal lines during via overetch, the nitride on the metal sidewalls is substantially unaffected during via overetch due to the high selectivity.
The second portion of the etch removes the nitride cap layer atop the metal lines, but due to its high directionality, leaves the sidewall cap layer substantially intact. The metal sidewalls are thus protected by the cap layer from WF6, thereby substantially eliminating the exploding via problem. The method as described above does not address the aforementioned problem of trench formation adjacent the metal lines, with the associated consequences of interlevel shorts and metal voids in the high aspect region.

A method for minimizing trench formation for slightly misaligned borderless vias is described by the inventor in U.S. patent application Ser. No. 08/601,541 (B077), now U.S. Pat. No. 5,619,072, which is hereby incorporated by reference. According to this method, an insulating sidewall spacer is formed on the metal lines by deposition and etchback, before depositing the thick ILD and performing via etch. The sidewall spacer is comprised of an etch stop material relative to the oxide via etch, silicon nitride by way of example. For slightly misaligned vias, the portion of the via extending beyond the metal line falls above the spacers, and therefore during via overetch the etch stop material of the spacers prevents formation of a deep trench in that region, thereby lessening the probability of interlevel shorts. This method does not fully address the exploding via problem, since during etchback for sidewall spacer formation, the corner of the metal line may be exposed. This is particularly true since nitride is seen to be thinner at the metal corner, and nitride etch rate is experimentally observed to be enhanced near the corner. Additionally, the amount of misalignment tolerated by this process is limited to the thickness of the sidewall spacers, generally approximately 0.10-0.15 microns.

A single method which would substantially eliminate trench formation for slightly or moderately misaligned borderless vias and would additionally prevent exposure of metal sidewalls would provide a substantially complete solution to the problems of via etch-induced interlevel shorts and metal voids in the high aspect region, as well as preventing exploding vias.

SUMMARY OF THE INVENTION

I have provided an improved manufacturing process for forming via interconnects between metal layers in a multilevel metallization structure.
This process substantially eliminates trench formation adjacent metal lines during via overetch, and prevents exploding vias, via metal voids in the trenches, and interlevel shorts caused by via overetch.

It is an object of this invention to provide an improved manufacturing process for fabricating multilevel metallization structures.
It is a further object to provide a manufacturing process which improves yield in the fabrication of multilevel metallization structures.
It is a further object to provide an integrated circuit with an improved multilevel metallization structure which permits higher device packing density on chips.
It is a further object to provide a manufacturing process which allows reducing the contacted pitch on metal layers of multilevel metallization structures.
It is a further object to provide a manufacturing process which permits the use of borderless vias in multilevel metallization structures.
It is a further object to provide a manufacturing process which permits loosening of the design rules that establish the minimum size of borders around vias in the fabrication of multilevel metallization structures.
It is a further object to provide a manufacturing process which substantially eliminates trench formation adjacent metal lines during via overetch.
It is a further object to provide a manufacturing process which substantially eliminates interlevel shorts between metal layers caused by via overetch.
It is a further object to provide a manufacturing process which substantially eliminates via metal voids adjacent metal lines, caused by via overetch.
It is a further object to provide a manufacturing process which substantially eliminates the problem of exploding vias in the fabrication of multilevel metallization structures.
It is a further object to provide a manufacturing process tolerant of misalignment of vias and underlying metal in the fabrication of multilevel metallization structures.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a schematic cross section of a four level metallization system interconnecting devices on a silicon wafer.
FIG. 2a shows a plan view of vias and underlying metal interconnect lines with minimum required borders provided.
FIG. 2b shows a plan view of vias and underlying metal interconnect lines with less than minimum required borders provided.
FIG. 2c shows a plan view of vias and underlying metal interconnect lines with no borders.
FIG. 2d shows a plan view of vias and underlying metal interconnect lines widened around the vias to provide minimum required borders.
FIG. 3 is a process flow embodiment utilizing this invention.
FIG. 4 is a cross sectional view of a via centrally aligned with respect to an underlying metal line, not utilizing this invention.
FIG. 5 is a cross sectional view of a via misaligned with respect to an underlying metal line, not utilizing this invention, showing trench formation.
FIG. 6a is a cross sectional view of a metal line atop a dielectric layer.
FIG. 6b is a cross sectional view of a metal line with conformal oxide and spin-on layers deposited thereon.
FIG. 6c is a cross sectional view of the metal line and oxide and spin-on layers of FIG. 6b, after etchback.
FIG. 6d is a cross sectional view of the metal line and etched back layers of FIG. 6c, with etch stop layer and ILD layer deposited thereon.
FIG. 6e is a cross sectional view of the metal line and dielectric layers of FIG. 6d, after via etch, removal of etch stop layer, and deposition of glue layer.
FIG. 6f is a cross sectional view of the metal line, dielectric layers, and via of FIG. 6e, after formation of via plug and next level of metal.

It should be noted that the figures are not drawn to scale and that the proportions of illustrated parts do not reflect actual dimensions as they relate to implementation of the invention.

DETAILED DESCRIPTION OF THE INVENTION

FIG. 1 shows a cross section of a multi-level metallization system which could utilize this invention. Devices 2 in silicon wafer 4 are connected to first layer (M1) of metal interconnects 6 through contact openings 8 in dielectric layer ILD0 10. Layer of metal interconnects 6 is connected to metal layer 12, and layer 12 is connected to layer 12', through vias 14 in interlevel dielectric 15. My invention is applicable to via interconnects and to the process for forming them between any two metal layers.

With reference to FIGS. 2a-2d, vias 14, of constant dimension, are shown on adjacent underlying metal lines 6, of varying width. In FIG. 2a, minimum required metal borders 16 are provided around the vias. In FIG. 2b, borders 17, less than minimum required borders 16, are provided around the vias. FIG. 2c illustrates borderless vias. In FIG. 2d, metal line width 18 adjacent via 14 is increased to meet the condition for minimum required borders 16. The probability of dimensional tolerance buildup causing vias to extend outside underlying metal is high in the cases illustrated in FIG. 2b and FIG. 2c. My invention provides a process for substantially eliminating catastrophic yield and reliability problems most likely to occur in such cases. My invention is also applicable in cases where minimum border requirements are satisfied.

According to my invention, a conformal oxide is deposited over a metal line, followed by application of a spin-on material such as spin-on-glass (SOG) to locally planarize the surface. Oxide etchback to the metal line yields a substantially flat surface, whereby the metal line is embedded in a dielectric layer. A silicon nitride or oxynitride etch stop layer is then deposited on this substantially flat surface, followed by thick ILD deposition and via etch. The flat etch stop layer surrounding the metal lines prevents trenching during via etch.

With reference to FIG. 3, a preferred process flow embodiment utilizing this invention is described. In step 19, an integrated circuit wafer is processed through patterning and etching of the first metal layer 6 of the multilevel metallization structure, utilizing standard processes which are not part of this invention. The standard processes for metal and ILD deposition, patterning, and etching are described in "Handbook of Multilevel Metallization for Integrated Circuits", S. Wilson, C. J. Tracy, J. T. Freeman Jr., eds., Noyes Publications, 1993, pp. 126-169, pp. 461-569. By way of example in CMOS technology, after formation of the source/drain regions, ILD0 10, usually comprising SiO2, is deposited, contact holes 8 are patterned, etched and filled, and the first metal layer 6, usually aluminum or an aluminum alloy, is deposited, patterned and etched, to form the M1 interconnect structure. In step 20, a thin conformal oxide layer, approximately 2000-3000 Angstroms thick and formed by Plasma Enhanced Chemical Vapor Deposition (PECVD) of tetraethylorthosilicate (TEOS) by way of example, is deposited over the exposed metal and dielectric surfaces.
In step 22, a planarizing layer such as spin-on-glass (SOG) is applied over the metal and conformal oxide layer to fill gaps between adjacent metal lines. In step 24, the SOG and conformal oxide layer are etched back to the surface of the metal lines, leaving a locally planar oxide surrounding the metal. In step 26, a thin silicon nitride layer, 500 Angstroms thick by way of example, is deposited over the metal and oxide surfaces, to serve as a via etch stop. In step 28, a thick ILD film, usually TEOS, is deposited over the nitride layer. In step 30, the ILD surface is planarized, by Chemical Mechanical Polishing by way of example. In step 32, the vias are patterned using standard techniques. In step 34, the first step of a directional two-step via etch process etches through the ILD but stops at the nitride etch stop layer. In step 36, the second step of the two-step via etch process etches through the nitride etch stop layer to expose the underlying metal. In step 38, standard via-fill and metallization processing continues.

FIGS. 4 and 5 illustrate the source of the trench formation and the exploding via phenomenon when this inventive process is not utilized.

With reference to FIG. 4, metal interconnect line 6 in the first metal layer is shown in cross section on surface 40 of dielectric layer (ILD0) 10. An electrically conducting coating 42, which is chemically inert with respect to reactants and reaction products of the Blanket Tungsten CVD via fill process, is shown deposited on first metal layer 6 before patterning and etching of the metal. This coating 42 may also serve as an antireflection coating (ARC) which, by way of example, may be approximately 1100 Angstroms thick and preferably be comprised of TiN. Via 14, reactively ion etched with standard equipment through interlevel dielectric layer (ILD1) 15, is shown in substantially perfect alignment with underlying metal interconnect line 6, leaving uniform border 44 and 44' around via 14. A barrier/nucleation/adhesion layer 46, also referred to as the "glue layer", is deposited on top surface 48 of interlevel dielectric layer 15, and on via sidewall 50, and on underlying conductive surface 52, and provides a substantially continuous barrier preventing a chemical reaction between interconnect line 6 and reactants and reaction products WF6 and HF of the subsequent Blanket Tungsten CVD via fill process. In this case of a properly aligned via, the ARC layer 42 also provides added chemical isolation of the aluminum interconnect metal from WF6 and HF. This can be particularly important at the intersection 54 of surfaces within vias, where discontinuities in the thin barrier/nucleation/adhesion layer 46 are likely to occur. The barrier/nucleation/adhesion layer 46 additionally promotes adhesion of CVD tungsten (not shown) to surfaces 48, 50 of interlevel dielectric layer 15.

With reference to FIG. 5, via 14 is shown misaligned with respect to underlying metal line 6, causing via 14 to extend beyond metal line 6. During via overetch, this results in deep etching of dielectric 15, and formation of high aspect ratio or "trench" region 56, adjacent to edge 58 of metal line 6. This is the so-called trenching effect, which in extreme cases can cause shorting between metal layers or between metal and the substrate, and which can cause metal voids due to the extremely high aspect ratio of the trench. Barrier/nucleation/adhesion layer 46 will, with some probability, have one or more discontinuities 60 on metal edge 58 in high aspect ratio region 56.
Metal edge 58 of aluminum interconnect line 6, having no ARC, is therefore directly exposed to reactants and reaction products WF6 and HF of the subsequent Blanket Tungsten CVD via fill process at the discontinuities 60. This can result in a violent chemical reaction and severe damage to the structure, referred to as exploding via. FIG. 6 illustrates how my inventive process and structure prevent the trenching effect and exploding vias.

FIGS. 6a to 6f illustrate the preferred process flow embodiment yielding the structure of my invention. In FIG. 6a, a metal layer is deposited onto dielectric 10, then patterned and etched to form metal line 6. The metal line 6 is generally comprised of Al or an Al alloy, and may have a multilayer structure. Generally, the anti-reflective coating (ARC) 42 forms the top layer of the metal line 6. In some applications, however, a "hard mask" layer comprised of SiO2, Si3N4, or SiOxNy, may be deposited atop ARC 42 or in place of ARC 42, to protect top conducting surface 52 in the case of resist erosion during metal etch. In this case, the portion of the hard mask within the via is removed during or after via etch.

In FIG. 6b, a thin conformal oxide layer 62 is deposited onto exposed metal and dielectric surfaces 52, 58, and 40. Oxide layer 62 may be comprised of CVD or PECVD oxide by way of example, and serves multiple purposes, including rigidly confining sidewalls 58 of metal line 6 to prevent stress-induced metal eruptions. Additionally, oxide layer 62 provides chemical isolation between metal line 6 and the spin-on layer described hereinafter. Thereafter, a spin-on film 64 with low viscosity is applied atop oxide layer 62 to locally planarize the surface region 65 adjacent metal line 6. The spin-on film 64 is generally spin-on-glass (SOG), but may be comprised of other spin-on materials such as spin-on-silicate (SOS) or hydrogen silsesquioxane (HSQ), also known by the trade name of Flowable Oxide (FOx) by Dow-Corning Company. Since many of the spin-on materials are organic materials, they may react chemically with exposed metal, and are therefore usually sandwiched between oxide layers in production processes.

As illustrated in FIG. 6c, the conformal oxide and spin-on layers 62 and 64 are thereafter etched back to the top surface 52 of metal line 6. The etchback may use Reactive Ion Etching (RIE) with CF4/CHF3/Ar chemistry by way of example, and the TiN ARC layer 42 provides an effective etch stop. Metal line 6 is embedded in dielectric layer 66 comprising conformal oxide 62 and spin-on layer 64. Top surface 68 of dielectric layer 66 is substantially planarized with top surface 52 of metal line 6 in the vicinity of metal line 6. This is known as local planarization.

In FIG. 6d, a thin (500 Angstrom by way of example) etch-stop film 70, which may be comprised of silicon nitride or silicon oxynitride by way of example, is deposited onto top surfaces 52 and 68 of metal line 6 and dielectric 66. A thick ILD layer 15, comprised of a CVD oxide such as TEOS, is deposited atop etch-stop layer 70. Top surface 72 of ILD 15 is planarized, usually by CMP.

FIG. 6e shows the structure after via 14 is patterned and etched by a two-step directional etch process. The first etch step has high selectivity of oxide to nitride, and etches the via hole 14 while stopping at nitride etch-stop layer 70. These etch characteristics can be achieved by utilizing C4F8 or C4F8/CH3F etch chemistry in an Applied Materials Model 5300 reactive ion etch system, by way of example.
Edge portion 74 of via 14 which extends past edge 58 of metal line 6 is prevented from forming a trench by etch stop layer 70, which is substantially planar with top edge 52 of metal line 6 in the vicinity of the metal line, due to application of spin-on layer 64. Thereafter, the nitride etch stop layer 70 is removed by the second step of the via etch process, which has high selectivity of nitride to oxide. These etch characteristics can be achieved by utilizing CH3F/O2 etch chemistry in an Applied Materials Model 5300 reactive ion etch system, by way of example. The photoresist used to pattern the via may be stripped either before or after the nitride etch step. Glue layer 46, comprised of TiN or Ti/TiN by way of example, is deposited onto bottom 52 and sides 50 of via 14, as well as onto top surface 72 of ILD 15. Any discontinuities 60 in glue layer 46 are most likely to occur at inner corners 78.

In FIG. 6f, standard processing completes the formation of via plugs and the next level of metal. CVD tungsten is deposited into via 14 and onto top surface 72 of ILD 15 by Blanket Tungsten CVD. The glue layer 46 prevents contact between the Al line 6 and the WF6 of the Blanket Tungsten CVD process. Additional protection is provided by ARC layer 42 on top metal surface 52, and by dielectrics 62 and 64 at metal sidewall surfaces 58. As a result, even if discontinuities exist in glue layer 46, the exploding via phenomenon is prevented. Thereafter, excess tungsten and glue layer on top surface 72 of ILD 15 are removed by CMP, and the next level of metal 80 is deposited, patterned, and etched.

Utilizing my inventive process as described, the problem of trench formation adjacent metal lines during via overetch and the problem of exploding vias are substantially eliminated for moderately misaligned borderless vias, thereby increasing yield and allowing increased packing density.

Although the preferred process described herein utilizes a conformal oxide layer, a planarizing spin-on layer and a nitride etch stop layer, the invention should not be considered limited to any or all of these exact implementations. Other possible types of conformal oxides may include: SiH4/O2, subatmospheric CVD, and Low Pressure CVD (LPCVD), also termed LTO. Other possible spin-on materials may include polyimide and parylene. Other possible via etch stop materials may include Al2O3 and polysilicon. The scope of the invention should be construed in light of the claims. |
Machine-readable media, methods, and apparatus are described to recover from stream under-run and/or over-run conditions. In some embodiments, an audio controller may discard any partial sample block of the stream. |
1. A method, comprising: receiving a packet comprising one or more sample blocks of a stream; and after detecting an end of the packet, discarding any partial sample block remaining in the packet.

2. The method of claim 1, further comprising: receiving an actual packet length of the packet; and detecting the end of the packet based upon the actual packet length.

3. The method of claim 1, further comprising detecting said end of said packet in response to receiving a synchronization signal of said stream.

4. The method of claim 1, further comprising detecting said end of said packet in response to detecting another packet of said stream.

5. The method of claim 1, further comprising: receiving an expected packet length indicating a number of complete sample blocks expected for the packet; receiving an actual packet length indicating a number of complete sample blocks of the packet; and accepting the number of complete sample blocks indicated by the actual packet length even though the expected packet length indicates a smaller number of complete sample blocks than the actual packet length.

6. The method of claim 1, further comprising: receiving an expected packet length indicating a number of complete sample blocks expected for the packet; receiving an actual packet length indicating a number of complete sample blocks of the packet; and accepting only the number of complete sample blocks indicated by the actual packet length even though the expected packet length indicates a greater number of complete sample blocks than the actual packet length.

7. The method of claim 1, further comprising transferring only complete sample blocks of the packet to a buffer of a memory.

8. The method of claim 1, further comprising classifying any sample block having a smaller number of bytes than a defined number of bytes as an incomplete sample block.

9. An apparatus, comprising: a memory interface to access a memory; a link controller to receive a packet comprising a plurality of sample blocks and to discard incomplete sample blocks of the packet; and a direct memory access (DMA) controller to receive complete sample blocks from the link controller and to transfer the complete sample blocks to the memory via the memory interface.

10. The apparatus of claim 9, wherein said link controller further classifies any sample block having a smaller number of bytes than a defined number of bytes as an incomplete sample block.

11. The apparatus of claim 9, wherein said link controller further receives a stream identifier of the packet, and transfers the complete sample blocks to the DMA controller in response to determining that the DMA controller has been configured to process the stream associated with the stream identifier.

12. The apparatus of claim 9, wherein said link controller is configured with an expected packet length indicating a total number of sample blocks expected for each packet of the stream, receives an actual packet length indicating a number of complete sample blocks of the packet, and accepts the number of complete sample blocks indicated by the actual packet length even though the expected packet length indicates a smaller number of complete sample blocks than the actual packet length.

13. The apparatus of claim 9, wherein said link controller is configured with an expected packet length indicating a total number of sample blocks expected for each packet of the stream, receives an actual packet length indicating a number of complete sample blocks of the packet, and accepts only the number of complete sample blocks indicated by the actual packet length even though the expected packet length indicates a greater number of complete sample blocks than the actual packet length.

14. The apparatus of claim 9, wherein each sample block of said packet comprises at least a first sample of a first audio channel and a second sample of a second audio channel.

15. The apparatus of claim 14, wherein said link controller classifies any sample block having a smaller number of bytes than a defined number of bytes as an incomplete sample block.

16. A system comprising a processor, a memory, an audio controller, and an audio codec, wherein: the processor configures the audio controller to process a stream of the audio codec by providing the audio controller with a stream identifier of the stream and a sample block length of the stream; and, in response to receiving a packet having the associated stream identifier, the audio controller classifies sample blocks of the packet based upon the sample block length of the stream, transfers to the memory the sample blocks classified as complete sample blocks, and discards the sample blocks classified as incomplete sample blocks.

17. The system of claim 16, wherein said audio controller classifies sample blocks having fewer bytes than said sample block length defines as incomplete sample blocks.

18. The system of claim 16, wherein: the processor allocates a buffer for the stream within the memory, allocates another buffer within the memory for another stream, and configures the audio controller to process the another stream; and the audio controller transfers complete sample blocks of the stream to the buffer of the stream and complete sample blocks of the another stream to the another buffer of the another stream.

19. The system of claim 18, wherein said audio controller receives said stream and said another stream from said audio codec.

20. The system of claim 18, wherein said audio controller receives said stream from said audio codec and receives said another stream from another audio codec.

21. The system of claim 16, wherein: the processor further configures the audio controller for the stream by providing the audio controller with an expected packet length indicating a total number of sample blocks expected for each packet of the stream; and the audio controller transfers to the memory the number of complete sample blocks indicated by an actual packet length of a packet of the stream even though the expected packet length indicates a smaller number of complete sample blocks than the actual packet length.

22. The system of claim 16, wherein: the processor further configures the audio controller for the stream by providing the audio controller with an expected packet length indicating a total number of sample blocks expected for each packet of the stream; and the audio controller transfers to the memory only the number of complete sample blocks indicated by an actual packet length of a packet of the stream even though the expected packet length indicates a greater number of complete sample blocks than the actual packet length.

23. A machine-readable medium comprising a plurality of instructions that, in response to being executed, result in a device: classifying a plurality of sample blocks of a packet of a stream into one or more complete sample blocks and one or more incomplete sample blocks; and transferring only the one or more complete sample blocks of the packet to a memory.

24. The machine-readable medium of claim 23, wherein the plurality of instructions, in response to being executed, further result in the device discarding the one or more incomplete sample blocks of the packet.

25. The machine-readable medium of claim 24, wherein the plurality of instructions, in response to being executed, further result in the device transferring to the memory the number of complete sample blocks defined by an actual packet length of a packet of the stream even though an expected packet length of the stream indicates a smaller number of complete sample blocks than the actual packet length.

26. The machine-readable medium of claim 23, wherein the plurality of instructions, in response to being executed, further result in the device transferring to the memory only the number of complete sample blocks defined by an actual packet length of a packet of the stream even though an expected packet length of the stream indicates a greater number of complete sample blocks than the actual packet length. |
STREAM UNDER-RUN/OVER-RUN RECOVERY

BACKGROUND

An audio codec may provide an audio controller with more samples than the audio controller has been programmed to accept. If the audio controller cannot accept these additional samples, the additional samples may be lost due to the over-run. Conversely, if the audio codec provides the audio controller with less data than the audio controller has been programmed to accept, the audio controller may, due to the under-run, interpret other data received from the audio codec as one or more samples. Both under-run and over-run conditions may result in reduced audio quality and/or error conditions.

BRIEF DESCRIPTION OF THE DRAWINGS

The invention described herein is illustrated by way of example and not by way of limitation in the accompanying figures. For simplicity and clarity of illustration, elements illustrated in the figures are not necessarily drawn to scale. For example, the dimensions of some elements may be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference labels have been repeated among the figures to indicate corresponding or analogous elements.

FIG. 1 illustrates an embodiment of a computing device with an audio controller.

FIG. 2 illustrates an embodiment of a frame transmitted by the audio controller of FIG. 1.

FIG. 3 illustrates an embodiment of a flow control method of the audio controller of FIG. 1.

DETAILED DESCRIPTION

The following description describes techniques for streaming data. In the following description, numerous specific details such as logic implementations, opcodes, means to specify operands, resource partitioning/sharing/duplication implementations, types and interrelationships of system components, and logic partitioning/integration choices are set forth in order to provide a more thorough understanding of the present invention. It will be appreciated, however, by one skilled in the art that the invention may be practiced without such specific details. In other instances, control structures, gate level circuits, and full software instruction sequences have not been shown in detail in order not to obscure the invention. Those of ordinary skill in the art, with the included descriptions, will be able to implement appropriate functionality without undue experimentation.

References in the specification to "one embodiment", "an embodiment", "an example embodiment", and the like indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments, whether or not explicitly described.

Embodiments of the invention may be implemented in hardware, firmware, software, or any combination thereof. Embodiments of the invention may also be implemented as instructions stored on a machine-readable medium, which may be read and executed by one or more processors. A machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computing device).
For example, a machine-readable medium may include read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; electrical, optical, acoustical or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.); and other media.

An embodiment of a computing device is shown in FIG. 1. The computing device may include a processor 100 and a chipset 102 coupled to one another via a processor bus 104. The chipset 102 may include one or more integrated circuit packages or chips that couple the processor 100 to a memory 106 and an audio controller 108. The chipset 102 may further couple the processor to other components 110 of the computing device (e.g., BIOS firmware, a keyboard, a mouse, storage devices, network interfaces, etc.) via one or more buses 112. In one embodiment, the chipset 102 may include a memory controller 114 to access the memory 106 via a memory bus 116. The memory controller 114 may access the memory 106 in response to memory transactions associated with the processor 100, the audio controller 108, and the other components 110 of the computing device. Further, the memory 106 may comprise various memory devices that provide addressable storage locations from which the memory controller 114 may read data and/or to which the memory controller 114 may write data. In particular, the memory 106 may comprise one or more different types of memory devices such as DRAM (Dynamic Random Access Memory) devices, SDRAM (Synchronous DRAM) devices, DDR (Double Data Rate) SDRAM devices, or other memory devices.

The audio controller 108 may control the transfer of data between the memory 106 and audio codecs 118. The audio controller 108 may be integrated into the chipset 102. However, as depicted, the audio controller 108 may also be separate from the chipset 102. In such an embodiment, the audio controller 108 may include a bus interface 120, a link controller 122, and one or more DMA (Direct Memory Access) controllers 124. The bus interface 120 of the audio controller 108 may couple the audio controller 108 to a bus interface 120 of the chipset 102, thus providing the audio controller 108 with an interface to the memory 106 via the memory controller 114 of the chipset 102.

The link controller 122 may provide the audio controller 108 with an interface to an audio bus 126 and to the codecs 118 of the audio bus 126 by controlling the link between the audio controller 108 and the codecs 118. In one embodiment, the audio bus 126 may comprise one or more point-to-point serial input links from each codec 118 to the audio controller 108. The audio bus 126 may further comprise a broadcast serial output link from the audio controller 108 to the codecs 118. The link controller 122 may generate frames 128 and may receive frames 128 via the links of the audio bus 126 per an audio bus protocol.

In one embodiment, each DMA controller 124 may be separately programmed by the processor 100 to transfer a stream of data between buffers of the memory 106 and one or more codecs 118. The audio codecs 118 may correspond to sound cards, modems, fax machines, audio capture devices, etc. that are incorporated into and/or otherwise coupled to the computing device. In one embodiment, an audio codec 118 may be integrated into the chipset 102, may be mounted on a motherboard of the computing device, may be mounted on an add-in card coupled to the computing device, and/or may be part of an external device (e.g., a docking station, an audio mixer, etc.)
that is coupled to an interface port (not shown) of the computing device.

As shown in FIG. 2, the link controller 122 may receive an audio stream from a codec 118 via frames 128, which are defined by a control signal 130 of the audio bus control link and a data signal 132 of the audio bus serial data input link. In particular, the control signal 130 may include a frame synchronization 134 to indicate the start of a frame 128. As shown, a frame 128 may include a command/response 136, one or more stream tags 138, one or more packets 140, and an optional null field 142. The command/response 136 may include a command requesting the recipient of the frame 128 to perform some action and/or may include a response to a command of a previous frame 128.

In general, a stream tag 138 may indicate the start of a packet 140, may indicate the stream with which the packet 140 is associated, and may indicate the length of the packet 140. In one embodiment, each stream tag 138 of a frame 128 may include a stream identifier (ID) 144 that indicates the stream with which the packet 140 is associated. Moreover, each stream tag 138 may include an actual packet length 146 that indicates the length (e.g., the number of bytes) of the packet 140 that follows. The stream tags 138 may enable a codec 118 to transfer multiple streams and/or multiple packets 140 of a single stream within a single frame 128. Further, the null field 142 may comprise padding bits/bytes that extend the frame 128 to a fixed length or to a multiple of some frame unit length. In another embodiment, the null field 142 may correspond to a quiet period of the audio link during which no data is transferred.

As shown, each packet 140 may include one or more sample blocks 148 and optional null padding 150. The null padding 150 may pad the packet 140 to a fixed packet length or to a multiple of some frame unit length. In another embodiment, the null padding 150 may correspond to a quiet period of the audio link during which no data is transferred. Each sample block 148 of a packet 140 may include a separate sample for each channel of a multi-channel stream. For example, a stereo sample block 148 may include a right channel sample 152 and a left channel sample 152 that are associated with the same sample point of a stereo audio signal. Similarly, a 5.1 sample block 148 may include a center channel sample 152, a front right channel sample 152, a front left channel sample 152, a rear right channel sample 152, a rear left channel sample 152, and a bass channel sample 152 that are all associated with the same sample point of a 5.1-channel audio signal.

In one embodiment, the processor 100 may program the audio controller 108 with characteristics of a stream to be transferred from a codec 118 to a buffer of the memory 106. In particular, the processor 100 may assign a DMA controller 124 to the stream, may set the sample length (e.g., the number of bits per sample 152), may set the sample block length (e.g., the number of bytes or the number of samples 152 per sample block 148), and may set an expected packet length (e.g., the number of sample blocks or the number of bytes per packet 140) that indicates the amount of data the codec 118 is expected to transfer in each packet 140.
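For illustration only, the stream tag fields and the per-stream configuration described above can be modeled in a few lines of C. This is a minimal sketch, not the controller's hardware implementation, and every name in it (stream_tag, stream_config, block_is_complete) is hypothetical rather than taken from the patent:

    #include <stdint.h>
    #include <stdbool.h>

    /* Hypothetical software model of a stream tag 138: which stream the
     * following packet belongs to, and that packet's actual length. */
    struct stream_tag {
        uint8_t  stream_id;     /* stream identifier (ID) 144 */
        uint16_t actual_bytes;  /* actual packet length 146, in bytes */
    };

    /* Hypothetical per-stream configuration programmed by the processor. */
    struct stream_config {
        uint16_t sample_bits;      /* sample length: bits per sample 152 */
        uint16_t block_bytes;      /* sample block length: bytes per block 148 */
        uint16_t expected_blocks;  /* expected packet length, in sample blocks */
    };

    /* Classification rule: a sample block is complete only if it carries
     * the full configured number of bytes; anything shorter is incomplete. */
    static bool block_is_complete(const struct stream_config *cfg,
                                  uint16_t bytes_received)
    {
        return bytes_received >= cfg->block_bytes;
    }

A stereo stream with 16-bit samples, for example, would be configured with block_bytes = 4, so any trailing fragment of fewer than 4 bytes in a packet would be classified as incomplete.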
In such an environment, an over-run may occur if a codec 118 sends a packet 140 having an actual packet length 146 that is greater than the expected packet length that the audio controller 108 has been configured to accept. In one embodiment, the audio controller 108 may recover from such an over-run by accepting the additional sample blocks 148 of the packet 140, since the additional sample blocks 148 are valid sample blocks 148 of the packet 140. An over-run/under-run may also occur if a codec 118 sends a packet 140 having an actual packet length 146 that is a non-integer multiple of the sample block length. In one embodiment, the audio controller may recover from such an over-run/under-run condition by dropping or discarding any partial sample block 148. Further, an under-run may occur if a codec 118 sends a packet 140 having an actual packet length 146 that is less than the expected packet length that the audio controller 108 has been configured to accept. In one embodiment, the audio controller 108 may recover from such an under-run by accepting only the transferred sample blocks 148 of the packet 140, since the transferred sample blocks 148 are the only valid sample blocks 148 of the packet 140.

An embodiment of a method by which the audio controller 108 may recover from stream over-run and/or under-run conditions is shown in FIG. 3. At block 200, the processor 100 may program the audio controller 108, and/or the audio controller 108 may be otherwise configured, to process an audio stream of a codec 118. In one embodiment, the processor 100 may assign the stream to a DMA controller 124 of the audio controller 108 by providing the link controller 122 and/or the DMA controller 124 with the stream ID 144 of the stream. Moreover, the processor 100 may provide the link controller 122 and/or the DMA controller 124 with the sample length, the sample block length, and the expected packet length of the stream.

At block 202, the audio controller 108 may receive from a codec 118 a stream tag 138 having a stream ID 144 and an actual packet length 146 that indicates the number of bytes of the packet 140 associated with the stream tag 138. At block 204, the link controller 122 may update a TBR (to be received) value based upon the received actual packet length 146. In one embodiment, the link controller 122 may update the TBR value by setting the TBR value equal to the received actual packet length 146, thus indicating the number of bytes of the packet 140 yet to be received.

At block 206, the link controller 122 may determine whether the end of the packet 140 has been reached. In one embodiment, the link controller 122 may determine that the end of the packet 140 has been reached based upon the TBR value for the packet 140. In particular, the link controller 122 may determine that the end of the packet 140 has been reached in response to the TBR value having a predetermined relationship (e.g., less than or equal) to a packet end value (e.g., 0). The link controller 122 may also determine that the end of the packet 140 has been reached in response to detecting a frame synchronization 134 that signals the start of another frame 128 and/or a stream tag 138 that signals the start of another packet of the frame 128.

At block 208, in response to detecting the end of the packet 140, the link controller 122 may cause the DMA controller 124 assigned to the stream of the received packet 140 to transfer the received complete sample blocks 148 to a buffer of the memory 106. As depicted, the DMA controller 124 may wait until the end of the packet is reached before transferring the complete sample blocks 148 to the memory 106, which may improve the efficiency of transfers to the memory 106. However, in other embodiments, the DMA controller 124 may transfer complete sample blocks 148 to the memory 106 more frequently in order to reduce the latency between the time at which a complete sample block 148 is received and the time at which the complete sample block 148 is available in the memory 106.
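Before turning to the remaining blocks of FIG. 3, the net effect of the method on one packet can be summarized in C. This is again an illustrative sketch with hypothetical names, reusing the stream_config structure from the earlier sketch; it telescopes the TBR bookkeeping of blocks 204 through 218, described next, into a single loop:

    /* Hypothetical packet-level summary of the FIG. 3 method: accept every
     * complete sample block the codec actually sent, discard any trailing
     * partial block, and ignore the expected packet length entirely. */
    static uint16_t receive_packet(const struct stream_config *cfg,
                                   uint16_t actual_bytes)
    {
        uint16_t tbr = actual_bytes;  /* block 204: TBR := actual packet length 146 */
        uint16_t accepted = 0;

        /* block 210: another complete block can still arrive while the
         * bytes yet to be received span at least one sample block */
        while (tbr >= cfg->block_bytes) {
            /* blocks 214-218: buffer and accept one complete sample
             * block, then subtract its length from the TBR value */
            accepted++;
            tbr -= cfg->block_bytes;
        }

        /* block 212: any bytes still outstanding form a partial sample
         * block and are discarded */
        return accepted;  /* complete blocks handed to the DMA controller */
    }

For example, with 4-byte sample blocks and an actual packet length 146 of 10 bytes, the loop accepts two complete blocks and the trailing 2 bytes are discarded, which matches both the over-run and under-run recovery rules above.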
If, on the other hand, the link controller 122 does not detect the end of the packet 140 at block 206, then at block 210 the link controller 122 may determine whether one or more additional complete sample blocks 148 of the packet 140 may yet be received. In one embodiment, the link controller 122 may determine whether an additional complete sample block 148 may be received based upon the TBR value. In particular, the link controller 122 may determine that an additional complete sample block 148 of the packet 140 may be received in response to determining that the TBR value has a predetermined relationship (e.g., greater than or equal) to the sample block length of the packet 140. At block 212, in response to determining that no additional complete sample block 148 is to be received, the link controller 122 may discard the incomplete sample block by discarding any further packet data received prior to detecting the end of the packet 140. In one embodiment, the link controller 122 may determine that the end of the packet 140 has been reached in response to detecting a frame synchronization 134 that signals the start of the next frame 128, a stream tag 138 that signals another packet 140 of the frame 128, or a TBR value that indicates the end of the current packet 140.

At block 214, the link controller 122 may buffer data received from the audio codec 118 and may monitor the control signal 130 for synchronization events (e.g., a frame synchronization). At block 216, the link controller 122 may determine whether a complete sample block 148 has been received. In one embodiment, the link controller 122 may classify a sample block 148 having the defined number of bytes as a complete sample block 148 and may classify a sample block 148 having fewer bytes than the defined number of bytes as an incomplete sample block. At block 218, in response to determining that a complete sample block 148 has been received, the DMA controller 124 may accept the complete sample block 148 and may update the TBR value accordingly. In one embodiment, the DMA controller 124 may update the TBR value by subtracting the sample block length, or the number of bytes of the complete sample block 148, from the TBR value. The DMA controller 124 may then return to block 206 to determine whether the end of the packet 140 has been reached. In response to determining that only an incomplete sample block 148 has been received thus far, the DMA controller 124 may return to block 214 to receive the remainder of the sample block 148.

Certain features of the invention have been described with reference to example embodiments. However, the description is not intended to be construed in a limiting sense. Various modifications of the example embodiments, as well as other embodiments of the invention, which are apparent to persons skilled in the art to which the invention pertains, are deemed to lie within the spirit and scope of the invention. |
A cache memory system including a cache memory employing a tag including associated touch bits. The system includes a first cache memory subsystem having a first cache storage and a second cache memory subsystem including a second cache storage. The first cache storage may store a first plurality of cache lines of data. The second cache storage may store a second plurality of cache lines of data. Further, the second cache memory subsystem includes a tag storage which may store a plurality of tags each corresponding to a respective cache line of the second plurality of cache lines. In addition, each of said plurality of tags includes an associated bit indicative of whether a copy of the corresponding respective cache line is stored within the first cache memory subsystem. |
What is claimed is:

1. A system comprising: an instruction cache memory configured to store a first plurality of cache lines of data; a lower level cache memory coupled to said instruction cache memory, and configured to store a second plurality of cache lines of data; and a data cache memory coupled to the lower level cache memory, and configured to store a third plurality of cache lines of data; wherein the lower level cache memory further includes: a tag storage configured to store a plurality of tags each corresponding to a respective cache line of said second plurality of cache lines, wherein each of said plurality of tags includes an associated bit and a second associated bit, wherein the associated bit is indicative of whether a copy of said corresponding respective cache line of said second plurality of cache lines is stored within said instruction cache memory, and the second associated bit is indicative of whether a copy of said corresponding respective cache line of said second plurality of cache lines is stored within said data cache memory; and tag logic coupled to said tag storage and configured to detect a cache request and an eviction notification from said instruction cache memory, and wherein said tag logic is further configured to clear said associated bit in response to detecting said eviction notification; wherein the tag logic is configured to detect sharing of a modified cache line between said instruction cache memory and said data cache memory based upon a state of said associated bit and said second associated bit; wherein the tag logic is configured to detect sharing if, in response to a cache line request for said modified cache line by the data cache memory, said associated bit indicates said modified cache line is stored in said instruction cache memory, and wherein the tag logic is further configured to cause a self-modifying code check to be initiated in response to detecting sharing of the modified cache line between the instruction cache memory and the data cache memory.

2. The system as recited in claim 1, wherein if said associated bit is clear, a copy of said corresponding respective cache line of said second plurality of cache lines is not stored within said instruction cache memory.

3. The system as recited in claim 1, wherein if said associated bit is set, a copy of said corresponding respective cache line of said second plurality of cache lines is stored within said instruction cache memory.

4. The system as recited in claim 1, wherein if said associated bit is set, a copy of said corresponding respective cache line of said second plurality of cache lines was stored within said instruction cache memory during a previous transaction.

5. The system as recited in claim 1 further comprising a cache control coupled to control the transfer of data between said instruction cache memory and said lower level cache memory.

6. The system as recited in claim 1, wherein said tag logic is further configured to set said associated bit in response to detecting said cache request.

7. The system as recited in claim 1 further comprising tag logic coupled to said tag storage and configured to detect whether two copies of said corresponding respective cache line of said second plurality of cache lines are stored within said instruction cache memory.
8. The system as recited in claim 7, wherein said tag logic is further configured to initiate a back-probe of said instruction cache memory and to cause one of said two copies to be evicted in response to detecting that two copies of said corresponding respective cache line of said second plurality of cache lines are stored within said instruction cache memory.

9. The system as recited in claim 1 further comprising tag logic coupled to said tag storage and configured to initiate a back-probe of said instruction cache memory in response to a system probe hitting on a given one of said second plurality of cache lines and said associated bit is set.

10. The system as recited in claim 1 further comprising tag logic coupled to said tag storage and configured to choose for eviction a given one of said second plurality of cache lines having a clear associated bit over another one of said second plurality of cache lines having an associated bit which is set.

11. The system as recited in claim 1 further comprising tag logic coupled to said tag storage and configured to initiate a back-probe of said instruction cache memory in response to evicting one of said second plurality of cache lines having an associated bit which is set.

12. A microprocessor comprising: an execution unit configured to execute instructions; and a cache system coupled to said execution unit, said cache system includes: an instruction cache configured to store a first plurality of cache lines of data; a lower level cache coupled to said instruction cache, and configured to store a second plurality of cache lines of data; and a data cache coupled to the lower level cache, and configured to store a third plurality of cache lines of data; wherein the lower level cache further includes: a tag storage configured to store a plurality of tags each corresponding to a respective cache line of said second plurality of cache lines, wherein each of said plurality of tags includes an associated bit and a second associated bit, wherein the associated bit is indicative of whether a copy of said corresponding respective cache line of said second plurality of cache lines is stored within said instruction cache, and the second associated bit is indicative of whether a copy of said corresponding respective cache line of said second plurality of cache lines is stored within said data cache; and tag logic coupled to said tag storage and configured to detect a cache request and an eviction notification from said instruction cache, and wherein said tag logic is further configured to clear said associated bit in response to detecting said eviction notification; wherein the tag logic is configured to detect sharing of a modified cache line between said instruction cache and said data cache based upon a state of said associated bit and said second associated bit; wherein the tag logic is configured to detect sharing if, in response to a cache line request for said modified cache line by the data cache, said associated bit indicates said modified cache line is stored in said instruction cache, and wherein the tag logic is further configured to cause a self-modifying code check to be initiated in response to detecting sharing of the modified cache line between the instruction cache and the data cache.

13. The microprocessor as recited in claim 12, wherein if said associated bit is clear, a copy of said corresponding respective cache line of said second plurality of cache lines is not stored within said instruction cache.
14. The microprocessor as recited in claim 12, wherein if said associated bit is set, a copy of said corresponding respective cache line of said second plurality of cache lines is stored within said instruction cache.

15. A method comprising: storing a first plurality of cache lines of data in an instruction cache; storing a second plurality of cache lines of data in a lower level cache memory subsystem; storing a third plurality of cache lines of data in a data cache; storing a plurality of tags each corresponding to a respective cache line of said second plurality of cache lines, wherein each of said plurality of tags includes an associated bit and a second associated bit, wherein the associated bit is indicative of whether a copy of said corresponding respective cache line of said second plurality of cache lines is stored within said instruction cache, and the second associated bit is indicative of whether a copy of said corresponding respective cache line of said second plurality of cache lines is stored within said data cache; detecting a cache request and an eviction notification from said instruction cache; clearing said associated bit in response to detecting said eviction notification; detecting sharing of a modified cache line between said instruction cache and said data cache by detecting, in response to a cache line request for said modified cache line by the data cache, that said associated bit indicates said modified cache line is stored in said instruction cache; and causing a self-modifying code check to be initiated in response to detecting sharing of the modified cache line between the instruction cache and the data cache.

16. The method as recited in claim 15, further comprising setting said associated bit in response to receiving a cache request from said instruction cache while said associated bit is clear.

17. The method as recited in claim 15 further comprising clearing said associated bit in response to receiving from said instruction cache a notification indicative that said copy of said corresponding respective cache line stored within said instruction cache has been evicted.

18. The system as recited in claim 1, wherein the tag logic is further configured to detect sharing of a modified cache line between said instruction cache memory and said data cache memory if, in response to a cache line request for said modified cache line by the instruction cache memory, said second associated bit indicates said modified cache line is stored in said data cache memory.

19. The microprocessor as recited in claim 12, wherein the tag logic is further configured to detect sharing of a modified cache line between said instruction cache and said data cache if, in response to a cache line request for said modified cache line by the instruction cache, said second associated bit indicates said modified cache line is stored in said data cache.

20. The method as recited in claim 15, further comprising detecting sharing of a modified cache line between said instruction cache and said data cache by, in response to a cache line request for said modified cache line by the instruction cache, detecting that said second associated bit indicates said modified cache line is stored in said data cache. |
BACKGROUND OF THE INVENTION

1. Field of the Invention

This invention relates to microprocessors and, more particularly, to cache subsystems within a microprocessor.

2. Description of the Related Art

Typical computer systems may contain one or more microprocessors which may be connected to one or more system memories. The processors may execute code and operate on data that is stored within the system memories. It is noted that as used herein, the term "processor" is synonymous with the term microprocessor. To facilitate the fetching and storing of instructions and data, a processor typically employs some type of memory system. In addition, to expedite accesses to the system memory, one or more cache memories may be included in the memory system. For example, some microprocessors may be implemented with one or more levels of cache memory. As used herein, the level of a cache refers to the cache's proximity to the microprocessor core relative to another cache's proximity to the microprocessor core; an L1 cache, for example, is considered to be at a higher level than an L2 cache. In a typical microprocessor, a level one (L1) cache and a level two (L2) cache may be used, while some newer processors may also use a level three (L3) cache. In many legacy processors, the L1 cache may reside on-chip and the L2 cache may reside off-chip. However, to further improve memory access times, many newer processors may use an on-chip L2 cache.

The L2 cache is often implemented as a unified cache, while the L1 cache may be implemented as a separate instruction cache and a data cache. The L1 data cache is used to hold the data most recently read or written by the software running on the microprocessor. The L1 instruction cache is similar to the L1 data cache except that it holds the instructions executed most frequently. It is noted that for convenience the L1 instruction cache and the L1 data cache may be referred to simply as the L1 cache, as appropriate. The L2 cache may be used to hold instructions and data that do not fit in the L1 cache. The L2 cache may be exclusive (e.g., it stores information that is not in the L1 cache) or it may be inclusive (e.g., it stores a copy of the information that is in the L1 cache).

During a read or write to cacheable memory, the L1 cache is first checked to see if the requested information (e.g., instruction or data) is available. If the information is available, a hit occurs. If the information is not available, a miss occurs. If a miss occurs, then the L2 cache may be checked. Thus, when a miss occurs in the L1 cache but hits within the L2 cache, the information may be transferred from the L2 cache to the L1 cache in a cache line fill. As described below, the amount of information transferred between the L2 and the L1 caches is typically a cache line. In addition, depending on the space available in the L1 cache, a cache line may be evicted from the L1 cache to make room for the new cache line and may be subsequently stored in the L2 cache. If the cache line that is being evicted is in a modified state, the microprocessor may perform a cache line write-back to system memory when it performs the cache line fill. These write-backs help maintain coherency between the caches and system memory.

Memory subsystems typically use some type of cache coherence mechanism to ensure that accurate data is supplied to a requester. The cache coherence mechanism typically uses the size of the data transferred in a single request as the unit of coherence.
The unit of coherence is commonly referred to as a cache line. In some processors, for example, a given cache line may be 64 bytes, while some processors employ a cache line of 32 bytes. In yet other processors, other numbers of bytes may be included in a single cache line. If a request misses in the L1 and L2 caches, an entire cache line of multiple words is transferred from main memory to the L2 and L1 caches.

Generally speaking, a lower-level cache such as an L2 cache, for example, may maintain coherency information for a higher-level cache such as an L1 cache. Inclusive cache implementations typically require back-probes of the higher-level cache in response to a variety of lower-level cache accesses. For example, the L2 cache may perform a "back-probe" of the L1 cache in response to receiving a probe to determine if a copy of an L2 cache line exists in the L1 cache. This back-probing of the higher-level cache may reduce the available bandwidth of the cache bus and thus may increase the latency associated with cache accesses.

SUMMARY OF THE INVENTION

Various embodiments of a cache memory system including a cache memory employing a tag including associated touch bits are disclosed. In one embodiment, a system is contemplated which includes a first cache memory subsystem having a first cache storage which is coupled to a second cache memory subsystem including a second cache storage. The first cache storage may be configured to store a first plurality of cache lines of data. The second cache storage may be configured to store a second plurality of cache lines of data. Further, the second cache memory subsystem includes a tag storage which may store a plurality of tags each corresponding to a respective cache line of the second plurality of cache lines. In addition, each of said plurality of tags includes an associated bit indicative of whether a copy of the corresponding respective cache line is stored within the first cache memory subsystem.

In one specific implementation, if the associated bit is clear, a copy of the corresponding respective cache line is not stored within the first cache memory subsystem.

In another specific implementation, if the bit is set, a copy of said corresponding respective cache line of said second plurality of cache lines is stored within said first cache memory subsystem.

In another embodiment, a cache memory subsystem for use with a higher-level cache memory is contemplated in which the cache memory subsystem includes a cache storage configured to store a plurality of cache lines of data. The cache memory subsystem also includes a tag storage which may be configured to store a plurality of tags each corresponding to a respective cache line of the plurality of cache lines. Each of the tags includes an associated bit which is indicative of whether a copy of the corresponding respective cache line is stored within the higher-level cache memory.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram of one embodiment of a microprocessor.

FIG. 2 is a block diagram of one embodiment of a cache memory system.

FIG. 3 is a block diagram of another embodiment of a cache memory system.

FIG. 4 is a block diagram of one embodiment of a computer system.

While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail.
It should be understood, however, that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope of the present invention as defined by the appended claims.

DETAILED DESCRIPTION

Turning now to FIG. 1, a block diagram of one embodiment of an exemplary microprocessor 100 is shown. Microprocessor 100 is configured to execute instructions stored in a system memory (not shown in FIG. 1). Many of these instructions operate on data stored in the system memory. It is noted that the system memory may be physically distributed throughout a computer system and may be accessed by one or more microprocessors such as microprocessor 100, for example. In one embodiment, microprocessor 100 is an example of a microprocessor which implements the x86 architecture such as an Athlon(TM) processor, for example. However, other embodiments are contemplated which include other types of microprocessors.

In the illustrated embodiment, microprocessor 100 includes a first level one (L1) cache and a second L1 cache: an instruction cache 101A and a data cache 101B. Depending upon the implementation, the L1 cache may be a unified cache or a bifurcated cache. In either case, for simplicity, instruction cache 101A and data cache 101B may be collectively referred to as L1 cache where appropriate. Microprocessor 100 also includes a pre-decode unit 102 and branch prediction logic 103 which may be closely coupled with instruction cache 101A. Microprocessor 100 also includes a fetch and decode control unit 105 which is coupled to an instruction decoder 104; both of which are coupled to instruction cache 101A. An instruction control unit 106 may be coupled to receive instructions from instruction decoder 104 and to dispatch operations to a scheduler 118. Scheduler 118 is coupled to receive dispatched operations from instruction control unit 106 and to issue operations to execution unit 124. Execution unit 124 includes a load/store unit 126 which may be configured to perform accesses to data cache 101B. Results generated by execution unit 124 may be used as operand values for subsequently issued instructions and/or stored to a register file (not shown). Further, microprocessor 100 includes an on-chip L2 cache 130 which is coupled between instruction cache 101A, data cache 101B and the system memory.

Instruction cache 101A may store instructions before execution. Functions which may be associated with instruction cache 101A may be instruction loads, instruction pre-fetching, instruction pre-decoding and branch prediction. Instruction code may be provided to instruction cache 101A by pre-fetching code from the system memory through bus interface unit 140 or, as will be described further below, from L2 cache 130. Instruction cache 101A may be implemented in various configurations (e.g., set-associative, fully-associative, or direct-mapped). In one embodiment, instruction cache 101A may be configured to store a plurality of cache lines where the number of bytes within a given cache line of instruction cache 101A is implementation specific. Further, in one embodiment instruction cache 101A may be implemented in static random access memory (SRAM), although other embodiments are contemplated which may include other types of memory.
It is noted that in one embodiment, instruction cache 101A may include control circuitry (not shown) for controlling cache line fills, replacements, and coherency, for example.

Instruction decoder 104 may be configured to decode instructions into operations which may be either directly decoded or indirectly decoded using operations stored within an on-chip read-only memory (ROM) commonly referred to as a microcode ROM or MROM (not shown). Instruction decoder 104 may decode certain instructions into operations executable within execution unit 124. Simple instructions may correspond to a single operation. In some embodiments, more complex instructions may correspond to multiple operations.

Instruction control unit 106 may control dispatching of operations to the execution unit 124. In one embodiment, instruction control unit 106 may include a reorder buffer for holding operations received from instruction decoder 104. Further, instruction control unit 106 may be configured to control the retirement of operations.

The operations and immediate data provided at the outputs of instruction control unit 106 may be routed to scheduler 118. Scheduler 118 may include one or more scheduler units (e.g. an integer scheduler unit and a floating point scheduler unit). It is noted that as used herein, a scheduler is a device that detects when operations are ready for execution and issues ready operations to one or more execution units. For example, a reservation station may be a scheduler. Each scheduler 118 may be capable of holding operation information (e.g., bit encoded execution bits as well as operand values, operand tags, and/or immediate data) for several pending operations awaiting issue to an execution unit 124. In some embodiments, each scheduler 118 may not provide operand value storage. Instead, each scheduler may monitor issued operations and results available in a register file in order to determine when operand values will be available to be read by execution unit 124. In some embodiments, each scheduler 118 may be associated with a dedicated one of execution unit 124. In other embodiments, a single scheduler 118 may issue operations to more than one of execution unit 124.

In one embodiment, execution unit 124 may include an execution unit such as an integer execution unit, for example. However in other embodiments, microprocessor 100 may be a superscalar processor, in which case execution unit 124 may include multiple execution units (e.g., a plurality of integer execution units (not shown)) configured to perform integer arithmetic operations of addition and subtraction, as well as shifts, rotates, logical operations, and branch operations. In addition, one or more floating-point units (not shown) may also be included to accommodate floating-point operations. One or more of the execution units may be configured to perform address generation for load and store memory operations to be performed by load/store unit 126.

Load/store unit 126 may be configured to provide an interface between execution unit 124 and data cache 101B. In one embodiment, load/store unit 126 may be configured with a load/store buffer (not shown) with several storage locations for data and address information for pending loads or stores.
The load/store unit 126 may also perform dependency checking on older load instructions against younger store instructions to ensure that data coherency is maintained.

Data cache 101B is a cache memory provided to store data being transferred between load/store unit 126 and the system memory. Similar to instruction cache 101A described above, data cache 101B may be implemented in a variety of specific memory configurations, including a set associative configuration. In one embodiment, data cache 101B and instruction cache 101A are implemented as separate cache units. Although as described above, alternative embodiments are contemplated in which data cache 101B and instruction cache 101A may be implemented as a unified cache. In one embodiment, data cache 101B may store a plurality of cache lines where the number of bytes within a given cache line of data cache 101B is implementation specific. Similar to instruction cache 101A, in one embodiment data cache 101B may also be implemented in static random access memory (SRAM), although other embodiments are contemplated which may include other types of memory. It is noted that in one embodiment, data cache 101B may include control circuitry (not shown) for controlling cache line fills, replacements, and coherency, for example.

L2 cache 130 is also a cache memory and it may be configured to store instructions and/or data. In the illustrated embodiment, L2 cache 130 may be an on-chip cache and may be configured as either fully associative or set associative or a combination of both. In one embodiment, L2 cache 130 may store a plurality of cache lines. It is noted that L2 cache 130 may include control circuitry (not shown in FIG. 1) for controlling cache line fills, replacements, and coherency, for example.

As will be described in greater detail below in conjunction with the description of FIG. 2 and FIG. 3, in one embodiment, L2 cache 130 may employ a tag portion which includes an associated bit which may be indicative of whether a copy of the L2 cache line corresponding to a given L2 tag is stored within L1 cache 101. This bit is referred to as a 'touch' bit. In an alternative embodiment such as the embodiment illustrated in FIG. 3, L1 cache 101 may include cache line sizes which are different than the cache line size of L2 cache 130. In such an embodiment, L2 cache 130 may include a touch bit for each L1 cache line which corresponds to the L2 cache line.

Bus interface unit 140 may be configured to transfer instructions and data between system memory and L2 cache 130 and between system memory and L1 instruction cache 101A and L1 data cache 101B. In one embodiment, bus interface unit 140 may include buffers (not shown) for buffering write transactions during write cycle streamlining.

Referring to FIG. 2, a block diagram of one embodiment of a cache memory system 200 is shown. Components that correspond to those shown in FIG. 1 are numbered identically for simplicity and clarity. In one embodiment, cache system 200 is part of microprocessor 100 of FIG. 1. Cache system 200 includes an L1 cache memory 101 coupled to an L2 cache memory 130 via a plurality of cache transfer buses 255. Further, cache system 200 includes a cache control 210 which is coupled to L1 cache memory 101 and to L2 cache memory 130 via cache request buses 215A and 215B, respectively. It is noted that although L1 cache memory 101 is illustrated as a unified cache in FIG. 2,
other embodiments are contemplated that include separate instruction and data cache units, such as instruction cache 101A and L1 data cache 101B of FIG. 1, for example. It is also noted that L1 cache memory 101 and L2 cache memory 130 may each be referred to as a cache memory subsystem.

As described above, memory read and write operations are generally carried out using a cache line of data as the unit of coherency and consequently as the unit of data transferred to and from system memory. Caches are generally divided into fixed sized blocks called cache lines. The cache allocates lines corresponding to regions in memory of the same size as the cache line, aligned on an address boundary equal to the cache line size. For example, in a cache with 32-byte lines, the cache lines may be aligned on 32-byte boundaries. The size of a cache line is implementation specific although many typical implementations use either 32-byte or 64-byte cache lines.

In one embodiment, cache control 210 may include logic (not shown) which may control the transfer of data between L1 cache 101 and L2 cache 130. In addition, cache control 210 may control the flow of data between a requester and cache system 200. It is noted that although in the illustrated embodiment cache control 210 is depicted as being a separate block, other embodiments are contemplated in which portions of cache control 210 may reside within L1 cache memory 101 and/or L2 cache memory 130.

In the illustrated embodiment, L1 cache memory 101 includes a tag storage which is designated tag portion 230 and a data storage which is designated data portion 235. A cache line typically includes a number of bytes of data as described above and other information (not shown in FIG. 2) such as state information and pre-decode information. Each of the tags within tag portion 230 is an independent tag and may include address information corresponding to a cache line of data within data portion 235. The address information in the tag is used to determine if a given piece of data is present in the cache during a memory request. For example, a memory request includes an address of the requested data. Compare logic (not shown) within tag portion 230 compares the requested address with the address information within each tag stored within tag portion 230. If there is a match between the requested address and an address associated with a given tag, a hit is indicated as described above. If there is no matching tag, a miss is indicated. In the illustrated embodiment, tag A1 corresponds to data A1, tag A2 corresponds to data A2, and so forth, wherein each of data units A1, A2 . . . Am is a cache line within L1 cache memory 101.

In the illustrated embodiment, L2 cache memory 130 also includes a tag storage which is designated tag portion 245 and a data storage which is designated data portion 250. Each of the tags within tag portion 245 includes address information corresponding to a cache line of data within data portion 250. In the illustrated embodiment, tag B1 corresponds to the cache line B1, tag B2 corresponds to the cache line B2, and so forth.

In addition, in one embodiment, an associated touch bit 246 is stored along with each of the tags in tag portion 245. The state of touch bit 246 may be indicative of whether a copy of the cache line corresponding to a given tag in the L2 cache memory is stored within the L1 cache 101.
For example, in one embodiment, a set touch bit 246 may be indicative that a copy of the cache line corresponding to a given tag in the L2 cache memory is stored within the L1 cache 101, and a clear touch bit 246 may be indicative that a copy of the cache line corresponding to a given tag in the L2 cache memory is not stored within the L1 cache 101. This may be referred to as a precise indication. However, it is contemplated that in other embodiments the logic may be reversed, such that a clear touch bit 246 may be indicative that a copy of the cache line corresponding to a given tag in the L2 cache memory is stored within the L1 cache 101 and a set touch bit 246 may be indicative that a copy of the cache line corresponding to a given tag in the L2 cache memory is not stored within the L1 cache 101. It is noted that as used herein in reference to the state of touch bit 246, the terms set and clear refer to a logic one and a logic zero, respectively.

It is noted that in an alternative embodiment, the state of touch bit 246 may be indicative of whether a copy of the cache line corresponding to a given tag in the L2 cache memory was at one time stored within the L1 cache 101. For example, certain L1 cache implementations provide no feedback to the L2 cache when an L1 cache line is evicted. When using such L1 cache implementations, in response to an initial cache request from the L1 cache, the touch bit 246 corresponding to the requested cache line may be set. However, that touch bit 246 may not be cleared in response to the eviction of the cache line copy from the L1 cache. Therefore, the touch bit may be indicative that the L2 cache line copy was stored in the L1 cache, but may or may not be stored there now. This may be referred to as an imprecise indication.

In another embodiment, L1 cache 101 may be implemented using an instruction cache and a data cache as described in conjunction with the description of FIG. 1. In such an embodiment, separate touch bits may be used for the L1 instruction cache and the L1 data cache.

It is noted that the touch bits described above may be applied generally to any lower-level cache memory, wherein the touch bits may provide an indication of whether a copy of a cache line corresponding to a given tag in a lower-level cache memory is stored within a higher-level cache.

In the illustrated embodiment, control logic 265 may monitor accesses to tag portion 245 and thus to L2 cache memory 130. Control logic 265 may be configured to detect cache requests from various sources such as L1 cache memory 101, for example. In response to detecting a cache request from L1 cache memory 101, control logic 265 may be configured to set touch bit 246. For example, if a cache request misses in L1 cache 101, a request may be made to L2 cache 130. If there is a hit in L2 cache memory 130, the requested cache line, or a portion thereof, may be transferred to L1 cache memory 101 and to the originator of the request. Accordingly, the touch bit for that cache line may be set, indicating that a copy of the L2 cache line is stored within L1 cache memory 101. Further, in response to detecting that a copy of a cache line has been evicted from the L1 cache memory 101, control logic 265 may be configured to clear touch bit 246.
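The update rules above lend themselves to a compact illustration. The following C sketch models one L2 tag entry and its touch bit maintenance under the precise-indication scheme; the structure and function names are hypothetical and not part of the embodiments described herein.

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical model of an L2 tag entry with its associated touch bit,
     * following the precise-indication scheme described above. */
    struct l2_tag_entry {
        uint64_t tag;    /* address tag for the L2 cache line */
        bool     touch;  /* set: a copy resides in L1; clear: it does not */
    };

    /* Invoked when an L1 miss is filled from this L2 line: the control
     * logic (265) records that a copy now lives in the L1 cache. */
    static void on_l1_fill(struct l2_tag_entry *e) {
        e->touch = true;
    }

    /* Invoked only if the L1 cache reports evictions back to the L2. */
    static void on_l1_evict(struct l2_tag_entry *e) {
        e->touch = false;
    }

If the L1 cache provides no eviction feedback, on_l1_evict() is never invoked and the touch bit degrades to the imprecise indication: the line was in the L1 at some time and may or may not still be there.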
As described above, cache line write-backs help maintain coherency between the caches and system memory. Another way a microprocessor may maintain cache coherency is by internally probing the caches and write buffers for a more recent version of the requested data. In addition, external devices may also check the caches within a microprocessor by externally probing the microprocessor. The touch bit 246 described above may reduce the amount of probing, particularly between caches. The touch bit may effectively act as a probe filter. For example, in response to a hit in L2 cache memory 130, control logic 265 may check the touch bit 246 corresponding to the cache line which hit. If the touch bit 246 is clear, then there is no need to back-probe L1 cache memory 101. If the touch bit is set, then L1 cache 101 may be back-probed to check for a more recent version of the requested data. In embodiments which employ two L1 caches (e.g., an L1 instruction cache and an L1 data cache), if one of the corresponding L1 touch bits is set, then only that L1 cache may need to be back-probed.

Software which writes into a code segment is classified as self-modifying code (SMC). To avoid cache coherency problems due to SMC, a check is made during data writes to see whether the data-memory location corresponds to a code segment memory location. A conventional microprocessor may determine whether or not a write is in a code segment by internally probing the L1 instruction cache. If the probe returns a hit, the cache line in the L1 instruction cache may be invalidated and any corresponding prefetched instructions may also be invalidated. However, in the illustrated embodiment, the touch bit 246 may be used to reduce the amount of probing and also to determine whether an SMC check is necessary.

For example, in embodiments which employ two L1 caches (e.g., an L1 instruction cache (L1 I-cache) and an L1 data cache (L1 D-cache)) such as described above, if the L1 D-cache requests a modified cache line from L2 cache memory 130 and the touch bit corresponding to the I-cache is clear, L2 cache memory 130 may return the requested cache line to the D-cache. If, however, the touch bit corresponding to the I-cache is set, then control logic 265 of L2 cache memory 130 may initiate an SMC check prior to returning the requested data to the D-cache. Further, if the I-cache requests a modified cache line from L2 cache memory 130 and the touch bit corresponding to the D-cache is clear, then no SMC check need be performed. Conversely, if the touch bit corresponding to the D-cache is set, then an SMC check may need to be performed. Therefore, L2 cache memory 130 may detect the sharing of a modified cache line between the I-cache and the D-cache and allow only one of them to have the modified line at any given time.

In one embodiment, L1 cache 101 may be linearly indexed and physically tagged. In such an embodiment, it is possible for the L1 cache 101 to have two copies of the same cache line in different places. This may occur when two cache lines have the same physical address but different linear addresses. This condition is referred to as linear aliasing. To prevent linear aliasing, L2 cache 130 may use the touch bit 246 to determine whether a possible linear alias is occurring. For example, if L1 cache 101 requests a cache line and touch bit 246 corresponding to that cache line is set, then a copy of that cache line may exist in L1 cache memory. In the illustrated embodiment, in addition to touch bit 246 being stored with the address tag, the linear address bits 247 of each cache line may also be stored with the address tag. Control logic 265 may use linear address bits 247 to index into L1 cache memory 101 during a back-probe to find the cache line copy. The copy may be evicted, and the requested cache line may be returned to L1 cache memory 101.
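The probe-filter and SMC-check decisions described above reduce to a few boolean tests per L2 hit. A minimal C sketch follows; the per-cache touch-bit fields and function names are illustrative only.

    #include <stdbool.h>

    /* Hypothetical per-line state for an embodiment with separate
     * I-cache and D-cache touch bits, as in the example above. */
    struct l2_line_state {
        bool modified;      /* the L2 line holds modified data     */
        bool touch_icache;  /* a copy may reside in the L1 I-cache */
        bool touch_dcache;  /* a copy may reside in the L1 D-cache */
    };

    /* An SMC check is needed only when the other cache may hold the
     * modified line. */
    static bool dcache_req_needs_smc_check(const struct l2_line_state *s) {
        return s->modified && s->touch_icache;
    }

    static bool icache_req_needs_smc_check(const struct l2_line_state *s) {
        return s->modified && s->touch_dcache;
    }

    /* Probe filtering: with both touch bits clear, no back-probe of
     * the L1 is required at all. */
    static bool needs_back_probe(const struct l2_line_state *s) {
        return s->touch_icache || s->touch_dcache;
    }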
Many cache subsystems use a least recently used (LRU) algorithm to evict cache lines from a given cache. In other words, the cache line which is the least recently used may be evicted. However, to more efficiently use L2 cache memory 130 and possibly reduce the number of back-probes to L1 cache memory 101, this LRU algorithm may be somewhat biased away from evicting cache lines which have a copy stored in a higher-level cache. For example, if L2 cache memory 130 is about to evict a given cache line, the corresponding touch bit 246 may be checked first to see if a cache line copy exists in L1 cache 101. If the corresponding touch bit 246 is set, it may be preferable to select a different cache line which has a clear touch bit 246, because if a cache line with a clear touch bit 246 is selected for eviction, it will not be necessary to evict the cache line copy from L1 cache 101.

In addition, when evicting a given cache line from L2 cache 130, if the corresponding touch bit 246 is clear, it may not be necessary to back-probe L1 cache 101. Further, in embodiments employing an L1 I-cache and an L1 D-cache as described above, if one of the corresponding touch bits is set, only that cache may need to be back-probed.
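One way to express this biased victim selection is shown in the following C sketch, which assumes candidates are presented in least-recently-used order; the names are hypothetical.

    #include <stdbool.h>
    #include <stddef.h>

    /* Hypothetical candidate record: the way index within the set and
     * the touch bit indicating a (possible) L1 copy. Candidates are
     * assumed ordered from least to most recently used. */
    struct victim_candidate {
        size_t way;
        bool   touch;
    };

    static size_t pick_victim(const struct victim_candidate *c, size_t n) {
        /* Prefer the oldest line with a clear touch bit: no L1 copy
         * must be displaced and no back-probe is required. */
        for (size_t i = 0; i < n; i++)
            if (!c[i].touch)
                return c[i].way;
        /* Fall back to strict LRU if every candidate is also in L1. */
        return c[0].way;
    }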
Turning to FIG. 3, a block diagram of another embodiment of a cache system 300 is shown. Components that correspond to those shown in FIG. 1 and FIG. 2 are numbered identically for simplicity and clarity. In one embodiment, cache system 300 is part of microprocessor 100 of FIG. 1. Cache system 300 includes an L1 cache memory 101 coupled to an L2 cache memory 130 via a plurality of cache transfer buses 255. Further, cache system 300 includes a cache control 210 which is coupled to L1 cache memory 101 and to L2 cache memory 130 via cache request buses 215A and 215B, respectively. It is noted that although L1 cache memory 101 is illustrated as a unified cache in FIG. 3, other embodiments are contemplated that include separate instruction and data cache units, such as instruction cache 101A and L1 data cache 101B of FIG. 1, for example.

As described above in conjunction with the description of FIG. 2, memory read and write operations are generally carried out using a cache line of data as the unit of coherency and consequently as the unit of data transferred to and from system memory.

Cache control 210 may include logic (not shown) which may control the transfer of data between L1 cache 101 and L2 cache 130. Similar to the embodiment described above in conjunction with the description of FIG. 2, cache control 210 may control the flow of data between a requester and cache system 300. It is noted that although in the illustrated embodiment cache control 210 is depicted as being a separate block, other embodiments are contemplated in which portions of cache control 210 may reside within L1 cache memory 101 and/or L2 cache memory 130.

Similar to the embodiment described above in conjunction with the description of FIG. 2, L1 cache memory 101 of FIG. 3 also includes a tag portion 330 and a data portion 335. Each of the tags within tag portion 330 is an independent tag and may include address information corresponding to a cache line of data within data portion 335. Compare logic (not shown) within tag portion 330 compares the requested address with the address information within each tag stored within tag portion 330. If there is a match between the requested address and an address associated with a given tag, a hit is indicated as described above. If there is no matching tag, a miss is indicated. In the illustrated embodiment, tag A1 corresponds to data A1, tag A2 corresponds to data A2, and so forth, wherein each of data units A1, A2 . . . Am is a cache line within L1 cache memory 101.

In the illustrated embodiment, L2 cache memory 130 also includes a tag portion 345 and a data portion 350. Each of the tags within tag portion 345 includes address information corresponding to a cache line of data within data portion 350. In the illustrated embodiment, each cache line includes four sub-lines of data. For example, tag B1 corresponds to the cache line B1 which includes the four sub-lines of data designated B1(0-3). Tag B2 corresponds to the cache line B2 which includes the four sub-lines of data designated B2(0-3), and so forth.

In the illustrated embodiment, a cache line in L1 cache memory 101 is equivalent to one sub-line of the L2 cache memory 130. For example, the size of a cache line of L2 cache memory 130 (e.g., four sub-lines of data) is a multiple of the size of a cache line of L1 cache memory 101 (e.g., one sub-line of data). In the illustrated embodiment, the L2 cache line size is four times the size of the L1 cache line. In other embodiments, different cache line size ratios may exist between the L2 and L1 caches in which the L2 cache line size is larger than the L1 cache line size. Accordingly, the amount of data transferred between L2 cache memory 130 and system memory (or an L3 cache) in response to a single memory request may be greater than the amount of data transferred between L1 cache memory 101 and L2 cache memory 130 in response to a single memory request.

During a cache transfer between L1 cache memory 101 and L2 cache memory 130, the amount of data transferred on cache transfer buses 255 each microprocessor cycle or "beat" is equivalent to an L2 cache sub-line, which is equivalent to an L1 cache line. A cycle or "beat" may refer to one clock cycle or clock edge within the microprocessor. In other embodiments, a cycle or "beat" may require multiple clocks to complete. In the illustrated embodiment, each cache may have separate input and output ports and corresponding cache transfer buses 255; thus, data transfers between the L1 and L2 caches may occur simultaneously in both directions. However, in embodiments having only a single cache transfer bus 255, it is contemplated that only one transfer may occur in one direction each cycle. In alternative embodiments, it is contemplated that other numbers of data sub-lines may be transferred in one cycle. In one embodiment, a sub-line of data may be 16 bytes, although other embodiments are contemplated in which a sub-line of data may include other numbers of bytes.
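To make the sub-line arithmetic concrete, the following C sketch uses the illustrative numbers above (16-byte sub-lines, four sub-lines per L2 line, one sub-line per beat); it is a model, not a description of actual bus hardware.

    #include <stddef.h>
    #include <stdint.h>

    #define SUBLINE_BYTES   16u  /* illustrative sub-line size         */
    #define SUBLINES_PER_L2 4u   /* illustrative L2:L1 line-size ratio */

    /* Hypothetical L1 fill from an L2 line: exactly one sub-line, the
     * one containing the requested address, moves in a single "beat". */
    static void fill_l1_from_l2(uint8_t l1_line[SUBLINE_BYTES],
                                const uint8_t l2[SUBLINES_PER_L2][SUBLINE_BYTES],
                                uint64_t addr) {
        size_t subline = (addr / SUBLINE_BYTES) % SUBLINES_PER_L2;
        for (size_t i = 0; i < SUBLINE_BYTES; i++)
            l1_line[i] = l2[subline][i];
    }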
As described above in conjunction with the description of FIG. 2, touch bits may be used to indicate whether a copy of the cache line corresponding to a given tag in the L2 cache memory is stored within the L1 cache 101. In the embodiment of FIG. 3, associated touch bits 346 are similarly stored along with each of the tags in tag portion 345. The touch bits 346 are designated (0-3) and correspond to cache sub-lines (0-3), respectively. The state of each touch bit 346 of FIG. 3 may be indicative of whether a copy of the cache sub-line corresponding to a given tag in the L2 cache memory is stored within the L1 cache 101. As described above, this is referred to as a precise indication. For example, in one embodiment, a set touch bit 346 may be indicative that a copy of the cache sub-line corresponding to a given tag in the L2 cache memory is stored within the L1 cache 101, and a clear touch bit 346 may be indicative that a copy of the cache sub-line corresponding to a given tag in the L2 cache memory is not stored within the L1 cache 101.

It is noted that in an alternative embodiment, the state of each touch bit 346 may be indicative of whether a copy of the cache sub-line corresponding to a given tag in the L2 cache memory was at one time stored within the L1 cache 101. For example, certain L1 cache implementations provide no feedback to the L2 cache when a cache line is evicted. When using such L1 cache implementations, in response to an initial cache request from the L1 cache, the touch bit 346 corresponding to the requested cache sub-line may be set. However, that touch bit 346 may not be cleared in response to the eviction of the cache line copy from the L1 cache. Therefore, the touch bit may be indicative that the L2 cache sub-line copy was stored in the L1 cache, but may or may not be stored there now. This may be referred to as an imprecise indication.

In another embodiment, L1 cache 101 may be implemented using an instruction cache and a data cache as described in conjunction with the description of FIG. 1. In such an embodiment, separate touch bits may be used for the L1 instruction cache and the L1 data cache. Accordingly, there may be two touch bits for each of the touch bits 346 shown in FIG. 3.

Further, other embodiments are contemplated in which more than two L1 caches may be present. In still other embodiments, multiple processors (not shown) each having an L1 cache may all have access to the L2 cache memory 130. Accordingly, there may be a touch bit 346 which corresponds to each of the L1 caches of which L2 cache memory 130 is keeping track. Further, depending on the state of the touch bits, L2 cache memory 130 may be configured to notify a given L1 cache when its data has been displaced and to either write the data back or to invalidate the corresponding data as necessary.

It is noted that the touch bits 346 of FIG. 3 may be used by L2 cache 130 as described above in conjunction with the description of FIG. 2. For example, touch bits 346 of FIG. 3 may be used to effectively filter probes, to determine when SMC checks may be needed, to detect linear aliasing, to effectively bias the L2 cache eviction policy, and to determine if back-probing is needed when evicting a cache line from the L2 cache. When evicting a cache line from the L2 cache, touch bits 346 may be used to determine which of the four possible L1 sub-lines should be back-probed.

Thus, the use of the touch bits in a lower-level cache such as an L2 cache may increase the available bandwidth of the cache subsystem by reducing the number of back-probes associated with a higher-level cache such as an L1 cache. Further, the use of the touch bits may simplify or eliminate some of the overhead operations associated with SMC checks and linear aliasing detection which conventional microprocessors may perform.
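As a closing illustration of the FIG. 3 scheme, the C sketch below keeps one touch bit per sub-line in a four-bit field and derives the back-probe mask used on eviction; the layout is hypothetical.

    #include <stdint.h>

    /* Hypothetical L2 tag with per-sub-line touch bits: bit i tracks
     * sub-line i of the four sub-lines (FIG. 3 style). */
    struct l2_tag3 {
        uint64_t tag;
        uint8_t  touch;  /* bits 0-3 correspond to sub-lines 0-3 */
    };

    /* On an L1 request, mark only the requested sub-line as present. */
    static void mark_subline_touched(struct l2_tag3 *t, unsigned subline) {
        t->touch |= (uint8_t)(1u << (subline & 3u));
    }

    /* On eviction of the whole L2 line, only sub-lines whose touch bit
     * is set need to be back-probed in the L1. */
    static uint8_t sublines_to_back_probe(const struct l2_tag3 *t) {
        return t->touch & 0xFu;
    }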
Turning to FIG. 4, a block diagram of one embodiment of a computer system is shown. Components that correspond to those shown in FIG. 1-FIG. 3 are numbered identically for clarity and simplicity. Computer system 400 includes a microprocessor 100 coupled to a system memory 410 via a memory bus 415. Microprocessor 100 is further coupled to an I/O node 420 via a system bus 425. I/O node 420 is coupled to a graphics adapter 430 via a graphics bus 435. I/O node 420 is also coupled to a peripheral device 440 via a peripheral bus 445.

In the illustrated embodiment, microprocessor 100 is coupled directly to system memory 410 via memory bus 415. For controlling accesses to system memory 410, microprocessor 100 may include a memory controller (not shown) within bus interface unit 140 of FIG. 1, for example. It is noted, however, that in other embodiments, system memory 410 may be coupled to microprocessor 100 through I/O node 420. In such an embodiment, I/O node 420 may include a memory controller (not shown). Further, in one embodiment, microprocessor 100 includes a cache system such as cache system 200 of FIG. 2. In other embodiments, microprocessor 100 includes a cache system such as cache system 300 of FIG. 3.

System memory 410 may include any suitable memory devices. For example, in one embodiment, system memory may include one or more banks of dynamic random access memory (DRAM) devices, although it is contemplated that other embodiments may include other memory devices and configurations.

In the illustrated embodiment, I/O node 420 is coupled to graphics bus 435, peripheral bus 445, and system bus 425. Accordingly, I/O node 420 may include a variety of bus interface logic (not shown) which may include buffers and control logic for managing the flow of transactions between the various buses. In one embodiment, system bus 425 may be a packet-based interconnect compatible with the HyperTransport(TM) technology. In such an embodiment, I/O node 420 may be configured to handle packet transactions. In alternative embodiments, system bus 425 may be a typical shared bus architecture such as a front-side bus (FSB), for example.

Further, graphics bus 435 may be compatible with accelerated graphics port (AGP) bus technology. In one embodiment, graphics adapter 430 may be any of a variety of graphics devices configured to generate and display graphics images. Peripheral bus 445 may be an example of a common peripheral bus such as a peripheral component interconnect (PCI) bus, for example. Peripheral device 440 may be any type of peripheral device such as a modem or sound card, for example.

Although the embodiments above have been described in considerable detail, numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. It is intended that the following claims be interpreted to embrace all such variations and modifications. |
The present invention discloses processors, systems, and methods for an in-memory host-convertible secure enclave. The processor includes a cryptographic engine to control access, using a secure region key identifier (ID), to one or more memory ranges of memory allocable for flexible conversion to secure pages of architecturally protected memory regions, and a processor core. The processor core is to, responsive to receipt of a request to access the memory, perform a walk of page tables and extended page tables to translate a linear address of the request to a physical address of the memory. The processor core is further to determine that the physical address corresponds to a secure page within the one or more memory ranges of the memory, determine that a first key ID located within the physical address does not match the secure region key ID, and issue a page fault and deny access to the secure page in the memory. |
1. A processor comprising: a cryptographic engine to control access, using a secure area key identifier (ID), to one or more memory ranges of a memory, the one or more memory ranges being allocable for flexible conversion into secure pages of an architecturally protected memory area; and a processor core coupled to the cryptographic engine, the processor core to: determine that a physical address associated with a request to access the memory corresponds to a secure page within the one or more memory ranges of the memory; determine that a first key ID located within the physical address does not match the secure area key ID; and issue a page fault and deny access to the secure page in the memory.

2. The processor of claim 1, wherein the processor core is further to execute a set of instructions in basic input/output system (BIOS) firmware to: discover that a host-convertible secure area mode and a secure extension mode are enabled; program a secure extension key into the cryptographic engine to correspond to the secure area key ID; and reserve the one or more memory ranges of the memory for flexible conversion into the secure pages.

3. The processor of claim 2, wherein the processor core is further to execute memory check firmware to cause a memory check process to fail in response to detecting that the secure area key ID is not assigned for use in conjunction with the secure extension key.

4. The processor of claim 2, wherein the processor core is further to execute the set of instructions to allocate one of a plurality of key IDs to be used exclusively as the secure area key ID.

5. The processor of claim 2, wherein the processor core is further to execute a central processing unit identifier (CPUID) instruction, wherein the CPUID instruction has: a first register input to determine the one or more memory ranges of the memory allocated for flexible conversion into secure pages; and a second register input to determine the secure area key ID and associated security attributes.

6. The processor of claim 1, wherein the processor core is further to map, using the secure area key ID, a guest virtual address of the secure page to a second physical address in the page tables and the extended page tables, such that the second physical address contains the secure area key ID.
7. A processor comprising: a cryptographic engine to control access, using a secure area key identifier (ID), to one or more memory ranges of a memory, the one or more memory ranges being allocable for flexible conversion into secure pages of an architecturally protected memory area; and a processor core coupled to the cryptographic engine, the processor core to: determine that a physical address associated with a request to access the memory corresponds to a non-secure page of the memory; determine that a first key ID located within the physical address matches the secure area key ID; and deny access to the non-secure page of the memory.

8. The processor of claim 7, wherein the processor core is further to: replace the physical address in the request with an abort page address, which is linked to an abort page that contains incorrect data; and allow a system agent that issued the request to access the abort page.

9. The processor of claim 7, wherein the processor core is further to execute a set of instructions in basic input/output system (BIOS) firmware to: discover that a host-convertible secure area mode and a secure extension mode are enabled; program a secure extension key into the cryptographic engine to correspond to the secure area key ID; and reserve the one or more memory ranges of the memory for flexible conversion into the secure pages.

10. The processor of claim 9, wherein the processor core is further to execute the set of instructions to allocate one of a plurality of key IDs to be used exclusively as the secure area key ID.

11. The processor of claim 9, wherein the processor core is further to execute memory check firmware to cause a memory check process to fail in response to detecting that the secure area key ID is not assigned for use in conjunction with the secure extension key.
12. The processor of claim 9, wherein the processor core is further to execute a central processing unit identifier (CPUID) instruction, wherein the CPUID instruction has: a first register input to determine the one or more memory ranges of the memory allocated for flexible conversion into secure pages; and a second register input to determine the secure area key ID and associated security attributes.

13. A system comprising: a cache and home agent (CHA) of a memory subsystem, the CHA to: set a grid security bit of a cache line in response to detecting that a first key identifier (ID) in a physical address of the cache line matches a secure area key ID; and issue a write operation to a memory for the cache line; and a cryptographic engine coupled to the CHA, wherein the cryptographic engine is to, as part of completing the write operation, set a memory security bit in metadata of the cache line in the memory to the value of the grid security bit.

14. The system of claim 13, wherein the cryptographic engine is further to: detect a read operation for the cache line stored in the memory; and, to perform the read operation, return a poison bit to a requesting agent in response to detecting a mismatch between the values of the grid security bit and the memory security bit.

15. The system of claim 14, wherein the cryptographic engine is further to return fixed-pattern data to the requesting agent to perform the read operation.

16. The system of claim 13, wherein the cryptographic engine is further to: detect a read operation for the cache line stored in the memory; and, to perform the read operation, return the data of the cache line to a requesting agent in response to determining that the values of the grid security bit and the memory security bit match.

17. A method comprising: selecting, by a processor, an evicted page of a memory to convert into a first secure page; and executing, by the processor, a secure area conversion instruction to initialize the evicted page as the first secure page by: writing the content of the evicted page to a zero value; calculating a message authentication code (MAC) value using the physical address of the evicted page, the data to be stored in the first secure page, and a secure area key identifier (ID), the secure area key ID corresponding to an architecturally protected memory area of the memory that includes the first secure page; and storing the MAC value in the first secure page.

18. The method of claim 17, further comprising: executing, by the processor, a memory barrier instruction to verify that operations associated with the initialization of the first secure page are complete; and making, by the processor, the first secure page accessible to one of a virtual machine or an application that is authorized to access the architecturally protected memory area of the memory.
19. The method of claim 18, further comprising: selecting, by the processor, the first secure page to evict and transition to a non-secure page; making the first secure page inaccessible to the one of the virtual machine or the application that is authorized to access the architecturally protected memory area of the memory; invalidating a mapping of the first secure page in a translation lookaside buffer of the processor; executing, by the processor, a non-secure area conversion instruction to cause the contents of one or more cache lines corresponding to the first secure page and containing the secure area key ID to be written back to the memory and flushed; and returning the first secure page to a list of evicted pages available to the processor for allocation for storage of data associated with a new key ID.

20. The method of claim 17, further comprising: determining that a physical address associated with a request to access the memory corresponds to the first secure page within one or more memory ranges of the memory; determining that a first key ID located within the physical address does not match the secure area key ID; and issuing a page fault and denying a system agent that issued the request access to the first secure page in the memory.

21. The method of claim 17, further comprising: determining that a physical address associated with a request to access the memory corresponds to a non-secure page of the memory; determining that a first key ID located within the physical address matches the secure area key ID; and denying a system agent that issued the request access to the non-secure page of the memory.

22. At least one machine-readable medium comprising a plurality of instructions that, in response to being executed on a computing device, cause the computing device to perform the method of any one of claims 17-21.

23. An apparatus comprising means for performing the method of any one of claims 17-21. |
Processor, system, and method for a host-convertible secure enclave in memory

Technical Field

The present disclosure relates to the protection of data stored in the memory of a computer system, and more specifically, to a host-convertible secure area in memory encrypted by multi-key total memory encryption with integrity.

Background

Modern processors are designed to protect sensitive data in memory from hardware and software attacks. The area of the memory protected in this way is referred to herein as protected memory. Some processors provide cryptographic mechanisms for encryption, integrity, and replay protection. Memory encryption protects the confidentiality of data residing in memory. Integrity protection, on the other hand, prevents an attacker from causing any hidden modifications to the ciphertext (i.e., encrypted data, as opposed to plaintext, or unencrypted data) in the memory, and replay protection eliminates any undetected temporal substitution of the ciphertext. Without such protection, an attacker with physical access to the system can record snapshots of data lines and replay them at a later point in time.

A static mode of protected memory management statically reserves a predetermined memory range of the main memory for enclave (or secure) pages and is the traditional mode adopted by many processors. An updated mode of protected memory management allows main memory to be flexibly converted to protected memory, which greatly increases the amount of memory that can be used as protected memory and also increases the efficiency of protected memory allocation. To set the mode of protected memory management, the basic input/output system (BIOS) determines which mode to use and communicates the mode to the operating system when the computer system starts. Therefore, to change the mode of protected memory management after the computer system is running, the computer system is restarted so that the BIOS can reset the mode of protected memory management for use by the operating system.
This restarting process may be problematic because it consumes valuable computing time and resources.

Summary of the Invention

According to an aspect of the present disclosure, there is provided a processor including: a cryptographic engine to control access, using a secure area key identifier (ID), to one or more memory ranges of a memory, the one or more memory ranges being allocable for flexible conversion into secure pages of an architecturally protected memory area; and a processor core coupled to the cryptographic engine, the processor core to: determine that a physical address associated with a request to access the memory corresponds to a secure page within the one or more memory ranges of the memory; determine that a first key ID located within the physical address does not match the secure area key ID; and issue a page fault and deny access to the secure page in the memory.

According to an aspect of the present disclosure, there is provided a processor including: a cryptographic engine to control access, using a secure area key identifier (ID), to one or more memory ranges of a memory, the one or more memory ranges being allocable for flexible conversion into secure pages of an architecturally protected memory area; and a processor core coupled to the cryptographic engine, the processor core to: determine that a physical address associated with a request to access the memory corresponds to a non-secure page of the memory; determine that a first key ID located within the physical address matches the secure area key ID; and deny access to the non-secure page of the memory.

According to an aspect of the present disclosure, there is provided a system including: a cache and home agent (CHA) of a memory subsystem, the CHA to: set a grid security bit of a cache line in response to detecting that a first key identifier (ID) in a physical address of the cache line matches a secure area key ID, and issue a write operation to a memory for the cache line; and a cryptographic engine coupled to the CHA, wherein the cryptographic engine is to, as part of completing the write operation, set a memory security bit in metadata of the cache line in the memory to the value of the grid security bit.

According to an aspect of the present disclosure, there is provided a method including: selecting, by a processor, an evicted page of a memory to convert into a first secure page; and executing, by the processor, a secure area conversion instruction to initialize the evicted page as the first secure page by: writing the content of the evicted page to a zero value; calculating a message authentication code (MAC) value using the physical address of the evicted page, the data to be stored in the first secure page, and a secure area key identifier (ID), the secure area key ID corresponding to an architecturally protected memory area of the memory containing the first secure page; and storing the MAC value in the first secure page.

According to an aspect of the present disclosure, there is provided at least one machine-readable medium including a plurality of instructions that, in response to being executed on a computing device, cause the computing device to perform the method described above.

According to an aspect of the present disclosure, there is provided an apparatus including means for performing the method described above.
Description of the Drawings

FIGS. 1A and 1B are system block diagrams of a computing device regarding the use of a host-convertible secure area in a memory using multi-key total memory encryption with integrity (MK-TMEi), according to an implementation.
FIG. 2A is a block diagram of a physical memory address including a portion of address bits allocated to a key identifier (ID), according to various implementations.
FIG. 2B is a block diagram illustrating the demarcation of cryptographic key identifiers (IDs) among key IDs used in MK-TME, key IDs used in trust domain extensions (TDX), and the secure area key ID corresponding to the key used for the host-convertible secure area in the memory, according to an implementation.
FIG. 3 is a flowchart of a method for initializing a host-convertible secure area in a memory using MK-TMEi and secure extensions (SGX), according to an implementation.
FIG. 4 is a memory diagram illustrating different ranges of memory that can be allocated for conversion into secure pages, and reserved memory that cannot be allocated for such conversion, according to an implementation.
FIG. 5 is a flowchart of a method for host-convertible secure area access control, according to an implementation.
FIG. 6 is a block diagram of page tables associated with the conversion of a linear address to a physical memory address using paging, according to an implementation.
FIG. 7A is a block diagram illustrating the conversion of a guest virtual address to a guest physical address and the conversion of a guest physical address to a host physical address, according to an implementation.
FIG. 7B is a block diagram illustrating the use of an extended page table (EPT) to convert a guest physical address into a host physical address, according to an implementation.
FIG. 8 is a functional flowchart illustrating a security check by a cryptographic engine, with reference to grid security bits and memory security bits, for a secure area memory operation, according to an implementation.
FIG. 9A is a flowchart of a method for assigning evicted memory pages to a secure area key ID, according to an implementation.
FIG. 9B is a flowchart of a method for evicting a secure page in memory for reassignment to a non-secure area key ID, according to an implementation.
FIG. 10A is a block diagram illustrating an in-order pipeline, a register renaming stage, and an out-of-order issue/execution pipeline, according to an implementation.
FIG. 10B is a block diagram illustrating a micro-architecture of a processor or integrated circuit that can implement hardware support for a multi-key cryptographic engine, according to an implementation of the present disclosure.
FIG. 11 illustrates a block diagram of a micro-architecture of a processor or integrated circuit that implements hardware support for a multi-key cryptographic engine, according to an implementation of the present disclosure.
FIG. 12 is a block diagram of a computer system, according to one implementation.
FIG. 13 is a block diagram of a computer system, according to another implementation.
FIG. 14 is a block diagram of a system-on-chip, according to one implementation.
FIG. 15 illustrates another implementation of a block diagram of a computing system.
FIG. 16 illustrates another implementation of a block diagram of a computing system.

Detailed Description

The current trend in computing is to place data and enterprise workloads in the cloud by using managed services provided by cloud service providers (CSPs).
As a result of hosting data and enterprise workloads in the cloud, CSP customers (for example, tenants) are requesting better security and isolation solutions for their workloads. In particular, customers seek solutions that enable the software provided by the CSP to operate outside the trusted computing base (TCB) of the tenant's software. The TCB of a system refers to the set of hardware, firmware, and/or software components that have the ability to influence trust in the overall operation of the system. For example, a virtual machine monitor (VMM, or hypervisor) creates and controls a virtual machine (VM), and the VM executes the tenant software. The tenant therefore wants the components of the VMM to operate outside of the tenant's TCB. If the VMM is executed as software on the hardware of a virtualized server, the VMM is considered untrusted software.

To facilitate data security in CSP-based systems, various techniques have been used to protect sensitive data residing in areas such as the memory of CSP servers. Some system processors provide cryptographic mechanisms for encryption, integrity, and replay protection. Memory encryption protects the confidentiality of data residing in memory. For example, total memory encryption (TME) can encrypt data that is moving from the processor core to the memory, and can decrypt the encrypted data as it is returned to the processor core. In addition, the CSP server can support the use of multiple encryption keys, for example, a different key for each security domain served by the server, of which there can be dozens or thousands. Therefore, the TME engine can be adapted to serve as a multi-key TME (or MK-TME) engine to securely manage the use of multiple encryption keys; this engine is more generally referred to as a cryptographic engine in this document.

Domains may refer to workloads, such as client machines (e.g., virtual machines), operating systems, applications, or other types of workloads supported by the server, that can be associated with different tenants. For example, a security domain can be a tenant workload, such as an operating system together with other ring-3 applications executed on top of the operating system, or a VM executed on top of the VMM (itself a separate domain) together with other ring-3 applications. The benefit of supporting the use of multiple keys is to provide cryptographic isolation between different tenant domains. For example, one security domain cannot access encrypted data that belongs to a different security domain protected by a different encryption key. These benefits extend to the ability of CSPs to support a growing number of tenant workloads on the same server, or within the same server cluster, to meet the growing demand for cloud-based resources.

The present disclosure describes hardware support for the static mode of protected memory management on a host-convertible enclave (e.g., secure area) platform built on the MK-TME technology. In one implementation, the present disclosure enables the operating system (OS) of the computer system to select between two modes of protected memory management. These two modes can include a static mode that uses static allocation of architecturally protected memory and a host-convertible secure area mode that enables flexible allocation of an architecturally protected memory area over multiple memory ranges of the main memory.
In the implementation, an enclave refers to a secure container, for example, the isolation of code and data in the main memory, or a secure memory area that is architecturally protected with a certain level of security. The security includes at least encryption, but integrity can also be included. The memory used by an enclave (for example, a secure area) is sometimes called an enclave page cache (EPC), which can be protected by the Secure Guard Extensions (SGX) security instruction architecture from the company of Santa Clara, California. In addition, the host-convertible secure area mode may be referred to as a host-convertible (HC) EPC (or HC-EPC) mode. The memory serving as HC-EPC memory can be managed by system software such as the OS or a virtual machine monitor (VMM). The ability to choose between the two modes of protected memory management also enables switching between two operating systems on a dual-boot (or multi-boot) platform, where the first OS is supported with traditional memory protection and the second OS is supported with host-convertible secure pages, without user intervention (e.g., changing basic input/output system (BIOS) settings) or multiple reboots.

This dual-mode memory protection, available for the first OS or for both the first OS and the second OS, is implemented by the processor core of the processor executing the BIOS, which is configured to enable the OS to select between the two modes of protected memory management (it being understood that the present disclosure can be extended to more than two modes of protected memory management). For example, the BIOS can write first information and second information to a predetermined area of the main memory that operates like a mailbox, delivering the information to one or more operating systems for selection between the modes of protected memory management. The first information may, for example, define a memory range of the main memory that can be allocated and converted into secure pages. The second information may define a subset of the memory range, allocated as reserved memory, that cannot be converted by the host into secure pages.

When operating in the host-convertible secure area (HC-EPC) mode, the memory range may include multiple sections (or ranges) of convertible pages that can be converted into secure pages or non-secure pages. The software executing on the processor can identify the page to be converted in the main memory and can use a page conversion instruction (or function) to convert the page. In response to a page conversion instruction, the processor core can determine from the instruction the convertible page in the memory range to be converted and convert that convertible page into a secure page or a non-secure page. Identifying pages that can be converted is the responsibility of the system software (for example, the OS or VMM). For example, if the OS needs a non-secure page and none is available, the OS can identify a secure page and execute a page conversion instruction on that secure page to generate a non-secure page in the memory, as sketched below.
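As a non-normative illustration of this software flow, the following C sketch shows how system software might obtain a non-secure page, converting a secure one only when necessary. The helper names stand in for the ENCLS-style conversion leaf functions mentioned below and are not real instruction mnemonics.

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical OS-side helpers; all names are illustrative. */
    bool     page_is_secure(uint64_t pa);       /* assumed OS bookkeeping    */
    uint64_t find_convertible_page(void);       /* page within HC-EPC range  */
    void     convert_to_nonsecure(uint64_t pa); /* ENCLS-style leaf stand-in */

    /* Obtain a non-secure page, converting a secure one only when no
     * non-secure page is available, per the example above. */
    static uint64_t get_nonsecure_page(void) {
        uint64_t pa = find_convertible_page();
        if (page_is_secure(pa))
            convert_to_nonsecure(pa);  /* secure -> non-secure conversion */
        return pa;
    }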
In some implementations, the processor core or an input/output memory management unit (IOMMU), for example, can look up the status of the target memory page at each memory access to determine whether the access is to a secure page or a non-secure page. For example, only secure area code can access secure area data in a secure page. Such lookups involve additional memory accesses to enforce this isolation from non-secure processes.

In order to perform the same check without additional memory accesses, additional hardware can be used as described in detail herein, including the use of a reserved secure area (or EPC) key ID in the physical address of the memory, which indicates that the page corresponding to this physical address is a secure page rather than a non-secure page. In this way, the system software can map the pages of the memory according to whether each page is a secure page or a non-secure page. The system software can also use special conversion functions of the existing enclave (ENCLS) leaf instructions to convert memory pages back and forth between secure pages and non-secure pages, although other conversion instructions can also be used. The system hardware can then use the reserved secure area key ID and other secure extension (SGX) mode access checks to implement architecturally controlled access semantics. When using totally encrypted memory (TEM) with SGX, replay protection in the server is not required because of the different security requirements of the cryptographic engine, which replaces counter-mode encryption. The use of TEM in the host-convertible enclave platform can significantly speed up memory operations.
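A minimal C sketch of this key-ID-based access check follows, assuming the key ID occupies the upper N bits of the physical address; the bit positions and the reserved EPC key ID value are illustrative, not architectural.

    #include <stdbool.h>
    #include <stdint.h>

    #define KEYID_SHIFT 46u    /* assumed position of the key ID field */
    #define KEYID_MASK  0x3Fu  /* assumed N = 6 key ID bits            */
    #define EPC_KEY_ID  0x3Fu  /* reserved secure area (EPC) key ID    */

    bool page_is_secure(uint64_t pa);  /* assumed hardware page-state lookup */

    typedef enum { ACCESS_OK, PAGE_FAULT, REDIRECT_ABORT_PAGE } access_t;

    static access_t check_access(uint64_t pa) {
        uint64_t keyid = (pa >> KEYID_SHIFT) & KEYID_MASK;
        bool secure = page_is_secure(pa);
        if (secure && keyid != EPC_KEY_ID)
            return PAGE_FAULT;           /* non-enclave key ID, secure page */
        if (!secure && keyid == EPC_KEY_ID)
            return REDIRECT_ABORT_PAGE;  /* enclave key ID, non-secure page */
        return ACCESS_OK;
    }

The two failure cases correspond to the access-denial behaviors of claims 1 and 7 above: a page fault for a mismatched key ID on a secure page, and redirection to the abort page for the enclave key ID on a non-secure page.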
FIGS. 1A and 1B are system block diagrams of a computing device 100 regarding the use of a host-convertible secure area in memory encrypted with multi-key total memory encryption with integrity (MK-TMEi), according to an implementation. In one implementation, the computing device 100 includes a processor 101, a secondary storage 115, a communication circuit 117, and a memory 140 and/or other memory devices coupled as illustrated and described herein.

In various implementations, the processor 101 includes one or more processor cores 102, a cryptographic engine 110, a memory controller 120 (for example, a memory management unit), a last level cache (LLC) 114 (for example, an LLC corresponding to each processor core), basic input/output system (BIOS) firmware 150 (or "BIOS" for short), write model-specific register (WRMSR) microcode 160, and memory check (MCHECK) firmware 162 (or simply "MCHECK"). The processor 101 may be implemented as a single-core or multi-core processor(s), a digital signal processor, a microcontroller, or another processor or processing/control circuit.

In an implementation, the memory 140 (for example, main memory) stores a page table 142 and extended page tables (EPT) 146, is divided so as to include a protected area 165 of secure area pages (for example, storing EPC or enclave pages), stores system software 167, and includes a predetermined area (PA) 169 of the memory accessible by both the BIOS and the system software 167. The protected area 165 is understood to be an architecturally protected memory area secured by security instructions such as SGX instructions.

In one implementation, the memory controller 120 includes the cryptographic engine 110, and the cryptographic engine 110 can store the key data structure 105 and can include the secure area circuit 124. The secure area circuit 124 may also optionally be located in the uncore and coupled to the cryptographic engine (illustrated in dashed lines).

The secure area circuit 124 may further include an enclave page cache map (EPCM) 126 and an integrity protection unit 128. The EPCM 126 is a security structure used by the processor 101 to track the contents of the protected area 165 of the memory, such as enclave (or secure) pages. The EPCM 126 may hold an entry for each page currently loaded into the protected area 165; the EPCM is not software-accessible, and the layout of the EPCM fields may be implementation-specific to the management of the secure pages.

In an implementation, the integrity protection unit 128 provides integrity by creating a message authentication code (MAC). In one implementation, the MAC is a combined hash of the physical address (PA), the data, and a key, the key being, for example, the secure area (e.g., enclave or EPC) key associated with the enclave key ID. The integrity protection unit 128 may generate a MAC every time a secure page in the protected area 165 is written, and may regenerate the MAC every time the page is read and compare it against the MAC stored in the metadata area associated with each data line in the memory 140 in order to authenticate the data. In this way, the integrity protection unit can provide the integrity of the MK-TMEi engine, and the MK-TMEi engine can be represented by the cryptographic engine 110 in one implementation.
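The MAC flow of the integrity protection unit 128 can be modeled as below; mac_hash() stands in for the unspecified hash, and the 64-byte line and 64-bit MAC widths are assumptions.

    #include <stdbool.h>
    #include <stdint.h>

    /* Stand-in for the combined hash of physical address, data, and
     * the secure area key described above. */
    uint64_t mac_hash(uint64_t pa, const uint8_t data[64], uint64_t key);

    /* On a write to a secure page: compute the MAC and stash it in
     * the metadata area associated with the data line. */
    static void secure_write(uint64_t pa, const uint8_t data[64],
                             uint64_t epc_key, uint64_t *metadata_mac) {
        *metadata_mac = mac_hash(pa, data, epc_key);
    }

    /* On a read: recompute the MAC and compare against the stored
     * value; a mismatch means the ciphertext was modified or moved. */
    static bool secure_read_ok(uint64_t pa, const uint8_t data[64],
                               uint64_t epc_key, uint64_t metadata_mac) {
        return mac_hash(pa, data, epc_key) == metadata_mac;
    }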
As shown in FIG. 1B, each processor core 102 may include a cache 112, a hardware virtualization support circuit 116, a page miss handler (PMH) 122, and hardware registers 130. Each processor core 102 can communicate, via an interconnection network 107, with a corresponding cache and home agent (CHA) 109 on the multi-core processor package and with one or more system agents 170 existing outside the multi-core processor package. The CHA 109 can cache, at cache line granularity, a copy of a cache line belonging to the memory (for example, local to the memory). The CHA 109 can implement a directory-based or snoop-based coherence tracking solution for caches and memory shared among multiple processor cores. In different implementations, the interconnection network 107 is a Peripheral Component Interconnect (PCI(TM)) bus, such as a Peripheral Component Interconnect Express (PCIe(TM)) bus, or another custom bus. The system agents 170 may include disk storage, device drivers, I/O devices, and so on.

The processor core 102 can execute instructions to run several hardware threads, also called logical processors, including a first logical processor 104A, a second logical processor 104B, and so on up to an Nth logical processor 104n. In one implementation, the first logical processor 104A is a virtual machine monitor (VMM), or hypervisor. Several virtual machines (VMs) 155 can be executed and controlled by the VMM. In addition, as previously described, the VMM may assign key IDs associated with corresponding encryption keys to various security domains (e.g., the VMM, the VMs) operating on the computing device 100.

As further shown in FIG. 1B, the hardware registers 130 may include, for example, several general registers (not shown, such as EAX, EBX, ECX, EDX, etc.), model-specific registers 132 (or MSRs), and control registers 134 (for example, CR1, CR2, CR3, etc.). The memory 140 may also include the page table 142 for paging, as well as guest page tables 144 and the extended page tables (EPT) 146 used by the VMM for address translation, which will be described in more detail with reference to FIGS. 6 and 7A-7B.

In one implementation, the computing device 100 is a server serving domains, for example, different workloads such as client machines, operating systems, applications, or other types of workloads that are supported. In an implementation, the memory controller 120 may include (or be coupled to) the cryptographic engine 110 having the key data structure 105 (for example, an MK-TMEi engine).

In various implementations, the cryptographic engine 110 may be implemented as a microcontroller, microprocessor, functional block, logic, or other circuit or collection of circuits capable of performing the functions described herein. The cryptographic engine 110 can use domain-specific encryption keys to encrypt and/or decrypt domain data read from or written to the memory, and thus can work in conjunction with the memory controller 120 or can be integrated in the memory controller 120. The cryptographic engine 110 can cache an internal key data structure 105, which the cryptographic engine 110 can use to identify domain accesses to be protected. The key data structure 105 may be a table or other data structure that can be indexed and stored in the hardware of the cryptographic engine 110. In one implementation, the hardware is a cache, a set of registers, or other flash memory.

In various implementations, the key data structure 105 can be controlled and/or programmed by the hardware of the cryptographic engine 110 or by trusted software, for example, using cryptographic engine programming support circuits of the processor 101. The key data structure 105 may be adapted to store keys and domain information for the domains. The encryption keys and/or other secret information of the key data structure 105 may not be available to untrusted software (for example, the OS or VMM). In some implementations, the cryptographic engine 110 may be included in a system-on-a-chip (SoC) of the computing device 100 together with the memory controller 120 and the processor core 102.

FIG. 2A is a block diagram of a physical memory address 200 including a portion of the address bits assigned to the key ID, according to various implementations. This portion of the address bits can contain N bits, where N can be at least log2(K) and K is the total number of available encryption key IDs. It may be advantageous to use at least some of the upper address bits of the physical memory address to encode the key ID, because systems rarely have a memory space so large that all physical address bits are required to map to physical addresses in the memory 140. However, in other implementations, the N bits used for the key ID can be located elsewhere within the physical memory address, including beyond the maximum physical address width. In an implementation, the N bits can be further divided into M bits less than N bits and L bits less than M bits, for reasons discussed with reference to FIG. 2B.

FIG. 2B is a block diagram illustrating the demarcation of cryptographic key identifiers (IDs) among keys used in MK-TME, trust domain extensions (TDX), and the secure area key ID corresponding to the key used for the host-convertible secure area in the memory, according to an implementation. These key IDs can be stored in the key data structure 105 in relation to their corresponding encryption keys. In some implementations, the architecture of the computing device 100 supports TDX keys (e.g., private keys), MK-TME keys (e.g., shared keys), and an additional secure extension key.
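Under this N/M/L partitioning, a hypothetical decode of a physical address into key classes might look as follows; all field widths, and the placement of the secure area key ID at 2^(M-L)-1 per the layout described below, are illustrative rather than definitive.

    #include <stdint.h>

    #define PA_WIDTH 46u  /* assumed maximum physical address width        */
    #define N_BITS   6u   /* key ID bits above the physical address        */
    #define M_BITS   6u   /* key ID bits enabled for MK-TME (M <= N)       */
    #define L_BITS   2u   /* upper bits of the M selecting TDX IDs (L <= M) */

    typedef enum { KEY_TME, KEY_MK_TME, KEY_SECURE_AREA, KEY_TDX } key_class_t;

    static key_class_t classify_keyid(uint64_t pa) {
        uint64_t keyid = (pa >> PA_WIDTH) & ((1u << N_BITS) - 1u);
        uint64_t tdx_base = 1u << (M_BITS - L_BITS);  /* 2^(M-L) */
        if (keyid == 0)
            return KEY_TME;          /* key ID 0: platform TME key */
        if (keyid == tdx_base - 1u)
            return KEY_SECURE_AREA;  /* reserved enclave (EPC) key */
        if (keyid >= tdx_base)
            return KEY_TDX;          /* trust domain key IDs       */
        return KEY_MK_TME;           /* shared MK-TME key IDs      */
    }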
Regarding the operation of HC-EPC, MK-TME, and SGX, the BIOS firmware 150 can use the TME capability (TME_CAPABILITY) (RO) MSR and the TME activation (TME_ACTIVATE) MSR among the MSRs 132 (FIG. 1B) for enablement and configuration when the computing system 100 boots. The TME_CAPABILITY MSR can enumerate "N," where N is the number of most significant bits of the physical address that can be used for key IDs. The TME_CAPABILITY MSR can also enumerate the number "K," which is the number of key IDs available to software, such as 2^N - 1 (with key ID zero reserved for TME).

In one implementation, to enable MK-TME, the TME enable RWL bit in the TME_ACTIVATE MSR can be set, and bits 35:32 are given a non-zero value (which specifies the number of key ID bits configured for MK-TME). These MK_TME_KEYID_BITS are the number of key ID bits to allocate to MK-TME (for example, M bits, less than or equal to the total of N bits). Similar to the enumeration, this is an encoded value. The TME_ACTIVATE MSR can also be used to select the L bits of the M bits that can be used for TD key IDs (where L is less than or equal to M).

In one implementation, writing a value greater than MK_TME_MAX_KEYID_BITS for M can cause a general protection fault (#GP). Writing a non-zero value to this field can also cause a general protection fault if bit 1 of EAX (TME enable) is not also set to "1," because TME must be enabled in order to use MK-TME. The TME_ACTIVATE MSR can also be used to lock other TME-related MSRs (for example, EXCLUDE_MASK, EXCLUDE_BASE), so that any writes to those MSRs after they are locked are ignored. The lock can be reset when the computing system 100 is reset.

In one implementation, when the computing device 100 boots, the BIOS may store specific information in the TME_ACTIVATE MSR for later use by the processor 101 (for example, including use by the cryptographic engine 110 and/or the memory controller 120) in restricting access to the encryption keys and key IDs. This information may include the range of address bits of the physical memory address (for example, the host physical address) that holds the key ID. The specific information stored in the TME_ACTIVATE MSR by the BIOS may also include the security zone key ID, which may be assigned to specifically identify the security extension key programmed into the cryptographic engine 110. In addition, in one implementation, an additional key ID may be stored in the TME_ACTIVATE MSR that identifies the last key ID (for example, key ID K_TD) to which a TDX key is allocated. Key IDs beyond this number can be non-architectural key IDs reserved for special purposes.
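A minimal sketch of how firmware might compose the TME_ACTIVATE value follows. Only the bit positions named above (TME enable in bit 1, MK_TME_KEYID_BITS in bits 35:32) come from the text; the 0x982 index follows Intel's published IA32_TME_ACTIVATE numbering, and wrmsr() and the MK_TME_MAX_KEYID_BITS value are assumed to be platform-provided.

```c
#include <stdint.h>

#define MSR_TME_ACTIVATE      0x982u        /* published IA32_TME_ACTIVATE index */
#define TME_ENABLE_BIT        (1ull << 1)   /* TME enable, bit 1 (from the text) */
#define KEYID_BITS_SHIFT      32            /* MK_TME_KEYID_BITS in bits 35:32   */
#define MK_TME_MAX_KEYID_BITS 6u            /* assumed platform maximum          */

extern void wrmsr(uint32_t msr, uint64_t value); /* platform-provided helper */

/* Returns 0 on success, -1 where the hardware would raise #GP instead. */
static int activate_mk_tme(unsigned m_bits)
{
    uint64_t val = 0;

    /* Writing M > MK_TME_MAX_KEYID_BITS causes a general protection fault. */
    if (m_bits > MK_TME_MAX_KEYID_BITS)
        return -1;

    /* A non-zero M without TME enable set would also fault, so set the
     * TME enable bit together with the key ID bit count. */
    val |= TME_ENABLE_BIT;
    val |= ((uint64_t)m_bits & 0xFull) << KEYID_BITS_SHIFT;

    wrmsr(MSR_TME_ACTIVATE, val);
    return 0;
}
```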
The computing device 100 can be implemented as any type of computing or computer device capable of performing the functions described herein, including, but not limited to, a computer, desktop computer, workstation, server, laptop computer, notebook computer, tablet computer, mobile computing device, wearable computing device, network appliance, web appliance, distributed computing system, processor-based system, and/or consumer electronic device. In other implementations, the computing device 100 may include other or additional components, such as those commonly found in desktop computers (for example, various input/output devices). Furthermore, in some implementations, one or more of the illustrative components may be included in, or otherwise form a part of, another component. For example, in some implementations the memory 140, or portions thereof, may be included in the processor core.

In one implementation, a central processing unit (CPU) identifier leaf instruction (such as CPUID.SGX_LEAF) can be executed by the system software to enumerate the security zone key ID separately from the key IDs enumerated for MK-TME and TDX. To execute the CPUID.SGX_LEAF instruction, the processor core 102 obtains input from certain general-purpose registers, executes the instruction, and returns the hardware configuration information provided by the BIOS to the system software. In a first execution of the CPUID.SGX_LEAF instruction with a first register input, the software can enumerate the memory ranges that can be converted into secure pages, as configured by the BIOS. In a second execution of the CPUID.SGX_LEAF instruction with a second register input, the software can enumerate the EPC key ID and other security attributes associated with the HC-EPC-based memory. The software can enumerate the key ID size via the TME_ACTIVATE MSR.
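The two-step enumeration can be pictured with the sketch below using the compiler-provided __cpuid_count() helper from GCC/Clang. Leaf 0x12 is the published SGX CPUID leaf; the mapping of the two subleaf outputs to convertible ranges and the EPC key ID shown here is an assumption for illustration only, not an architected register layout.

```c
#include <stdint.h>
#include <cpuid.h>  /* GCC/Clang __cpuid_count() */

#define SGX_LEAF 0x12u  /* published SGX CPUID leaf */

struct hc_epc_info {
    uint64_t convertible_base; /* base of a host convertible range (assumed) */
    uint64_t convertible_size; /* size of that range (assumed)               */
    uint32_t epc_key_id;       /* security zone (EPC) key ID (assumed)       */
};

static void enumerate_hc_epc(struct hc_epc_info *info)
{
    uint32_t eax, ebx, ecx, edx;

    /* First execution: enumerate memory convertible into secure pages. */
    __cpuid_count(SGX_LEAF, 0, eax, ebx, ecx, edx);
    info->convertible_base = ((uint64_t)ebx << 32) | eax;
    info->convertible_size = ((uint64_t)edx << 32) | ecx;

    /* Second execution: enumerate the EPC key ID and security attributes. */
    __cpuid_count(SGX_LEAF, 1, eax, ebx, ecx, edx);
    info->epc_key_id = eax;
}
```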
The hardware virtualization support circuit 116 (FIG. 1B) may support virtualized execution of operating systems, applications, and other software by the computing device 100. The hardware virtualization support circuit 116 may include virtual machine extensions (VMX) support by providing two execution modes: VMX root mode and VMX non-root mode. VMX root mode allows the executing software to have broad control over the computing device 100 and its hardware resources. Accordingly, a hypervisor, VMM, or host operating system (OS) can execute in VMX root mode. VMX non-root mode restricts access to certain hardware instructions while still implementing the ordinary ring/privilege system of the processor core. One or more guest OSs (for example, of the VMs 155) can execute in VMX non-root mode. These guest OSs can execute in ring 0, similar to executing without virtualization. The hardware virtualization support circuit 116 can also support an extended page table (EPT) 146, which can be implemented as hardware-assisted second-level page address translation. The hardware virtualization support circuit 116 may be implemented as VT-x technology, for example.

The memory 140 may be implemented as any type of volatile or non-volatile memory or data storage device capable of performing the functions described herein. In operation, the memory 140 may store various data and software used during the operation of the computing device 100, such as operating systems, applications, programs, libraries, and drivers. The memory controller 120 may be coupled to the memory 140 to store to and fetch from the memory, which in some cases may depend on a miss in the cache 112.

The secondary storage device 115 may be implemented as any type of one or more devices configured for short-term or long-term storage of data, such as memory devices and circuits, memory cards, hard disk drives, solid state drives, or other data storage devices. In some implementations, the secondary storage device 115 can be used to store the contents of one or more secure areas. When stored by the secondary storage device 115, the contents of a secure area can be encrypted to prevent unauthorized access.

The communication circuit 117 of the computing device 100 may be implemented as any communication circuit, device, or collection thereof that enables communication between the computing device 100 and other remote devices over a network. The communication circuit 117 may be configured to use any one or more communication technologies (for example, wired or wireless communication) and associated protocols (for example, Ethernet, WiMAX, etc.) to effect such communication.

FIG. 3 is a flowchart of a method 300 for initializing host convertible secure areas in memory using MK-TMEi and security extensions (SGX), according to an implementation. The method 300 may be performed by processing logic that may include hardware (e.g., circuits, dedicated logic, programmable logic, microcode, etc.), firmware, or a combination thereof. In one implementation, the method 300 is executed by the processor 101 of FIG. 1A, for example, by the BIOS firmware 150 and/or other firmware. In another implementation, the method 300 is executed by any processor described with reference to FIGS. 10A to 16.

Referring to FIG. 3, the method 300 may begin with the processing logic discovering the host convertible secure zone (for example, HC-EPC) mode and the security extension mode (for example, SGX) (305). The method 300 can continue with the processing logic configuring the host convertible security zone architecture, for example, by setting one or more processor reserved memory range registers (PRMRRs), which may be among the hardware registers 130 (310). A PRMRR can specify the EPC memory range and where the memory protection metadata is located. The host convertible security zone mode can extend the use of the PRMRR to allow the PRMRR to be reconfigured or reprogrammed without restarting the computing device 100. When reprogrammed to support the HC-EPC instruction set architecture, the PRMRR is referred to herein as a flexible EPC domain range register (FEDRR).

Continuing to refer to FIG. 3, the method 300 may continue with the processing logic setting a bit to enable the SGX mode architecture, such as the FEATURE_CTRL.SGX_ENABLE bit (315).
The method 300 may continue with the processing logic programming the TME_ACTIVATE MSR with the key IDs and other MK-TME and TDX related information, as described with reference to FIGS. 2A-2B (320). The method 300 may continue with the processing logic communicating the memory layout (e.g., the ranges of memory reserved for EPC) to the MCHECK firmware 162 (325). The method 300 may continue with the processing logic loading SGX patch-load instructions (for example, using the WRMSR microcode 160) (330).

In various implementations, the BIOS firmware 150 initially programs the PRMRR to support protected memory management according to the static mode. At the same time, the BIOS can call the patch load instruction, which starts, via patch loading, the allocation by the processor core of the memory protection metadata to be used in the HC-EPC mode (for example, the EPCM, BEPOCH (blocked EPOCH), memory encryption engine (MEE) tree, etc.). The BIOS can also store the final configuration of the FEDRR and the memory map of the reserved memory to the hardware via pointers stored in the PA 169. This reserved memory may include memory holes such as memory-mapped I/O (MMIO) and system management random access memory (SMRAM).

In one implementation, these pointers include a first pointer from the BIOS to the patch/core reserved area; a second pointer from the BIOS to the patch/core host convertible security area; and a third pointer from the patch/core to the BIOS reserved area. In one implementation, the first pointer points to a memory address at which the BIOS stores the subsets of the memory ranges (of the memory 140) that are reserved and thus cannot be converted into secure pages. These subsets of the memory ranges may be reserved for use by, for example, certain hardware or other I/O processes, which will be discussed in more detail with reference to FIG. 4. The second pointer may point to memory addresses at which the BIOS stores the set of memory ranges (of the memory 140) that are designated to be convertible into secure pages (for example, the host convertible secure area memory ranges), which is also discussed in more detail with reference to FIG. 4. These memory ranges can be delimited by the base address and mask of each memory range, and can be accessible by the processor core. The third pointer may point to memory addresses at which the processor core may store a memory range reserved for code and data to be accessed by the BIOS when executing the patch load instruction.

More specifically, the BIOS may store first information and second information to the PA 169 of the memory 140. The first information may, for example, delimit the memory ranges that can be allocated to main memory for flexible conversion into secure pages. The second information may delimit the subsets of the memory ranges allocated as reserved memory (for example, memory that cannot be flexibly converted into secure pages). The BIOS 150 may also write other memory protection metadata for the processor core(s) 102 to implement the selected mode of protected memory management. In one implementation, the patch load instruction is an instruction called by the BIOS to provide the processor core access to the PA 169 of the memory. This data (usable by the BIOS) can include the memory protection metadata that will be used for protected memory management and that is allocated when the patch load instruction is executed, as pictured in the sketch below.
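For illustration, the BIOS-to-core handoff through the PA 169 might be pictured as the following hypothetical C layout. Every name and field here is illustrative; the disclosure defines only the three pointers and the two kinds of range information described above, not a concrete structure.

```c
#include <stdint.h>

/* A range delimited by a base address and a mask, as described above. */
struct mem_range {
    uint64_t base;
    uint64_t mask;
};

/* Hypothetical handoff area stored in the PA 169 of the memory 140. */
struct pa169_handoff {
    /* First pointer: BIOS -> reserved (non-convertible) range subsets. */
    const struct mem_range *reserved_ranges;
    uint32_t                reserved_count;

    /* Second pointer: BIOS -> host convertible secure area ranges. */
    const struct mem_range *convertible_ranges;
    uint32_t                convertible_count;

    /* Third pointer: core -> range reserved for BIOS patch-load code/data. */
    struct mem_range        bios_patch_area;
};
```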
The memory range reserved for the BIOS can be treated similarly to hardware reserved memory.

In various implementations, when the patch load instruction is executed, the method 300 may continue with the processing logic (e.g., the MCHECK firmware 162) verifying the hardware configuration of the TME_ACTIVATE MSR and any PRMRR (332). The method 300 may continue with the processing logic programming a security extension key (e.g., an SGX key) to correspond to the security zone key ID 215 (FIG. 2B) (334). The method 300 may continue with the processing logic updating the memory layout in the PA 169 reserved for HC-EPC (336). In one implementation, the MCHECK (or memory check) process fails and the security extensions (SGX) are not activated if any of the following is true: (1) the processor hardware is not programmed identically across all processor cores and packages of the multi-core processor 101; (2) the security zone key ID is assigned but the HC-EPC mode is not enabled; or (3) the security zone key ID is not assigned but the HC-EPC mode is enabled (340).
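The three MCHECK failure conditions can be condensed into a small predicate, sketched below. The structure and field names are assumptions; only the three predicates come from the text.

```c
#include <stdbool.h>

struct mcheck_state {
    bool config_uniform;      /* TME_ACTIVATE/PRMRR identical on all cores */
    bool epc_key_id_assigned; /* security zone key ID was assigned         */
    bool hc_epc_enabled;      /* HC-EPC mode is enabled                    */
};

/* Returns true if SGX may be activated, false if MCHECK fails. */
static bool mcheck_passes(const struct mcheck_state *s)
{
    if (!s->config_uniform)
        return false;                           /* condition (1)           */
    if (s->epc_key_id_assigned != s->hc_epc_enabled)
        return false;                           /* conditions (2) and (3)  */
    return true;
}
```

Conditions (2) and (3) collapse to a single inequality test because the key ID assignment and the HC-EPC mode must either both be present or both be absent.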
With continued reference to FIG. 3, the method 300 may continue with the processing logic activating SGX via the WRMSR microcode 160 (345). The method 300 may continue with the processing logic updating the memory map (e.g., to reflect the configuration of reserved and host convertible EPC pages) and the Advanced Configuration and Power Interface (ACPI) tables (355). The method 300 may continue with the processing logic fully booting the operating system (OS) (360).

FIG. 4 is a memory diagram illustrating the different ranges of memory that can be allocated for conversion into secure pages and the reserved memory that cannot be allocated for such conversion, according to an implementation of the present disclosure. The memory diagram 400 illustrates a portion of the main memory 140 according to various implementations. In the HC-EPC mode, the BIOS firmware 150 can set a memory map that includes a 4GB first flexible secure area memory range 410 and a 2GB second flexible secure area memory range 450, such that these memory ranges can be changed.

According to various implementations, although the first flexible secure area memory range 410 is generally convertible into secure pages ("EPC convertible" sections), there are several memory reserved sections, such as those previously discussed. More specifically, these memory reserved sections may include a traditional 1MB section 414, a BIOS code and data section 418, a hardware reserved section 424A, a memory hole section 428, and a hardware reserved section 430A, depending on the implementation. The traditional EPC range 444 may be a statically allocated memory range supporting the static mode of secure pages. Once the FEDRR is reprogrammed, the traditional EPC range 444 can be converted into secure pages.

According to various implementations, although the second flexible secure area memory range 450 can generally be converted into secure pages, there is also hardware reserved memory. The hardware reserved memory may include a hardware reserved section 424B and a hardware reserved section 430B, depending on the implementation. The memory sections 424A, 424B, 430A, and 430B may be reserved for hardware use and thus cannot be converted into secure pages.

When the computing device 100 is in an unlocked mode or is locked in the static mode, the PRMRR may operate according to static (e.g., traditional) secure page operations. In this case, execution of the write to model specific register (WRMSR) instruction can be checked, for example, for overlap with the system management range registers (SMRR) or the advanced programmable interrupt controller (APIC) page.

When the computing device 100 is locked in the HC-EPC mode, as described previously, the processing core 102 can reprogram the PRMRR according to the FEDRR configuration passed by the BIOS via the predetermined area (PA) 169 of the memory 140. Since the intention is for the FEDRR to cover the entire physical memory, the WRMSR instruction can stop checking for overlap with the SMRR and APIC pages (a traditional activity of WRMSR, since the SMRR area and the memory of the APIC page cannot be used within a secure area). From this point on, the memory specified in the FEDRR is available to the OS (except the memory reserved by the BIOS): it can be converted to enclave page cache (EPC) via the secure area conversion instruction EMKEPC, and can be converted back to non-EPC via the non-secure conversion instruction EMKNONEPC. Both of these instructions can take the form of security function or enclave (ENCLS) leaf instructions (for example, ENCLS supervisor instructions).

In various implementations, the processor core 102 may use bits in the PRMRR, or in some other MSR included among the hardware registers 130, to identify whether the PRMRR is in static mode or flexible EPC (FEPC) mode, and to apply the appropriate access control mechanisms according to the secure page mode. The BIOS firmware 150 can create a final FEDRR configuration to cover the physical memory in HC-EPC mode. Accordingly, the computing device 100 may configure several FEDRRs in order to set the memory map of the entire physical memory. For example, a client computer system may be configured with two or more FEDRRs, while a server computer system may be configured with up to 16 FEDRRs (or more) to effectively cover the expected memory configuration. For example, each local socket may require a separate FEDRR.

Although the memory size is not always a power of 2, the FEDRR may be set according to a power of 2 related to the memory size of the main memory 140. This means the BIOS 150 can set the FEDRR size to overlap the memory holes created when the FEDRR is greater than the size of the available memory. However, once the patch load instruction is executed, enclave page cache map (EPCM) entries that completely cover the FEDRR can be allocated. The BIOS 150 can pass the memory hole information to the patch loading mechanism via the PA 169 so that the EPCM entries covering the memory holes can be initialized to indicate that they cannot be converted into secure pages.

For example, FIG. 4 illustrates an example layout of the FEDRRs on a client computer system before the HC-EPC mode is activated. When the HC-EPC mode is activated, FEDRR_0 can be reconfigured according to the HC-EPC_Range_0 configuration (first flexible secure area memory range 410), and FEDRR_1 can be reconfigured according to HC-EPC_Range_1 (second flexible secure area memory range 450). For example, the memory in the original PRMRR_0 corresponding to the traditional EPC range 444 becomes convertible and can be used as-is, because this memory has not been accessed before the HC-EPC mode is activated.
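A sketch of the power-of-2 FEDRR sizing just described follows: the FEDRR size is rounded up to the next power of two, and any overhang past the real memory becomes a hole that the EPCM marks as non-convertible. The helper and field names are assumptions for illustration.

```c
#include <stdint.h>

/* Round x up to the next power of two (returns 1 for x == 0). */
static uint64_t next_pow2(uint64_t x)
{
    uint64_t p = 1;
    while (p < x)
        p <<= 1;
    return p;
}

struct fedrr_config {
    uint64_t base;
    uint64_t size;       /* power of 2, may exceed the available memory */
    uint64_t hole_bytes; /* overhang covering no real memory            */
};

static void size_fedrr(struct fedrr_config *f, uint64_t base,
                       uint64_t available_bytes)
{
    f->base = base;
    f->size = next_pow2(available_bytes);
    /* EPCM entries covering this hole are initialized as non-convertible. */
    f->hole_bytes = f->size - available_bytes;
}
```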
FIG. 5 is a flowchart of a method 500 of host convertible security zone access control performed by the processor core 102 of FIGS. 1A and 1B, according to an implementation. The method 500 may be performed by processing logic that may include hardware (e.g., circuits, dedicated logic, programmable logic, microcode, etc.), firmware, or a combination thereof. In one implementation, the method 500 is executed by the processor 101 of FIG. 1A, for example, by one or more of the processor core(s) 102.

In various implementations, the method 500 may begin with the processing logic receiving a linear address for a cache line in the memory 140, for example, as part of a request to access the memory 140; the linear address may correspond to a guest virtual address (GVA). The method 500 may continue with the processing logic invoking address translation, including performing page table and extended page table (EPT) walks, to convert the GVA into a physical address (PA) of the memory 140 (505). The method 500 may continue with the processing logic determining whether the physical address indicates that the request is a request for access to a secure area, for example, to access a first secure page of the EPC in the memory 140 (510). For example, memory accesses can be of two types. The first type is an enclave access to a linear address corresponding to a memory range that falls within an architecturally protected memory area of the memory 140. The second type is a non-enclave access to a linear address corresponding to memory outside the range of such memory area(s).

Continuing to refer to FIG. 5, the method 500 may continue with the processing logic determining whether the key ID located in the physical address matches the enclave key ID (also referred to herein as the EPC key ID), and thus corresponds to a secure page in the architecturally protected memory area (e.g., the protected area 165 in FIG. 1A) (515). In one scenario, the processing logic determines that the first key ID located in the physical address does not match the security zone key ID. Accordingly, the method 500 may continue with the processing logic issuing a page fault and denying the system agent that issued the memory access request access to the secure page in the memory 140 (520).

Referring again to block 515 of FIG. 5, in another scenario, the processing logic determines that the first key ID located in the physical address does match the security zone key ID (515). Accordingly, the method 500 can continue with the processing logic checking the metadata of the EPCM 126 to perform a security check on the ownership of the secure page, verify that the system agent is authorized to access that type of secure page, enforce that one secure area does not access a secure page of another secure area, and perform other checks (525). The method 500 may continue with the processing logic determining whether the EPCM-based checks have passed (530). If not, the method 500 may continue with the processing logic generating a page fault and denying access to the secure page in the memory (520). If so, the method 500 may continue with the processing logic allowing the memory access by the system agent (540).

Continuing to refer to FIG. 5, in response to determining at block 510 that the physical address is associated with a non-secure page, the method 500 may continue with the processing logic determining whether the first key ID located in the physical address matches the secure zone key ID, and thereby corresponds to a secure page in the architecturally protected memory area (e.g., the protected area 165 in FIG. 1A) (550). If not, then because the memory being accessed is properly non-secure memory, the method 500 may continue with the processing logic allowing the memory access (540). If so, i.e., the first key ID and the secure zone key ID match, the method 500 may continue with the processing logic denying the system agent that issued the request access to the non-secure page of the memory. In one implementation, this can be performed by replacing the physical address in the request with an abort page address (555). The abort page address can be linked to an abort page that contains incorrect data, such as all zeros or all ones. The method 500 may continue with the processing logic allowing the access to proceed to the abort memory page (540).
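The decision flow of the method 500 can be summarized by the following sketch. The helpers (key ID extraction, secure area test, EPCM checks) and the abort page address are hypothetical stand-ins for the hardware behavior described above.

```c
#include <stdbool.h>
#include <stdint.h>

#define ABORT_PAGE_PA 0xFFFFF000ull /* assumed abort page address */

extern uint32_t key_id_of_pa(uint64_t pa);         /* key ID field of PA */
extern bool     pa_targets_secure_area(uint64_t pa);
extern bool     epcm_checks_pass(uint64_t pa);     /* blocks 525/530     */

enum access_result { ACCESS_ALLOW, ACCESS_PAGE_FAULT };

static enum access_result check_access(uint64_t *pa, uint32_t epc_key_id)
{
    bool key_matches = (key_id_of_pa(*pa) == epc_key_id);

    if (pa_targets_secure_area(*pa)) {             /* block 510: secure  */
        if (!key_matches)
            return ACCESS_PAGE_FAULT;              /* block 520          */
        return epcm_checks_pass(*pa) ? ACCESS_ALLOW
                                     : ACCESS_PAGE_FAULT;
    }
    /* Non-secure page (block 550): the enclave key ID applied to non-EPC
     * memory is redirected to the abort page rather than faulting. */
    if (key_matches)
        *pa = ABORT_PAGE_PA;                       /* block 555          */
    return ACCESS_ALLOW;                           /* block 540          */
}
```

Note the asymmetry: a mismatched key on a secure page faults, while an enclave key on a non-secure page silently redirects to the abort page so that the requester reads only the fixed abort contents.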
FIG. 6 is a block diagram 600 of the page table 142 associated with the conversion of a linear address 605 into a physical memory address (PA) using paging, according to an implementation. Paging supports a virtual memory environment in which a large linear address space is simulated with a small amount of physical memory (RAM and ROM) and some disk storage. When paging is used, each segment is divided into pages (for example, 4KB each), and these pages are stored either in the memory 140 or on disk, such as the secondary storage 115. The operating system and/or the memory controller 120 may maintain a page directory and a set of page tables to keep track of the pages. When a program (or task) attempts to access an address location in the linear address space, the memory controller 120 can use the page directory and page tables to convert the linear address 605 into a physical address and then perform the requested operation (read or write) on the memory location.

If the page being accessed is not currently in physical memory, the processor interrupts the execution of the program (by generating a page fault exception). The memory controller 120 can then read the page from disk into physical memory and continue executing the program.

Continuing to refer to FIG. 6, the linear address 605 can be divided into page directory entry (PDE) bits, page table bits, and an offset. The PDE bits can serve as a pointer into a page directory table (PDT), which is located via the CR3 control register. The address in the PDT entry pointed to by the PDE bits can then serve as a pointer to locate the correct page table in memory. The page table bits point to a page table entry (PTE) in the located page table. The PTE can then act as a pointer to the address of the correct 4KB page in memory, within which the offset points to the physical memory address. In one implementation, the processor core 102 (for example, system software executing on the processor core 102) includes the secure area key ID within the mapping to the physical address in the page table 142 for secure pages in the memory 140. In one implementation, the high-order bits of the page table entry contain the security zone key ID.
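The decomposition of the linear address 605 can be illustrated with the following simplified two-level walk. A classic 10/10/12 bit split for 4KB pages is used, 64-bit table entries are assumed, and the placement of the key ID in the top six bits of an entry is an assumption for illustration only.

```c
#include <stdint.h>

#define PDE_INDEX(la)   (((la) >> 22) & 0x3FFu)   /* bits 31:22 */
#define PTE_INDEX(la)   (((la) >> 12) & 0x3FFu)   /* bits 21:12 */
#define PAGE_OFFSET(la) ((la) & 0xFFFu)           /* bits 11:0  */

/* Assumed: key ID in the top 6 bits of a 64-bit entry; the frame
 * address occupies bits 57:12. */
#define PTE_KEYID(pte)  ((uint32_t)((pte) >> 58))
#define FRAME_MASK      0x03FFFFFFFFFFF000ull

static uint64_t translate(uint32_t la, const uint64_t *pdt,
                          uint32_t *key_id)
{
    /* PDE bits index the page directory table located via CR3 (pdt). */
    const uint64_t *page_table =
        (const uint64_t *)(uintptr_t)(pdt[PDE_INDEX(la)] & FRAME_MASK);

    /* Page table bits select the PTE; the offset completes the PA. */
    uint64_t pte = page_table[PTE_INDEX(la)];

    *key_id = PTE_KEYID(pte);                     /* security zone key ID */
    return (pte & FRAME_MASK) | PAGE_OFFSET(la);
}
```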
FIG. 7A is a block diagram 700 illustrating the conversion of a guest virtual address (GVA) to a guest physical address (GPA) and the conversion of the GPA to a host physical address (HPA), according to an implementation. In one implementation, in order to emulate an instruction on behalf of a VM, the VMM 104A may need to convert the linear address (for example, a GVA) used by the instruction into a physical memory address so that the VMM can access the data at that physical address. As mentioned earlier, the VMM may also gain access to the additional key IDs when no protection measures are implemented.

To perform this conversion, the VMM may first need to determine paging and segmentation, including examining the segmentation state of the virtual machine (VM) 155. The VMM can also determine the paging mode of the VM 155 at the time the instruction was invoked, including examining the page tables set up by the VM and examining the control registers 134 and MSRs programmed by the VM 155. After discovering the paging and segmentation modes, the VMM can generate a GVA for the logical address and detect any segmentation faults.

Assuming that no segmentation fault is detected, the VMM can convert the GVA to a GPA and the GPA to an HPA, including performing the page table and EPT walks in software. To perform these conversions in software, the VMM can load a number of paging structure entries and EPT structure entries originally set up by the VM 155 into general-purpose registers or memory. Once these paging and EPT structure entries are loaded, the page miss handler (PMH) 122 can perform the conversion by modeling the translation circuitry.

More specifically, referring to FIG. 7A, when the VMM executes VMRESUME using a virtual machine control structure (VMCS), the PMH 122 can be programmed with the guest page table pointer and the EPT pointer from the VMCS (stored in the memory 140). The PMH 122 may load multiple guest page table entries 144A from the guest page table 144 and multiple extended page table entries 146A from the EPT 146, which were established by the VM 155. The PMH 122 may then perform the conversion by walking (e.g., sequentially searching) through the guest page table entries 144A to generate a GPA from the GVA. The PMH 122 may then use the GPA to walk (e.g., sequentially search) the EPT 146 to generate the HPA associated with the GPA. The use of the EPT 146 is a feature that can be used to support the virtualization of physical memory. When the EPT is in use, certain addresses that would ordinarily be treated as physical addresses (and used to access memory) are instead treated as guest physical addresses. A guest physical address is translated by traversing a set of EPT paging structures to produce the physical address used to access physical memory. In one implementation, the processor core 102 (for example, system software executing on the processor core 102) includes the secure area key ID within the mapping to the physical address in the page table 142 and/or the extended page table (EPT) 146 for secure pages in the memory 140. In one implementation, the high-order bits of the page table entries and the extended page table entries contain the security zone key ID.

FIG. 7B is a block diagram 750 illustrating the use of an extended page table (EPT) to convert a guest physical address (GPA) into a host physical address (HPA), according to an implementation. For example, according to one implementation, the VMM 104A or the PMH 122 may walk the extended page table entries 146A to convert the GPA to an HPA. The guest physical address (GPA) can be broken down into a series of offsets, each serving as an index into the hierarchical table structure of the EPT entries 146A. In this example, the EPT from which the EPT entries are derived includes a four-level hierarchy of tables, including a page map level 4 table, a page directory pointer table, a page directory entry table, and a page table entry table. (In other implementations, a different number of levels may exist in the EPT hierarchy, and the disclosed implementations are therefore not limited by any specific implementation of the EPT.) The result of indexing at each level of the EPT hierarchy can be added to the offset for the next table to locate the next result in the next table of the EPT hierarchy. The result from the fourth (page table entry) table can be combined with the page offset to locate a 4KB page (for example) in physical memory, which yields the host physical address.
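The four-level walk of FIG. 7B can be sketched as follows, assuming the common x86-64 layout of a 9-bit index per level and a 12-bit page offset for 4KB pages; entry permission bits and key ID fields are omitted for brevity.

```c
#include <stdint.h>

#define LEVELS     4                         /* PML4, PDPT, PD, PT    */
#define IDX_BITS   9                         /* 9-bit index per level */
#define IDX_MASK   ((1ull << IDX_BITS) - 1)
#define PAGE_SHIFT 12                        /* 4KB pages             */
#define FRAME_MASK (~0xFFFull)

static uint64_t ept_walk(uint64_t gpa, const uint64_t *eptp)
{
    const uint64_t *table = eptp;
    uint64_t entry = 0;

    /* Index the PML4, PDPT, PD, and PT tables in turn; each entry's
     * frame address locates the next table in the hierarchy. */
    for (int level = LEVELS - 1; level >= 0; level--) {
        unsigned idx = (gpa >> (PAGE_SHIFT + level * IDX_BITS)) & IDX_MASK;
        entry = table[idx];
        table = (const uint64_t *)(uintptr_t)(entry & FRAME_MASK);
    }

    /* Combine the final frame with the page offset to form the HPA. */
    return (entry & FRAME_MASK) | (gpa & 0xFFFull);
}
```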
FIG. 8 is a functional flow diagram 800 illustrating a security check by the cryptographic engine 110 using a mesh security bit and a memory security bit with reference to secure area memory operations, according to an implementation. In this implementation, the CHA 109 of the memory subsystem responds to detecting that the first key identifier (ID) in the physical address of a cache line matches the secure area key ID by setting the mesh security bit (MESH.S) of the cache line. The CHA 109 can also issue a write operation to the memory 140 for the cache line, which has a location identified by the PA in the memory operation request. In one implementation, the cryptographic engine 110 is coupled to the CHA and, as part of completing the write operation, sets the memory security bit (MEM.S) in the metadata of the cache line in memory to the value of the mesh security bit (MESH.S).

Referring also to FIG. 8, the cryptographic engine 110 can also detect read operations for cache lines stored in the memory. To perform a read operation, the cryptographic engine 110 may return a poison bit to the requesting agent in response to detecting a mismatch between the mesh security bit (MESH.S) and the memory security bit (MEM.S). In addition to the poison bit, the cryptographic engine 110 can also return fixed-pattern data to the requesting agent. Conversely, to perform the read operation, the cryptographic engine 110 may return the data of the cache line to the requesting agent in response to determining that the values of the mesh security bit (MESH.S) and the memory security bit (MEM.S) match. Table 1 summarizes these actions depending on the respective values of the mesh security bit (MESH.S) and the memory security bit (MEM.S).

MESH.S  MEM.S  Read returns
0       0      Normal data
0       1      Fixed pattern (as data) and poison bit
1       0      Fixed pattern (as data) and poison bit
1       1      Normal data

Table 1
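The read-side behavior of Table 1 reduces to an equality test on the two security bits, as in the following sketch; the fixed-pattern value and the structure names are assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

#define FIXED_PATTERN 0xDEADDEADDEADDEADull /* assumed fixed pattern */

struct read_result {
    uint64_t data;
    bool     poison;
};

static struct read_result secure_read(uint64_t line_data,
                                      bool mesh_s, bool mem_s)
{
    struct read_result r;

    if (mesh_s == mem_s) {        /* rows 00 and 11: normal data         */
        r.data   = line_data;
        r.poison = false;
    } else {                      /* rows 01 and 10: security mismatch   */
        r.data   = FIXED_PATTERN; /* fixed pattern returned as the data  */
        r.poison = true;          /* poison bit signaled to the agent    */
    }
    return r;
}
```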
FIG. 9A is a flowchart of a method 900 for assigning an evicted memory page to the secure area key ID, according to an implementation. The method 900 may be performed by processing logic that may include hardware (e.g., circuits, dedicated logic, programmable logic, microcode, etc.), firmware, or a combination thereof. In one implementation, the method 900 is executed by the processor 101 of FIG. 1A, and in one implementation it is executed by system software executing on one or more of the processor cores 102.

Referring to FIG. 9A, the method 900 may begin with the processing logic selecting an evicted page of the memory to convert into a first secure page (905). The method 900 may continue with the processing logic executing the secure area (e.g., enclave) conversion instruction (EMKEPC) to initialize the evicted page as the first secure page (910). Initializing the evicted page may include the method 900 continuing with the processing logic writing all zeros as the contents of the evicted page (912), calculating a new message authentication code (MAC) value from the physical address of the evicted page, the data to be stored in the first secure page, and the security zone key identifier (ID) corresponding to the architecturally protected memory area of the memory that contains the first secure page (914), and storing the MAC value for the first secure page (916).

In one implementation, the EMKEPC instruction can trigger execution of the MOVDIR64B instruction with the security zone key ID in the operand, which zeroes the target page and initializes the MAC value for the new secure page. This may be done because MOVDIR64B can initialize the MAC value for a new secure page, whereas the system software cannot perform this initialization.

Continuing to refer to FIG. 9A, the method 900 may continue with the processing logic executing a memory fence (MFENCE) instruction to verify completion of the operations associated with the initialization of the first secure page (920). The method 900 may continue with the processing logic making the first secure page accessible to one of the virtual machines or applications authorized to access the architecturally protected memory area of the memory.

FIG. 9B is a flowchart of a method 950 for evicting a secure page in memory for reassignment to a non-secure area key ID, according to an implementation. The method 950 may be performed by processing logic that may include hardware (e.g., circuits, dedicated logic, programmable logic, microcode, etc.), firmware, or a combination thereof. In one implementation, the method 950 is executed by the processor 101 of FIG. 1A, and in one implementation the method 950 is executed by system software executing on one or more of the processor cores 102.

Referring to FIG. 9B, the method 950 may begin with the processing logic selecting the first secure page to evict and convert into a non-secure page (960). The method 950 may continue with the processing logic making the first secure page inaccessible to the one of the virtual machines or applications authorized to access the architecturally protected memory area of the memory (965). The method 950 may continue with the processing logic invalidating the mapping of the first secure page in the translation lookaside buffer (TLB) of the processor (970). For example, the processor core of FIG. 10A illustrates a data TLB unit in which the mapping can be invalidated.

With additional reference to FIG. 9B, the method 950 may continue with the processing logic executing the non-secure area conversion instruction (EMKNONEPC) to cause the contents of the one or more cache lines containing the secure zone key ID corresponding to the first secure page to be written back (to the memory 140) and flushed (975). To write back and flush, in one implementation, the EMKNONEPC instruction can trigger execution of the cache line flush (CLFLUSH) instruction using a linear address translated from the physical address that contains the security zone key ID. The method 950 may continue with the processing logic returning the first secure page to the list of evicted pages available to the processor 101 for allocation for storage of data associated with a new key ID (980).
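The two conversion flows can be pictured together in the following sketch. emkepc() and emknonepc() are hypothetical wrappers standing in for the EMKEPC and EMKNONEPC leaf instructions, for which no compiler intrinsics exist; _mm_mfence() is the standard x86 fence intrinsic.

```c
#include <emmintrin.h>  /* _mm_mfence() */
#include <stdint.h>

/* Hypothetical wrappers for the EMKEPC/EMKNONEPC leaf instructions. */
extern void emkepc(void *page, uint32_t epc_key_id);    /* zero + MAC init  */
extern void emknonepc(void *page, uint32_t epc_key_id); /* write back/flush */

/* Method 900: convert an evicted page into a secure (EPC) page. */
static void make_secure(void *page, uint32_t epc_key_id)
{
    emkepc(page, epc_key_id); /* blocks 912-916, via MOVDIR64B semantics   */
    _mm_mfence();             /* block 920: initialization must complete   */
    /* The page may now be mapped for an authorized VM or application.    */
}

/* Method 950: convert a secure page back into a non-secure page. */
static void make_nonsecure(void *page, uint32_t epc_key_id)
{
    /* The caller is assumed to have already unmapped the page and
     * invalidated its TLB entries (blocks 965 and 970). */
    emknonepc(page, epc_key_id); /* block 975: write back and flush        */
    /* The page returns to the free list for reuse with a new key ID (980). */
}
```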
FIG. 10A is a block diagram illustrating the microarchitecture of a processor 1000 that implements hardware support for restricting the use of encryption keys by untrusted software, according to an implementation. Specifically, FIG. 10A depicts an in-order architecture core as well as the register renaming logic and out-of-order issue/execution logic to be included in a processor according to at least one implementation of the present disclosure.

The processor 1000 includes a front end unit 1030 coupled to an execution engine unit 1050, and both are coupled to a memory unit 1070. The processor 1000 may include a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As another option, the processor 1000 may include a dedicated core, such as a network or communication core, a compression engine, a graphics core, and so on. In one implementation, the processor 1000 may be a multi-core processor or may be part of a multi-processor system.

The front end unit 1030 includes a branch prediction unit 1032 coupled to an instruction cache unit 1034; the instruction cache unit 1034 is coupled to an instruction translation lookaside buffer (TLB) 1036; the TLB 1036 is coupled to an instruction fetch unit 1038; and the instruction fetch unit 1038 is coupled to a decode unit 1040. The decode unit 1040 (also called a decoder) can decode instructions and generate as output one or more micro-operations, microcode entry points, microinstructions, other instructions, or other control signals, which are decoded from, otherwise reflect, or are derived from the original instructions. The decoder 1040 can be implemented using a variety of different mechanisms. Examples of suitable mechanisms include, but are not limited to, lookup tables, hardware implementations, programmable logic arrays (PLAs), microcode read-only memories (ROMs), and so on. The instruction cache unit 1034 is also coupled to the memory unit 1070. The decode unit 1040 is coupled to a rename/allocator unit 1052 in the execution engine unit 1050.

The execution engine unit 1050 includes the rename/allocator unit 1052 coupled to a retirement unit 1054 and a set of one or more scheduler units 1056. The scheduler unit(s) 1056 represents any number of different scheduler circuits, including reservation stations (RS), a central instruction window, and so on. The scheduler unit(s) 1056 is coupled to the physical register set unit(s) 1058. Each of the physical register set units 1058 represents one or more physical register sets, different ones of which store one or more different data types (such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, etc.) and state (for example, an instruction pointer that is the address of the next instruction to be executed).
The physical register set unit(s) 1058 overlaps the retirement unit 1054 to illustrate the various ways in which register renaming and out-of-order execution may be implemented (for example, using reorder buffer(s) and retirement register set(s); using future file(s), history buffer(s), and retirement register set(s); using register maps and a pool of registers; etc.).

Generally speaking, the architectural registers are visible from outside the processor or from the programmer's perspective. The registers are not limited to any known specific type of circuit. Various different types of registers are suitable as long as they can store and provide data as described herein. Examples of suitable registers include, but are not limited to, dedicated physical registers, dynamically allocated physical registers using register renaming, combinations of dedicated and dynamically allocated physical registers, and so on. The retirement unit 1054 and the physical register set unit(s) 1058 are coupled to the execution cluster(s) 1060. The execution cluster(s) 1060 includes a set of one or more execution units 1062 and a set of one or more memory access units 1064. The execution units 1062 can perform various operations (for example, shifts, addition, subtraction, multiplication) and can operate on various types of data (for example, scalar floating point, packed integer, packed floating point, vector integer, vector floating point).

Although some implementations may include several execution units dedicated to a specific function or set of functions, other implementations may include only one execution unit, or multiple execution units that all perform all functions. The scheduler unit(s) 1056, the physical register set unit(s) 1058, and the execution cluster(s) 1060 are shown as being possibly plural because certain implementations create separate pipelines for certain types of data/operations (for example, a scalar integer pipeline; a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline; and/or a memory access pipeline, each having its own scheduler unit, physical register set unit, and/or execution cluster; and, in the case of a separate memory access pipeline, certain implementations in which only the execution cluster of this pipeline has the memory access unit(s) 1064). It should also be understood that where separate pipelines are used, one or more of these pipelines may issue/execute out of order while the rest are in-order.

The set of memory access units 1064 is coupled to the memory unit 1070, which may include, for example, a data prefetcher 1080, a data TLB unit 1072, a data cache unit (DCU) 1074, and a level 2 (L2) cache unit 1076. In some implementations, the DCU 1074 is also referred to as the first level data cache (L1 cache). The DCU 1074 can handle multiple outstanding cache misses and continue to service incoming stores and loads. It also supports maintaining cache coherency. The data TLB unit 1072 is a cache used to improve virtual address translation speed by mapping virtual and physical address spaces. In one exemplary implementation, the memory access units 1064 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 1072 in the memory unit 1070.
The L2 cache unit 1076 may be coupled to one or more other levels of cache and, ultimately, to the main memory.

In one implementation, the data prefetcher 1080 speculatively loads/prefetches data into the DCU 1074 by automatically predicting which data a program is about to consume. Prefetching can refer to transferring data stored in one memory location (e.g., a lower-level cache or memory) of the memory hierarchy to a higher-level memory location that is closer to the processor (e.g., yields lower access latency) before the processor actually demands the data. More specifically, prefetching may refer to the early retrieval of data from one of the lower-level caches/memory into the data cache and/or prefetch buffer before the processor issues a demand for the specific data being returned.

The processor 1000 can support one or more instruction sets (for example, the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of Imagination Technologies of Kings Langley, Hertfordshire, UK; or the ARM instruction set of ARM Holdings of Sunnyvale, California (with optional additional extensions, such as NEON)).

It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads) and may do so in a variety of ways, including time-sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that the physical core is simultaneously multithreading), or a combination thereof (for example, time-sliced fetch and decode followed by simultaneous multithreading, as in Intel Hyper-Threading technology).

Although register renaming is described in the context of out-of-order execution, it should be understood that register renaming can also be used in an in-order architecture. Although the illustrated implementation of the processor also includes separate instruction and data cache units and a shared L2 cache unit, alternative implementations may have a single internal cache for both instructions and data, such as a level 1 (L1) internal cache, or multiple levels of internal cache. In some implementations, the system may include a combination of an internal cache and an external cache that is external to the core and/or processor. Alternatively, all of the cache may be external to the core and/or processor.

FIG. 10B is a block diagram illustrating an in-order pipeline as well as a register renaming stage and an out-of-order issue/execution pipeline implemented by the processor 1000 of FIG. 10A, according to some implementations of the present disclosure. The solid-line boxes in FIG. 10B illustrate the in-order pipeline 1001, while the dashed-line boxes illustrate the register renaming, out-of-order issue/execution pipeline 1003. In FIG. 10B, the pipelines 1001 and 1003 include a fetch stage 1002, a length decode stage 1004, a decode stage 1006, an allocation stage 1008, a rename stage 1010, a schedule (also called dispatch or issue) stage 1012, a register read/memory read stage 1014, an execute stage 1016, a write back/memory write stage 1018, an exception handling stage 1020, and a commit stage 1022. In some implementations, the ordering of the stages 1002-1022 may be different from that shown and is not limited to the specific ordering shown in FIG. 10B.

FIG. 11 illustrates a block diagram of the microarchitecture of a processor 1100, according to an implementation of the present disclosure.
In some implementations, the processor 1100 includes logic circuits of a processor or integrated circuit that implement hardware support for restricting the use of encryption keys by untrusted software. In some implementations, an instruction according to one implementation can be implemented to operate on data elements having sizes of byte, word, doubleword, quadword, and so on, as well as data types such as single- and double-precision integer and floating point data types. In one implementation, the in-order front end 1101 is the part of the processor 1100 that fetches the instructions to be executed and prepares them for later use in the processor pipeline. Implementations of page addition and content copying may be implemented in the processor 1100.

The front end 1101 may include several units. In one implementation, the instruction prefetcher 1126 fetches instructions from memory and feeds them to an instruction decoder 1128, which in turn decodes or interprets them. For example, in one implementation, the decoder decodes a received instruction into one or more operations called "microinstructions" or "micro-operations" (also called micro ops or uops) that the machine can execute. In other implementations, the decoder parses the instruction into an opcode and corresponding data and control fields, which are used by the microarchitecture to perform operations according to one implementation. In one implementation, the trace cache 1130 takes the decoded uops and assembles them into program-ordered sequences or traces in the uop queue 1134 for execution. When the trace cache 1130 encounters a complex instruction, the microcode ROM (or RAM) 1132 provides the uops needed to complete the operation.

Some instructions are converted into a single micro op, whereas others require several micro ops to complete the full operation. In one implementation, if more than four micro ops are required to complete an instruction, the instruction decoder 1128 accesses the microcode ROM 1132 to perform the instruction. For one implementation, an instruction can be decoded into a small number of micro ops for processing at the instruction decoder 1128. In another implementation, if several micro ops are required to complete the operation, the instruction can be stored in the microcode ROM 1132. The trace cache 1130 refers to an entry point programmable logic array (PLA) to determine the correct microinstruction pointer for reading the microcode sequences from the microcode ROM 1132 to complete one or more instructions according to one implementation. After the microcode ROM 1132 finishes sequencing the micro ops for an instruction, the front end 1101 of the machine resumes fetching micro ops from the trace cache 1130.

The out-of-order execution engine 1103 is where instructions are prepared for execution. The out-of-order execution logic has several buffers to smooth and reorder the flow of instructions as they travel down the pipeline and are scheduled for execution, in order to optimize performance. The allocator logic allocates the machine buffers and resources that each uop needs in order to execute. The register renaming logic renames logical registers onto entries in a register set. The allocator also allocates an entry for each uop in one of two uop queues, one for memory operations and one for non-memory operations, in front of the instruction schedulers.
The instruction schedulers are: the memory scheduler, the fast scheduler 1102, the slow/general floating point scheduler 1104, and the simple floating point scheduler 1106. The uop schedulers 1102, 1104, 1106 determine when a uop is ready to execute based on the readiness of its dependent input register operand sources and the availability of the execution resources the uop needs to complete its operation. The fast scheduler 1102 of one implementation can schedule on each half of the main clock cycle, while the other schedulers can only schedule once per main processor clock cycle. The schedulers arbitrate for the dispatch ports to schedule uops for execution.

The register sets 1108 and 1110 sit between the schedulers 1102, 1104, 1106 and the execution units 1112, 1114, 1116, 1118, 1120, 1122, 1124 in the execution block 1111. There are separate register sets 1108 and 1110 for integer and floating point operations, respectively. Each register set 1108, 1110 of one implementation also includes a bypass network that can bypass or forward just-completed results that have not yet been written into the register set to new dependent uops. The integer register set 1108 and the floating point register set 1110 can also communicate data with each other. For one implementation, the integer register set 1108 is split into two separate register sets: one register set for the low-order 32 bits of data and a second register set for the high-order 32 bits of data. The floating point register set 1110 of one implementation has 128-bit wide entries, because floating point instructions typically have operands from 64 to 128 bits in width.

The execution block 1111 includes the execution units 1112, 1114, 1116, 1118, 1120, 1122, 1124, where the instructions are actually executed. This section includes the register sets 1108 and 1110 that store the integer and floating point data operand values the microinstructions need to execute. The processor 1100 of one implementation comprises several execution units: address generation unit (AGU) 1112, AGU 1114, fast ALU 1116, fast ALU 1118, slow ALU 1120, floating point ALU 1122, and floating point move unit 1124. For one implementation, the floating point execution blocks 1122, 1124 execute floating point, MMX, SIMD, SSE, and other operations. The floating point ALU 1122 of one implementation includes a 64-bit by 64-bit floating point divider to execute divide, square root, and remainder micro ops. For implementations of the present disclosure, instructions involving a floating point value may be handled with the floating point hardware.

In one implementation, ALU operations go to the high-speed ALU execution units 1116, 1118. The fast ALUs 1116, 1118 of one implementation can execute fast operations with an effective latency of half a clock cycle. For one implementation, most complex integer operations go to the slow ALU 1120, as the slow ALU 1120 includes integer execution hardware for long-latency operations, such as a multiplier, shifts, flag logic, and branch processing. Memory load/store operations are executed by the AGUs 1112, 1114. For one implementation, the integer ALUs 1116, 1118, 1120 are described in the context of performing integer operations on 64-bit data operands. In alternative implementations, the ALUs 1116, 1118, 1120 can be implemented to support a variety of data bit widths, including 16, 32, 128, 256, and so on.
Similarly, the floating point units 1122, 1124 can be implemented to support a range of operands having bits of various widths. For one implementation, the floating point units 1122, 1124 can operate on 128-bit wide packed data operands in conjunction with SIMD and multimedia instructions.

In one implementation, the uop schedulers 1102, 1104, 1106 dispatch dependent operations before the parent load has finished executing. Because uops are speculatively scheduled and executed in the processor 1100, the processor 1100 also includes logic to handle memory misses. If a data load misses in the data cache, there can be dependent operations in flight in the pipeline that have left the scheduler with temporarily incorrect data. A replay mechanism tracks and re-executes the instructions that used the incorrect data. Only the dependent operations need to be replayed; the independent ones are allowed to complete. The schedulers and replay mechanism of one implementation of a processor are also designed to catch instruction sequences for text string comparison operations.

The term "register" may refer to an on-board processor storage location used as part of an instruction to identify operands. In other words, the registers may be those that are usable from outside the processor (from a programmer's perspective). However, the registers of an implementation should not be limited in meaning to a particular type of circuit. Rather, a register of an implementation is capable of storing and providing data and of performing the functions described herein. The registers described herein can be implemented by circuitry within a processor using any number of different techniques (for example, dedicated physical registers, dynamically allocated physical registers using register renaming, combinations of dedicated and dynamically allocated physical registers, etc.). In one implementation, integer registers store 32-bit integer data. A register set of one implementation also contains eight multimedia SIMD registers for packed data.

For the discussion herein, the registers are understood to be data registers designed to hold packed data, such as the 64-bit wide MMX™ registers (also referred to as "mm" registers in some instances) in microprocessors enabled with MMX technology from Intel Corporation of Santa Clara, California. These MMX registers, available in both integer and floating point forms, can operate with the packed data elements that accompany SIMD and SSE instructions. Similarly, 128-bit wide XMM registers relating to SSE2, SSE3, SSE4, or beyond (referred to generically as "SSEx") technology can also be used to hold such packed data operands. In one implementation, when storing packed data and integer data, the registers do not need to differentiate between the two data types. In one implementation, integer and floating point data are contained either in the same register set or in different register sets. Furthermore, in one implementation, floating point and integer data may be stored in different registers or in the same registers.

Implementations may be realized in many different system types. Referring now to FIG. 12, a block diagram of a multiprocessor system 1200 that may implement hardware support for restricting the use of encryption keys by untrusted software is shown, according to an implementation.
Implementations may be used in many different system types. Referring now to FIG. 12, shown is a block diagram of a multiprocessor system 1200 that may implement hardware support for restricting the use of encryption keys by untrusted software, in accordance with an implementation. As shown in FIG. 12, the multiprocessor system 1200 is a point-to-point interconnect system and includes a first processor 1270 and a second processor 1280 coupled via a point-to-point interconnect 1250. As shown in FIG. 12, each of the processors 1270 and 1280 may be multicore processors, including first and second processor cores (i.e., processor cores 1274a and 1274b and processor cores 1284a and 1284b), although potentially many more cores may be present in the processors. Although shown with two processors 1270, 1280, it is to be understood that the scope of the present disclosure is not so limited. In other implementations, one or more additional processors may be present in a given system.

Processors 1270 and 1280 are shown including integrated memory controller units 1272 and 1282, respectively. The processor 1270 also includes, as part of its bus controller units, point-to-point (P-P) interfaces 1276 and 1278; similarly, the second processor 1280 includes P-P interfaces 1286 and 1288. The processors 1270, 1280 may exchange information via the point-to-point (P-P) interface 1250 using P-P interface circuits 1278, 1288. As shown in FIG. 12, the IMCs 1272 and 1282 couple the processors to respective memories, namely a memory 1232 and a memory 1234, which may be portions of main memory locally attached to the respective processors.

The processors 1270, 1280 may exchange information with a chipset 1290 via individual P-P interfaces 1252, 1254 using point-to-point interface circuits 1276, 1294, 1286, 1298. The chipset 1290 may also exchange information with a high-performance graphics circuit 1238 via a high-performance graphics interface 1292.

The chipset 1290 may be coupled to a first bus 1216 via an interface 1296. In one implementation, the first bus 1216 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or an interconnect bus, although the scope of the present disclosure is not so limited.

As shown in FIG. 12, various I/O devices 1214 may be coupled to the first bus 1216, along with a bus bridge 1218 that couples the first bus 1216 to a second bus 1220. In one embodiment, the second bus 1220 may be a low pin count (LPC) bus. Various devices may be coupled to the second bus 1220, including, for example, a keyboard and/or mouse 1222, communication devices 1227, and a storage unit 1228, such as a disk drive or other mass storage device, which in one embodiment may include instructions/code and data 1230. Further, an audio I/O 1224 may be coupled to the second bus 1220. Note that other architectures are possible. For example, instead of the point-to-point architecture of FIG. 12, a system may implement a multi-drop bus or other such architecture.

Referring now to FIG. 13, shown is a block diagram of a third system 1300 that may implement hardware support for restricting the use of encryption keys by untrusted software, in accordance with an implementation of the present disclosure. Like elements in FIGS. 12 and 13 bear like reference numerals, and certain aspects of FIG. 12 have been omitted from FIG. 13 in order to avoid obscuring other aspects of FIG. 13.

FIG. 13 illustrates processors 1370, 1380. In one embodiment, the processors 1370, 1380 may implement hybrid cores as described above. The processors 1370 and 1380 may include integrated memory and I/O control logic ("CL") 1372 and 1392, respectively, and communicate with each other via point-to-point (P-P) interfaces 1378 and 1388, respectively.
The processors 1370 and 1380 each communicate with the chipset 1390 via point-to-point interconnects 1352 and 1354 through the respective P-P interfaces 1376 to 1394 and 1386 to 1398 as shown in the figure. For at least one implementation, the CL 1372, 1392 may include integrated memory controller units such as described herein. In addition, the CL 1372, 1392 may also include I/O control logic. FIG. 13 illustrates that the memories 1332, 1334 are coupled to the CL 1372, 1392, and that I/O devices 1314 are also coupled to the control logic 1372, 1392. Legacy I/O devices 1315 are coupled to the chipset 1390 via an interface 1396.

FIG. 14 is an exemplary system on a chip (SoC) 1400 that may include one or more of the cores 1402A...1402N that can implement hardware support for restricting the use of encryption keys by untrusted software. Other system designs and configurations known in the art for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, microcontrollers, cell phones, portable media players, handheld devices, and various other electronic devices are also suitable. In general, a huge variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are generally suitable.

Within the exemplary SoC 1400 of FIG. 14, the dashed boxes are features on more advanced SoCs. An interconnect unit(s) 1403 may be coupled to: an application processor 1417 which includes a set of one or more cores 1402A-N (which respectively include one or more cache units 1404A...1404N) and shared cache unit(s) 1406; a system agent unit 1410; a bus controller unit(s) 1416; an integrated memory controller unit(s) 1414; a set of one or more media processors 1420 which may include integrated graphics logic 1408, an image processor 1424 for providing still and/or video camera functionality, an audio processor 1426 for providing hardware audio acceleration, and a video processor 1428 for providing video encode/decode acceleration; a static random access memory (SRAM) unit 1430; a direct memory access (DMA) unit 1432; and a display unit 1440 for coupling to one or more external displays.

Turning next to FIG. 15, an implementation of a system on-chip (SoC) design that may implement hardware support for restricting the use of encryption keys by untrusted software, in accordance with implementations of the present disclosure, is depicted. As an illustrative example, the SoC 1500 is included in user equipment (UE). In one implementation, UE refers to any device to be used by an end user to communicate, such as a hand-held phone, smartphone, tablet, ultra-thin notebook, notebook with a broadband adapter, or any other similar communication device. A UE may connect to a base station or node, which may correspond in nature to a mobile station (MS) in a GSM network. The implementations of page addition and content copying may be implemented in the SoC 1500.

Here, the SoC 1500 includes two cores, 1506 and 1507. Similar to the discussion above, the cores 1506 and 1507 may conform to an instruction set architecture, such as a processor having the Intel® Architecture Core™, an Advanced Micro Devices, Inc. (AMD) processor, a MIPS-based processor, an ARM-based processor design, or a customer thereof, as well as their licensees or adopters.
The cores 1506 and 1507 are coupled to cache control 1508 that is associated with a bus interface unit 1509 and an L2 cache 1510 to communicate with other parts of the system 1500. An interconnect 1511 includes an on-chip interconnect, such as an IOSF, AMBA, or other interconnect discussed above, which can implement one or more aspects of the described disclosure.

In one implementation, an SDRAM controller 1540 may connect to the interconnect 1511 via the cache 1510. The interconnect 1511 provides communication channels to the other components, such as a Subscriber Identity Module (SIM) 1530 to interface with a SIM card, a boot ROM 1535 to hold boot code for execution by the cores 1506 and 1507 to initialize and boot the SoC 1500, the SDRAM controller 1540 to interface with external memory (e.g., DRAM 1560), a flash controller 1545 to interface with non-volatile memory (e.g., flash 1565), a peripheral control 1550 (e.g., a Serial Peripheral Interface) to interface with peripherals, video codecs 1520 and a video interface 1525 to display and receive input (e.g., touch-enabled input), a GPU 1515 to perform graphics-related computations, etc. Any of these interfaces may incorporate aspects of the implementations described herein.

In addition, the system illustrates peripherals for communication, such as a power control module 1555, a Bluetooth module 1570, a 3G modem 1575, a GPS 1580, and Wi-Fi 1585. Note, as stated above, a UE includes a radio for communication. As a result, these peripheral communication modules may not all be included. However, in a UE, some form of a radio for external communication should be included.

FIG. 16 illustrates a diagrammatic representation of a machine in the example form of a computing system 1600 within which a set of instructions may be executed for causing the machine to implement hardware support for restricting the use of encryption keys by untrusted software according to any one or more of the methodologies discussed herein. In alternative implementations, the machine may be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, or the Internet. The machine may operate in the capacity of a server or a client device in a client-server network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a cellular telephone, a web appliance, a server, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. The implementations of page adding and content copying can be implemented in the computing system 1600.

The computing system 1600 includes a processing device 1602, a main memory 1604 (e.g., flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory 1606 (e.g., flash memory, static random access memory (SRAM), etc.),
and a data storage device 1616, which communicate with each other via a bus 1608.

The processing device 1602 represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processing device may be a complex instruction set computing (CISC) microprocessor, reduced instruction set computer (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or a processor implementing other instruction sets, or processors implementing a combination of instruction sets. The processing device 1602 may also be one or more special-purpose processing devices such as an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), a network processor, or the like. In one implementation, the processing device 1602 may include one or more processor cores. The processing device 1602 is configured to execute the processing logic 1626 for performing the operations discussed herein.

In one implementation, the processing device 1602 can be part of a processor or an integrated circuit that includes the disclosed LLC caching architecture. Alternatively, the computing system 1600 can include other components as described herein. It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways including time-sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that the physical core is simultaneously multithreading), or a combination thereof (e.g., time-sliced fetching and decoding followed by simultaneous multithreading, such as in Intel® Hyper-Threading Technology).

The computing system 1600 may further include a network interface device 1618 communicably coupled to a network 1619. The computing system 1600 also may include a video display device 1610 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), an alphanumeric input device 1612 (e.g., a keyboard), a cursor control device 1614 (e.g., a mouse), a signal generation device 1620 (e.g., a speaker), or other peripheral devices. Furthermore, the computing system 1600 may include a graphics processing unit 1622, a video processing unit 1628, and an audio processing unit 1632. In another implementation, the computing system 1600 may include a chipset (not shown), which refers to a group of integrated circuits, or chips, that are designed to work with the processing device 1602 and control communications between the processing device 1602 and external devices. For example, the chipset may be a set of chips on a motherboard that links the processing device 1602 to very high-speed devices, such as the main memory 1604 and graphic controllers, as well as linking the processing device 1602 to lower-speed peripheral buses of peripherals, such as USB, PCI, or ISA buses.

The data storage device 1616 may include a computer-readable storage medium 1624 on which is stored software 1626 embodying any one or more of the methodologies of functions described herein.
The software 1626 may also reside, completely or at least partially, within the main memory 1604 as instructions 1626 and/or within the processing device 1602 as processing logic during execution thereof by the computing system 1600; the main memory 1604 and the processing device 1602 also constituting computer-readable storage media.

The computer-readable storage medium 1624 may also be used to store instructions 1626 utilizing the processing device 1602, and/or a software library containing methods that call the above applications. While the computer-readable storage medium 1624 is shown in an example implementation to be a single medium, the term "computer-readable storage medium" should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term "computer-readable storage medium" shall also be taken to include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by the machine and that causes the machine to perform any one or more of the methodologies of the disclosed implementations. The term "computer-readable storage medium" shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media.

The following examples pertain to further implementations.

Example 1 is a processor comprising: 1) a cryptographic engine to control access, using a secure area key identifier (ID), to one or more memory ranges of memory, the one or more memory ranges being allocatable for flexible conversion to secure pages of an architecturally protected memory region; and 2) a processor core coupled to the cryptographic engine, the processor core to: a) determine that a physical address associated with a request to access the memory corresponds to a secure page within the one or more memory ranges of the memory; b) determine that a first key ID located within the physical address does not match the secure area key ID; and c) issue a page fault and deny access to the secure page in the memory.

In Example 2, the processor of Example 1, wherein the processor core is further to execute a set of instructions within firmware of a basic input output system (BIOS), and wherein, in execution of the set of instructions, the processor core is to: a) discover that a host-convertible secure area mode and a security extension mode are enabled; b) program a security extension key into the cryptographic engine to correspond to the secure area key ID; and c) reserve the one or more memory ranges of the memory for flexible conversion to secure pages.

In Example 3, the processor of Example 2, wherein the processor core is further to: execute memory check firmware to fail a memory check process in response to detection that the secure area key ID is not assigned for use with the security extension key.

In Example 4, the processor of Example 2, wherein the processor core is further to execute the set of instructions to allocate one of a plurality of key IDs to be used exclusively as the secure area key ID.

In Example 5, the processor of Example 2, wherein the processor core is further to execute a central processing unit identifier (CPUID) instruction, wherein the CPUID instruction has: 1) a first register input
to determine the one or more memory ranges of the memory allocated for flexible conversion to secure pages; and 2) a second register input to determine the secure area key ID and associated security attributes.

In Example 6, the processor of Example 1, wherein the processor core is further to: map, using the secure area key ID, a second guest virtual address of the secure page, via a page table and extended page tables, to a second physical address, such that the second physical address includes the secure area key ID.

Various implementations may have different combinations of the structural features described above. For instance, all optional features of the processors and methods described above may also be implemented with respect to the systems described herein, and specifics in the examples may be used anywhere in one or more implementations.

Example 7 is a processor comprising: 1) a cryptographic engine to control access, using a secure area key identifier (ID), to one or more memory ranges of memory, the one or more memory ranges being allocatable for flexible conversion to secure pages of an architecturally protected memory region; and 2) a processor core coupled to the cryptographic engine, the processor core to: a) determine that a physical address associated with a request to access the memory corresponds to a non-secure page of the memory; b) determine that a first key ID located within the physical address matches the secure area key ID; and c) deny the request access to the non-secure page of the memory.

In Example 8, the processor of Example 7, wherein the processor core is further to: a) replace the physical address in the request with an address of an abort page, the abort page being linked with incorrect data; and b) allow a system agent that issued the request to access the abort page.

In Example 9, the processor of Example 7, wherein the processor core is further to execute a set of instructions within firmware of a basic input output system (BIOS), and wherein, in execution of the set of instructions, the processor core is to: a) discover that a host-convertible secure area mode and a security extension mode are enabled; b) program a security extension key into the cryptographic engine to correspond to the secure area key ID; and c) reserve the one or more memory ranges of the memory for flexible conversion to secure pages.

In Example 10, the processor of Example 9, wherein the processor core is further to execute the set of instructions to allocate one of a plurality of key IDs to be used exclusively as the secure area key ID.

In Example 11, the processor of Example 9, wherein the processor core is further to: execute memory check firmware to fail a memory check process in response to detection that the secure area key ID is not assigned for use with the security extension key.

In Example 12, the processor of Example 9, wherein the processor core is further to execute a central processing unit identifier (CPUID) instruction, wherein the CPUID instruction has: 1) a first register input to determine the one or more memory ranges of the memory allocated for flexible conversion to secure pages; and 2) a second register input to determine the secure area key ID and associated security attributes.
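To make the complementary checks of Examples 1 and 7 concrete, here is a minimal C sketch (an illustration under stated assumptions, not the patented logic; in particular, the bit position and width of the key ID field within the physical address are invented for the example):

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative constants; real field positions are implementation-specific. */
#define KEYID_SHIFT 46u          /* assumed: key ID carved from upper PA bits */
#define KEYID_MASK  0x3Fu

typedef enum { ACCESS_OK, PAGE_FAULT, ABORT_PAGE } access_result_t;

static uint8_t key_id_of(uint64_t phys_addr) {
    return (uint8_t)((phys_addr >> KEYID_SHIFT) & KEYID_MASK);
}

/* Access is allowed only when the key ID embedded in the physical address
 * agrees with the secure/non-secure status of the target page. */
static access_result_t check_access(uint64_t phys_addr,
                                    bool page_is_secure,
                                    uint8_t secure_area_key_id) {
    bool has_secure_key = key_id_of(phys_addr) == secure_area_key_id;
    if (page_is_secure && !has_secure_key)
        return PAGE_FAULT;       /* Example 1: page fault, access denied   */
    if (!page_is_secure && has_secure_key)
        return ABORT_PAGE;       /* Examples 7-8: redirect to an abort page */
    return ACCESS_OK;
}
```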
Various implementations may have different combinations of the structural features described above. For instance, all optional features of the processors and methods described above may also be implemented with respect to the systems described herein, and specifics in the examples may be used anywhere in one or more implementations.

Example 13 is a system comprising: 1) a cache and home agent (CHA) of a memory subsystem, the CHA to: a) set a mesh security bit of a cache line in response to detection that a first key identifier (ID) within a physical address of the cache line matches a secure area key ID; and b) issue a write operation for the cache line to memory; and 2) a cryptographic engine coupled to the CHA, wherein the cryptographic engine is to: set, as part of completion of the write operation, a memory security bit within metadata of the cache line in the memory to a value of the mesh security bit.

In Example 14, the system of Example 13, wherein the cryptographic engine is further to: a) detect a read operation for the cache line stored in the memory; and b) return, to fulfill the read operation, a poison bit to a requesting agent in response to detection of a mismatch between values of the mesh security bit and the memory security bit.

In Example 15, the system of Example 14, wherein the cryptographic engine is further to return fixed pattern data to the requesting agent to fulfill the read operation.

In Example 16, the system of Example 13, wherein the cryptographic engine is further to: a) detect a read operation for the cache line stored in the memory; and b) return, to fulfill the read operation, data of the cache line to a requesting agent in response to a determination that values of the mesh security bit and the memory security bit match.
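A minimal sketch of the write/read behavior in Examples 13-16 follows (an illustration only: the structure names, the 64-byte line size, and the fixed 0xFF pattern are assumptions made for the example, not details from the source):

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint8_t data[64];
    bool    mesh_secure;   /* security bit carried with the cache line */
} cache_line_t;

typedef struct {
    uint8_t data[64];
    bool    mem_secure;    /* security bit kept in per-line metadata   */
} mem_line_t;

/* Example 13: on a write, the memory-side security bit is set to the
 * value of the security bit carried with the cache line. */
static void crypto_engine_write(mem_line_t *m, const cache_line_t *c) {
    for (int i = 0; i < 64; i++) m->data[i] = c->data[i];
    m->mem_secure = c->mesh_secure;
}

/* Examples 14-16: on a read, a mismatch between the two bits returns a
 * poison indication (with fixed pattern data); a match returns the data.
 * The return value models the poison bit. */
static bool crypto_engine_read(const mem_line_t *m, bool mesh_secure,
                               uint8_t out[64]) {
    if (m->mem_secure != mesh_secure) {
        for (int i = 0; i < 64; i++) out[i] = 0xFF;  /* fixed pattern  */
        return true;                                  /* poison bit set */
    }
    for (int i = 0; i < 64; i++) out[i] = m->data[i];
    return false;
}
```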
Various implementations may have different combinations of the structural features described above. For example, all optional features of the processors and methods described above may also be implemented with respect to the systems described herein, and specifics in the examples may be used anywhere in one or more implementations.

Example 17 is a method comprising: 1) selecting, by a processor, an evicted page of memory to convert to a first secure page; and 2) executing, by the processor, a secure area conversion instruction to initialize the evicted page as the first secure page by: 3) writing contents of the evicted page to a zero value; 4) computing a message authentication code (MAC) value using a physical address of the evicted page, data to be stored in the first secure page, and a secure area key identifier (ID), the secure area key ID corresponding to an architecturally protected memory region of the memory that is to contain the first secure page; and 5) storing the MAC value in the first secure page.

In Example 18, the method of Example 17, further comprising: 1) executing, by the processor, a memory barrier instruction to verify that operations associated with initialization of the first secure page are complete; and 2) making, by the processor, the first secure page accessible to one of a virtual machine or an application that is authorized to access the architecturally protected memory region of the memory.

In Example 19, the method of Example 18, further comprising: 1) selecting, by the processor, the first secure page for eviction and conversion to a non-secure page; 2) making the first secure page inaccessible to the one of the virtual machine or the application that is authorized to access the architecturally protected memory region of the memory; 3) invalidating a mapping of the first secure page in a translation lookaside buffer of the processor; 4) executing, by the processor, a non-secure area conversion instruction to cause contents of one or more cache lines, which correspond to the first secure page and contain the secure area key ID, to be written back to the memory and flushed; and 5) returning the first secure page to a list of evicted pages available to the processor for allocation to store data associated with a new key ID.

In Example 20, the method of Example 17, further comprising: 1) determining that a physical address associated with a request to access the memory corresponds to the first secure page within one or more memory ranges of the memory; 2) determining that a first key ID located within the physical address does not match the secure area key ID; and 3) issuing a page fault and denying a system agent that issued the request access to the first secure page in the memory.

In Example 21, the method of Example 17, further comprising: 1) determining that a physical address associated with a request to access the memory corresponds to a non-secure page of the memory; 2) determining that a first key ID located within the physical address matches the secure area key ID; and 3) denying a system agent that issued the request access to the non-secure page of the memory.
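As a sketch of the page initialization in Example 17 (an illustration only: the page layout, the placement of the MAC within the page, and especially the placeholder MAC function are assumptions; a real engine would use a keyed cryptographic MAC, not the simple mixing shown):

```c
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE 4096

/* Placeholder MAC: FNV-style mixing stands in for a real keyed MAC and
 * exists only to make the data flow of step 4 concrete. */
static uint64_t mac64(uint64_t phys_addr, const uint8_t *data, size_t len,
                      uint8_t secure_area_key_id) {
    uint64_t h = 0x9E3779B97F4A7C15ull ^ phys_addr ^ secure_area_key_id;
    for (size_t i = 0; i < len; i++)
        h = (h ^ data[i]) * 0x100000001B3ull;
    return h;
}

typedef struct {
    uint8_t  data[PAGE_SIZE - sizeof(uint64_t)];
    uint64_t mac;             /* MAC stored in the page itself (step 5) */
} secure_page_t;

/* Example 17: initialize an evicted page as a secure page. */
static void convert_to_secure(secure_page_t *page, uint64_t phys_addr,
                              uint8_t secure_area_key_id) {
    memset(page->data, 0, sizeof(page->data));        /* step 3: zero   */
    page->mac = mac64(phys_addr, page->data,          /* step 4: MAC    */
                      sizeof(page->data), secure_area_key_id);
}
```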
Various implementations may have different combinations of the operational features described above. For example, all optional features of the methods described above may also be implemented with respect to a non-transitory computer-readable storage medium, and specifics in the examples may be used anywhere in one or more implementations.

Example 22 is a non-transitory computer-readable medium storing instructions which, when executed by a processor having a core coupled to system memory, cause the processor to execute a plurality of logic operations comprising: 1) selecting, by the processor, an evicted page of the memory to convert to a first secure page; and 2) executing, by the processor, a secure area conversion instruction to initialize the evicted page as the first secure page by: 3) writing contents of the evicted page to a zero value; 4) computing a message authentication code (MAC) value using a physical address of the evicted page, data to be stored in the first secure page, and a secure area key identifier (ID), the secure area key ID corresponding to an architecturally protected memory region of the memory that is to contain the first secure page; and 5) storing the MAC value in the first secure page.

In Example 23, the non-transitory computer-readable medium of Example 22, the plurality of logic operations further comprising: 1) executing, by the processor, a memory barrier instruction to verify that operations associated with initialization of the first secure page are complete; and 2) making, by the processor, the first secure page accessible to one of a virtual machine or an application that is authorized to access the architecturally protected memory region of the memory.

In Example 24, the non-transitory computer-readable medium of Example 23, the plurality of logic operations further comprising: 1) selecting, by the processor, the first secure page for eviction and conversion to a non-secure page; 2) making the first secure page inaccessible to the one of the virtual machine or the application that is authorized to access the architecturally protected memory region of the memory; 3) invalidating a mapping of the first secure page in a translation lookaside buffer of the processor; 4) executing, by the processor, a non-secure area conversion instruction to cause contents of one or more cache lines, which correspond to the first secure page and contain the secure area key ID, to be written back to the memory and flushed; and 5) returning the first secure page to a list of evicted pages available to the processor for allocation to store data associated with a new key ID.

In Example 25, the non-transitory computer-readable medium of Example 22, the plurality of logic operations further comprising: 1) in response to receipt of a request to access the memory, walking a page table and extended page tables to translate a guest virtual address of the request to a physical address; 2) determining that the physical address corresponds to the first secure page within the one or more memory ranges of the memory; 3) determining that a first key ID located within the physical address does not match the secure area key ID; and 4) issuing a page fault and denying a system agent that issued the request
access to the first secure page in the memory.

In Example 26, the non-transitory computer-readable medium of Example 22, the plurality of logic operations further comprising: 1) in response to receipt of a request to access the memory, walking a page table and extended page tables to translate a guest virtual address of the request to a physical address; 2) determining that the physical address corresponds to a non-secure page of the memory; 3) determining that a first key ID located within the physical address matches the secure area key ID; and 4) denying a system agent that issued the request access to the non-secure page of the memory.

Example 27 is a system comprising means for performing the method of any one of Examples 17-21.

While the present disclosure has been described with respect to a limited number of implementations, those skilled in the art will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover all such modifications and variations as fall within the true spirit and scope of this present disclosure.

In the description herein, numerous specific details are set forth, such as examples of specific types of processor and system configurations, specific hardware structures, specific architectural and microarchitectural details, specific register configurations, specific instruction types, specific system components, specific measurements/heights, specific processor pipeline stages and operation, etc., in order to provide a thorough understanding of the present disclosure. It will be apparent, however, to one skilled in the art that these specific details need not be employed to practice the present disclosure. In other instances, well known components or methods, such as specific and alternative processor architectures, specific logic circuits/code for described algorithms, specific firmware code, specific interconnect operation, specific logic configurations, specific manufacturing techniques and materials, specific compiler implementations, specific expression of algorithms in code, specific power-down and gating techniques/logic, and other specific operational details of a computer system have not been described in detail in order to avoid unnecessarily obscuring the present disclosure.

The implementations are described with reference to determining validity of data in cache lines of a sector-based cache in specific integrated circuits, such as in computing platforms or microprocessors. The implementations may also be applicable to other types of integrated circuits and programmable logic devices. For example, the disclosed implementations are not limited to desktop computer systems or portable computers, such as Ultrabook™ computers, and may also be used in other devices, such as handheld devices, tablets, other thin notebooks, system-on-a-chip (SoC) devices, and embedded applications. Some examples of handheld devices include cellular phones, Internet protocol devices, digital cameras, personal digital assistants (PDAs), and handheld PCs. Embedded applications typically include a microcontroller, a digital signal processor (DSP), a system on a chip, network computers (NetPCs), set-top boxes, network hubs, wide area network (WAN) switches, or any other system that can perform the functions and operations taught below. It is described that the system may be any kind of computer or embedded system.
The disclosed implementations may especially be used for low-end devices, like wearable devices (e.g., watches), electronic implants, sensory and control infrastructure devices, controllers, supervisory control and data acquisition (SCADA) systems, or the like. Moreover, the apparatuses, methods, and systems described herein are not limited to physical computing devices, but may also relate to software optimizations for energy conservation and efficiency.

Although the implementations herein are described with reference to a processor, other implementations are applicable to other types of integrated circuits and logic devices. Similar techniques and teachings of implementations of the present disclosure can be applied to other types of circuits or semiconductor devices that can benefit from higher pipeline throughput and improved performance. The teachings of implementations of the present disclosure are applicable to any processor or machine that performs data manipulations. However, the present disclosure is not limited to processors or machines that perform 512-bit, 256-bit, 128-bit, 64-bit, 32-bit, or 16-bit data operations and can be applied to any processor and machine in which manipulation or management of data is performed. In addition, the description herein provides examples, and the accompanying drawings show various examples for the purposes of illustration. However, these examples should not be construed in a limiting sense as they are merely intended to provide examples of implementations of the present disclosure rather than to provide an exhaustive list of all possible implementations of implementations of the present disclosure.

Although the examples above describe instruction handling and distribution in the context of execution units and logic circuits, other implementations of the present disclosure can be accomplished by way of data or instructions stored on a machine-readable, tangible medium, which, when performed by a machine, cause the machine to perform functions consistent with at least one implementation of the present disclosure. In one implementation, functions associated with implementations of the present disclosure are embodied in machine-executable instructions. The instructions can be used to cause a general-purpose or special-purpose processor that is programmed with the instructions to perform the steps of the present disclosure. Implementations of the present disclosure may be provided as a computer program product or software which may include a machine or computer-readable medium having stored thereon instructions which may be used to program a computer (or other electronic devices) to perform one or more operations according to implementations of the present disclosure. Alternatively, operations of implementations of the present disclosure might be performed by specific hardware components that contain fixed-function logic for performing the operations, or by any combination of programmed computer components and fixed-function hardware components.

Instructions used to program logic to perform implementations of the present disclosure can be stored within a memory in the system, such as DRAM, cache, flash memory, or other storage. Furthermore, the instructions can be distributed via a network or by way of other computer-readable media.
Thus, a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), including, but not limited to, floppy diskettes, optical disks, compact disc read-only memory (CD-ROM), magneto-optical disks, read-only memory (ROM), random access memory (RAM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), magnetic or optical cards, flash memory, or a tangible, machine-readable storage used in the transmission of information over the Internet via electrical, optical, acoustical, or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.). Accordingly, the computer-readable medium includes any type of tangible machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer).

A design may go through various stages, from creation to simulation to fabrication. Data representing a design may represent the design in a number of manners. First, as is useful in simulations, the hardware may be represented using a hardware description language or another functional description language. Additionally, a circuit-level model with logic and/or transistor gates may be produced at some stages of the design process. Furthermore, most designs, at some stage, reach a level of data representing the physical placement of various devices in the hardware model. In the case where conventional semiconductor fabrication techniques are used, the data representing the hardware model may be the data specifying the presence or absence of various features on different mask layers for masks used to produce the integrated circuit. In any representation of the design, the data may be stored in any form of a machine-readable medium. A memory or a magnetic or optical storage, such as a disc, may be the machine-readable medium to store information transmitted via optical or electrical waves modulated or otherwise generated to transmit such information. When an electrical carrier wave indicating or carrying the code or design is transmitted, to the extent that copying, buffering, or re-transmission of the electrical signal is performed, a new copy is made. Thus, a communication provider or a network provider may store on a tangible, machine-readable medium, at least temporarily, an article, such as information encoded into a carrier wave, embodying techniques of implementations of the present disclosure.

A module as used herein refers to any combination of hardware, software, and/or firmware. As an example, a module includes hardware, such as a micro-controller, associated with a non-transitory medium to store code adapted to be executed by the micro-controller. Therefore, reference to a module, in one implementation, refers to the hardware, which is specifically configured to recognize and/or execute the code to be held on a non-transitory medium. Furthermore, in another implementation, use of a module refers to the non-transitory medium including the code, which is specifically adapted to be executed by the micro-controller to perform predetermined operations.
And as can be inferred, in yet another implementation, the term module (in this example) may refer to the combination of the micro-controller and the non-transitory medium. Often module boundaries that are illustrated as separate commonly vary and potentially overlap. For example, a first and a second module may share hardware, software, firmware, or a combination thereof, while potentially retaining some independent hardware, software, or firmware. In one implementation, use of the term logic includes hardware, such as transistors, registers, or other hardware, such as programmable logic devices.

Use of the phrase "configured to," in one implementation, refers to arranging, putting together, manufacturing, offering to sell, importing, and/or designing an apparatus, hardware, logic, or element to perform a designated or determined task. In this example, an apparatus or element thereof that is not operating is still "configured to" perform a designated task if it is designed, coupled, and/or interconnected to perform said designated task. As a purely illustrative example, a logic gate may provide a 0 or a 1 during operation. But a logic gate "configured to" provide an enable signal to a clock does not include every potential logic gate that may provide a 1 or 0. Instead, the logic gate is one coupled in some manner that during operation the 1 or 0 output is to enable the clock. Note once again that use of the term "configured to" does not require operation, but instead focuses on the latent state of an apparatus, hardware, and/or element, where in the latent state the apparatus, hardware, and/or element is designed to perform a particular task when the apparatus, hardware, and/or element is operating.

Furthermore, use of the phrases "to," "capable of/to," and/or "operable to," in one implementation, refers to some apparatus, logic, hardware, and/or element designed in such a way to enable use of the apparatus, logic, hardware, and/or element in a specified manner. Note as above that use of "to," "capable to," or "operable to," in one implementation, refers to the latent state of an apparatus, logic, hardware, and/or element, where the apparatus, logic, hardware, and/or element is not operating but is designed in such a manner to enable use of the apparatus in a specified manner.

A value, as used herein, includes any known representation of a number, a state, a logical state, or a binary logical state. Often, the use of logic levels, logic values, or logical values is also referred to as 1's and 0's, which simply represents binary logic states. For example, a 1 refers to a high logic level and 0 refers to a low logic level. In one implementation, a storage cell, such as a transistor or flash cell, may be capable of holding a single logical value or multiple logical values. However, other representations of values in computer systems have been used. For example, the decimal number ten may also be represented as a binary value of 1010 and a hexadecimal letter A. Therefore, a value includes any representation of information capable of being held in a computer system.

Moreover, states may be represented by values or portions of values. As an example, a first value, such as a logical one, may represent a default or initial state, while a second value, such as a logical zero, may represent a non-default state. In addition, the terms reset and set, in one implementation, refer to a default and an updated value or state, respectively. For example, a default value potentially includes a high logical value, i.e., reset, while an updated value potentially includes a low logical value, i.e., set.
Note that any combination of values may be used to represent any number of states.

The implementations of methods, hardware, software, firmware, or code set forth above may be implemented via instructions or code stored on a machine-accessible, machine-readable, computer-accessible, or computer-readable medium which are executable by a processing element. A non-transitory machine-accessible/readable medium includes any mechanism that provides (i.e., stores and/or transmits) information in a form readable by a machine, such as a computer or electronic system. For example, a non-transitory machine-accessible medium includes random-access memory (RAM), such as static RAM (SRAM) or dynamic RAM (DRAM); ROM; magnetic or optical storage medium; flash memory devices; electrical storage devices; optical storage devices; acoustical storage devices; other forms of storage devices for holding information received from transitory (propagated) signals (e.g., carrier waves, infrared signals, digital signals); etc., which are to be distinguished from the non-transitory media from which information may be received.

Reference throughout this specification to "one implementation" or "an implementation" means that a particular feature, structure, or characteristic described in connection with the implementation is included in at least one implementation of the present disclosure. Thus, the appearances of the phrases "in one implementation" or "in an implementation" in various places throughout this specification are not necessarily all referring to the same implementation. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more implementations.

In the foregoing specification, a detailed description has been given with reference to specific exemplary implementations. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the present disclosure as set forth in the appended claims.
The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense. Furthermore, the foregoing use of implementation and other exemplary language does not necessarily refer to the same implementation or the same example, but may refer to different and distinct implementations, as well as potentially the same implementation.

Some portions of the detailed description are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is, here and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. The blocks described herein can be hardware or firmware.

It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the above discussion, it is appreciated that throughout the description, discussions utilizing terms such as "defining," "receiving," "determining," "issuing," "linking," "associating," "obtaining," "authenticating," "prohibiting," "executing," "requesting," "communicating," or the like, refer to the actions and processes of a computing system, or similar electronic computing device, that manipulates and transforms data represented as physical (e.g., electronic) quantities within the computing system's registers and memories into other data similarly represented as physical quantities within the computing system memories or registers or other such information storage, transmission, or display devices.

The words "example" or "exemplary" are used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as "example" or "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects or designs. Rather, use of the words "example" or "exemplary" is intended to present concepts in a concrete fashion. As used in this application, the term "or" is intended to mean an inclusive "or" rather than an exclusive "or." That is, unless specified otherwise, or clear from context, "X includes A or B" is intended to mean any of the natural inclusive permutations. That is, if X includes A; X includes B; or X includes both A and B, then "X includes A or B" is satisfied under any of the foregoing instances. In addition, the articles "a" and "an" as used in this application and the appended claims should generally be construed to mean "one or more" unless specified otherwise or clear from context to be directed to a singular form.
Moreover, use of the term "an implementation" or "one implementation" throughout is not intended to mean the same implementation unless described as such. Also, the terms "first," "second," "third," "fourth," etc. as used herein are meant as labels to distinguish among different elements and may not necessarily have an ordinal meaning according to their numerical designation. |
A quantum well transistor or high electron mobility transistor may be formed using a replacement metal gate process. A dummy gate electrode may be used to define sidewall spacers and source drain contact metallizations. The dummy gate electrode may be removed and the remaining structure used as a mask to etch a doped layer to form sources and drains self-aligned to said opening. A high dielectric constant material may coat the sides of said opening and then a metal gate electrode may be deposited. As a result, the sources and drains are self-aligned to the metal gate electrode. In addition, the metal gate electrode is isolated from an underlying barrier layer by the high dielectric constant material. |
What is claimed is: 1. A method comprising: forming a self-aligned source drain in a quantum well transistor. 2. The method of claim 1 including forming a self-aligned source drain from a doped layer, forming an opening in said doped layer, and depositing a gate electrode in said doped layer. 3. The method of claim 2 including depositing a metal gate electrode. 4. The method of claim 3 including using a dummy gate over said doped layer and subsequently removing said dummy gate. 5. The method of claim 4 including using said dummy gate to define a sidewall spacer. 6. The method of claim 5 including using said sidewall spacer to define self-aligned source drain contacts. 7. The method of claim 6 including removing said dummy gate after defining said spacers and said contacts. 8. The method of claim 7 including using said contacts and said spacers as a mask to etch said doped layer and to define a source and drain. 9. The method of claim 8 including etching said doped layer so as to undercut said spacers. 10. The method of claim 9 including depositing a layer in said opening having a dielectric constant greater than 10. 11. The method of claim 10 including forming a metal gate electrode over said dielectric. 12. The method of claim 11 including forming a barrier layer under said gate dielectric. 13. The method of claim 12 including separating said metal gate electrode from said barrier layer by said dielectric. 14. The method of claim 1 including forming a depletion mode transistor by etching through said doped layer. 15. The method of claim 13 including forming an enhancement mode transistor by forming said doped layer over an upper barrier layer and etching into said upper barrier layer such that said gate dielectric extends through said doped layer and into said upper barrier layer. 16. The method of claim 9 including controlling the depth of etching to determine whether an enhancement mode or a depletion mode device is formed. 17. The method of claim 16 including etching through said doped layer and into an underlying barrier layer to form an enhancement device. 18. A method comprising: forming a quantum well transistor with a barrier layer and a Schottky gate metal and a dielectric, between said gate metal and said barrier layer, having a dielectric constant greater than 10. 19. The method of claim 18 including forming a self-aligned source drain in said quantum well transistor. 20. The method of claim 19 including forming a self-aligned source drain from a doped layer, forming an opening in said doped layer, and depositing a gate electrode in said doped layer. 21. The method of claim 20 including depositing a metal gate electrode. 22. The method of claim 21 including using a dummy gate over said doped layer and subsequently removing said dummy gate. 23. The method of claim 22 including using said dummy gate to define a sidewall spacer. 24. The method of claim 23 including using said sidewall spacer to define self-aligned source drain contacts. 25. The method of claim 24 including removing said dummy gate after defining said spacer and said contacts. 26. The method of claim 25 including using said contacts and said spacer as a mask to etch said doped layer and to define a source and drain. 27. The method of claim 26 including etching said doped layer so as to undercut said spacer. 28. The method of claim 27 including depositing a dielectric in said opening having a dielectric constant greater than 10. 29.
The method of claim 28 including forming a metal gate electrode over said dielectric. 30. The method of claim 29 including forming said barrier layer under said dielectric. 31. The method of claim 30 including separating said metal gate electrode from said barrier layer by said dielectric. 32. The method of claim 20 including forming a depletion mode transistor by etching through said doped layer. 33. The method of claim 28 including forming an enhancement mode transistor by forming said doped layer over said barrier layer and etching into said barrier layer such that said dielectric extends through said doped layer and into said barrier layer. 34. The method of claim 27 including controlling the depth of etching to determine whether an enhancement mode or a depletion mode device is formed. 35. The method of claim 34 including etching through said doped layer and into an underlying barrier layer to form an enhancement device. 36. A quantum well transistor comprising: a first and second barrier layer; a quantum well layer between said barrier layers; a gate electrode; and a source drain self-aligned to said gate electrode. 37. The transistor of claim 36 including sidewall spacers on said gate electrode. 38. The transistor of claim 37 wherein said gate electrode is a metal gate electrode. 39. The transistor of claim 38 including a contact metallization to said source and drain. 40. The transistor of claim 36 including a dielectric between said gate electrode and said first barrier layer, said dielectric having a dielectric constant greater than 10. 41. The transistor of claim 40 wherein said dielectric is U-shaped. 42. A quantum well transistor comprising: a first and second barrier layer; a quantum well layer between said barrier layers; a metal gate electrode; and a dielectric between said gate electrode and said first barrier layer, said dielectric having a dielectric constant greater than 10. 43. The transistor of claim 42 including a self-aligned source drain. 44. The transistor of claim 42 including sidewall spacers on said gate electrode. 45. The transistor of claim 42 including a contact metallization to said source and drain. 46. The transistor of claim 42 wherein said dielectric is U-shaped. |
QUANTUM WELL TRANSISTOR USING HIGH DIELECTRIC CONSTANT DIELECTRIC LAYER

Background

This invention relates generally to the formation of quantum well transistors.

A quantum well is a potential well that confines particles in one dimension, forcing them to occupy a planar region. A first material, sandwiched between two layers of a material with a wider band gap than the first material, may form a quantum well. Quantum well or high electron mobility transistors (HEMTs) are field effect transistors with a junction between two materials with different band gaps as the channel. The junction may exhibit very low resistance or high electron mobility. A voltage applied to the gate may alter the conductivity of the junction.

Quantum well transistors may be prone to high gate leakage and parasitic series resistance. Particularly, quantum well transistors using elements from columns III through V of the periodic table may be prone to such problems. Examples of such materials include indium gallium arsenide/indium aluminum arsenide and indium antimonide/aluminum indium antimonide.

In current state of the art quantum well transistors, a direct Schottky metal gate may be deposited on a barrier layer to form the Schottky junction, which may be prone to high gate leakage. Also, the source and drain regions may be patterned and the source and drain contact metallization completed before gate patterning. The gate patterning is done as the last step in the process, which may result in non-self-aligned source drain regions. Such non-self-aligned source drain regions may be prone to parasitic series resistance. Devices with parasitic series resistance may exhibit poor performance.

Thus, there is a need for better ways to make quantum well transistors.

Brief Description of the Drawings

Figure 1 is an enlarged, cross-sectional view of one embodiment of the present invention; Figure 2 is an enlarged, cross-sectional view of the embodiment shown in Figure 1 at an early stage of manufacture in accordance with one embodiment of the present invention; Figure 3 is an enlarged, cross-sectional view of the embodiment shown in Figure 2 after subsequent processing in accordance with one embodiment of the present invention; Figure 4 is an enlarged, cross-sectional view corresponding to Figure 3 after subsequent processing in accordance with one embodiment of the present invention; Figure 5 is an enlarged, cross-sectional view corresponding to Figure 4 after subsequent processing in accordance with one embodiment of the present invention; Figure 6 is an enlarged, cross-sectional view corresponding to Figure 5 after subsequent processing in accordance with one embodiment of the present invention; Figure 7 is an enlarged, cross-sectional view corresponding to Figure 6 after subsequent processing in accordance with one embodiment of the present invention; Figure 8 is an enlarged, cross-sectional view corresponding to Figure 7 after subsequent processing in accordance with another embodiment of the present invention; Figure 9 is an enlarged, cross-sectional view corresponding to Figure 8 after subsequent processing in accordance with a depletion mode embodiment of the present invention; and Figure 10 is an enlarged, cross-sectional view corresponding to Figure 7 after subsequent processing in accordance with an enhancement mode embodiment of the present invention.

Detailed Description

Referring to Figures 1 and 10, a depletion (Figure 1) or enhancement mode (Figure 10) self-aligned source drain quantum well transistor may be formed with
a high dielectric constant dielectric layer 24 and a metal gate electrode 38 that acts as a Schottky gate metal. As used herein, "high dielectric constant" refers to dielectrics having dielectric constants of 10 or greater.

Over a silicon substrate 10 may be an accommodation layer 12. The accommodation layer 12 may be AlInSb with 15% aluminum in one embodiment. A germanium layer (not shown) may also be included over the silicon substrate 10, under the layer 12. The accommodation layer 12 functions to accommodate the lattice mismatch and to confine dislocations or defects within that layer 12.

Over the accommodation layer 12 may be formed a lower barrier layer 14 in accordance with one embodiment of the present invention. The lower barrier layer 14 may, for example, be formed of aluminum indium antimonide or indium aluminum arsenide, as two examples. The lower barrier layer 14 may be formed of a higher band gap material than the overlying quantum well 16.

Over the lower barrier layer 14 is formed the undoped quantum well 16. In one embodiment, the undoped quantum well 16 may be formed of indium antimonide or indium gallium arsenide, as two examples. Next, the upper barrier layer 20 may be formed. The upper barrier layer 20 may be formed of the same or different materials as the lower barrier layer 14. The upper barrier layer 20 may include a delta doped donor layer 18. The delta doping may be done using silicon or tellurium, as two examples. The doped donor layer 18 supplies carriers to the quantum well 16 for transport. The doped donor layer 18 is formed by allowing Te or Si dopants to flow into the MBE (Molecular Beam Epitaxy) chamber in a controlled fashion from a solid source.

Thus, the quantum well 16 is sandwiched between the upper and lower barrier layers 20 and 14. The upper barrier layer 20 may be an electron supplying layer whose thickness, along with the workfunction of the Schottky metal layer forming the gate electrode 38, determines the threshold voltage of the transistor.

The metal gate electrode 38 may be formed over a high dielectric constant dielectric material 26. The material 26 brackets the metal gate electrode 38 on three sides. The high dielectric constant layer 26 may, in turn, be bracketed by a self-aligned source drain contact metallization 22 and a spacer layer 28.

Fabrication of the depletion mode transistor, shown in Figure 1, and the enhancement mode transistor of Figure 10 may begin, as shown in Figure 2, by forming the structure up to and including an n+ doped layer 30. The layer 30 may include indium antimonide or indium gallium arsenide doped with Te and Si impurities. The layer 30 may be highly doped to later form the source drain regions in the finished transistor.

The multilayer epitaxial substrate 10 may be grown using molecular beam epitaxy or metal organic chemical vapor deposition, as two examples. Referring to Figure 3, a dummy gate 32 may be formed over the n+ doped layer 30 in accordance with one embodiment of the present invention. It may be formed after patterning and etch out of nitride, carbide, or oxide films (not shown). Advantageously, these films may be formed by low temperature deposition to preserve the integrity of the epitaxial layer structure. The dummy gate 32 may, for example, be formed of silicon nitride or metal.
The dummy gate 32 may be formed by patterning through either lithography and etching, in the case of a silicon nitride dummy gate 32, or through evaporation and liftoff in the case of a metal dummy gate 32, such as an aluminum metal dummy gate.

Referring next to Figure 4, low temperature silicon oxide, nitride, or carbide spacers 28 may be formed that bracket the dummy gate 32. The spacers 28 may be formed by a low temperature deposition technique, followed by anisotropic etching.

Turning next to Figure 5, the self-aligned source drain contact metallizations 22 may be formed by electron beam evaporation or reactive sputtering, either followed by a chemical mechanical planarization process, to create self-aligned contacts to the yet to be formed source drain regions in the layer 30. The source drain contact metallization 22 may, for example, be formed of titanium or gold.

Then, as shown in Figure 6, the dummy gate 32 may be selectively etched out using a wet etch. As a result, an opening 34 is formed. A metal dummy gate removal process may, for example, include a wet etch using phosphoric acid. For a nitride dummy gate, hydrochloric acid may be used. For a silicon dioxide dummy gate, a hydrofluoric acid etch can be used. The wet etch process is selective to the n+ doped layer 30.

Then, as shown in Figure 7 for a depletion mode device, a selective etch out of the n+ doped layer 30 may be achieved to form an inverted T-shaped opening having wings 36 and a base 34. Dry or wet etching may be utilized to form the wings 36. For example, the n+ doped layer 30 is selectively removed using a wet etch process such as citric acid plus peroxide. Atomic layer deposition of the high dielectric constant material 26 may be followed by electron beam evaporation or sputtering of a metal gate electrode 38. The gate electrode 38 may, for example, be platinum, tungsten, palladium, or molybdenum, to mention a few examples. The high dielectric constant dielectric 26 may, for example, be hafnium dioxide or zirconium dioxide, as two examples. A low temperature deposition process may be utilized with an organic precursor (such as an alkoxide precursor for hafnium dioxide deposition). The structure shown in Figure 8 may then be subjected to a chemical mechanical polish of the metal gate electrode 38 and the high dielectric constant dielectric 26 to achieve the depletion mode structure shown in Figure 9.

Right after the n+ doped layer 30 is etched out to form the opening having wings 36 and base 34, as shown in Figure 7, a further recess etch may be done through the electron supplying barrier layer 20, stopping just above the delta doped layer 18, to make an enhancement mode device as shown in Figure 10. A timed drive etch (not shown in Figure 7) may partially recess into the electron supplying barrier layer 20 of Figure 7 and under the spacers 28 to increase the threshold voltage of the transistor and form an enhancement mode device. The device layer structure survives the high dielectric constant deposition process. This may be followed by sputter deposition or electron beam deposition of the Schottky gate electrode 38. The gate electrode 38 workfunction may be chosen to be as high as possible to create an enhancement mode device.

Some embodiments of the present invention may achieve lower gate leakage from the incorporation of the high dielectric constant dielectric 26 in between the Schottky gate metal of the electrode 38 and the semiconductor barrier layer 20.
Lower parasitic series resistance may result, in some embodiments, from the highly doped source drain region self-aligned to the gate. In some embodiments, the recess etch of the electron supplying barrier layer 20 to the desired thickness forms an enhancement mode quantum well field effect transistor.

While the present invention has been described with respect to a limited number of embodiments, those skilled in the art will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover all such modifications and variations as fall within the true spirit and scope of this present invention. |
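The description above links the transistor's threshold voltage to the Schottky gate workfunction, the barrier thickness, and the delta doping. As a rough numerical illustration only, the sketch below applies a first-order, textbook charge-control estimate for a Schottky-gated, delta-doped quantum well FET with a high dielectric constant gate dielectric in series; it is not taken from the patent, and every numeric value (barrier height, band offset, sheet doping, thicknesses, permittivities) is an assumption chosen for illustration.

```python
# First-order threshold estimate for a Schottky-gated, delta-doped quantum
# well FET (textbook charge-control model; illustrative assumptions only,
# not values or a method taken from the patent text).

Q = 1.602e-19     # elementary charge, C
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def threshold_voltage(phi_b, delta_ec, n_delta, d_barrier,
                      eps_r_barrier, t_highk, eps_r_highk):
    """V_th ~ phi_B - dEc - q*n_delta*(t_hk/eps_hk + d_barrier/eps_b).

    phi_b: Schottky barrier height of the gate metal (V); delta_ec:
    conduction-band offset confining the well (eV); n_delta: delta-doping
    sheet density (1/m^2); d_barrier: gate-to-delta-doping distance (m);
    t_highk: high dielectric constant layer thickness (m).
    """
    drop = Q * n_delta * (t_highk / (eps_r_highk * EPS0)
                          + d_barrier / (eps_r_barrier * EPS0))
    return phi_b - delta_ec - drop

# Depletion mode: the full upper barrier separates gate and doping plane.
vt_dep = threshold_voltage(0.8, 0.35, 4e16, 15e-9, 12.0, 3e-9, 20.0)
# Enhancement mode: the recess etch thins the barrier, raising V_th.
vt_enh = threshold_voltage(0.8, 0.35, 4e16, 4e-9, 12.0, 3e-9, 20.0)
print(f"depletion-mode estimate:   V_th ~ {vt_dep:+.2f} V")
print(f"enhancement-mode estimate: V_th ~ {vt_enh:+.2f} V")
```

Consistent with the recess etch described above, shrinking the gate-to-doping distance raises the threshold voltage from a negative (depletion mode) value toward a positive (enhancement mode) value.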
If a monitoring and protection circuit associated with a voltage regulator supplying power to a CMOS circuit device can sense overcurrent levels precisely enough to determine that a fault has occurred, e.g., latch-up or a failed or shorted transistor, then this monitoring and protection circuit may automatically generate a fault alert signal and/or cycle power to the CMOS circuit device when an unexpected overcurrent occurs, e.g., CMOS circuit latch-up. The monitoring and protection circuit may be integrated with a voltage regulator, e.g., a low drop-out (LDO) voltage regulator. The monitoring and protection circuit may be integrated with a CMOS circuit device, e.g., a digital processor. The monitoring and protection circuit may also be a stand-alone device. |
1. An apparatus for monitoring and protecting a complementary metal oxide semiconductor (CMOS) circuit device, comprising:
a current measuring circuit having a measured current output;
a comparator having a first input coupled to the measured current output of the current measuring circuit;
a current trip setpoint circuit having a current trip setpoint output coupled to a second input of the comparator; and
a power switch controlled by the output of the comparator,
wherein the comparator compares the measured current from the current measuring circuit to the current trip set point, whereby the power switch is turned off when the measured current is greater than the current trip set point, and the power switch is turned on when the measured current is less than or equal to the current trip set point.

2. The apparatus of claim 1, wherein the current trip set point is programmable.

3. The apparatus of claim 1, wherein the power switch is adapted to supply power to a CMOS circuit device, and the power switch remains off for a determined time before being turned back on.

4. The apparatus of claim 3, wherein the determined time is long enough for the CMOS circuit device to come out of latch-up before power is reapplied.

5. The apparatus of claim 1, further comprising a watchdog timer that controls the power switch, wherein the watchdog timer turns off the power switch if the watchdog timer does not receive a reset signal within a determined time.

6. The apparatus of claim 1, wherein the current measuring circuit, the comparator, and the power switch are fabricated on a semiconductor integrated circuit die.

7. The apparatus of claim 6, wherein the semiconductor integrated circuit die is enclosed in an integrated circuit package.

8. The apparatus of claim 5, wherein the current measuring circuit, the comparator, the power switch, and the watchdog timer are fabricated on a semiconductor integrated circuit die.

9. The apparatus of claim 8, wherein the semiconductor integrated circuit die is enclosed in an integrated circuit package.

10. The apparatus of claim 1, further comprising a voltage regulator coupled to the current measuring circuit and the power switch.

11. The apparatus of claim 10, wherein the voltage regulator is a low drop-out (LDO) voltage regulator.

12. The apparatus of claim 10, wherein the current measuring circuit, the comparator, the power switch, and the voltage regulator are fabricated on a semiconductor integrated circuit die.

13. The apparatus of claim 5, further comprising a voltage regulator coupled to the current measuring circuit and the power switch.

14. The apparatus of claim 13, wherein the current measuring circuit, the comparator, the power switch, the watchdog timer, and the voltage regulator are fabricated on a semiconductor integrated circuit die.

15. The apparatus of claim 12, wherein the semiconductor integrated circuit die is enclosed in an integrated circuit package.

16. A digital system having automatic detection of a CMOS circuit device latch-up state and resetting of power to unlatch the CMOS circuit device, the system comprising:
a current monitoring and protection circuit; and
a CMOS circuit device coupled to the current monitoring and protection circuit and powered from the current monitoring and protection circuit, wherein the digital device supplies a current trip point value to the current monitoring and protection circuit such that when the current drawn by the CMOS circuit device is greater than the current trip point value, power is removed
from the CMOS circuit device.

17. The digital system of claim 16, wherein the current monitoring and protection circuit comprises:
a current measuring circuit having a measured current output;
a comparator having a first input coupled to the measured current output of the current measuring circuit;
a current trip setpoint circuit having a current trip setpoint output coupled to a second input of the comparator, wherein the current trip setpoint output is controlled by the current trip point value from the CMOS circuit device; and
a power switch controlled by an output of the comparator, the power switch supplying the power to the CMOS circuit device,
wherein the comparator compares the measured current from the current measuring circuit with the current trip setpoint output, whereby the power switch is turned off when the measured current is greater than the current trip setpoint output, and the power switch is turned on when the measured current is less than or equal to the current trip setpoint output.

18. The digital system of claim 16, wherein the current monitoring and protection circuit receives power from a voltage regulator.

19. The digital system of claim 18, wherein the voltage regulator is a low drop-out (LDO) voltage regulator.

20. The digital system of claim 16, wherein the CMOS circuit device is a digital processor.

21. The digital system of claim 20, wherein the digital processor is a microcontroller.

22. The digital system of claim 20, wherein the digital processor is a digital signal processor (DSP).

23. The digital system of claim 20, wherein the digital processor is a microcomputer.

24. The digital system of claim 20, wherein the digital processor is selected from the group consisting of an application specific integrated circuit (ASIC) and a programmable logic array (PLA).

25. The digital system of claim 17, further comprising a watchdog timer that controls the power switch, wherein the watchdog timer turns off the power switch if the watchdog timer does not receive a reset signal from the CMOS circuit device within a determined time.

26. The digital system of claim 17, wherein the power switch remains off for a time sufficient for the CMOS circuit device to come out of latch-up before the power is reapplied.

27. The digital system of claim 16, wherein the current monitoring and protection circuit and the CMOS circuit device are fabricated on a semiconductor integrated circuit die.

28. The digital system of claim 27, wherein the semiconductor integrated circuit die is enclosed in an integrated circuit package.

29. The digital system of claim 18, wherein the current monitoring and protection circuit, the CMOS circuit device, and the voltage regulator are fabricated on a semiconductor integrated circuit die.

30. The digital system of claim 29, wherein the semiconductor integrated circuit die is enclosed in an integrated circuit package.

31. A method of automatically detecting a CMOS circuit device latch-up state and resetting power to unlatch the CMOS circuit device, the method comprising:
monitoring current drawn by the CMOS circuit device; and
comparing the current drawn by the CMOS circuit device to a current trip point, wherein if the current drawn by the CMOS circuit device is greater than the current trip point, then power is disconnected from the CMOS circuit device for a determined time.

32. The method of claim 31, further comprising the step of programming the current trip point.

33. The method of claim 32, wherein the step of programming the current trip point is performed by the
CMOS circuit device.

34. The method of claim 31, further comprising the steps of:
resetting a watchdog timer; and
if the watchdog timer is not reset, disconnecting power from the CMOS circuit device for the determined time.

35. The method of claim 31, wherein the determined time is long enough for the CMOS circuit device to become unlatched before power is reapplied. |
Automatic detection of a complementary MOS (CMOS) circuit device in latch-up and resetting of power thereto

Technical field

The present disclosure relates to the detection of latch-up in CMOS circuit devices and the resetting thereof, and more particularly to the automatic detection of latch-up in a CMOS circuit device and the automatic cycling of power to the device.

Background

Complementary metal oxide semiconductor (CMOS) circuits are widely used in digital integrated circuit devices, such as digital processors and the like. However, CMOS circuits are susceptible to latch-up for a variety of reasons, such as electrical fast transients (EFT), electrostatic discharge (ESD), overvoltage conditions, ionizing radiation (e.g., in aerospace and military use), and the like. When latch-up occurs in a CMOS circuit, an abnormally high current may be drawn that may damage or destroy the CMOS circuit and may also damage or destroy the voltage regulator that supplies the CMOS circuit. Latch-up can render the CMOS circuit inoperative. The way to correct latch-up in a CMOS circuit is to cycle power to it, e.g., turn the power off and then back on.

Summary of the invention

There is a need for more robust CMOS circuit devices that can resist a variety of latch-up conditions, or recover from occurrences such as, for example and without limitation, single event upset (SEU) and/or single event latch-up (SEL). If the monitoring and protection circuit associated with the voltage regulator that supplies the CMOS circuit device can sense the overcurrent level precisely enough to determine whether a fault has occurred, e.g., latch-up or a damaged or shorted transistor, then this monitoring and protection circuit can automatically generate a fault alarm signal and/or cycle power to the CMOS circuit device when an unexpected overcurrent occurs (e.g., CMOS circuit latch-up). The monitoring and protection circuitry can be integrated with a voltage regulator, such as a low drop-out (LDO) voltage regulator. The monitoring and protection circuitry can be integrated with the CMOS circuit device (e.g., a digital processor). The monitoring and protection circuitry can also be a stand-alone device.

The operating current demand (load) of a CMOS circuit device can vary greatly during normal operation, so an indication of the expected current demand (e.g., the CMOS circuit device electrical load), or "status information," from the CMOS circuit device would be useful. This status information may indicate when the current limit should be changed and/or when it is appropriate to disable or enable current monitoring. Status information from the CMOS circuit device can also be used as the core of a watchdog timer function when monitoring proper operation of the CMOS circuit device.

For example, if the protection circuit detects excessive current (e.g., a CMOS latch-up condition) relative to the expected operating current obtained from the status information, then a power cycle may be initiated. A system reset can be generated if the watchdog timer function fails to respond within a certain time (e.g., the CMOS circuit device is not operating). The monitoring and protection circuit can also be used as a solid-state circuit breaker, which can have at least one current trip value, and the at least one current trip value can be programmed during operation of the CMOS circuit device or during system manufacturing and/or startup.
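The trip decision summarized above reduces to a single comparison. Below is a minimal behavioral sketch of that comparator/trip-setpoint rule; the function and value names are hypothetical (the disclosure describes hardware, and this only models its decision logic).

```python
# Behavioral sketch of the comparator / programmable trip-point rule.
# Hypothetical names and values; the disclosure describes hardware.

def power_switch_on(measured_current_a: float, trip_setpoint_a: float) -> bool:
    """True (switch on) while the measured current stays at or below the
    programmable trip set point; False (switch off) once it exceeds it."""
    return measured_current_a <= trip_setpoint_a

# Example with an illustrative 250 mA trip point.
assert power_switch_on(0.120, 0.250)      # normal operation: power stays on
assert not power_switch_on(0.900, 0.250)  # over-current (e.g., latch-up): trip
```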
In accordance with a particular embodiment of the present disclosure, an apparatus for monitoring and protecting a CMOS circuit device can include: a current measuring circuit having a measured current output; a comparator having a first input coupled to the measured current output of the current measuring circuit; a current trip setpoint circuit having a current trip setpoint output coupled to a second input of the comparator; and a power switch controlled by an output of the comparator, wherein the comparator compares the measured current from the current measuring circuit to the current trip set point, whereby the power switch is turned off when the measured current is greater than the current trip set point, and turned on when the measured current is less than or equal to the current trip set point. The current trip set point can be programmable. The power switch can be adapted to power the CMOS circuit device, and the power switch can remain off for a determined time before being turned back on. The determined time may be long enough for the CMOS circuit device to come out of latch-up before power is reapplied. A watchdog timer can be added to control the power switch, wherein the watchdog timer turns off the power switch if a reset signal is not received by the watchdog timer within a certain time. The current measuring circuit, the comparator, and the power switch can be fabricated on a semiconductor integrated circuit die. The semiconductor integrated circuit die can be enclosed in an integrated circuit package. The current measuring circuit, comparator, power switch, and watchdog timer can also be fabricated on a semiconductor integrated circuit die, which likewise can be enclosed in an integrated circuit package. A voltage regulator can be coupled to the current measuring circuit and the power switch. The voltage regulator can be a low drop-out (LDO) voltage regulator. The current measuring circuit, the comparator, the power switch, and the voltage regulator can be fabricated on a semiconductor integrated circuit die, as can the current measuring circuit, comparator, power switch, watchdog timer, and voltage regulator together.

In accordance with another particular embodiment of the present disclosure, a digital system has automatic detection of a CMOS circuit device latch-up state and resetting of power to unlatch the CMOS circuit device, and may include: a current monitoring and protection circuit; and a CMOS circuit device coupled to, and powered by, the current monitoring and protection circuit, wherein the digital device supplies a current trip point value to the current monitoring and protection circuit such that when the current drawn by the CMOS circuit device is greater than the current trip point value, power is removed from the CMOS circuit device.
The current monitoring and protection circuit can include: a current measuring circuit having a measured current output; a comparator having a first input coupled to the measured current output of the current measuring circuit; a current trip setpoint circuit having a current trip setpoint output coupled to a second input of the comparator, wherein the current trip setpoint output is controlled by the current trip point value from the CMOS circuit device; and a power switch controlled by the output of the comparator, the power switch supplying power to the CMOS circuit device, wherein the comparator compares the measured current from the current measuring circuit with the current trip setpoint output, whereby the power switch is turned off when the measured current is greater than the current trip setpoint output, and turned on when the measured current is less than or equal to the current trip setpoint output. The current monitoring and protection circuit can receive power from a voltage regulator. The voltage regulator can be a low drop-out (LDO) voltage regulator. A watchdog timer can control the power switch, wherein the watchdog timer turns off the power switch if a reset signal from the CMOS circuit device is not received by the watchdog timer within a certain time. The power switch can remain off for a time sufficient for the CMOS circuit device to come out of latch-up before power is reapplied. The current monitoring and protection circuit and the CMOS circuit device can be fabricated on a semiconductor integrated circuit die, which can be enclosed in an integrated circuit package. The current monitoring and protection circuit, the CMOS circuit device, and the voltage regulator can likewise be fabricated on a semiconductor integrated circuit die enclosed in an integrated circuit package.

In accordance with yet another particular embodiment of the present disclosure, a method of automatically detecting a CMOS circuit device latch-up state and resetting power to unlatch the CMOS circuit device can include: monitoring the current drawn by the CMOS circuit device; and comparing the current drawn by the CMOS circuit device to a current trip point, wherein if the current drawn by the CMOS circuit device is greater than the current trip point, then power is disconnected from the CMOS circuit device for a determined time. The method can further include the step of programming the current trip point. The step of programming the current trip point can be performed by the CMOS circuit device. The method can further include the steps of: resetting a watchdog timer; and, if the watchdog timer is not reset, disconnecting power from the CMOS circuit device for the determined time. The determined time may be long enough for the CMOS circuit device to become unlatched before power is reapplied.

DRAWINGS

A more complete understanding of the present disclosure may be obtained by reference to the accompanying drawing, in which FIG. 1 illustrates a schematic block diagram of a monitoring and protection circuit, a voltage regulator, and a digital processor in accordance with certain embodiments of the present disclosure.

While the disclosure is susceptible to various modifications and alternatives, specific embodiments are shown in the drawings and are described in detail herein.
It should be understood, however, that the description of specific exemplary embodiments herein is not intended to limit the disclosure to the particular forms disclosed, but on the contrary is intended to cover all modifications and equivalents.

DETAILED DESCRIPTION

Referring now to the drawings, the details of particular embodiments are illustrated. Like elements in the drawings are represented by like numerals, and similar elements are represented by like numerals with a different lowercase suffix.

Referring to FIG. 1, depicted is a schematic block diagram of a monitoring and protection circuit, a voltage regulator, and a digital processor in accordance with certain embodiments of the present disclosure. The monitoring and protection circuitry (generally represented by the numeral 104) can include current measurement circuitry 108, current trip setpoint circuitry 110, a comparator 112, and a power switch 114. A watchdog timer 116 can also control the power switch 114, as needed.

A voltage regulator 106 (e.g., a low drop-out (LDO) voltage regulator) can supply the desired voltage to the monitoring and protection circuit 104. A power source 150 can supply voltage and current to the regulator 106. The regulator 106 can be fabricated with the monitoring and protection circuitry 104 on an integrated circuit substrate, generally designated by the numeral 102.

A digital processor 118 (e.g., a microcomputer, a microcontroller, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a programmable logic array (PLA), and the like) can receive power (e.g., voltage and current) from the monitoring and protection circuit 104 (e.g., from the load side of the power switch 114). The digital processor 118 can have an output 132 indicative of an expected current draw, such that the current trip setpoint circuit 110 can apply a current trip point 130 to an input of the comparator 112. The digital processor 118 may also have an output 134 that may be used to reset the watchdog timer 116. The output 132 can also be used to reset the watchdog timer 116, in which case the output 134 can be eliminated. The monitoring and protection circuit 104 can be fabricated with the digital processor 118 on a single integrated circuit substrate, generally represented by the numeral 103. It is contemplated and within the scope of the present disclosure that the monitoring and protection circuit 104, the regulator 106, and/or the digital processor 118 can be fabricated on at least one integrated circuit substrate packaged in an integrated circuit package (not shown).

Whenever the measured current 128 from the current measuring circuit 108 exceeds the current trip point 130, the comparator 112 turns off the power switch 114 using the control line 136, thereby removing power (voltage) from the digital processor 118. If the CMOS circuitry of the digital processor 118 is in latch-up, then removing and reapplying power may allow the CMOS circuitry of the digital processor 118 to come out of latch-up and begin proper operation again. The amount of time suitable for clearing the latch-up condition (i.e., the time power is removed by the power switch 114) can be programmed into the monitoring and protection circuit 104.

If the current drawn during latch-up of the digital processor 118 is insufficient to exceed the value of the current trip point 130, then the watchdog timer 116 can control the power switch 114 (if the watchdog timer is not reset by the digital processor 118 in time), whereby power is removed and reapplied to allow the CMOS circuitry of the digital processor 118 to come out of latch-up and begin proper operation again.
Between the current sensing of the comparator 112 and the timeout of the watchdog timer 116, a latch-up condition can be detected, and recovered from, in the shortest possible time.

It is contemplated and within the scope of the present disclosure that the monitoring and protection circuit 104 can also function as a solid-state circuit breaker that can have at least one current trip value, and the at least one current trip value can be programmed during operation of the digital processor 118 or during system manufacturing and/or startup.

While embodiments of this disclosure have been depicted, described, and defined by reference to example embodiments, such references do not imply a limitation on the disclosure, and no such limitation is to be inferred. The subject matter disclosed is capable of considerable modification, alteration, and equivalents in form and function, as will occur to those ordinarily skilled in the pertinent art and having the benefit of this disclosure. The depicted and described embodiments of the present disclosure are examples only, and are not exhaustive of the scope of the disclosure. |
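Tying the FIG. 1 description together, the following is a minimal sketch of one monitoring pass in which either the over-current trip or a watchdog timeout forces a power cycle long enough to clear latch-up. All names, timings, and current values are hypothetical illustrations, not values from the disclosure, and the sequential loop merely models trip paths that operate concurrently in hardware.

```python
import time

# Behavioral sketch of the monitoring-and-protection pass of FIG. 1.
# Hypothetical names/values; in hardware the two trip paths run concurrently.

TRIP_SETPOINT_A = 0.250    # programmable current trip point (output 130)
WATCHDOG_TIMEOUT_S = 1.0   # watchdog timer 116 period
POWER_OFF_HOLD_S = 0.010   # time power stays removed, long enough to unlatch

def protection_pass(read_current, last_watchdog_reset, set_power_switch):
    """Run one monitoring pass; return True if power was cycled."""
    overcurrent = read_current() > TRIP_SETPOINT_A
    wdt_expired = (time.monotonic() - last_watchdog_reset()) > WATCHDOG_TIMEOUT_S
    if overcurrent or wdt_expired:
        set_power_switch(False)        # open power switch 114
        time.sleep(POWER_OFF_HOLD_S)   # hold off so the CMOS device unlatches
        set_power_switch(True)         # reapply power to digital processor 118
        return True
    return False
```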
A microelectromechanical system (MEMS) bond release structure is provided for the manufacture of three-dimensional integrated circuit (3D IC) devices with two or more tiers. The MEMS bond release structure includes a MEMS sacrificial release layer, which may have a pillar or post structure or, alternatively, be a continuous sacrificial layer for bonding and release. |
CLAIMS

WHAT IS CLAIMED IS:

1. A microelectromechanical system (MEMS) bond release structure, comprising:
a carrier wafer;
a MEMS sacrificial release layer on the carrier wafer;
a semiconductor oxide layer on the MEMS sacrificial release layer; and
an active semiconductor layer on the semiconductor oxide layer.

2. The structure of claim 1, wherein the semiconductor oxide layer comprises a silicon dioxide (SiO2) layer.

3. The structure of claim 2, wherein the active semiconductor layer comprises an active silicon layer on the SiO2 layer.

4. The structure of claim 1, wherein the MEMS sacrificial release layer comprises a plurality of MEMS posts spaced apart from one another.

5. The structure of claim 4, wherein each of the MEMS posts comprises a sacrificial release material.

6. The structure of claim 5, wherein the sacrificial release material comprises a material selected from the group consisting of molybdenum (Mo), germanium (Ge), germanium oxide (GeOx) and silicon oxide (SiOx).

7. The structure of claim 5, wherein each of the MEMS posts further comprises an oxide material on the sacrificial release material for bonding the sacrificial release material with the semiconductor oxide layer.

8. The structure of claim 4, wherein the MEMS posts comprise one or more inner MEMS posts and one or more outer MEMS posts at least partially surrounding said one or more inner MEMS posts on the carrier wafer.

9. The structure of claim 1, wherein the MEMS sacrificial release layer comprises a continuous sacrificial release layer on the carrier wafer.

10. The structure of claim 1, wherein the semiconductor oxide layer and the active semiconductor layer together form a buried oxide (BOX) layer.

11. A method of making a microelectromechanical system (MEMS) bond release structure, comprising:
providing a carrier wafer;
providing a MEMS sacrificial release layer on the carrier wafer;
providing a semiconductor oxide layer on the MEMS sacrificial release layer; and
providing an active semiconductor layer on the semiconductor oxide layer.

12. The method of claim 11, wherein the step of providing the MEMS sacrificial release layer comprises:
depositing a sacrificial release material on the carrier wafer; and
depositing an oxide material on the sacrificial release material.

13. The method of claim 12, wherein the step of depositing the sacrificial release material comprises the step of depositing a material selected from the group consisting of molybdenum (Mo), germanium (Ge), germanium oxide (GeOx) and silicon oxide (SiOx).

14. The method of claim 12, wherein the step of depositing the sacrificial release material comprises the step of depositing the sacrificial release material by chemical vapor deposition (CVD), plasma-enhanced chemical vapor deposition (PECVD), or physical vapor deposition (PVD).

15. The method of claim 12, wherein the step of depositing the oxide material comprises the step of depositing the oxide material by chemical vapor deposition (CVD), plasma-enhanced chemical vapor deposition (PECVD), or physical vapor deposition (PVD).

16. The method of claim 11, further comprising the steps of:
providing a transfer substrate comprising a bulk wafer and at least one semiconductor oxide layer on a surface of the bulk wafer; and
applying H+ delta implantation to a first portion of the bulk wafer near the surface of the bulk wafer in contact with the semiconductor oxide layer.

17.
The method of claim 16, further comprising bonding the transfer substrate to a carrier substrate comprising the carrier wafer, the MEMS sacrificial release layer, the semiconductor oxide layer and the active semiconductor layer.

18. The method of claim 17, wherein the step of bonding the transfer substrate to the carrier substrate comprises the step of bonding the semiconductor oxide layer of the transfer substrate to the MEMS sacrificial release layer.

19. The method of claim 18, further comprising:
separating a second portion of the bulk wafer of the transfer substrate from the carrier substrate; and
leaving the first portion of the bulk wafer with H+ delta implantation in contact with the semiconductor oxide layer.

20. The method of claim 19, wherein the semiconductor oxide layer of the transfer substrate comprises silicon dioxide (SiO2), and wherein the bulk wafer of the transfer substrate comprises silicon (Si).

21. A three-dimensional integrated circuit device, comprising:
a substrate;
a first tier of one or more integrated circuits comprising one or more metal layers and one or more inter-layer dielectric (ILD) layers;
a second tier of one or more integrated circuits comprising one or more metal layers and one or more ILD layers; and
a first buried oxide (BOX) layer in contact with at least one of the ILD layers in the second tier of one or more integrated circuits, wherein one or more portions of the first BOX layer and one or more portions of said at least one of the ILD layers in the second tier of one or more integrated circuits are removed to form one or more vias through the first BOX layer and said at least one of the ILD layers in the second tier.

22. The device of claim 21, further comprising one or more metal interconnects over said one or more vias.

23. The device of claim 22, further comprising one or more metal layers and one or more ILD layers on the metal interconnects.

24. The device of claim 23, further comprising a plurality of bonding pads on said one or more metal layers.

25. The device of claim 21, wherein the first BOX layer comprises a silicon (Si) layer and a silicon dioxide (SiO2) layer.

26. The device of claim 21, wherein the first tier of one or more integrated circuits further comprises a first set of one or more bonding pads, wherein the second tier of one or more integrated circuits further comprises a second set of one or more bonding pads, and wherein at least one of the bonding pads in the first set is connected to at least one of the bonding pads in the second set.

27. The device of claim 21, further comprising:
a third tier of one or more integrated circuits comprising one or more metal layers and one or more ILD layers; and
a second BOX layer in contact with at least one of the ILD layers in the third tier of one or more integrated circuits, wherein one or more portions of the second BOX layer and one or more portions of said at least one of the ILD layers in the third tier of one or more integrated circuits are removed to form one or more vias through the second BOX layer and said at least one of the ILD layers in the third tier.

28. The device of claim 27, wherein the second tier of one or more integrated circuits further comprises a second set of one or more bonding pads, wherein the third tier of one or more integrated circuits further comprises a third set of one or more bonding pads, and wherein at least one of the bonding pads in the second set is connected to at least one of the bonding pads in the third set.

29.
The device of claim 21, wherein the substrate comprises a silicon-on-insulator (SOI) substrate.

30. The device of claim 21, wherein the substrate comprises a silicon (Si) bulk handler.

31. A method of making a three-dimensional integrated circuit device, comprising:
providing a substrate;
forming a first tier of one or more integrated circuits comprising one or more metal layers and one or more inter-layer dielectric (ILD) layers;
forming a second tier of one or more integrated circuits comprising one or more metal layers and one or more ILD layers;
forming a first buried oxide (BOX) layer in contact with at least one of the ILD layers in the second tier of one or more integrated circuits; and
forming one or more vias through the first BOX layer and said at least one of the ILD layers in the second tier.

32. The method of claim 31, wherein the step of forming the second tier of one or more integrated circuits comprises aligning one or more bonding pads in the second tier of one or more integrated circuits with one or more bonding pads in the first tier of one or more integrated circuits.

33. The method of claim 32, wherein the step of forming the second tier of one or more integrated circuits further comprises bonding said one or more bonding pads in the second tier of one or more integrated circuits with said one or more bonding pads in the first tier of one or more integrated circuits.

34. The method of claim 31, wherein the step of forming the second tier of one or more integrated circuits comprises forming the second tier of one or more integrated circuits on a microelectromechanical system (MEMS) bond release structure comprising a carrier wafer, a MEMS sacrificial release layer on the carrier wafer, and the first BOX layer on the MEMS sacrificial release layer.

35. The method of claim 34, further comprising separating the carrier wafer from the second tier of one or more integrated circuits by releasing the MEMS sacrificial release layer.

36. The method of claim 35, further comprising:
forming one or more metal interconnects over said one or more vias;
forming one or more metal layers and one or more ILD layers on the metal interconnects; and
forming a plurality of bonding pads on said one or more metal layers.

37. The method of claim 31, further comprising:
forming a third tier of one or more integrated circuits comprising one or more metal layers and one or more ILD layers; and
forming a second BOX layer in contact with at least one of the ILD layers in the third tier of one or more integrated circuits.

38. The method of claim 37, further comprising forming one or more vias through the second BOX layer and said at least one of the ILD layers in the third tier.

39. The method of claim 37, wherein the step of forming the third tier of one or more integrated circuits comprises forming the third tier of one or more integrated circuits on a microelectromechanical system (MEMS) bond release structure comprising a carrier wafer, a MEMS sacrificial release layer on the carrier wafer, and the second BOX layer on the MEMS sacrificial release layer.

40. The method of claim 39, further comprising separating the carrier wafer from the third tier of one or more integrated circuits by releasing the MEMS sacrificial release layer. |
MICROELECTROMECHANICAL SYSTEM (MEMS) BOND RELEASE STRUCTURE AND METHOD OF WAFER TRANSFER FOR THREE-DIMENSIONAL INTEGRATED CIRCUIT (3D IC) INTEGRATION

Field of Disclosure

[0001] Various embodiments described herein relate to three-dimensional integrated circuit (3D IC) devices, and more particularly, to 3D IC stacking using a microelectromechanical system (MEMS) bond release structure.

Background

[0002] Three-dimensional circuit integration by stacking integrated circuits in multiple tiers allows circuit designers to achieve benefits of improved power, performance, area and cost (PPAC) beyond the Moore's law scaling limit. Various schemes of three-dimensional integrated circuit (3D IC) stacking, including silicon-in-package (SiP) 3D IC stacking schemes such as wire-bond, flip-chip bond, through-silicon via (TSV) and silicon interposer technologies, have been developed in order to achieve higher densities in circuitry, inter-tier links and vias. 3D ICs with multi-tier stacking are desirable in devices in which form factor requirements are stringent, such as smartphones and other mobile devices. In addition to conventional SiP 3D IC stacking schemes, sequential monolithic 3D IC (sM3DIC) technology has been developed. In sM3DIC, a single crystal semiconductor layer is sequentially integrated and bonded onto a finished lower-tier complementary metal oxide semiconductor (CMOS) wafer, and an upper-tier CMOS is then built upon it.

[0003] The sM3DIC technology is currently considered to have the potential of achieving huge PPAC benefits with high inter-tier link/via densities, on the order of more than 1,000,000 links per square millimeter. However, the sM3DIC technology currently faces several significant process integration challenges that need to be overcome before it can become commercially feasible. Such challenges may include, for example, low thermal budget/process requirements for upper-tier source/drain (S/D) ohmic contact, channel/well dopant activation, and S/D recrystallization, and potential contamination problems related to copper interconnect processes when the lower-tier wafer completed by a back end-of-line (BEOL) process is brought to the front end-of-line (FEOL).

[0004] Another 3D IC stacking scheme, called parallel monolithic 3D IC (pM3DIC), may be capable of achieving inter-tier link/via densities on the order of about 100,000 to 1,000,000 links per square millimeter. In pM3DIC, a wafer-to-wafer (W2W) hybrid bonding (metal-to-metal and oxide-to-oxide fusion bonding) technique is used, which includes a high-precision W2W alignment process having a tolerance of less than 0.5 μm in combination with a very thin upper-tier wafer having a thickness of less than 5 μm after removal of the bulk silicon. The high-precision W2W alignment process allows the landing pad size to be reduced, while the very thin upper-tier wafer allows the size of through-silicon and through-oxide inter-tier vias to be reduced, thereby increasing the inter-tier link/via density.

[0005] Even though the pM3DIC approach is currently considered to be capable of offering an intermediate level of inter-tier link/via density within a shorter development period, significant process challenges may still exist.
For example, while it is possible to thin the upper-tier wafer down to 5 μm or less by using existing wafer thinning techniques, such as mechanical wafer backgrinding including a coarse grinding and a fine polish followed by chemical-mechanical polish (CMP), CMOS device characteristics are found to drift when the wafer is thinned down to 25 μm or less due to particle-induced stress impact on the CMOS device during the mechanical grinding process in the bumping line. Moreover, with existing mechanical wafer grinding and CMP technologies, it may still be difficult to achieve a reasonable total thickness variation (TTV) of 1 μm or less.

[0006] Another approach for wafer thinning for a CMOS imager utilizes a selective wet etch on a P+ etch stop layer. However, such an approach may present challenges for obtaining a reasonable process window to control precise and uniform layer thickness, to control defect density, and to manage boron doping diffusion during the remaining CMOS process. Alternatively, a silicon-on-insulator (SOI) wafer may provide an acceptable solution for precise wafer thinning down to the "buried oxide" (BOX) layer, that is, a layer including a silicon (Si) layer and a silicon dioxide (SiO2) layer, processed by coarse and fine grinding, followed by CMP, and then followed by selective wet etch of Si and SiO2. The SOI wafer may be used as the starting wafer for the upper tier. However, once the wafer is processed by mechanical grinding through the bumping line, the wafer may often be contaminated with heavy metals such as gold, silver, tin, or other metals in practice. With heavy metal contamination, the wafer can no longer be practically processed in the BEOL to add additional backside metals with fine pitch metal layers, thus losing 3D integration flexibility in terms of interconnect configurations.
Moreover, other factors such as wafer cost, material utilization, and throughput considerations, for example, may not be favorable for pM3DIC integration.

SUMMARY

[0007] Exemplary embodiments are directed to a microelectromechanical system (MEMS) bond release structure for wafer transfer and a method of making the same, and a three-dimensional integrated circuit device and a method of making the same by using the MEMS bond release structure for wafer transfer.

[0008] In an embodiment, a microelectromechanical system (MEMS) bond release structure is provided, the structure comprising: a carrier wafer; a MEMS sacrificial release layer on the carrier wafer; a semiconductor oxide layer on the MEMS sacrificial release layer; and an active semiconductor layer on the semiconductor oxide layer.

[0009] In another embodiment, a method of making a microelectromechanical system (MEMS) bond release structure is provided, the method comprising: providing a carrier wafer; providing a MEMS sacrificial release layer on the carrier wafer; providing a semiconductor oxide layer on the MEMS sacrificial release layer; and providing an active semiconductor layer on the semiconductor oxide layer.

[0010] In another embodiment, a three-dimensional integrated circuit device is provided, the device comprising: a substrate; a first tier of one or more integrated circuits comprising one or more metal layers and one or more inter-layer dielectric (ILD) layers; a second tier of one or more integrated circuits comprising one or more metal layers and one or more ILD layers; and a first BOX layer in contact with at least one of the ILD layers in the second tier of one or more integrated circuits, wherein one or more portions of the first BOX layer and one or more portions of said at least one of the ILD layers in the second tier of one or more integrated circuits are removed to form one or more vias through the first BOX layer and said at least one of the ILD layers in the second tier.

[0011] In yet another embodiment, a method of making a three-dimensional integrated circuit device is provided, the method comprising: providing a substrate; forming a first tier of one or more integrated circuits comprising one or more metal layers and one or more inter-layer dielectric (ILD) layers; forming a second tier of one or more integrated circuits comprising one or more metal layers and one or more ILD layers; forming a first BOX layer in contact with at least one of the ILD layers in the second tier of one or more integrated circuits; and forming one or more vias through the first BOX layer and said at least one of the ILD layers in the second tier.

BRIEF DESCRIPTION OF THE DRAWINGS

[0012] The accompanying drawings are presented to aid in the description of embodiments and are provided solely for illustration of the embodiments and not limitations thereof.

[0013] FIG. 1 is a cross-sectional view of a carrier wafer and a microelectromechanical system (MEMS) sacrificial release layer, illustrating an embodiment of a structure in the initial step of fabricating a MEMS bond release structure.

[0014] FIG. 2 is a cross-sectional view of a carrier wafer, a MEMS sacrificial release layer, and an oxide layer, illustrating an embodiment of a structure in the second step of fabricating a MEMS bond release structure.

[0015] FIG. 3 is a cross-sectional view of a plurality of MEMS posts or pillars on a carrier wafer, illustrating an embodiment of a structure in the third step of fabricating a MEMS bond release structure.

[0016] FIG.
4A is a cross-sectional view of an embodiment of a transfer substrate, which is initially provided separately from the carrier wafer as shown in FIGs. 1-3, for fabricating a MEMS bond release structure.

[0017] FIG. 4B is a cross-sectional view of an embodiment of a carrier substrate comprising the structure of FIG. 3 before the transfer substrate is bonded to the carrier substrate.

[0018] FIG. 5A is a cross-sectional view of the embodiment of the transfer substrate of FIG. 4A flipped upside down before it is bonded to the carrier substrate.

[0019] FIG. 5B is a cross-sectional view of the carrier substrate of FIG. 4B ready for accepting bonding of the flipped-over transfer substrate of FIG. 5A.

[0020] FIG. 6 is a cross-sectional view of an embodiment of a bonded structure after the flipped-over transfer substrate is bonded to the carrier substrate.

[0021] FIG. 7 is a cross-sectional view of an embodiment showing separation of a portion of the transfer substrate from the carrier substrate while leaving another portion of the transfer substrate intact with the carrier substrate.

[0022] FIG. 8 is a cross-sectional view of a MEMS pillar/post bond release structure after separation of a portion of the transfer substrate.

[0023] FIG. 9 is a cross-sectional view of an embodiment of a finished MEMS pillar/post bond release structure.

[0024] FIGs. 10A and 10B are cross-sectional and top views, respectively, of an embodiment of a MEMS post/pillar bond release structure in which outer and inner MEMS posts or pillars have different widths.

[0025] FIG. 11 is a cross-sectional view of an alternate embodiment of a MEMS bond release structure with a continuous MEMS sacrificial release layer.

[0026] FIG. 12 is a cross-sectional view of a first tier (Tier 1) of integrated circuits prepared before one or more additional tiers of integrated circuits are stacked on Tier 1.

[0027] FIG. 13 is a cross-sectional view of a second tier (Tier 2) of integrated circuits prepared on a MEMS bond release structure, embodiments of which are described above with references to FIGs. 1-11, before Tier 2 is stacked on Tier 1 of integrated circuits as shown in FIG. 12.

[0028] FIG. 14 is a cross-sectional view illustrating the alignment of Tier 1 and Tier 2 of integrated circuits as shown in FIGs. 12 and 13 before they are bonded together.

[0029] FIG. 15 is a cross-sectional view illustrating wafer-to-wafer (W2W) bonding of Tier 1 and Tier 2 of integrated circuits as shown in FIG. 14.

[0030] FIG. 16 is a cross-sectional view illustrating an embodiment of a two-tier 3D IC after the MEMS sacrificial release layer of the MEMS bond release structure is removed.

[0031] FIG. 17 is a cross-sectional view of the 3D IC of FIG. 16 after removing the remaining thin oxide layer on the BOX layer to form a smooth top surface.

[0032] FIG. 18 is a cross-sectional view of the 3D IC of FIG. 17 after vias are formed in the BOX layer and the inter-layer dielectric (ILD) layer directly beneath the BOX layer in Tier 2.

[0033] FIG. 19 is a cross-sectional view of the 3D IC of FIG. 18 after metal interconnects are formed as part of an additional metal layer over the vias in Tier 2.

[0034] FIG. 20 is a cross-sectional view of the 3D IC of FIG. 19 after additional ILD layers are formed on the additional metal layer over the vias.

[0035] FIG. 21 is a cross-sectional view of an embodiment of a three-tier 3D IC, in which Tier 2 and Tier 1 of integrated circuits are formed and combined together by using the MEMS bond release structure.
DETAILED DESCRIPTION

[0036] Aspects of the disclosure are described in the following description and related drawings directed to specific embodiments. Alternate embodiments may be devised without departing from the scope of the disclosure. Additionally, well known elements will not be described in detail or will be omitted so as not to obscure the relevant details of the disclosure.

[0037] The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments. Likewise, the term "embodiments" does not require that all embodiments include the discussed feature, advantage or mode of operation.

[0038] The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the embodiments. As used herein, the singular forms "a," "an," and "the," are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes," or "including," when used herein, specify the presence of stated features, integers, steps, operations, elements, or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, or groups thereof. Moreover, it is understood that the word "or" has the same meaning as the Boolean operator "OR," that is, it encompasses the possibilities of "either" and "both" and is not limited to "exclusive or" ("XOR"), unless expressly stated otherwise.

[0039] FIG. 1 is a cross-sectional view of a carrier wafer 102 and a microelectromechanical system (MEMS) sacrificial release layer 104 on top of the carrier wafer 102, illustrating an embodiment of a structure in the initial step of fabricating a MEMS bond release structure. In an embodiment, the carrier wafer 102 comprises a silicon wafer. The MEMS sacrificial release layer 104 may comprise a material such as molybdenum (Mo), germanium (Ge), germanium oxide (GeOx), silicon oxide (SiOx) including silicon dioxide (SiO2), or other types of sacrificial material. In an embodiment, the MEMS sacrificial release layer 104 may be provided on the carrier wafer 102 by using a conventional deposition process, such as a chemical vapor deposition (CVD), plasma-enhanced chemical vapor deposition (PECVD), or physical vapor deposition (PVD) process, for example.

[0040] FIG. 2 is a cross-sectional view of the carrier wafer 102, the MEMS sacrificial release layer 104 on top of the carrier wafer 102, and an oxide layer 106 for oxide-to-oxide fusion bonding on top of the MEMS sacrificial release layer 104, illustrating an embodiment of a structure in the second step of fabricating the MEMS bond release structure. In an embodiment, the oxide layer 106 comprises a thin oxide material such as silicon dioxide (SiO2) for oxide-to-oxide bonding. In an embodiment, the oxide layer 106 may be provided on the MEMS sacrificial release layer 104 by using a conventional deposition process such as a CVD process, for example. In a further embodiment, the oxide layer 106 may be provided on the MEMS sacrificial release layer 104 by a plasma-enhanced chemical vapor deposition (PECVD) process, for example.

[0041] FIG. 3 is a cross-sectional view of a MEMS post/pillar bond release structure having a plurality of MEMS posts or pillars 108a, 108b, 108c, ...
on the carrier wafer 102, illustrating an embodiment of a structure in the third step of fabricating the MEMS bond release structure. In an embodiment, each of the MEMS posts or pillars 108a, 108b, 108c, ... comprises a MEMS sacrificial release layer 104 on top of the carrier wafer 102 and an oxide layer 106 on top of the MEMS sacrificial release layer 104. In an embodiment, the MEMS posts or pillars 108a, 108b, 108c, ... as shown in FIG. 3 may be formed by patterning and etching the continuous MEMS sacrificial release layer 104 and the oxide layer 106 as shown in FIG. 2, for example.

[0042] Although the cross-sectional view of FIG. 3 shows a plurality of MEMS posts or pillars 108a, 108b, 108c, ... as being substantially identical to one another with substantially equal spacing, the MEMS posts or pillars on a given carrier wafer need not have the same width, and spacings between adjacent MEMS posts or pillars need not be identical, as will be described below with reference to an embodiment as shown in FIGs. 10A and 10B. Moreover, in an alternate embodiment, MEMS posts or pillars need not be fabricated, and the MEMS sacrificial release layer 104 may be formed instead as a continuous layer, which will be described below with reference to an embodiment as shown in FIG. 11.

[0043] FIG. 4A is a cross-sectional view of an embodiment of a transfer substrate 200, which is initially provided separately from the carrier wafer 102 as shown in FIGs. 1-3. In the embodiment shown in FIG. 4A, the transfer substrate 200 comprises a bulk wafer 202 having opposite surfaces 204 and 206, and two semiconductor oxide layers 208 and 210 disposed on the surfaces 204 and 206 of the bulk wafer 202, respectively. In this embodiment, the two semiconductor oxide layers 208 and 210 are thermally oxidized and positioned to sandwich the bulk wafer 202. Alternatively, only one semiconductor oxide layer 208 may be provided, on the surface 204 of the bulk wafer 202. In an embodiment, the bulk wafer 202 of the transfer substrate 200 comprises a silicon wafer, whereas each of the semiconductor oxide layers 208 and 210 comprises SiO2. In an embodiment, the SiO2 layers 208 and 210 may be formed on the bulk silicon wafer 202 by thermally oxidizing the surfaces of the bulk silicon wafer 202, for example.

[0044] In an embodiment, a dopant is implanted into a portion 212 near the surface 204 of the bulk wafer 202 in contact with the semiconductor oxide layer 208. In an embodiment in which the bulk wafer 202 is sandwiched by two semiconductor oxide layers 208 and 210 as shown in FIG. 4A, dopant implantation may be applied to only a portion near one of the surfaces of the bulk wafer in contact with one of the semiconductor oxide layers, for example, the portion 212 near the surface 204 of the bulk wafer 202 in contact with the semiconductor oxide layer 208. In an embodiment, an ion implantation process such as H+ delta implantation may be applied to the portion 212 near the surface 204 of the bulk wafer 202, for example. FIG. 4B is a cross-sectional view of a carrier substrate 300, which includes the carrier wafer 102 and the plurality of MEMS posts or pillars 108a, 108b, 108c, ... each having a MEMS sacrificial release layer 104 and an oxide layer 106 as shown in FIG. 3, before the transfer substrate 200 of FIG. 4A is flipped over and bonded to the carrier substrate 300 of FIG.
[0045] In practice, it is usually easier to implant a dopant from the top surface 204 rather than the bottom surface 206 of the bulk wafer 202 of the transfer substrate 200, the initial orientation of which is shown in FIG. 4A, consistent with the initial orientation of the MEMS post/pillar bond release structure as shown in FIG. 4B. In an embodiment, the transfer substrate 200 is flipped upside down, as shown in the cross-sectional view of FIG. 5A, before it is bonded to the carrier substrate 300, the cross-sectional view of which is shown in FIG. 5B.
[0046] FIG. 6 is a cross-sectional view of an embodiment of a bonded structure, which is a combination of the carrier substrate 300 and the transfer substrate 200 after the flipped-over transfer substrate 200 is bonded to the carrier substrate 300. In the embodiment shown in FIG. 6, the semiconductor oxide layer 208, which is in contact with the surface 204 of the bulk wafer 202 to which H+ delta implantation was applied, is directly bonded to the oxide layer 106 of each of the MEMS posts or pillars 108a, 108b, 108c, ... .
[0047] FIG. 7 is a cross-sectional view of an embodiment showing separation of a portion of the transfer substrate 200 from the carrier substrate 300, while leaving the H+ delta implanted portion 212 of the bulk wafer 202 and the semiconductor oxide layer 208 on the surface 204 of the H+ delta implanted portion 212 of the bulk wafer 202 intact with the carrier substrate 300. In an embodiment, an undoped portion 220 of the bulk wafer 202 and the semiconductor oxide layer 210 on the surface 206 opposite the H+ delta implanted portion 212 of the bulk wafer 202 are separated from the H+ delta implanted portion 212 of the bulk wafer 202. Separation of the undoped portion 220 of the bulk wafer 202 from the H+ delta implanted portion 212 of the bulk wafer 202 may be achieved by cleavage, for example.
[0048] FIG. 8 is a cross-sectional view of a MEMS pillar/post bond release structure after the undoped portion of the bulk wafer of the transfer substrate is separated or cleaved from the H+ delta implanted portion 212 of the bulk wafer 202. As shown in FIG. 8, the H+ delta implanted portion 212 of the bulk wafer 202 and the semiconductor oxide layer 208 on the surface 204 of the H+ delta implanted portion 212 of the bulk wafer 202 are now integrated parts of the MEMS post/pillar bond release structure, which also includes the carrier wafer 102 and the plurality of MEMS posts or pillars 108a, 108b, 108c, ... each having a MEMS sacrificial release layer 104 and a thin oxide layer 106.
[0049] In an embodiment, the semiconductor oxide layer 208, which may comprise an SiO2 layer, is directly positioned on the thin oxide layer 106 of each of the MEMS posts or pillars 108a, 108b, 108c, ... . In an embodiment in which the bulk wafer 202 of the transfer substrate 200 comprises silicon and the semiconductor oxide layer 208 of the transfer substrate 200 comprises SiO2, the H+ delta implanted portion 212 of the bulk wafer 202 and the semiconductor oxide layer 208 together form a silicon-on-insulator (SOI) substrate 400. Such an SOI substrate 400 may also be regarded as an SiO2 BOX layer in the fabrication of 3D ICs, which will be described below with reference to FIGs. 13-21.
[0050] FIG. 9 is a cross-sectional view of an embodiment of a finished MEMS pillar/post bond release structure, after the SOI substrate 400 is subjected to wafer surface finishing processes.
In an embodiment, the finishing processes may include, for example, a post-bonding chemical mechanical polish (CMP) process to smooth the top surface 230 of the H+ delta implanted portion 212 of the bulk wafer 202 resulting from the separation or cleavage of the undoped portion of the bulk wafer from the MEMS pillar/post bond release structure as described above with reference to FIG. 7. In a further embodiment, the finishing processes may also include an ozone oxidation treatment for the SOI substrate 400, for example.
[0051] FIGs. 10A and 10B are side/cross-sectional and top views, respectively, of an embodiment of a MEMS post/pillar bond release structure across a semiconductor wafer, in which some of the MEMS posts or pillars may have different widths. In FIGs. 10A and 10B, a plurality of inner MEMS posts or pillars 108a, 108b, 108c, ... are provided as part of the MEMS post/pillar bond release structure, similar to the structure described above with reference to FIG. 9. In addition to the inner MEMS posts or pillars 108a, 108b, 108c, ..., FIGs. 10A and 10B also show a plurality of outer MEMS posts or pillars 150a, 150b, ... surrounding the inner MEMS posts or pillars 108a, 108b, 108c, ... . In an embodiment, each of the outer MEMS posts or pillars 150a, 150b, ... has a width greater than the width of each of the inner MEMS posts or pillars 108a, 108b, 108c, ... on the carrier wafer 102 to provide sufficient structural support for the bonded wafers during the sacrificial layer release process. Other than having different widths, the outer MEMS posts or pillars 150a, 150b, ... have the same two-layer structure as that of the inner MEMS posts or pillars 108a, 108b, 108c, ..., including a MEMS sacrificial release layer 104 and a thin oxide layer 106. In a further embodiment, a sealing ring 160 is also provided along the outer perimeter of the carrier wafer 102. In an embodiment, the sealing ring 160 also has the same two-layer structure as that of the inner and outer MEMS posts or pillars, including a MEMS sacrificial release layer 104 and a thin oxide layer 106.
[0052] FIG. 11 is a cross-sectional view of an alternate embodiment of a MEMS bond release structure. Instead of the MEMS posts or pillars described above, the MEMS bond release structure in the embodiment shown in FIG. 11 includes a continuous MEMS sacrificial release layer 504 on the carrier wafer 102 to form a carrier substrate 500. In an embodiment, the SOI substrate 400, which comprises a semiconductor oxide layer 208 and a layer of the bulk wafer 202, which may be doped by an H+ delta implantation process, for example, is bonded to the carrier substrate 500. In an embodiment, the carrier wafer 102 comprises silicon, whereas the bulk wafer 202 comprises H+ delta doped silicon, for example. In an embodiment, the MEMS sacrificial release layer 504, which is continuously disposed across the carrier wafer 102, may comprise a sacrificial material such as Mo, Ge, GeOx, or SiOx including SiO2, for example. In an embodiment, the semiconductor oxide layer 208 of the SOI substrate 400 comprises SiO2.
[0053] FIGs. 12-21 are cross-sectional views illustrating embodiments of processes for making a 3D IC by stacking multiple tiers of integrated circuits using one or more MEMS bond release structures, embodiments of which are described above with respect to FIGs. 1-11. FIG. 12 is a cross-sectional view of a first tier (Tier 1) of integrated circuits.
A wafer 1202 for Tier 1, which may comprise a silicon bulk handler or a silicon-on-insulator (SOI) substrate, may be prepared in a conventional manner. In the embodiment shown in FIG. 12, the integrated circuits in Tier 1 may include one or more metal layers (M1, M2, M3, M4 layers) 1204, 1206, 1208 and 1210, and one or more inter-layer dielectric (ILD) layers (ILD-0, ILD-1, ILD-2, ILD-3 layers) 1212, 1214, 1216 and 1218.
[0054] A plurality of vias may also be provided for electrical interconnects between some or all of the metal layers through one or more of the ILD layers, including, for example, vias (V1, V2, V3) 1220, 1222 and 1224 as shown in FIG. 12. In an embodiment, an additional ILD layer (ILD-4 layer) 1230 is provided on the top metal layer (M4 layer) 1210 for oxide bonding with a second tier (Tier 2) of integrated circuits, which will be described below with reference to FIG. 13. Referring to FIG. 12, a plurality of bonding pads such as bonding pads 1240 and 1242 are provided on top of at least some of the metal contacts in the top metal layer (M4 layer) 1210 in the top ILD layer 1230 for bonding with corresponding bonding pads in the second tier of integrated circuits.
[0055] FIG. 13 is a cross-sectional view of a second tier (Tier 2) of integrated circuits prepared on a MEMS bond release structure, embodiments of which are described above with reference to FIGs. 1-11. In FIG. 13, a MEMS post/pillar bond release structure as shown in FIG. 10A is provided, which comprises a carrier wafer 102, a plurality of inner MEMS posts or pillars 108a, 108b, 108c, ..., a plurality of outer MEMS posts or pillars 150a, 150b, ... and a sealing ring 160, each having a MEMS sacrificial release layer 104 and a thin oxide layer 106, and an SOI substrate 400, also called a BOX layer, which may comprise a silicon dioxide (SiO2) layer and an active silicon layer. Referring to FIG. 13, a plurality of metal layers (M1, M2, M3, M4 layers) 1302, 1304, 1306 and 1308 and a plurality of ILD layers (ILD-0, ILD-1, ILD-2, ILD-3 layers) 1310, 1312, 1314 and 1316 are provided on top of the SOI substrate or BOX layer 400.
[0056] Similar to Tier 1, Tier 2 of integrated circuits may also include a plurality of vias (V1, V2, V3) 1320, 1322 and 1324 provided for electrical interconnects between some or all of the metal layers through one or more of the ILD layers. Again, similar to Tier 1, Tier 2 of integrated circuits may also include a top ILD layer (ILD-4 layer) 1330 on the top metal layer (M4 layer) 1308 for oxide bonding. Furthermore, a plurality of bonding pads such as bonding pads 1340 and 1342 are provided on top of at least some of the metal contacts in the top metal layer (M4 layer) 1308 in the top ILD layer 1330 for bonding with corresponding bonding pads in the first tier (Tier 1) of integrated circuits.
[0057] FIG. 14 is a cross-sectional view illustrating the alignment of Tier 1 and Tier 2 of integrated circuits before they are bonded together. In FIG. 14, Tier 2 of integrated circuits is flipped upside down from the orientation in the cross-sectional view of FIG. 13. In FIG. 14, after Tier 2 of integrated circuits attached to the BOX layer 400 of the MEMS post/pillar bond release structure is flipped upside down, the bonding pads 1342 and 1340 of Tier 2 are aligned with the bonding pads 1240 and 1242 of Tier 1, respectively.
As shown in FIG. 14, the width of each bonding pad of a given tier need not be identical to the width of the corresponding bonding pad of the other tier, as long as the corresponding pads are aligned with one another such that sufficiently good electrical connections, i.e., sufficient contact areas, are established once the pads are bonded to one another.
[0058] FIG. 15 is a cross-sectional view illustrating the wafer-to-wafer (W2W) bonding of Tier 2 and Tier 1 of integrated circuits. In FIG. 15, upon bonding of Tier 2 to Tier 1, the bonding pads 1342 and 1340 of Tier 2 are in direct contact with the bonding pads 1240 and 1242 of Tier 1, respectively, thereby establishing electrical connections between the corresponding bonding pads. Moreover, the top ILD layer (ILD-4 layer) 1230 of Tier 1 is also in direct contact with the top ILD layer (ILD-4 layer) 1330 of Tier 2, thereby forming a two-tier 3D IC.
[0059] FIG. 16 is a cross-sectional view illustrating an embodiment of the two-tier 3D IC after the MEMS sacrificial release layer 104 in each of the inner MEMS posts or pillars 108a, 108b, 108c, ..., the outer MEMS posts or pillars 150a, 150b, ... and the sealing ring 160 is removed, thus leaving the thin oxide layer 106 in each of the inner and outer MEMS posts or pillars and the sealing ring intact with the BOX layer 400. In an embodiment in which the MEMS bond release structure includes a plurality of MEMS posts or pillars as shown in FIG. 3 and described above, the thin oxide layer 106 after removal of the MEMS sacrificial release layer 104 would be in the form of small protrusions on the BOX layer 400, as shown in FIG. 16. The MEMS sacrificial release layer 104 may be removed easily by a release-etching process, in either wet or dry etch chemistry. For example, XeF2 is widely used as a dry-etch release chemistry for Mo or Si sacrificial layers. With the removal of the MEMS sacrificial release layer, the carrier wafer 102 is completely released or detached from Tier 2 of integrated circuits.
[0060] FIG. 17 is a cross-sectional view of the 3D IC of FIG. 16 after removing the remaining thin oxide layer 106 on the BOX layer 400 to form a smooth top surface 1702 of the BOX layer 400. The top surface 1702 of the BOX layer 400 may be smoothed by a conventional polishing process, such as a chemical mechanical polish (CMP) process in the back end-of-line (BEOL), in an embodiment. FIG. 18 is a cross-sectional view of the 3D IC of FIG. 17 after a plurality of vias 1802a, 1802b, 1802c, ... are provided through the BOX layer 400 and the ILD layer (ILD-0 layer) 1310 of Tier 2 directly beneath the BOX layer 400. The vias 1802a, 1802b, 1802c, ... may be formed by removing designated portions of the BOX layer 400 and corresponding portions of the ILD layer (ILD-0 layer) 1310 directly beneath the BOX layer 400 in a conventional manner. After the via formation, these vias are filled with metal (e.g., Cu), followed by the CMP process typically used in the BEOL.
[0061] FIG. 19 is a cross-sectional view of the 3D IC of FIG. 18 after one or more metal interconnects 1902a, 1902b, 1902c, ... are formed as part of an additional metal layer (M5 layer) over the vias 1802a, 1802b, 1802c, ... . FIG. 20 is a cross-sectional view of the 3D IC of FIG. 19 after one or more additional ILD layers (ILD-5, ILD-6 layers) 2002 and 2004 are formed on the M5 layer in an embodiment. In a further embodiment, another additional metal layer (M6 layer) having metal interconnects 2006a, 2006b, 2006c, ...
is provided on top of the ILD-5 layer 2002 and within the ILD-6 layer 2004. In yet a further embodiment, a plurality of vias 2008a, 2008b, 2008c, ... are provided in the ILD-5 layer 2002 to allow for electrical connections between metal interconnects of the M5 and M6 layers. In an embodiment, one or more bonding pads such as bonding pads 2010a and 2010b are formed on top of one or more metal interconnects of the M6 layer.
[0062] FIG. 21 is a cross-sectional view of an embodiment of a three-tier 3D IC, in which Tier 2 and Tier 1 of integrated circuits are formed and combined together by the bonding processes described above with reference to FIGs. 12-20. In the embodiment shown in FIG. 21, an additional tier, Tier 3, of integrated circuits is formed on top of Tier 2 in the same manner as the formation and bonding of Tier 2 to Tier 1 described above. In FIG. 21, bonding pads 2110a and 2110b are provided in Tier 3 and aligned with bonding pads 2010a and 2010b of Tier 2, respectively, in a wafer-to-wafer (W2W) hybrid-bonding process, for example.
[0063] In FIG. 21, an additional BOX layer 2120, which is formed by a silicon dioxide (SiO2) layer and an active silicon layer in an embodiment of a MEMS bond release structure described above, is provided for Tier 3. Furthermore, one or more metal layers (M5, M6 layers) and one or more ILD layers (ILD-5, ILD-6 layers) may be provided on top of the BOX layer 2120, and one or more bonding pads 2130a and 2130b may be provided on top of the M6 layer in an embodiment to allow an additional tier of integrated circuits (not shown) to be bonded to Tier 3 in a W2W hybrid-bonding process. Multiple tiers of integrated circuits may be stacked in a similar manner to produce a multi-tier 3D IC.
[0064] Although some of the embodiments described above relate to the processing of silicon integrated circuits, the principles of the disclosure are also applicable to integrated circuits based on other materials. In other embodiments, the semiconductor materials of upper-tier wafers may be other than silicon, such as silicon germanium (SiGe), gallium arsenide (GaAs), indium phosphide (InP), gallium nitride (GaN), or other semiconductors. Moreover, the lower-tier wafer can be a non-semiconductor, such as an insulative substrate material. For example, glass, a quartz substrate, or even a glass panel of the type used in flat panel displays or sensors may be used as an insulative substrate material for the lower-tier wafer. Moreover, the MEMS bond release structure according to embodiments of the disclosure allows precise upper-tier wafer thinning and thickness control by controlling the thickness of the BOX layer for each tier, rather than by conventional mechanical grinding processes such as coarse and fine grinding, thus achieving a very small wafer total thickness variation (TTV).
[0065] Moreover, by avoiding the need for a conventional mechanical wafer grinding process, adverse effects on electrical properties of circuit elements in the upper tiers due to mechanical stress introduced during mechanical wafer grinding may be avoided. Furthermore, with the MEMS sacrificial layer release process, higher throughput in manufacturing of multi-tier 3D IC devices may be achieved because MEMS sacrificial layer release by etching may be faster than time-consuming mechanical grinding processes.
By using the MEMS bond release structure according to embodiments of the disclosure in the manufacturing of 3D IC devices, lower material cost, higher yield, and better material utilization may be achieved by avoiding waste of semiconductor materials and mechanical stress on the circuit elements resulting from conventional mechanical grinding processes.
[0066] While the foregoing disclosure describes illustrative embodiments, it should be noted that various changes and modifications could be made herein without departing from the scope of the appended claims. The functions, steps or actions in the method and apparatus claims in accordance with the embodiments described herein need not be performed in any particular order unless explicitly stated otherwise. Furthermore, although elements may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated. |
An integrated circuit package (10) that includes a first die (20) with a memory (22) positioned physically at a predetermined memory location in the first die; a second die (40) positioned in covering relationship with at least the predetermined memory location in the first die; penetration detection circuitry, positioned at least partially in said second die (40), that generates a penetration detection signal in response to physical penetration of the second die (40); and memory circuitry operatively associated with the memory in the first die and the penetration detection circuitry, which is adapted to perform an operation on the memory, such as data erasure, in response to the penetration detection signal. |
CLAIMS What is claimed is: 1. An integrated circuit package comprising: a first die having a memory positioned physically at a predetermined memory location in said first die; a second die positioned in covering relationship with at least said predetermined memory location in said first die and electrically connected to said first die; penetration detection circuitry, positioned at least partially in said second die, that generates a penetration detection signal in response to physical penetration of said second die; and memory circuitry operatively associated with said memory in said first die and said penetration detection circuitry and adapted to perform an operation on said memory in response to said penetration detection signal. 2. The integrated circuit package of claim 1 comprising an interface substrate adapted to electrically connect at least said first die to a printed circuit board, wherein at least said first die is mounted on and electrically connected to said interface substrate. 3. The integrated circuit package of claim 1, wherein said penetration detection circuitry comprises at least one electrical trace arranged in said second die in a screening pattern above at least said predetermined memory location on said first die. 4. The integrated circuit package of claim 3, wherein said penetration detection circuitry is arranged in a serpentine pattern. 5. The integrated circuit package of claim 3, wherein said penetration detection circuitry detects changes in resistance in said at least one electrical trace. 6. The integrated circuit package of claim 1, wherein said memory circuitry comprises memory erasure circuitry that erases said memory in response to said detection signal. 7. The integrated circuit package of claim 2, wherein said first die comprises a plurality of electrical connections connecting said first die to said interface substrate and wherein said penetration detection circuitry comprises a plurality of electrical traces arranged in said second die in a screening pattern above at least said predetermined memory location on said first die and all of said plurality of electrical connections on said first die. 8. A method of preventing unauthorized access to data in a memory of a first semiconductor die that is covered by a second semiconductor die, comprising: sensing physical penetration of the second die; and performing an operation on the memory in response to said sensing. 9. The method of claim 8, wherein said performing an operation on the memory comprises erasing the data in the memory. 10. The method of claim 8, wherein said sensing comprises detecting a change in the resistance of a conductor pattern provided in said second die. 11. The method of claim 8, further comprising mounting the first die in covering relationship with a substrate. 12. The method of claim 11, wherein said mounting the first die in covering relationship with a substrate comprises mounting the first die in covering relationship with an electrical connection substrate. 13. The method of claim 12, wherein said mounting the first die in covering relationship with an electrical connection substrate comprises mounting the first die in covering relationship with an electrical connection substrate comprising a ball grid array. 14.
A payment card comprising: a first die having a memory positioned physically at a predetermined memory location in said first die that is readable by an authorized payment card reading device; and a memory protection assembly that erases said memory in response to an unauthorized attempt to access said memory. 15. The payment card of claim 14, wherein said memory protection assembly comprises: a second die positioned in covering relationship with at least said predetermined memory location in said first die and electrically connected to said first die; penetration detection circuitry, positioned at least partially in said second die, that generates a penetration detection signal in response to physical penetration of said second die; and memory circuitry operatively associated with said memory in said first die and said penetration detection circuitry and adapted to erase said memory in response to said penetration detection signal. 16. The payment card of claim 15 comprising an electrical connection substrate, wherein said first die is mounted on said electrical connection substrate. 17. The payment card of claim 15, wherein said first die is electrically connected to said electrical connection substrate. 18. The payment card of claim 16 comprising a printed circuit board, wherein said electrical connection substrate is electrically and physically connected to said printed circuit board. 19. The payment card of claim 17 comprising encapsulant wherein said first and second dies, said electrical connection substrate and said printed circuit board are encased in said encapsulant. 20. The payment card of claim 17, wherein said second die is electrically connected to said electrical connection substrate. |
INTEGRATED CIRCUIT PACKAGE
BACKGROUND
[0001] The term "payment card" refers to a card that may be presented by a cardholder to make a payment. There are different types of payment cards used for various transactions. Credit cards, debit cards, charge cards, stored-value cards, fleet cards, and gift cards are all payment cards. Virtually all payment cards include an integrated circuit package that has a memory provided on a semiconductor die. In many types of payment cards, confidential information such as security codes, financial information, or other data of a proprietary nature is stored in the memory.
BRIEF DESCRIPTION OF THE DRAWINGS
[0002] Fig. 1 is a truncated cross sectional view of an integrated circuit package.
[0003] Fig. 2 is a top plan view of a substrate and first die of the integrated circuit package of Fig. 1.
[0004] Fig. 3 is a top plan view of a second die of the integrated circuit package of Fig. 1.
[0005] Fig. 4 is a cross sectional view of a portion of a payment card incorporating the integrated circuit package of Fig. 1.
[0006] Fig. 5 is a perspective view of the payment card of Fig. 4.
[0007] Fig. 6 is a circuit diagram of the penetration detection circuitry of the integrated circuit package of Fig. 1.
[0008] Fig. 7 is a block diagram illustrating the operation of circuitry of the integrated circuit package of Fig. 1.
[0009] Fig. 8 is a flow chart of a method of preventing unauthorized access to data in a memory of a first semiconductor die that is covered by a second semiconductor die.
[0010] Fig. 9 is a flow chart illustrating a method of making a tamper resistant integrated circuit package.
DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS
[0011] The use of payment cards has become ubiquitous in modern society. Not surprisingly, payment card fraud has become a huge problem, costing card owners and the institutions that issue such cards millions of dollars daily. One manner in which such fraud is practiced is through the perpetrator's obtaining unauthorized access to proprietary data in the card memory. One technique used to obtain such access involves insertion of a physical probe, a needle-like object, through the surface of the card and into the card memory or a memory access point. Sophisticated electronics are then used to read or copy the information in the memory. Applicant has developed an integrated circuit package that may be used in a payment card to prevent such unauthorized access to stored information.
[0012] Figs. 1-7, in general, disclose an integrated circuit package 10 including a first semiconductor die 20 (sometimes referred to herein as "first die 20") and a second semiconductor die 40 (sometimes referred to herein as "second die 40"). The first die 20 has a memory 22, Fig. 2, positioned physically at a predetermined memory location 24 in the first die 20. The second die 40 is positioned in covering relationship with at least the predetermined memory location 24 in the first die 20. The second die 40 may be electrically connected to the first die 20. Penetration detection circuitry 100, etc., Figs. 3 and 6, is positioned at least partially in the second die 40. The penetration detection circuitry generates a penetration detection signal 108 in response to physical penetration of the second die 40. Memory erasure circuitry 110 is operatively associated with the memory 22 in the first die 20 and the penetration detection circuitry 100, etc.
and is adapted to erase or otherwise prevent accurate copying of the memory 22 in response to the penetration detection signal 108. A method of making such an integrated circuit package, Fig. 9, and a method of using an integrated circuit package 10 to protect data, Fig. 8, are also described. Having thus described an integrated circuit package and methods of making and using an integrated circuit package generally, various details thereof will now be described in further detail.
[0013] Fig. 1 is a partial cross sectional view of an integrated circuit package 10. Fig. 2 is a top plan view of the integrated circuit package 10 with an upper portion thereof removed. The integrated circuit package 10 includes a first semiconductor die 20 having a generally flat top surface 21 and an opposite, generally flat bottom surface 23. A plurality of contact pads 26 are formed on the top surface 21. The contact pads may be electrically connected to other components by bond wires 27, 28. The formation of contact pads on a die and the connection of contact pads to other devices with bond wires are well known in the art and will thus not be further described herein. As best illustrated in Fig. 1, the first semiconductor die 20 is attached by a connecting structure 30 to a second semiconductor die 40 ("die 40"). The connecting structure 30 may be a conventional die connecting structure comprising a first layer 32 of die attach paste, a second layer 33 that may be a silicon spacer or the like, and a third layer 34 of die attach paste. Such die connecting structure is well known in the art. The first die 20 comprises a memory 22, which is physically located in the first die 20 at a predetermined memory location 24, Fig. 2. In some embodiments, the memory 22 stores proprietary information such as financial data and security codes.
[0014] The second die 40 is positioned in overlying relationship with the first die 20 and covers at least memory location 24 and any contact pads 26 or electrical connectors such as bond wires 27, 28 which might allow access to the memory 22. The footprint of the second die 40 with respect to the first die 20, in one embodiment of the integrated circuit package 10, is illustrated in Fig. 2. Such a stacked die arrangement, wherein the top die is larger than the bottom die, is known as a "reverse pyramid stack." The second die 40 has a generally flat top surface 41 and an opposite, generally flat bottom surface 43, Fig. 1. As illustrated by Fig. 3, the top surface 41, in one embodiment, comprises a first trace 46 and a second trace 48 positioned in generally parallel relationship in a serpentine pattern which may substantially cover the entire top surface 41 of the second die 40. The traces 46, 48 may be connected at opposite ends thereof to contact pads 50, 51, 52, 53. The contact pads 50 through 53 may connect the traces 46, 48 to other circuitry within the second die 40, or may connect the traces to other circuitry in the first die 20 or an associated printed circuit board 80, Fig. 4. Operation of this other circuitry will be described in further detail below. The purpose of the first and second traces 46, 48 is to provide a "screen" which will sense any attempted penetration of the second die 40, as will also be discussed in further detail below.
[0015] The first die 20 may be mounted on a substrate 60 having a generally flat top surface 61 and a generally flat bottom surface 63.
As illustrated by Figs. 1 and 2, the substrate 60 may be an electrical connection substrate, which in the illustrated embodiment comprises a conventional ball grid array substrate. The substrate 60 may comprise a plurality of contact pads 64, 66, etc., Fig. 2, provided on top surface 61. The contact pads 64, 66, etc., may be connected by internal electrical routing 68, Fig. 1, to a ball grid array 72 comprising a plurality of solder balls 74, 76, etc. The construction of ball grid array substrates is well known in the art and will thus not be further described herein. The solder balls 74, 76 may be connected by reflow soldering to contacts on a PC board 80, Fig. 4. Various other types of electrical connection substrates, for example those having pin type connectors, may also be used.
[0016] The first and second dies 20, 40, the connecting substrate 60 and the PC board 80 may be suitably encased in mold compound 88, Figs. 4 and 5, which is typically plastic (epoxy), to provide a tamper resistant payment card 90. The payment card 90 may be provided with appropriate surface contacts (not shown) or other electrical communication structure which enables it to be placed in communication with other devices, depending upon the type of payment card. For example, payment card 90 may be an ATM card, credit card, gift card, or other type of payment card, each of which is associated with a particular type of reader or other interaction device. The integrated circuit package 10, including the first die 20, second die 40 and substrate 60, may be initially encased in transfer mold compound and then mounted on a PC board 80. This assembly may be further encased in other materials depending upon the type and use of the particular payment card 90. In another embodiment, the first die 20, second die 40, substrate 60 and PC board 80 are all first connected together and are then encased in mold compound or the like in a single encapsulation operation.
[0017] As shown schematically in Fig. 6, penetration detection circuitry 100 may include a voltage source 101 connected to traces 46, 48. These traces have a normal combined resistance "R." The penetration detection circuitry 100 may further include a resistance sensor 102 that generates a signal 104 indicative of the resistance in the circuit 100. As will be understood by those skilled in the art, the resistance sensor 102 may comprise a voltmeter or ammeter. Referring to Fig. 3, the spacing of the traces 46, 48 in the serpentine network is sufficiently close that any typical conductive probe which penetrates the top surface 41 of the second die 40 will either break or short the circuit. A circuit break (open circuit) caused by a probe is illustrated at 122, and a circuit short caused by a probe is illustrated at 124. A break will cause a substantial increase in the resistance of the circuit, and a short in the circuit will cause a substantial decrease in the resistance of the circuit. In one embodiment, the space between traces 46, 48 may be less than about 10 microns, to ensure that penetration by any probe having a minimum cross sectional dimension greater than 10 microns will be detected. Any desired spacing between traces 46, 48 may be provided. Also, rather than two traces 46, 48, a single trace or more than two traces may be used, with suitable modifications to circuit 100.
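As a minimal illustration (not part of the patent disclosure) of the two-sided threshold test implied by this break/short behavior, the check on signal 104 might be sketched in C as follows. The resistance values and the helper functions read_resistance_signal_104() and erase_memory_22() are hypothetical stand-ins for the resistance sensor 102 and the erasure circuitry 110:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative constants -- the patent leaves the actual values
 * to the implementation. */
#define KNOWN_RESISTANCE_R   1000u  /* combined trace resistance, ohms */
#define DELTA_THRESHOLD       100u  /* "predetermined amount", ohms    */

/* Hypothetical helpers standing in for the resistance sensor 102
 * and the erasure circuitry 110. */
extern uint32_t read_resistance_signal_104(void);
extern void     erase_memory_22(void);

/* Returns true (and triggers erasure) when the measured resistance
 * deviates from R by more than the predetermined amount, indicating
 * that a probe has broken (open) or shorted the traces 46, 48. */
bool check_penetration(void)
{
    uint32_t r_now = read_resistance_signal_104();

    bool open_circuit  = r_now > KNOWN_RESISTANCE_R + DELTA_THRESHOLD;
    bool short_circuit = r_now + DELTA_THRESHOLD < KNOWN_RESISTANCE_R;

    if (open_circuit || short_circuit) {
        erase_memory_22();   /* penetration detection signal 108 */
        return true;
    }
    return false;
}
```

A break pushes the measured value above R by more than the predetermined amount, while a short pulls it below; either branch corresponds to raising the detection signal 108 described in the next paragraph.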
[0018] The resistance signal 104 may be used to detect a penetration of the second die 40 by a probe by comparing the present resistance of the circuit 100 to the known resistance R of the circuit when it is in an undamaged state. To implement such a comparison, the resistance signal 104 may be transmitted to a comparator 106, Fig. 7, which compares the resistance value of signal 104 to the known resistance R of the circuit 100 in an undamaged state. If the resistance indicated by signal 104 exceeds the known resistance R by more than a predetermined amount, then a penetration detection signal 108 is generated by the comparator circuit 106. Similarly, if the present resistance indicated by signal 104 is less than the known resistance R by a predetermined amount, a penetration detection signal 108 is also generated. The penetration detection signal 108 triggers erasure circuitry 110 to erase the memory 22. An integrated circuit memory may be erased by any of the various techniques known in the art or other techniques now known or later developed. Rather than erasing the data in memory 22, some other operation may be performed on the memory 22 to prevent the data therein from being accurately read. The circuitry for performing the operations indicated in the block diagram of Fig. 7 may be provided either in the first die 20 or in the second die 40, or partially in both dies 20, 40, or in some combination of dies 20, 40 and PC board 80, Fig. 4. For example, in an embodiment in which erasure circuitry 110 is provided in the first die 20 and the circuitry 100, 106 is provided in the second die 40, the signal 108 may be transmitted through a bond wire 44 connected to a contact pad 45 on the second die 40, Fig. 1, which is in turn connected to a contact pad 64 on substrate 60. Contact pad 64 on substrate 60 may in turn have a bond wire 27 connecting it to a contact pad 26 on the first die 20.
[0019] Fig. 8 illustrates a method of preventing unauthorized access to data in a memory 22 of a first semiconductor die 20 that is covered by a second semiconductor die 40. The method includes, as indicated at 141, sensing physical penetration of the second die 40. The method also includes, as shown at 142, performing an operation on the memory 22 in response to the sensing of physical penetration of the second die 40.
[0020] Fig. 9 illustrates a method of making a tamper resistant integrated circuit package 10. The method includes, as shown at 151, mounting a second die 40 in covering relationship with a first die 20 having a memory 22. The method further includes, as shown at 152, providing penetration detection circuitry 100, 106, 110, located at least partially on the second die 40, which senses penetration of the second die 40 by a probe and generates a penetration detection signal 108 in response thereto. The method also includes, as shown at 153, providing circuitry that is responsive to the penetration detection signal 108 to perform an operation on the memory 22 that prevents unauthorized access of data in the memory.
[0021] Those skilled in the art will appreciate that modifications may be made to the described embodiments, and also that many other embodiments are possible, within the scope of the claimed invention. |
A system is disclosed that includes a component, a fault table configured to receive fault information associated with the component, and a diagnosis processor configured to read the fault information from the fault table and initiate corrective action as a function of the fault information. A method for handling faults in the system is also disclosed. |
CLAIMS 1. A system, comprising: a component; a fault table configured to receive fault information associated with the component; and a diagnosis processor configured to read the fault information from the fault table and initiate corrective action as a function of the fault information. 2. The system of claim 1, wherein the fault table is contained in the diagnosis processor. 3. The system of claim 1, wherein the fault table is contained outside the diagnosis processor. 4. The system of claim 1, wherein the fault table includes an entry, said entry including a field indicative of the identity of the component. 5. The system of claim 1, wherein the fault table includes an entry, said entry including a time value field. 6. The system of claim 5, wherein the time value field is indicative of when a fault was detected in the component. 7. The system of claim 5, wherein the time value field is indicative of the time that has elapsed since a fault was detected in the component. 8. The system of claim 1, wherein the fault table includes an entry, said entry including information indicative of the nature of a fault that has been detected in the component. 9. The system of claim 1, wherein the fault table includes an entry, said entry including a leaky bucket fault counter. 10. The system of claim 1, wherein the diagnosis processor is to predict a failure of the component. 11. The system of claim 1, wherein the component includes a fault register, and the fault information includes a value read from the fault register. 12. The system of claim 1, further comprising: a satellite diagnosis processor in communication with the diagnosis processor, the satellite diagnosis processor configured to collect fault data from the component. 13. The system of claim 12, wherein the satellite diagnosis processor is located on a separate circuit board from the diagnosis processor. 14. The system of claim 12, wherein the satellite diagnosis processor is configured to preprocess the fault data, and transmit the preprocessed fault data to the diagnosis processor. 15. The system of claim 14, wherein the fault information associated with the component is a function of the fault data collected from the component by the satellite diagnosis processor. 16. The system of claim 1, further comprising a chassis management processor in communication with the diagnosis processor, the chassis management processor configured to collect chassis fault data. 17. The system of claim 16, wherein the chassis fault data includes temperature information. 18. The system of claim 16, wherein the chassis fault data includes cooling device operating state information. 19. The system of claim 16, wherein the chassis fault data includes power operating state information. 20. The system of claim 1, wherein the corrective action includes initiating a diagnostic procedure for the component. 21. The system of claim 20, wherein the component includes a built-in self test, and the diagnostic procedure includes executing the built-in self test. 22. The system of claim 1, wherein the corrective action includes disabling the component. 23. The system of claim 1, wherein the corrective action includes replacing the component by a spare component. 24. The system of claim 1, wherein the corrective action includes generating an alert message. 25. The system of claim 1, wherein the component includes a memory. 26. The system of claim 25, wherein the fault information includes an indication that a parity error occurred in the memory. 27.
The system of claim 25, wherein the fault information includes a count of the number of parity errors in a predetermined time interval. 28. The system of claim 1, wherein the component includes a disk drive controller. 29. The system of claim 28, wherein the fault information includes an indication that a retry occurred during an I/O operation to the component. 30. The system of claim 28, wherein the fault information includes a count of the number of I/O errors in a predetermined time interval. 31. The system of claim 1, wherein the fault information includes a value of a watchdog timer associated with the component. 32. The system of claim 1, wherein the fault information includes a leaky bucket fault counter associated with the component. 33. The system of claim 1, wherein the diagnosis processor is to predict a failure of the component using a policy. 34. The system of claim 1, wherein the diagnosis processor is to predict a failure of the component based on whether more than a predetermined number of faults in a given time window has occurred. 35. The system of claim 34, wherein the time window is modified based on a frequency of faults in said component. 36. One of a diagnosis processor and a satellite diagnostic processor, comprising: a fault information table configured to store fault information associated with components of a system; and a processor in communication with the fault information table, the processor configured to analyze the fault information stored in the fault information table. 37. The processor of claim 36, wherein the processor is configured to select corrective actions based on analysis of the fault information stored in the fault information table. 38. A method for handling faults in a system, comprising: receiving fault information associated with a component; storing the fault information in a fault information table; and taking corrective action as a function of the fault information and a time the fault information was received. 39. The method of claim 38, further comprising: predicting the likelihood of failure of the component based on the fault information in the fault information table. 40. The method of claim 39, wherein predicting the likelihood of failure of the component is also based on a policy. 41. The method of claim 39, wherein predicting the likelihood of failure of the component is based on whether more than a predetermined number of faults in a given time window has occurred. 42. The method of claim 41, further comprising modifying the time window based on a frequency of faults in said component. 43. The method of claim 38, further comprising: storing a time value with the fault information in the fault information table. 44. The method of claim 43, further comprising: removing the fault information from the fault information table when the time value associated with the fault information indicates the fault information is older than a predetermined threshold. 45. An article of manufacture comprising a computer-readable medium having stored thereon instructions adapted to be executed by a processor, the instructions which, when executed, define a series of steps to be used to control a method for handling faults in a system, said steps comprising: receiving fault information associated with a component; storing the fault information in a fault information table; and taking corrective action as a function of the fault information and a time the fault information was received. 46.
The article of manufacture of claim 45, wherein said steps further comprise: predicting the likelihood of failure of the component based on the fault information in the fault information table. |
SYSTEM AND METHOD TO DETECT ERRORS AND PREDICT POTENTIAL FAILURES
A portion of the disclosure of this patent document contains material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or patent disclosure as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
BACKGROUND INFORMATION
In conventional computer systems, when a system fails, technicians may examine log files to diagnose the problem, after the problem occurs. Conventional fault-tolerant systems may include methods for diagnosing faults after a component fails, while preventing system failure from being caused by the component failure. For example, conventional fault-tolerant systems may include pair and spare systems, where two duplicated components run in lock step, receiving the same inputs. When the outputs from the pair of components differ, one of the components of the pair is known to have failed, although not which one, and both components are shut down and replaced by a spare, possibly without any human intervention. Alternatively, three components may be used that run in lock step, receiving the same inputs. When one of the outputs from the three components differs from the other two, the component that differs is considered to have failed, and may be replaced.
Redundancy and failover mechanisms may be employed which reduce downtime if a primary system fails. A system may be configured in an N+1 or N+i configuration with hot and/or cold standbys. If a primary system fails, the standby system becomes the primary. The amount of downtime caused by such an occurrence may depend on how quickly the system can be failed over to the standby and on how closely the standby was synchronized with the primary system which has failed. Currently, in telephone communication systems, it generally takes a few seconds to fail over a failed system and restore service after the failure is detected. The telephone communication OEMs (Original Equipment Manufacturers) are seeking lower downtime in their systems. Individual components in a system may also be fault-tolerant. For example, error correcting codes may correct faults that occur in a memory. When these faults are successfully corrected, they may be invisible to the system as a whole. When these faults continue to build up without being detected or corrected, a system failure may occur. System downtime may be needed for replacing the memory chip.
An increased frequency of correctable errors may suggest that an uncorrectable failure is imminent, or at least that the risk of such a failure has increased. Predicting component failures before they occur may reduce the chance of system failure and the resultant system downtime. Predicting component failures before they occur may also allow maintenance to be performed more efficiently.
Conventional fault handling systems are generally "reactive" in nature. In other words, after a fault happens, an alert is triggered, and failover is achieved to a known good system, after which diagnosing the problem can begin. As the demand for more and more uptime increases for applications like e-commerce, electronic trading, etc., the system design challenges become almost insurmountable with reactive failover architectures.
In a cost-conscious environment where lockstep methods may not be cost-justifiable, this reactive mode of fault handling is not sufficient to meet these requirements.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 illustrates a high-level diagram of an example system, according to an example embodiment of the present invention.
Figure 2 illustrates an example hardware layout for an example system, according to an example embodiment of the present invention.
Figure 3 illustrates an example device error table and entry, according to an example embodiment of the present invention.
Figure 4 illustrates an example detailed fault information table and entry, according to an example embodiment of the present invention.
Figure 5 illustrates an example procedure for fault diagnosis and prediction for a memory component, according to an example embodiment of the present invention.
Figure 6 illustrates an example procedure for fault diagnosis and prediction for a disk component, according to an example embodiment of the present invention.
Figure 7 illustrates an example device error table entry, in a second example embodiment according to the present invention.
Figure 8 illustrates a second example procedure for fault diagnosis and prediction for a memory component, according to a second example embodiment of the present invention.
DETAILED DESCRIPTION
Figure 1 illustrates a high-level diagram of an example system, according to an example embodiment of the present invention. The system may be a computer system, telecommunications switch, telecommunication transmission equipment, or for some other application. For example, the system hardware may be a chassis/shelf based computer system based on an AdvancedTCA or cPCI architecture used for hosting highly available telecommunication services and applications for both the wire-line and wireless industries. The system hardware chassis/shelf 102 may include a single chassis with multiple circuit cards or blades, for example a single chassis/shelf with multiple compute and access blades/boards/modules interconnected with a high speed fabric such as Ethernet, InfiniBand, or another standard serial fabric. However, it will be appreciated that other arrangements of hardware may be employed; for example, the entire system may be on a single blade, or the system may include multiple interconnected chassis. The system hardware 102 may include a chassis management module control blade 104. The control blade may also be termed a CMM or chassis management module and may act as a central control module for the system hardware 102, or alternatively for some subset of the hardware. The control blade 104 may be responsible for managing fault detection, diagnosis, and fault handling in the system hardware 102. This chassis management module functionality can also be implemented as a "partitioned" part of a regular blade.
The control blade 104 may include a diagnosis processor 106. The diagnosis processor may be an IPMI BMC controller chip, or alternatively some other diagnosis processor or a general purpose processor programmed to function as a diagnosis processor. The control blade 104 and diagnosis processor 106 may receive fault information, e.g., fault data or other status information read from other components in the system. The flow of fault information from the components to the control blade 104 and the diagnosis processor 106 is illustrated by a solid arrow. The control blade 104 and diagnosis processor 106 may also control the configuration of other system hardware components.
When a fault is detected, the control blade 104 and diagnosis processor 106 may send information to higher levels of the system, e.g., alert messages. The control blade 104 and diagnosis processor 106 may maintain a set of master "fault" information databases for all the key shelf components over time and trigger action based on fault detection algorithms that may be stored as firmware. The control blade 104 and diagnosis processor 106 may also initiate other forms of corrective actions, including launching appropriate diagnostic procedures such as BIST (Built In Self Test) functions in system components, disabling components, replacing components with spares (either automatically or with human intervention), and working with higher level system functions to reallocate memory usage, storage, or files, etc. The initiated corrective actions may be performed by the diagnosis processor, or by other system elements, based on a predefined policy set by the system administrator.
The system may include a fault information table 107, which may serve as a master fault information table. The fault information table 107 may be part of the diagnosis processor 106, or may be a separate component accessible by the diagnosis processor 106. The fault information table 107 is configured to allow storage of fault information received from other system components. Fault information in the fault information table 107 may be associated with a particular component or type of component. Alternatively, information associated with all component types may be included. The fault information table 107 may also be configured to allow the diagnosis processor 106 to access the fault information. Using the information from the fault information table 107, the diagnosis processor 106 may be configured to predict the failures of individual system components before they occur and take appropriate corrective action, e.g., running internal diagnosis procedures, disabling components, replacing the components with spares, triggering system alerts, etc.
Other functions of the chassis management blade or module (CMM) 104 may include control and management of the chassis or shelf as a whole, including support devices and environment. For example, the chassis management blade may monitor temperature, the operating state of fans or other cooling devices, the operating state of power sources including batteries or a UPS (uninterruptible power supply) system, etc. The chassis management blade 104 may also control cooling devices and power sources, e.g., by increasing the operating rate of a fan if another fan fails or if the temperature rises above a threshold.
The example system may include a number of general purpose component blades 110. These component blades may include compute or processing, storage, I/O, and other functional components, or some subset of these. The component blades 110 may be used to provide the functionality desired by users from the system. For example, the component blades 110 may include line blades in a piece of telecommunications transmission equipment, processor blades in a multiprocessor, switching fabric blades in a telecommunications switch, disk drive or other device I/O controllers, or other types of conventional hardware system components. In this example, a compute blade, a storage blade, and an I/O blade are provided. Other types of special purpose blades may also be included. Some blades may provide several of these functionalities in one blade.
The system may also have an operating system 120.
For example, the operating system 120 may be an open source Linux operating system optimized for telecommunications applications. Other conventional operating systems may also be used. It will also be appreciated that the system may have multiple operating systems, e.g., a separate operating system on each blade of the system. The operating system 120 may include one or more device drivers 122, which may provide an interface between the operating system 120 and hardware components 110 of the system. The transmission of data between the device driver 122 and the components 110 is illustrated in Figure 1 by a double connecting arrow.
The operating system 120 may also include a fault management interface 124. The fault management interface 124 may allow the transmission of information about faults and corrective actions between the control blade 104 and the operating system 120. The fault management interface 124 may also provide a standard interface for fault monitoring and maintenance.
The operating system may also include middleware 126, which may be used to provide various standard interfaces to user applications, e.g., network management and control systems.
The system may also include applications 130. These applications may communicate with the operating system 120 directly, or via the middleware interfaces 126. Applications may include network and system management tools, operations and maintenance systems, and other applications. These applications may run directly on the system hardware 102, or may interface with the system from a remote location. It will also be appreciated that other channels may be provided to allow applications 130 to communicate directly with the control blade 104 and diagnosis processor 106, without passing through the operating system. Such channels may allow remote monitoring of system hardware.
Figure 2 illustrates an example hardware layout for an example system, according to an example embodiment of the present invention.
The example system may include a control blade 104, which includes a diagnosis processor 106 and a fault information table 107. The fault information table 107 may be stored on the diagnosis processor 106, or in some other location accessible to the diagnosis processor 106. The control blade 104 may collect and preprocess chassis fault data. Chassis fault data may include environmental information such as temperature, fan operating state, humidity, etc. Chassis fault data may also include power operating state, such as availability and quality of line power, UPS operating state, battery power levels, etc. The chassis control blade may also receive fault data from other system components, including data from other diagnosis processors located on these components, as detailed below.
The example system may include various component blades 110. These component blades may include line blades in a piece of telecommunications transmission equipment, processor blades in a multiprocessor, switching fabric blades in a telecommunications switch, disk drive or other device I/O controllers, or other types of conventional hardware system components. Referring to Fig. 2, each component blade may include a satellite diagnosis processor 210. The satellite diagnosis processor 210 may be a separate component, or may be provided as a logical entity, e.g., as part of another processor or chipset. The satellite diagnosis processor 210 on a component blade 110 may gather fault data from components on the component blade 110.
This information may be gathered directly from components, or from a component fault information table or register (e.g., memory fault register 216), which may be part of the component chipsets. Information gathered from the component fault register 216 may be stored in the satellite diagnosis processor 210 or in some other location accessible to the satellite diagnosis processor 210. Component blades may also be specialized, e.g., compute, I/O, or storage blades. A component blade 110 may include one or more processors or CPUs, as well as memory, and other computing components. Each component 110 may include a satellite diagnosis processor 210. The satellite diagnosis processor 210 may be a separate component, or may be provided as a logical entity, e.g., as part of the CPU chipset. The satellite diagnosis processor 210 on a component blade 110 may gather fault data from processor and other components on the component blade 110. This information may also be gathered from memory fault register 216, which may be part of the chipset. Information gathered from the component fault register 216 may be stored in the satellite diagnosis processor 210, or in some other location accessible to the satellite diagnosis processor 210. A storage component blade may include one or more disk controllers or storage CPUs, as well as memory, and other computing components. The satellite diagnosis processor 210 may be a separate component, or may be provided as a logical entity, e.g., as part of the disk controller chipset. The satellite diagnosis processor 210 on a storage component blade may gather fault data from disk controllers and other components on the storage component blade. This information may be gathered from a disk drive fault information table or register 220, which may be part of the disk controller chipset. Information gathered from the disk drive fault register 220 may be stored in the satellite diagnosis processor 210 or in some other location accessible to the satellite diagnosis processor 210. A network/LAN blade may include one or more I/O CPUs, as well as other components. Each network/LAN blade may include a satellite diagnosis processor 210. The satellite diagnosis processor 210 may be a separate component, or may be provided as a logical entity, e.g., as part of the network processor chipset. The satellite diagnosis processor 210 on a network/LAN blade may gather fault data from components on the network/LAN blade. This information may be gathered from a network/LAN fault information table or register 224, which may be part of the network/LAN processor chipset. Information gathered from the network/LAN fault register 224 may be stored in the satellite diagnosis processor 210 or in some other location accessible to the satellite diagnosis processor 210. It will be appreciated that blades may include components of different types, rather than just compute, network/LAN, or storage elements. Memory fault registers, disk drive fault registers and processor fault registers may be implemented as part of a fault management chipset and included as part of the component fault table. So in each blade, the interaction may run from the main CPU of the blade to the fault management chipset to the diagnosis processor on the control blade.
The chipset and/or the diagnosis processor may also maintain the master device fault information table for the blade and pre-process fault information, e.g., by aging, thresholding, or filtering it, before sending summary fault information to a master fault information table on the control blade 104. The satellite diagnosis processors 210 may be configured to monitor all or part of these component blade elements on each component blade, as well as other component blade devices not shown, e.g., communication ports, power supplies, network interface devices, etc. The satellite diagnosis processor 210 may be configured to preprocess the collected fault data, e.g., associating a time value indicating when the data was collected with the collected fault data, reducing multiple identical fault detections in an interval to a single reported event that includes a count of the number of detections, deleting or ignoring certain types of faults, etc. It will be appreciated that, depending on the system architecture, the fault data collection and/or pre-processing could alternatively be performed directly by the diagnosis processor 106. The system may include an interconnection fabric 230. It will be appreciated that the interconnection fabric 230 may be the main system bus, a management bus, or a special bus dedicated to the transmission of fault information and control of the system fault tolerance features. The fabric 230 may be a "Fabric" based on Ethernet or some other standard serial high speed connection, or it may be a special bus dedicated to the transmission of fault information or control, such as IPMB. It will also be appreciated that other forms of communications between the blades in the system may also be employed, e.g., multiple buses or other networking architectures may be employed. Once a satellite diagnosis processor 210 collects and preprocesses fault data from a component, the information may be forwarded to the control blade 104 via the interconnection fabric 230. Once the fault information is received, it may be stored in the fault information table 107, e.g., by the diagnosis processor. The satellite diagnosis processor 210 may also have the ability to write the fault information directly to the fault information table 107. When a component failure occurs, the diagnosis processor 106 may receive information about the failure from the corresponding satellite diagnosis processor 210, e.g., by receiving a message, or by reading an entry in the fault information table. As in a conventional fault-tolerant system, the diagnosis processor 106 may cause the system to take appropriate corrective action, e.g., disabling the component, generating an alert to the operating system, or replacing the component by a spare component. The diagnosis processor 106 may also be configured to use fault information collected from the components to predict component failures and take corrective action before component failures occur, e.g., executing a diagnostic procedure, replacing the component by a spare before the component fails, working with the operating system to rearrange storage to avoid a faulty memory or disk, etc.
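As a loose illustration of the pre-processing described above — timestamping detections, collapsing repeated identical faults within a reporting interval into a single counted event, and dropping ignored fault types — consider the following Python sketch. The structures, names, and policy here are assumptions made for illustration only, not part of the described embodiments.

```python
import time

# Assumed policy: fault types a satellite diagnosis processor may simply ignore.
IGNORED_FAULT_TYPES = {"transient_glitch"}

def preprocess(raw_faults, interval_s=60.0, now=None):
    """Collapse raw (device_id, fault_type, detected_at) detections into
    counted, timestamped events suitable for forwarding to the control blade."""
    now = time.time() if now is None else now
    events = {}
    for device_id, fault_type, detected_at in raw_faults:
        if fault_type in IGNORED_FAULT_TYPES:
            continue  # policy says these need not be reported
        if now - detected_at > interval_s:
            continue  # outside the current reporting interval (aged out)
        key = (device_id, fault_type)
        if key in events:
            events[key]["count"] += 1  # identical detection: just bump the count
        else:
            events[key] = {"device_id": device_id, "fault_type": fault_type,
                           "first_seen": detected_at, "count": 1}
    # The summary list is what would be forwarded over the interconnection fabric.
    return list(events.values())
```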
Figure 3 illustrates an example of a device error table 300, according to an example embodiment of the present invention. The example device error table 300 may be included as part of the diagnosis processor 106. Alternatively, the device error table 300 may be included in another location in the system where the device error table is accessible to the diagnosis processor 106, e.g., as a separate chip on the control board 104. Although the device error table 300 has been illustrated as a hardware component, the table may be stored in dedicated hardware, or maintained as a software or firmware table. It will also be appreciated that, although the device error table is shown as an array, other data structures may be employed for the device error table, e.g., a linked list, hash tables, etc. It will also be appreciated that multiple device error tables may be included in the system, e.g., one for each particular class of components. The device error table 300 may include one or more entries 302. One entry may be included for each component in the system. Alternatively, multiple entries may be included, e.g., one for each fault or type of fault that has been detected in a given component. Referring to Figure 3, an example entry 302 in an example device error table is shown, according to an example embodiment of the present invention. The example entry 302 may include several fields. The example entry may include a device ID field. The device ID field may identify a component for which the particular entry 302 contains fault information. For example, the device ID field may indicate a particular memory page in a blade or a particular disk drive. It will be appreciated that other approaches to linking the device error table entries and the components may be used; for example, an entry for a memory may include a particular memory address or page address, while an entry for a disk drive may include track and sector information. Alternatively, the device ID field may include a pointer to a data structure containing information about a particular component. The example entry 302 may include an error count field that indicates the number of times the error has occurred in the relevant time window (as described below). The entry 302 may include a date-time stamp field, which may be indicative of when a fault was detected in the component identified by the device ID field. Depending on the failure prediction algorithms employed, it will be appreciated that the value in the date-time stamp field may only need to be approximate. It will also be appreciated that the date-time stamp value may be in different formats. For example, this field may contain a real time value such as the system clock time when the fault was detected, or alternatively a counter or timer that indicates the amount of time that has elapsed since the fault was detected. The example entry 302 may also include a pointer field (e.g., a pointer to detailed error information). The pointer may point to an entry in a detailed fault information table (e.g., as shown in Fig. 4). Each entry in the detailed fault information table may be a data structure that indicates the nature of the fault or error message that was detected. Such a data structure may include the severity of the fault, how the fault was detected, and the particular nature of the fault that was detected, e.g., a successfully corrected one bit error in a particular memory location. If the device ID field only indicates a high-level component, such as a board, the detailed error information may provide information on fault location. For example, if the device ID field indicates a particular disk drive, the data structure referenced by the pointer may include track and sector information.
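Purely by way of illustration — the names below are assumptions, not taken from the figures — the entry layout just described might be modeled along the following lines in Python:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DetailedErrorInfo:
    """Sketch of the record a device error table entry may point to."""
    severity: str          # e.g., "corrected", "severe"
    detection_method: str  # how the fault was observed
    description: str       # e.g., "corrected one-bit error in page 0x3F"

@dataclass
class DeviceErrorEntry:
    """Hypothetical layout of one device error table entry (cf. entry 302)."""
    device_id: str                              # component, memory page, or disk identifier
    error_count: int = 0                        # faults seen in the relevant time window
    timestamp: float = 0.0                      # when the fault was detected (may be approximate)
    detail: Optional[DetailedErrorInfo] = None  # pointer to detailed error information

# One possible realization of the device error table 300: a dict keyed by device.
device_error_table = {}
device_error_table["dimm0"] = DeviceErrorEntry(
    "dimm0", error_count=1, timestamp=1_700_000_000.0,
    detail=DetailedErrorInfo("corrected", "ecc_scrub",
                             "corrected one-bit error in page 0x3F"))
```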
It will be appreciated that the fields in the device error table entry 302 need not be single variables or memory locations; e.g., the device error table fields may include links or pointers to more complicated data structures, or may directly include more complicated data structures. An example embodiment of the present invention for a detailed fault information table 400 stored in the chipset of each component blade is shown in Fig. 4. Each main system board may gather fault data, parse this fault data and take action if a threshold fault level is crossed. In addition, each main system board may send summary fault information and alert detail to the chassis management module's diagnostic management system, which stores this information in the detailed fault information table. The chassis or shelf management system may send information to a centralized fault management system. Diagnostic access procedures may be provided that access the detailed fault information table, analyze information contained therein and take appropriate action. These actions may include sending alerts to the operating system and launching appropriate diagnostics, which may be stored in firmware. These diagnostics may further analyze the fault data and may possibly correct the problem. Data on individual system components, such as memory, hard disk drives, and each blade or circuit pack in the system, may be gathered from chipsets located in the respective components that collect and generate fault information. These actions may also include initiation of graceful failover of the application out of the potentially faulty segment (memory, disk, network framers, compute processor or other elements) before running corrective action or analyzing diagnostics. It will be appreciated that various detailed fault information tables in the system may be stored in different ways and may include different information depending on the particular implementation. For example, detailed fault information may be stored in the diagnosis processors, or in separate locations accessible to the diagnosis processors. Detailed fault information may be stored originally in fault registers in the chipsets associated with various types of components. These registers may serve as the most detailed source of fault information. Information from these detailed tables may be parsed by satellite diagnosis processors, aged, filtered, and stored in intermediate fault information tables or master device fault information tables associated with satellite diagnosis processors. These master device fault information tables may be stored in the satellite diagnosis processors, or some other location accessible to these processors. Processed fault information from the satellite diagnosis processors and master device tables may then be forwarded to a system or chassis level diagnosis processor, where it may be stored in a master fault information table for the entire chassis, shelf, or system. More detailed examples of the device error table and detailed fault information table are described below. Figure 4 illustrates an example entry 402 in an example detailed fault information table, according to an example embodiment of the present invention.
It will be appreciated that the table may be implemented using different data structures, e.g., linked lists, objects, arrays, etc. The example detailed fault information table entry 402 may include a device ID field. The device ID field is typically similar to the device ID field of the device error table entry of Fig. 3. The example detailed fault information table entry 402 may include an error type field to indicate the type of error that occurred. For example, if the device is a memory device, the error type field may indicate whether the error was a write error, a read error, etc. The example detailed fault information table entry may include an accessing device field identifying the device that was accessing the component where the error occurred. Also, the detailed fault information table may include a physical fault address field identifying the address that was being accessed at the time of the error. Though not shown in Fig. 4, other fields may be included in the detailed fault information table, such as fields that record system environment measurements when the fault was detected, for example temperature, voltage, and fan operating state. Also, the fault information table entry may include a flags field, which may contain information on error or alert flags that have been triggered. As stated above, each device has an associated device error table and a detailed fault information table. The type of data stored in these tables may be device-specific. For a hard disk drive, the detailed fault information table entry may include an error type field that identifies the type of an error detected in the device based on the type of access, e.g., read, write, or read/write, that was being made to the device when the error was detected. The physical fault address field may include the address on the hard disk drive that was being accessed when the error was detected. For a memory device, the detailed fault information table entry may include a field identifying a memory page address, which may indicate an address at which a memory fault was corrected. The table entry may include an error-type field that describes the type of error or fault detected, e.g., a single bit parity error, a protection violation, a double bit error, etc. The accessing device field may include information about the accessing program when the fault was detected, e.g., a process ID, a DMA device or another bus mastering device, or another identifier. For a network/LAN device, the detailed fault information table entry may include a field identifying a particular I/O port where the error occurred, and a field indicating an address at which a fault was detected, e.g., a particular port or line. Also, a field may be provided that describes the type of error or fault detected, e.g., no carrier, a fault in the I/O processor, errors from various network protocols, a link error, etc. It will be appreciated that the detailed fault information tables may take other forms, or may include information from multiple different types of components.
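Purely as an illustrative assumption (the field names and example values below are invented, not drawn from the embodiments), a detailed fault information table entry with its device-specific fields might be modeled as:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DetailedFaultEntry:
    """Hypothetical layout of a detailed fault information table entry."""
    device_id: str                                # as in the device error table
    error_type: str                               # e.g., "read", "single_bit_parity", "link_error"
    accessing_device: Optional[str] = None        # process ID, DMA/bus master, etc.
    physical_fault_address: Optional[int] = None  # memory page, disk track/sector, or I/O port
    temperature_c: Optional[float] = None         # optional environment snapshot
    flags: int = 0                                # error/alert flags raised at detection

# Assumed device-specific examples:
memory_fault = DetailedFaultEntry("dimm0", "single_bit_parity",
                                  accessing_device="pid:4127",
                                  physical_fault_address=0x3F000)
disk_fault = DetailedFaultEntry("disk2", "write", physical_fault_address=0xA40C10)
```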
Figure 5 illustrates an example procedure for fault diagnosis and prediction for a memory component, according to an example embodiment of the present invention. The example procedure is described in terms of memory fault analysis, but other component types may be controlled using similar procedures. The example procedure may be executed by the diagnosis processor, the satellite diagnosis processor, both processors in combination, or by some other system element. It will be appreciated that the example procedure may be carried out by executing a series of instructions stored on a computer-readable medium. The series of instructions may be stored on CD-ROM, disk, tape, as microcode or in firmware, or any other computer-readable medium. An iteration of the example procedure begins with 1002. The example procedure may be executed periodically, e.g., at regular time intervals, or may be event triggered, e.g., run when a fault-related interrupt occurs during a memory read or write, or when a parity error is detected. It will be appreciated that the frequency of execution may be varied as a function of the state of the system, e.g., reduced with increasing work load, or increased when more faults have been detected. It will be appreciated that the example procedure may also be set up so as to be triggered when a particular type of self-correcting fault reaches a threshold. In 1003, fault data from a detailed fault information table may be read, e.g., by a satellite diagnosis processor from a memory or CPU chipset fault register. It will be appreciated that, depending on the particular hardware implementation employed, fault data may also be read directly by the diagnosis processor from a faulty component. Fault data may also be gathered indirectly, rather than gathered directly from the component, e.g., by logging operating system error messages or interrupts. It will be appreciated that correlating different types of error messages by time, or by possible source, may be advantageous. In 1004, the satellite diagnosis processor may analyze a master policy (e.g., one set by a network operations center (NOC)) to determine whether the error warrants a failover action to take place. The fault information may be fault data preprocessed by a satellite diagnosis processor, or may be recorded directly, e.g., in a log file form. The fault information may include a memory address or page address where the fault occurred, a time, and a fault type. The fault type information may include the severity of the fault, how it was detected, and the particular nature of the fault that was detected, e.g., a successfully corrected parity error in a particular memory location. In 1008, it is determined whether a failover for the device causing the error should occur. For example, the number and nature of the faults may be examined to predict the likelihood of component failure. If a component has had more than a predetermined number of faults in total or more than a predetermined number of faults in a given time interval (as may have been read from the master policy in 1004), then corrective action may need to be taken, and the example procedure continues to 1012. Otherwise, the procedure continues with 1010. It will be appreciated that different thresholds may be used for different fault types, e.g., corrected parity errors may have a relatively high threshold, while total hard failures of an entire component may be acted upon as soon as they are detected. The thresholds may vary depending on the state of the system, workload, etc.; a sketch of such a threshold test appears below. In 1010, no failover takes place, and the iteration of the example procedure may end. The example procedure may be iterative, periodically checking, in which case 1010 only represents the end of an iteration, not the entire procedure.
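A minimal sketch of the threshold test of block 1008 — counting recent faults against assumed per-fault-type limits read from a master policy — might look like the following; the policy values and names are invented for illustration:

```python
import time

# Assumed per-fault-type policy: (max faults tolerated, window in seconds).
# Corrected parity errors tolerate a relatively high count; hard failures of
# an entire component are acted upon as soon as they are detected.
MASTER_POLICY = {
    "corrected_parity": (50, 3600.0),
    "hard_failure": (0, float("inf")),
}

def should_failover(fault_events, policy=MASTER_POLICY, now=None):
    """Block 1008: return True if any fault type exceeds its policy threshold."""
    now = time.time() if now is None else now
    totals = {}
    for event in fault_events:  # counted events, e.g., from the earlier sketch
        limit = policy.get(event["fault_type"])
        if limit is None:
            continue  # no policy for this fault type; not considered here
        max_count, window_s = limit
        if now - event["first_seen"] <= window_s:
            totals[event["fault_type"]] = totals.get(event["fault_type"], 0) + event["count"]
            if totals[event["fault_type"]] > max_count:
                return True  # proceed to the pro-active failover of block 1012
    return False  # block 1010: no failover this iteration
```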
In 1012, a pro-active failover may be called, and when that operation is complete (decision block 1013), control passes to block 1014 to initiate a memory diagnostic procedure. For example, if the memory has a Built-in Self Test (BIST) capability, an execution of the BIST function may be triggered. Other testing or diagnostic procedures may also be employed, e.g., a memory audit or scrub by the operating system may be triggered, or time redundancy techniques might be employed, such as executing a recovery block or re-executing a process from a checkpoint. In 1015, it is determined whether a memory problem has been detected. If a memory problem has not been detected, control passes to block 1016 and a warning error flag and data are sent to the control module (CMM). A warning message to a middleware application is also triggered. Control passes to decision block 1017 to determine whether the device should be reloaded and restarted (as in block 1018). If it is not, then control passes to block 1019, where the board is replaced. If a memory problem has been detected in decision block 1015, then control passes to block 1020. In this case, the example procedure has identified a problem or potential failure in the memory, or at least a higher likelihood of a failure occurring. The example procedure may initiate various types of corrective action. For example, a critical error alert may be sent to the CMM and NOC indicating the nature of the problem. In 1021, the example procedure may end after a repair process has been initiated. Figure 6 illustrates an example procedure for fault diagnosis and prediction for a disk component, according to an example embodiment of the present invention. The example procedure is illustrated in terms of disk fault analysis, but it will be appreciated that other component types could have similar procedures. The example procedure may be executed by the diagnosis processor, the satellite diagnosis processor, both processors in combination, or by some other system element. It will be appreciated that the example procedure may be carried out by executing a series of instructions stored on a computer-readable medium. The series of instructions may be stored on CD-ROM, disk, tape, as microcode or in firmware, or any other computer-readable medium. An iteration of the example procedure begins with 1102. The example procedure may be executed periodically, e.g., at regular time intervals, or may be event triggered, e.g., when a fault-related interrupt occurs during a disk read or write. It will be appreciated that the frequency of execution may be varied as a function of the state of the system, e.g., reduced with increasing work load, or increased when more faults have been detected. In 1103, fault data from a disk may be read, e.g., by a satellite diagnosis processor located on the same blade as the disk controller. The data may be read from the disk fault register or other form of fault detail table. It will be appreciated that, depending on the particular hardware implementation employed, fault data may also be read directly by the diagnosis processor from a faulty component, or obtained indirectly, e.g., from the operating system. In 1104, the satellite diagnosis processor will update the master device fault table with information about the fault.
The fault information may be fault data preprocessed by a satellite diagnosis processor, or may be raw fault data recorded directly, e.g., in a log file form. In 1106, the satellite processor may age or filter the data in the master device table, e.g., by deleting entries of greater than a certain age, collapsing related faults, or other forms of processing. Also in 1106, the procedure reads the master policy (if any) that was set up by the network operations center (NOC) governing thresholds and failover. In 1108, the number of faults and nature of the faults may be examined to predict the likelihood of component failure. If a component has had more than a predetermined number of faults in total or more than a predetermined number of faults in a given time interval or window, then corrective action may need to be taken, and the example procedure may continue with 1112. Otherwise, the procedure may continue with 1110. The particular corrective action may depend on the number and type of faults recorded in the fault information table. For example, localized faults may only result in marking certain disk sectors as bad. Greater numbers of faults across a wider area may indicate a problem with the entire disk drive. In 1110, no disk drive failure is predicted, and the iteration of the example procedure may end. The example procedure may be iterative, periodically checking, in which case 1110 only represents the end of an iteration, not the entire procedure. In 1112, a disk drive diagnostic procedure may be initiated. For example, if the disk drive controller has a BIST capability, an execution of the BIST function may be triggered. Programs may also be initiated to scan the disk, or to compare a disk's contents with a second disk that serves as a mirror. In 1114, the results of diagnostic procedures may be evaluated. If a problem has been detected, or failure seems likely, the example procedure may continue with 1118. Otherwise, the example procedure may continue with 1115. In 1115, the diagnostic procedure has not detected that a failure is likely. A warning flag and error data may be sent to the master fault information table on the control blade. Warnings or error flags may also be sent to other locations, e.g., to the operating system or other management systems monitoring the system. The iteration of the example procedure may end. The example procedure may be iterative, periodically checking, in which case 1115 only represents the end of an iteration, not the entire procedure. It will be appreciated that the fault information may still be updated, so that if a particular disk drive continues to cause problems, it may be flagged for service or replacement, even if no failure is predicted or detected. When 1118 has been reached, the example procedure has discovered a potential failure in the disk drive, or at least determined an increased likelihood of a failure occurring. The master fault information table may be updated with fault information that contains a "severe" error flag. The example procedure may include various types of corrective action. For example, in 1119, an alert may be sent to the operating system, indicating the nature of the problem. In 1120, a process for reallocating files away from a faulty drive or disk block may be initiated. A faulty block may be marked defective to prevent its use. The faulty disk drive (or faulty block) may be disabled. A faulty disk drive may be replaced by a spare, if the system has a dynamic reconfiguration capability or is mirrored.
Alternatively, the operating system may be configured to prevent access to the particular disk drive or disk page. Files may be copied from the drive to other drives. Other forms of corrective action may also be taken. For example, a user process that is potentially corrupted by data from a bad disk block may be terminated or rolled back to a checkpoint. In 1122, the example procedure may end after the corrective action has been taken. The example procedure may be iterative, periodically checking, in which case 1122 only represents the end of an iteration, not the entire procedure. It will be appreciated that the fault information may still be updated, so that if a particular disk drive continues to cause problems, it may be flagged and replaced, even if failures have only been detected in certain segments or tracks.
ALTERNATIVE EXAMPLE EMBODIMENT
In an alternative example embodiment according to the present invention, the device error table entry 702 (Fig. 7) may include several fields. The device error table entry 702 may include a device identification field. The device error table entry 702 may also include a leaky bucket fault counter. The leaky bucket fault counter may be configured to track whether too many faults have occurred in a predetermined time interval or window, resulting in a need to take corrective action for the component identified in the device identification field. For example, each time a fault is detected the counter may be incremented. Periodically, the counter may be reduced, e.g., to age the fault information. In one embodiment, if the frequency of faults or errors decreases, the time window for the error count can be increased. Also, if the frequency of faults or errors increases, the time window for the error count can be decreased. If the counter exceeds a threshold, it may be concluded that a problem has occurred and corrective action needs to be taken. The threshold and rate of reduction may be tuned to achieve desired fault detection performance properties. The example device error table entry 702 may also include a date-time stamp field and a pointer to detailed error information. The pointer field may point to data about the type of fault last detected, or other information that may be collected which may be useful in fault diagnosis and corrective action.
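A minimal Python sketch of the leaky bucket fault counter described above, with invented names and a fixed leak rate (the adaptive widening and narrowing of the time window mentioned above is omitted for brevity):

```python
class LeakyBucketFaultCounter:
    """Hypothetical leaky bucket fault counter: each detected fault increments
    the count, periodic aging decrements it, and corrective action is indicated
    once a tunable threshold is exceeded."""

    def __init__(self, threshold=10, leak_per_period=1):
        self.count = 0
        self.threshold = threshold              # tuned per component/policy
        self.leak_per_period = leak_per_period  # aging (leak) rate

    def record_fault(self):
        self.count += 1  # a new fault was detected for this component

    def exceeded(self):
        return self.count > self.threshold  # corrective action warranted?

    def age(self):
        # Called periodically; leaking the bucket ages out old fault history.
        self.count = max(0, self.count - self.leak_per_period)
```

An instance of such a counter would presumably live alongside the other entry 702 fields for its component.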
Figure 8 illustrates a second example procedure for fault diagnosis and prediction for a memory component, according to an alternative example embodiment of the present invention. The example procedure is described in terms of memory fault analysis, but other component types could have similar procedures. The example procedure may be executed by the diagnosis processor, the satellite diagnosis processor, both processors in combination, or by some other system element. It will be appreciated that the example procedure may be carried out by executing a series of instructions stored on a computer-readable medium. The series of instructions may be stored on CD-ROM, disk, tape, as microcode or in firmware, or any other computer-readable medium. An iteration of the example procedure begins with 1302. The example procedure may be executed periodically, e.g., at regular time intervals, or may be event triggered, e.g., run when a fault-related interrupt occurs during a memory read or write, or whenever a parity error is detected. It will be appreciated that the frequency of execution may be varied as a function of the state of the system, e.g., reduced with increasing work load, or increased when more faults have been detected. In 1304, fault data from a component may be read, e.g., by a satellite diagnosis processor. It will be appreciated that, depending on the particular hardware implementation employed, fault data may also be read directly by the diagnosis processor. The fault data may be read in any conventional fashion, e.g., by reading the component fault register. In 1306, the fault data may be checked to determine if a new fault has occurred. If a new fault has been detected, the example procedure may continue with 1308. Note the new fault may actually have been successfully corrected and masked by the component, so that other than the information contained in the fault register, the fault may be invisible to the system as a whole. If no new fault has occurred, the example procedure may continue with 1318. In 1308, a leaky bucket fault counter for the component may be incremented. Other fields in the fault information table for the component may also be updated. In 1310, the leaky bucket fault counter for the component is tested to determine if the counter has exceeded a predetermined threshold. If the counter has exceeded the predetermined threshold, the example procedure may take corrective action, continuing with 1312. If the threshold has not been exceeded, the example procedure may continue with 1318. In 1312, corrective action may be initiated. For example, an alert may be sent to the operating system, and the corrective action may continue, e.g., by disabling the faulty memory location. The example procedure ends with 1316. The procedure may continue for other components, or when other faults are detected in the system. In 1318, the predetermined threshold of the leaky bucket fault counter has not been exceeded. The system may wait, either for a predetermined interval, or, if the procedure is event driven, until another fault occurs. While waiting, in 1320, the fault data for the component may be aged, for example by periodically decrementing the fault counter. The procedure may continue with 1304 after another fault occurs, or after a predetermined waiting interval has passed.
MODIFICATIONS
In the preceding specification, the present invention has been described with reference to specific example embodiments thereof. It will, however, be evident that various modifications and changes may be made thereunto without departing from the broader spirit and scope of the present invention as set forth in the claims that follow. The specification and drawings are accordingly to be regarded in an illustrative rather than restrictive sense.
Memory apparatuses that may be used for receiving commands and ordering memory responses are provided. One such memory apparatus includes response logic that is coupled to a plurality of memory units by a plurality of channels and may be configured to receive a plurality of memory responses from the plurality of memory units. Ordering logic may be coupled to the response logic and be configured to cause the plurality of memory responses in the response logic to be provided in an order based, at least in part, on a system protocol. For example, the ordering logic may enforce bus protocol rules on the plurality of memory responses stored in the response logic to ensure that responses are provided from the memory apparatus in a correct order.
CLAIMS What is claimed is: 1. A memory apparatus, comprising: response logic configured to receive a plurality of memory responses; and ordering logic coupled to the response logic and configured to cause the plurality of memory responses to be ordered based, at least in part, on a protocol. 2. The apparatus of claim 1, wherein individual ones of the plurality of memory responses comprise a read identification bit and the ordering logic is further configured to cause the plurality of responses to be ordered based, at least in part, on the respective identification bits. 3. The apparatus of claim 1, further comprising: a queue coupled to the ordering logic, the ordering logic further configured to cause a plurality of commands in the queue to be ordered based, at least in part, on detecting at least one of a page hit and a hazard conflict. 4. The apparatus of claim 3, further comprising: a bank state machine coupled to the queue and configured to receive the plurality of commands from the queue, the bank state machine further configured to provide at least one of the plurality of commands to at least one of a plurality of memory units based, at least in part, on the at least one of a plurality of memory units having an available channel. 5. The apparatus of claim 1, further comprising: a plurality of buffers coupled to the response logic and configured to provide the plurality of responses to the response logic. 6. The apparatus of claim 1, further comprising at least one of: a read buffer coupled to the response logic and configured to provide read memory responses; and a write buffer coupled to the response logic and configured to provide write memory responses. 7. The apparatus of claim 1, wherein the response logic is further configured to store read and write memory responses. 8. The apparatus of claim 7, wherein the ordering logic is further configured to cause read and write memory responses received by the response logic to be provided in an order independent of an order in which the responses were received by the response logic. 9. A computing system, comprising: a plurality of memory units; a system bus slave; and a memory controller coupled to the memory units and the system bus slave, the memory controller comprising: response logic configured to receive a plurality of responses in an order from the plurality of memory units; and ordering logic coupled to the response logic and configured to cause the plurality of responses to be provided to the system bus slave in an order that is independent of the order in which the plurality of responses was received by the response logic. 10. The computing system of claim 9, wherein the memory controller is coupled to the plurality of memory units by a bus comprising a plurality of logical channels. 11. The computing system of claim 10, wherein each of the plurality of memory units corresponds to a respective logical channel. 12. The computing system of claim 9, wherein the system bus slave is coupled to a system bus master and is configured to receive commands from the system bus master and provide responses to the system bus master. 13. The computing system of claim 9, wherein the plurality of responses corresponds to a respective plurality of commands. 14. The computing system of claim 9, wherein the plurality of responses comprises read responses and write responses, the read responses comprising read data and the write responses comprising write confirmation data. 15.
The computing system of claim 9, wherein the plurality of responses are provided to the system bus master responsive, at least in part, to reordering the plurality of responses received by the response logic. 16. A computing system, comprising: a processor; and a memory apparatus coupled to the processor, the memory apparatus configured to generate a plurality of memory responses, the memory apparatus further configured to provide the plurality of memory responses to the processor in an order that is independent of an order in which the memory apparatus generated the memory responses. 17. The computing system of claim 16, wherein the memory apparatus comprises: a plurality of memory units configured to generate the plurality of memory responses; and a memory controller coupled to the memory units, the memory controller configured to receive and order the plurality of memory responses based, at least in part, on ordering logic. 18. The computing system of claim 16, wherein the processor comprises a system bus master, the system bus master configured to provide memory commands to a bus and receive the plurality of memory responses from the bus. 19. The computing system of claim 16, further comprising a system controller, the system controller coupled to the processor and the memory apparatus, the system controller configured to receive commands from the processor and provide the commands to the memory controller. 20. The computing system of claim 19, wherein the memory controller comprises at least one of a system bus slave and a system bus master. 21. A method of ordering memory responses, comprising: generating a plurality of memory responses corresponding to a plurality of commands provided to a memory unit; ordering the plurality of responses independent of the order the commands were provided to the memory units; and providing the plurality of responses to a system bus based, at least in part, on the ordering. 22. The method of claim 21, wherein at least one of the plurality of commands creates at least one of a hazard conflict and a page hit. 23. The method of claim 21, wherein at least one of the plurality of commands is a buffer command. 24. The method of claim 21, wherein ordering the plurality of responses comprises ordering the plurality of responses based, at least in part, on a system protocol. 25. The method of claim 21, wherein at least one of the plurality of commands is a read command. 26. The method of claim 21, wherein providing the plurality of responses to a system bus comprises providing the plurality of responses to a system bus slave. 27. A method of ordering memory responses, comprising: receiving a first command and a second command; generating a first response corresponding to the first command and a second response corresponding to the second command; enforcing ordering logic on the first response and the second response; and providing a first ordered response corresponding to the first response and a second ordered response corresponding to the second response to an output. 28. The method of claim 27, wherein said providing a first ordered response corresponding to the first response and a second ordered response corresponding to the second response to an output comprises: providing the second ordered response to a system bus; and after said providing the second ordered response, providing the first ordered response to the system bus. 29.
The method of claim 27, further comprising: after said receiving a first command and a second command, providing at least one of the first command and the second command to a memory unit. 30. The method of claim 29, wherein said providing at least one of the first command and the second command comprises: opening a row of the memory unit; accessing a first set of data in the memory unit corresponding to the first command; accessing a second set of data in the memory unit corresponding to the second command; and closing the row of the memory unit. |
MEMORY APPARATUSES, COMPUTER SYSTEMS AND METHODS FOR ORDERING MEMORY RESPONSES
TECHNICAL FIELD
[001] This invention relates to memory apparatuses, and more particularly, in one embodiment, to memory controllers allowing for concurrent use of multiple memory channels.
BACKGROUND OF THE INVENTION
[002] As input/output speeds of memory devices have increased in recent years, newer implementations have begun to approach performance limitations, thereby exhausting the utility of conventional architectures. [003] Consequently, to compensate, some approaches have turned toward adopting multichannel memory architectures, wherein a memory unit may be accessed by simultaneous commands via separate, independent logical channels. This allows commands to pass through command queues at a more efficient rate, as commands may be provided as soon as a channel becomes available. In short, the amount of time a command is held in a queue is reduced. [004] However, this approach is not without its drawbacks. Traditionally, the ordering of commands by a memory controller has been enforced at the transaction queue level. That is, ordering logic has been used to enforce particular ordering rules on the command queue for providing commands to the memory units such that the order in which responses are returned from memory units is in accordance with a desired response order. Therefore, under this scheme, the performance of a multi-channel memory access scheme is hindered by the fact that some commands cannot be provided (e.g., issued) even when a channel is available as, in some cases, a response corresponding to a prior command must be received from a memory unit before the waiting command can be provided. While this implementation ensures that responses are returned in the correct order, available channels remain unused while the memory controller waits to receive the response from the memory unit. [005] Accordingly, there is a need for an improved memory apparatus and method that utilizes multi-channel memory accesses and provides commands to memory units over available channels irrespective of when responses are provided.
BRIEF DESCRIPTION OF THE DRAWINGS
[006] Figure 1 is a block diagram of a memory apparatus according to an embodiment of the invention. [007] Figure 2 is a block diagram of a memory apparatus including a memory controller that may be used in Figure 1 according to an embodiment of the invention. [008] Figure 3 is a block diagram of an ordering logic unit according to an embodiment of the invention that may be used in the memory controller of Figure 2. [009] Figure 4 is a timing diagram that illustrates various signals during operation of the memory apparatus according to an embodiment of the invention. [010] Figure 5 is a block diagram of a memory apparatus according to an alternative embodiment of the present invention. [011] Figure 6 is a block diagram of a computer system according to an embodiment of the invention.
DETAILED DESCRIPTION
[012] Certain details are set forth below to provide a sufficient understanding of embodiments of the invention. However, it will be clear to one skilled in the art that embodiments of the invention may be practiced without these particular details. Moreover, the particular embodiments of the present invention described herein are provided by way of example and should not be used to limit the scope of the invention to these particular embodiments.
In other instances, well-known circuits, control signals, timing protocols, and software operations have not been shown in detail in order to avoid unnecessarily obscuring the invention. [013] Figure 1 illustrates a memory apparatus 100 according to one embodiment of the present invention. A memory apparatus could be a single memory device, for example, or a combination of separate devices. It could be in the form of a single semiconductor die or a combination of dies, and could be in a single package or in a combination of packages. The memory apparatus 100 may include a memory controller 102 coupled to data bus 110, address bus 112, and command bus 114 to receive data, address, and command signals, respectively. The memory controller may be further coupled to a plurality of memory units 104 via a communication bus 120. The memory controller is configured to perform various memory functions, such as providing memory commands to the memory units 104, in response to which the memory units 104 generate corresponding memory responses. The memory controller is further configured to provide memory responses based, at least in part, on an order dictated by a system bus protocol. In at least one embodiment, memory responses may comprise read data corresponding to a read command and/or write confirmation data corresponding to a write command. Memory responses may further comprise other types of data and are not limited by the description herein. [014] Memory units 104 may comprise any number of memory units and further may comprise any number of logical memory partitions. Additionally, communication bus 120 may comprise any number of bit lines and any number of logical channels. For example, in one embodiment, each memory unit may correspond to a same number of logical channels, such as 8 channels per unit. Moreover, in at least one embodiment, the number of logical memory partitions in memory units 104 and/or the number of logical channels of the communication bus 120 may be changed, for example, by increasing or decreasing a number of independent chip select signals used to control memory units 104. It will be appreciated by those having ordinary skill in the art that other implementations, such as a separate control logic unit configured to increase or decrease the number of channels associated with each memory unit 104, may also be used without departing from the scope of the present invention. [015] In operation, memory controller 102 may receive commands over command bus 114 and provide the received commands to the memory units 104. For example, commands may be provided over command bus 114 to the memory controller 102 by a system bus slave (not shown). Commands may be provided by other devices as well. Each command received by memory controller 102 may be queued and subsequently checked by ordering logic for read/write hazard conflicts. A hazard conflict may refer to an operation error resulting from a particular order (e.g., sequence) of commands, such as a page conflict between read and write commands provided to the same row(s) of memory units 104. The ordering logic may be included in memory controller 102, and in an alternative embodiment, the ordering logic may be separate from memory controller 102. [016] In addition to hazard checks, ordering logic in memory controller 102 may determine whether a queued command follows a barrier command.
In response to receipt of a barrier command, the ordering logic may delay providing subsequently received commands to memory units 104 until responses corresponding to commands received prior to receipt of the barrier command have been provided from memory units 104 to the system bus slave (or other devices providing commands to the memory controller 102) in a required order, as controlled by ordering logic in the memory controller 102. Finally, the ordering logic may also determine whether queued commands may produce a page hit. That is, memory controller 102 may determine whether a same row of memory units 104 may be accessed by two or more queued commands without closing and reopening the row between providing each command to the memory units 104. If a page hit is detected, the ordering logic may order (e.g., reorder) the commands in the queue, for example, advance or delay one or more commands in the queue, to reduce the number of times a particular row must be opened and closed. [017] As previously described, memory controller 102 may be configured to order (e.g., reorder) memory responses based, at least in part, on a system bus protocol, and memory controller 102 may be configured to provide commands to memory units 104 as they are received over command bus 114, provided a received command does not create a hazard conflict or page hit, or follow a barrier command. As a result, there is a possibility that memory responses provided to memory controller 102 from memory units 104 may not match the order in which corresponding commands were provided to the memory units 104, nor match the order required by the system protocol. In order to provide memory responses to a device (e.g., system bus slave) in accordance with a required order, regardless of the order in which the responses are provided from the memory units 104 to the memory controller 102, memory controller 102 may control the order of the responses provided by the memory controller such that the responses are returned in the required order. [018] As will be explained in more detail below, with the exception of some commands (e.g., hazards, page hits, and barrier commands), a command may be provided to memory units 104 immediately after it has been queued, because the ordering logic allows commands to be provided in virtually any sequence to memory units 104. Briefly, a command may be provided to a memory unit 104 as soon as a memory channel corresponding to the memory unit 104 is available. In at least one embodiment, because each unit typically corresponds to multiple channels, multiple memory commands may be provided concurrently to the same memory unit 104. [019] Commands received by memory controller 102 may include master identification bits (master IDs) indicating a system bus master requesting issuance of the command and transaction identification bits (transaction IDs) indicating a transaction stream within the requesting master. If, through a system bus slave 202 (Figure 2), a system bus master provides multiple commands to memory controller 102, with the commands corresponding to the same transaction stream, the requesting system bus master may not be able to differentiate responses without relying on the order in which the responses are provided. That is, the order of the responses should correspond to the required order dictated by the master for correct operation to occur. Thus, if after providing commands to memory units 104, the corresponding memory responses are not provided from the memory units 104 to memory controller 102 in the required order, memory controller 102 should reorder the responses when providing them to the system bus slave 202.
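As a loose, greatly simplified consolidation of the queueing behavior described in the preceding paragraphs — a command dispatches as soon as a channel to its memory unit is free, unless a pending barrier, a hazard conflict, or page-hit grouping dictates otherwise — consider the following Python sketch. The function and field names are assumptions made for illustration and do not appear in the embodiments themselves.

```python
def next_dispatchable(queue, channel_free, barrier_pending, open_rows):
    """Pick the next queued command to issue, under assumed ordering rules.

    queue           -- commands in arrival order: {"unit", "row", "kind"}
    channel_free    -- callable: unit -> True if a channel to it is available
    barrier_pending -- True while responses preceding a barrier are outstanding
    open_rows       -- mapping: unit -> currently open row (for page hits)
    """
    if barrier_pending:
        return None  # hold later commands until pre-barrier responses finish

    # Prefer a page hit: a command to an already-open row may be advanced so
    # the row need not be closed and reopened between accesses.
    for cmd in queue:
        if open_rows.get(cmd["unit"]) == cmd["row"] and channel_free(cmd["unit"]):
            return cmd

    # Otherwise issue the oldest command that has a free channel and no
    # read/write hazard against an earlier queued access to the same row.
    for i, cmd in enumerate(queue):
        hazard = any(e["unit"] == cmd["unit"] and e["row"] == cmd["row"]
                     and e["kind"] != cmd["kind"] for e in queue[:i])
        if not hazard and channel_free(cmd["unit"]):
            return cmd
    return None  # nothing can issue right now
```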
[020] Figure 2 illustrates a memory apparatus 200 according to an embodiment of the invention. Memory apparatus 200 may include a memory controller 201 that, in at least one embodiment, may be used in the memory apparatus 100 of Figure 1. Memory controller 201 may include a queue 204 coupled to a bank state machine 206 and ordering logic 210. The queue 204 is configured to receive commands from a system bus slave 202 over a command bus 214. The bank state machine 206 may be coupled to the memory units 208a-n by a communication bus 220. Additionally, memory controller 201 may further comprise response logic 212 and read data buffers 218a-n. Read data buffers 218a-n may be coupled to memory units 208a-n, respectively, by the communication bus 225, and each read data buffer 218a-n may be further coupled to response logic 212 by a buffer bus 230 as well. Finally, response logic 212 may be coupled to the system bus slave 202 by a response bus 235. In at least one embodiment, response bus 235 may be physically separate from other busses, or may be multiplexed with other busses, such as command bus 214. [021] In operation, commands may be provided from the system bus slave 202 to queue 204 of memory controller 201 over the command bus 214. There, the ordering logic 210 may check the received commands for hazard conflicts, barrier commands, and page hits, as previously described. In at least one embodiment, such as that illustrated in Figure 2, the queue 204 may be used to store received commands for a plurality of memory units 208a-n. In another embodiment, each memory unit 208a-n may be coupled to a respective queue 204a-n. [022] Queued commands may be provided to the bank state machine 206, wherein each command may be provided to memory units 208a-n once a channel becomes available for the memory unit 208a-n to be accessed. In one embodiment, the bank state machine 206 may contain control logic to determine whether a channel is available, or in another embodiment, the bank state machine 206 may receive a signal from external control logic indicating that a particular channel is available for a command. Moreover, in one embodiment, multiple bank state machines 206 may be used. For example, the memory controller 201 may include a bank state machine 206 corresponding to each channel of each memory bank 208a-n. Memory controller 201 may alternatively use any number of bank state machines 206 per channel. [023] Once a command has been provided to a memory unit 208, the memory unit 208 may provide a response to the corresponding read data buffer 218. While in one embodiment, each memory unit 208a-n may correspond to a read buffer 218a-n, in another embodiment, memory units 208a-n may be coupled with, and provide responses to, a single read data buffer 218 (not shown). It will be appreciated by those having ordinary skill in the art that variations in the implementations of the read data buffers 218a-n may be made without departing from the scope of the present invention, and that embodiments are not limited by the specific embodiments set forth herein. [024] Responses may be provided from the read data buffers 218a-n and received by the response logic 212 over the buffer bus 230.
Once received by the response logic 212, the ordering logic 210 may cause the responses to be ordered such that they are placed into the order (e.g., sequence) required by the requesting system bus master, as described above. For example, the ordering logic 210 can be configured to enforce bus protocol rules on responses stored in the response logic 212 to ensure that responses are provided in a correct order first to the system bus slave 202 over the response bus 235, and ultimately to the requesting system bus master. The ordering logic 210 may cause the responses received by the response logic 212 to be provided based, at least in part, on an order that is independent of the order in which the responses were received by the response logic 212. [025] Figure 3 is a block diagram illustrating ordering logic 300 according to an embodiment of the invention. The ordering logic 300 may be used as the ordering logic 210 in memory apparatus 200 of Figure 2. The ordering logic 300 may include ordering control logic 301, a receive queue 310 and a response queue 312, all of which may be coupled to response logic 305. Receive queue 310 may be configured to store master IDs, transaction IDs, and read identification bits (read IDs) and response queue 312 may be configured to store channel select bits and read IDs. Moreover, in at least one embodiment, receive queue 310 may be implemented as a shift buffer. [026] In operation, when a system bus slave 202 provides a command to queue 204 (Figure 2), receive queue 310 may also receive the command and store the master ID and transaction ID corresponding to the command. Moreover, receive queue 310 may generate a unique read ID for the command, allowing the command to be distinguished from commands corresponding to the same master ID and transaction ID. As commands are provided to memory units 208a-n and corresponding responses are provided as described above, each response may be stored in response logic 305. Additionally, the read ID and channel select bits corresponding to each response may be provided to the response queue 312, identifying which command each response corresponds to, as well as which channel provided the response. Because some commands may require use of multiple channels, use of channel select bits allows response queue 312 to ensure that a complete response is provided for each command. In some embodiments, channel select bits may be one-hot encoded or may use binary for channel identification. [027] As described above, as responses are accumulated in response logic 305, ordering control logic 301 may cause response logic 305 to provide responses to the system bus slave 202 based, at least in part, on the order required by the requesting master. For example, in at least one embodiment, if responses stored in response logic 305 comprise the same master and transaction IDs, the responses may only be distinguished based on the read ID generated on receipt of the command by the memory controller 201. Responses differentiated in this manner should be provided to the system bus slave 202 in the required order, as the requesting master will not otherwise be able to distinguish the responses from one another.
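One hypothetical rendering of this mechanism in Python follows: each command is tagged with a generated read ID on receipt, completed responses accumulate, and responses are released only in receive-queue order. The class and method names are invented for the sketch; a fuller model would additionally track per-channel completion via the channel select bits.

```python
from collections import deque

class ResponseOrderer:
    """Sketch of the ordering logic: responses are released strictly in the
    order their commands were received, regardless of completion order."""

    def __init__(self):
        self.next_read_id = 0
        self.receive_queue = deque()  # read IDs in required (arrival) order
        self.tags = {}                # read_id -> (master ID, transaction ID)
        self.pending = {}             # read_id -> completed response data

    def command_received(self, master_id, transaction_id):
        """Tag an incoming command with a unique read ID (cf. receive queue 310);
        the read ID distinguishes commands sharing master and transaction IDs."""
        read_id = self.next_read_id
        self.next_read_id += 1
        self.receive_queue.append(read_id)
        self.tags[read_id] = (master_id, transaction_id)
        return read_id

    def response_arrived(self, read_id, data):
        """Store a completed response, then drain whatever is now releasable."""
        self.pending[read_id] = data
        released = []
        # Release only while the oldest outstanding command has completed.
        while self.receive_queue and self.receive_queue[0] in self.pending:
            rid = self.receive_queue.popleft()
            released.append((self.tags.pop(rid), self.pending.pop(rid)))
        return released  # handed to the system bus slave in required order
```

In the timing scenario discussed next, a response completing early for a later command would simply wait in `pending` until the earlier command's response arrives, after which both are released in order.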
[028] Figure 4 is a timing diagram illustrating the operation of the memory apparatus 200 of Figure 2 according to an embodiment of the invention. A system bus slave 202 (Figure 2) may provide a command 401 and a command 402 to the memory controller 201 over command bus 214 that may be received at times T0 and T1, respectively. The commands may be provided to memory units 208a-n, and subsequently, at a time T2, a response 403 may be received at response logic 212 over the buffer bus 230 as a response to the command 401. [029] A response 410 may correspond to command 402 and be received by the response logic 212 over buffer bus 230 before, concurrently with, or after the time T2, as illustrated by responses 410 in Figure 4 at times T3-, T3, and T3+, respectively. In one embodiment, such as that shown in Figure 4, response logic 212 may receive the response 410 before response 403 (e.g., at time T3-). In another embodiment, response logic 212 may receive response 410 after response 403 (e.g., at time T3+). In yet another embodiment, responses 410 and 403 may be received approximately simultaneously (e.g., at time T3). [030] As described above, commands may be provided to memory units 208a-n in the order in which they are received by memory controller 201, and responses may be provided to the system bus slave 202 in an order required by a requesting master. As a result, regardless of a time at which a response 410 is received by the response logic 212 relative to response 403, the order in which responses are provided to the system bus slave 202 over response bus 235 may remain the same. Responses 420 and 421, for example, may correspond to responses 403 and 410 re-ordered in the order required by a requesting master and be provided at times T4 and T5, respectively. That is, regardless of the order in which responses 403 and 410 are received by the response logic 212 from memory units 208a-n (e.g., at time T3-, T3, or T3+), responses 420 and 421 may be provided in the order as illustrated in Figure 4. Moreover, as previously described, responses need not be provided in the order in which corresponding commands were received. For example, in another embodiment, if required, responses 420 and 421 may be provided to the system bus slave such that the response 421 is provided before the response 420. [031] Figure 5 illustrates a memory apparatus 500 according to an alternative embodiment of the present invention. The memory apparatus 500 includes elements that have been previously described with respect to the memory apparatus of Figure 2. Those elements have been shown in Figure 5 using the same reference numbers used in Figure 2 and operation of the common elements is as previously described. Consequently, a detailed description of the operation of these elements will not be repeated in the interest of brevity. [032] In contrast to the memory apparatus 200, memory apparatus 500 further comprises write buffers 518a-n that may be coupled to memory units 208a-n and configured to store write responses. Write buffers 518a-n may further be coupled to response logic 212, and in one embodiment, may respectively correspond to each of the memory units 208a-n. In another embodiment, the write buffers 518a-n may correspond to each channel. In yet another embodiment, a single write buffer 518 (not shown) may be coupled to all memory units 208a-n and response logic 212. Those having ordinary skill in the art will appreciate that other implementations, such as a single buffer configured to store both read and write responses, may also be used without departing from the scope of the present invention. [033] In operation, memory units 208a-n may be provided write commands and provide write responses in return. Each write response may be subsequently provided to a write buffer 518, which may in turn provide the responses to response logic 212.
Response logic 212 may provide the write responses to the system bus slave 202 in the order required. In one embodiment, ordering logic 210 may cause the response logic 212 to provide write responses to the system bus slave 202 independently of the order in which read responses are provided. In another embodiment, the ordering logic 210 may cause write responses to be provided based, at least in part, on the order in which read responses are provided. [034] Figure 6 illustrates a computing system 600 according to an embodiment of the invention. Computing system 600 may include a processor 605 configured to perform various computing functions, and a memory apparatus 603. Memory apparatus 603 may be coupled to processor 605 by a bus 606 and further may include a memory controller 601 and memory units 608 that are coupled by a communications bus 620. In at least one embodiment, memory controller 601 may be the memory controller 201 in the embodiment illustrated in Figure 2. In some embodiments, computing system 600 may comprise a desktop computer, laptop, telephone, personal digital assistant (PDA), media player (e.g., an MP3 player), server, appliance, gaming device, networking device (e.g., routers), television, or other device that may be configured to execute at least part of any one of the processes described herein. Computing system 600 may also comprise any combination of these devices. [035] In operation, as described above with reference to Figure 2, a system bus slave (not shown) may receive memory commands from a system bus master (not shown). The memory controller 601 may receive the commands from the system bus slave and provide the commands to the memory units 608, as described above with reference to Figure 2. In some embodiments, the system bus master may be included in the processor 605, or alternatively, may be included in a system controller (not shown) and receive commands from processor 605. Moreover, in at least one embodiment, memory controller 601 may also be included in the system controller. [036] From the foregoing it will be appreciated that, although specific embodiments of the invention have been described herein for purposes of illustration, various modifications may be made without deviating from the spirit and the scope of the invention. For example, although the embodiments of the invention are explained in the context of ordering responses in memory controllers, it will be understood that responses may be ordered once provided from a memory controller to a system bus. Accordingly, the invention is not limited except as by the claims. |
The present invention includes an electronic device workpiece processing apparatus and method of communicating signals within an electronic device workpiece processing apparatus. One embodiment of an electronic device workpiece processing apparatus includes a chuck including a surface, an electrical coupling adjacent the surface, and an electrical interconnect configured to connect with the electrical coupling of the chuck and conduct a signal within the chuck; an intermediate member having a first surface and a second surface and the intermediate member including: an electrical coupling adjacent the first surface and configured to couple with the electrical coupling of the chuck; an electrical coupling adjacent the second surface; and an electrical interconnect configured to connect the electrical coupling adjacent the first surface and the electrical coupling adjacent the second surface; and an electronic device workpiece configured to couple with the second surface of the intermediate member, the electronic device workpiece including a sensor and an electrical coupling configured to provide electrical connection of the sensor with the electrical coupling of the second surface of the intermediate member. |
1. A wafer processing apparatus comprising: a wafer holder adapted to receive a wafer having an electrical coupling, the wafer holder including an electrical coupling configured to electrically couple with the electrical coupling of the wafer and communicate signals between the wafer and the wafer holder of the wafer processing apparatus during fabrication of integrated circuitry of the wafer using the wafer processing apparatus.2. The wafer processing apparatus according to claim 1 further comprising a data gathering device coupled with the electrical coupling of the wafer holder and configured to receive the signals.3. The wafer processing apparatus according to claim 2 further comprising a contact plate configured to communicate the signal intermediate the wafer holder and the data gathering device.4. The wafer processing apparatus according to claim 1 wherein the wafer holder includes a first surface, a second surface, and an electrical interconnect configured to electrically couple the first surface and the second surface.5. The wafer processing apparatus according to claim 4 wherein the first surface of the wafer holder is configured to face a received wafer and the second surface is configured to face a chuck.6. The wafer processing apparatus according to claim 1 wherein the wafer holder includes a plurality of electrical couplings adapted to couple with a plurality of electrical couplings of the wafer.7. The wafer processing apparatus according to claim 1 wherein the wafer holder comprises a chuck.8. The wafer processing apparatus according to claim 1 wherein the wafer holder comprises a chuck configured to receive one of a calibration wafer and a production wafer.9. The wafer processing apparatus according to claim 8 wherein the wafer holder includes vacuum chambers adapted to receive a vacuum to couple one of the calibration wafer and the production wafer with the chuck.10. The wafer processing apparatus according to claim 1 wherein the wafer holder comprises an intermediate member adapted to couple with a chuck.11. The wafer processing apparatus according to claim 1 wherein the wafer holder includes a vacuum chamber adapted to receive a vacuum to couple a received wafer with the wafer holder.12. The wafer processing apparatus according to claim 1 wherein the electrical coupling of the wafer holder comprises a conductive column configured to extend outward from plural surfaces of the wafer holder.13. The wafer processing apparatus according to claim 12 further comprising a contact plate including circuitry configured to provide electrical connection with the conductive column.14. The wafer processing apparatus according to claim 1 wherein the electrical coupling of the wafer holder is adapted to contact the electrical coupling of the wafer.15. The wafer processing apparatus according to claim 1 wherein the wafer holder is adapted to expose the wafer to a processing environment to process the wafer.16. The wafer processing apparatus according to claim 1 wherein the wafer holder is configured to support a wafer for processing within the wafer processing apparatus to form a plurality of discrete integrated circuits of a plurality of respective dies to be singulated from the wafer at a subsequent moment in time.17.
The wafer processing apparatus according to claim 1 wherein the wafer holder is configured to expose a wafer to a processing environment within the wafer processing apparatus to form a plurality of discrete integrated circuits of a plurality of respective dies to be singulated from the wafer at a subsequent moment in time.18. The wafer processing apparatus according to claim 1 further comprising a processing area of the wafer processing apparatus configured to process a wafer supported using the wafer holder to fabricate a plurality of discrete integrated circuits of a plurality of respective dies to be singulated from the wafer at a subsequent moment in time.19. The wafer processing apparatus according to claim 1 wherein the wafer processing apparatus is configured to process a wafer supported using the wafer holder to fabricate a plurality of discrete integrated circuits of a plurality of respective dies to be singulated from the wafer at a subsequent moment in time.20. The wafer processing apparatus according to claim 1 wherein the wafer comprises a semiconductive wafer comprising a plurality of integrated circuit dies prior to singulation of at least one of the dies at a subsequent moment in time.21. The wafer processing apparatus according to claim 1 wherein the electrical coupling of the wafer holder is electrically conductive to establish an electrical connection with the electrical coupling of the wafer wherein electrons of the signals are exchanged between the electrical couplings of the wafer holder and the wafer.22. The wafer processing apparatus according to claim 1 wherein the signals are generated using electrical circuitry of the wafer.23. The wafer processing apparatus according to claim 1 wherein the signals comprise electrical signals.24. A wafer processing apparatus comprising: a wafer holder having circuitry configured to communicate a process signal from a received wafer and the process signal containing information regarding processing of the wafer during fabrication of integrated circuitry of the received wafer using the wafer processing apparatus.25. The wafer processing apparatus according to claim 24 wherein the wafer holder is adapted to expose the wafer to a processing environment to fabricate the integrated circuitry of the wafer.26. The wafer processing apparatus according to claim 24 wherein the process signal comprises information regarding the processing of the wafer for the fabrication of integrated circuitry using the wafer processing apparatus.27. The wafer processing apparatus according to claim 24 wherein the wafer holder is configured to receive the process signal comprising an electrical signal using an electrical coupling of the wafer holder in electrical contact with an electrical coupling of the wafer.28.
A wafer processing apparatus comprising: a chuck including a surface, an electrical coupling adjacent the surface, and an electrical interconnect configured to connect with the electrical coupling of the chuck and conduct a signal within the chuck; an intermediate member adapted to receive a wafer and the intermediate member having a first surface and a second surface and the intermediate member including: an electrical coupling adjacent the first surface and configured to couple with the electrical coupling of the chuck; an electrical coupling adjacent the second surface; and an electrical interconnect configured to connect the electrical coupling adjacent the first surface and the electrical coupling adjacent the second surface; and a wafer configured to couple with the second surface of the intermediate member, the wafer including a sensor and an electrical coupling configured to provide electrical connection of the sensor with the electrical coupling of the second surface of the intermediate member.29. The wafer processing apparatus according to claim 28 further comprising a data gathering device coupled with the electrical coupling of the chuck and configured to receive the signal.30. The wafer processing apparatus according to claim 29 further comprising a contact plate configured to communicate the signal intermediate the chuck and the data gathering device.31. The wafer processing apparatus according to claim 28 wherein the sensor comprises a resistance temperature device.32. The wafer processing apparatus according to claim 28 wherein the wafer comprises a calibration wafer.33. The wafer processing apparatus according to claim 28 wherein the electrical interconnect comprises a conductive column configured to extend outward from plural surfaces of the chuck.34. The wafer processing apparatus according to claim 33 further comprising a contact plate including circuitry configured to provide electrical connection with electrical couplings of the chuck.35. The wafer processing apparatus according to claim 28 wherein the intermediate member is configured to expose the wafer to a processing environment to process the wafer.36.
A wafer processing apparatus comprising: a chuck including a surface, a plurality of electrical couplings adjacent the surface, and a plurality of electrical interconnects configured to connect with respective electrical couplings of the chuck and conduct signals within the chuck; an intermediate member adapted to receive a wafer and the intermediate member having a first surface and a second surface and the intermediate member including: a plurality of electrical couplings adjacent the first surface and configured to couple with respective electrical couplings of the chuck; a plurality of electrical couplings adjacent the second surface; and a plurality of electrical interconnects configured to electrically connect the electrical couplings of the first surface with respective electrical couplings of the second surface; a calibration wafer configured to couple with the second surface of the intermediate member, the calibration wafer including a plurality of resistance temperature devices configured to generate process signals, and a plurality of electrical connections configured to electrically connect the resistance temperature devices with respective electrical couplings of the second surface of the intermediate member; and a data gathering device coupled with the electrical interconnects of the chuck and configured to receive the process signals from the resistance temperature devices through the intermediate member and the chuck.37. The wafer processing apparatus according to claim 36 wherein the intermediate member is configured to expose the wafer to a processing environment to process the wafer.38. An electronic device workpiece processing apparatus comprising: a workpiece holder adapted to receive an electronic device workpiece having an electrical coupling, the workpiece holder including an electrical coupling configured to electrically couple with the electrical coupling of the electronic device workpiece and communicate signals between the electronic device workpiece and the workpiece holder during fabrication of integrated circuitry of the electronic device workpiece using the electronic device workpiece processing apparatus, wherein the workpiece holder includes a vacuum chamber adapted to receive a vacuum to couple a received electronic device workpiece with the workpiece holder.39. The apparatus of claim 38 wherein the workpiece holder is configured to expose the electronic device workpiece to a processing environment to process the electronic device workpiece.40. The electronic device workpiece processing apparatus according to claim 38 wherein the communicated signals comprise information regarding processing of the wafer for fabrication of the integrated circuitry using the wafer processing apparatus.41. The electronic device workpiece processing apparatus according to claim 38 wherein the communicated signals comprise electrical signals.42. The electronic device workpiece processing apparatus according to claim 38 wherein the electrical coupling of the workpiece holder is configured to communicate the signals of a sensor of the electronic device workpiece.43. The electronic device workpiece processing apparatus according to claim 38 wherein the workpiece holder comprises a chuck.44. The electronic device workpiece processing apparatus according to claim 38 wherein the workpiece holder comprises an intermediate member.45.
An electronic device workpiece processing apparatus comprising: a workpiece holder adapted to receive an electronic device workpiece having an electrical coupling, the workpiece holder including an electrical coupling configured to electrically couple with the electrical coupling of the electronic device workpiece and communicate signals between the electronic device workpiece and the workpiece holder, wherein the electrical coupling of the workpiece holder is configured to extend outward from plural surfaces of the workpiece holder; and a contact plate including circuitry configured to provide electrical connection with the conductive column.46. The apparatus of claim 45 wherein the workpiece holder is configured to expose the electronic device workpiece to a processing environment to process the electronic device workpiece.47. An electronic device workpiece processing apparatus comprising: a chuck including a surface, an electrical coupling adjacent the surface, and an electrical interconnect configured to connect with the electrical coupling of the chuck and conduct a signal within the chuck; an intermediate member having a first surface and a second surface and the intermediate member including: an electrical coupling adjacent the first surface and configured to couple with the electrical coupling of the chuck; an electrical coupling adjacent the second surface; and an electrical interconnect configured to connect the electrical coupling adjacent the first surface and the electrical coupling adjacent the second surface; an electronic device workpiece configured to couple with the second surface of the intermediate member, the electronic device workpiece including a sensor and an electrical coupling configured to provide electrical connection of the sensor with the electrical coupling of the second surface of the intermediate member; a data gathering device coupled with the electrical coupling of the chuck and configured to receive the signal; and a contact plate configured to communicate the signal intermediate the chuck and the data gathering device.48. The apparatus of claim 47 wherein the intermediate member is configured to expose the electronic device workpiece to a processing environment to process the electronic device workpiece.49. An electronic device workpiece processing apparatus comprising: a chuck including a surface, an electrical coupling adjacent the surface, and an electrical interconnect configured to connect with the electrical coupling of the chuck and conduct a signal within the chuck; an intermediate member having a first surface and a second surface and the intermediate member including: an electrical coupling adjacent the first surface and configured to couple with the electrical coupling of the chuck; an electrical coupling adjacent the second surface; and an electrical interconnect configured to connect the electrical coupling adjacent the first surface and the electrical coupling adjacent the second surface; and an electronic device workpiece configured to couple with the second surface of the intermediate member, the electronic device workpiece including a sensor comprising a resistance temperature device, and an electrical coupling configured to provide electrical connection of the sensor with the electrical coupling of the second surface of the intermediate member.50. The apparatus of claim 49 wherein the intermediate member is configured to expose the electronic device workpiece to a processing environment to process the electronic device workpiece.51.
An electronic device workpiece processing apparatus comprising: a chuck including a surface, an electrical coupling adjacent the surface, and an electrical interconnect configured to connect with the electrical coupling of the chuck and conduct a signal within the chuck; a contact plate including circuitry configured to provide electrical connection with the electrical coupling of the chuck; an intermediate member having a first surface and a second surface and the intermediate member including: an electrical coupling adjacent the first surface and configured to couple with the electrical coupling of the chuck; an electrical coupling adjacent the second surface; and an electrical interconnect configured to connect the electrical coupling adjacent the first surface and the electrical coupling adjacent the second surface, wherein the electrical interconnect comprises a conductive column configured to extend outward from plural surfaces of the chuck; and an electronic device workpiece configured to couple with the second surface of the intermediate member, the electronic device workpiece including a sensor and an electrical coupling configured to provide electrical connection of the sensor with the electrical coupling of the second surface of the intermediate member.52. The apparatus of claim 51 wherein the intermediate member is adapted to expose the electronic device workpiece to a processing environment to process the electronic device workpiece.53. An electronic device workpiece processing apparatus comprising: an electronic device workpiece including a sensor and an electrical coupling; and an intermediate member including a surface having an electrical coupling and adapted to expose the electronic device workpiece to a processing environment to process the electronic device workpiece; wherein the electrical coupling of the electronic device workpiece is configured to provide electrical connection of the sensor with the electrical coupling of the surface of the intermediate member.54. The apparatus according to claim 53 wherein the electronic device workpiece comprises a wafer.55. An article of manufacture comprising: a wafer processing apparatus configured to fabricate integrated circuitry using a plurality of wafers and comprising a wafer holder configured to receive at least one of the wafers having an electrical coupling, and wherein the wafer holder comprises an electrical coupling configured to electrically couple with the electrical coupling of the at least one wafer and to communicate signals between the at least one wafer and the wafer holder during fabrication of the integrated circuitry of the at least one wafer using the wafer processing apparatus.56. The article of claim 55 wherein the electrical coupling of the wafer holder is configured to contact the electrical coupling of the wafer.57. The article of claim 55 wherein the communicated signals comprise information regarding processing of the wafer using the wafer processing apparatus.58. The article according to claim 55 wherein the communicated signals comprise information regarding processing of the wafers for the fabrication of the integrated circuitry using the wafer processing apparatus.59. The article according to claim 55 wherein the electrically coupled electrical couplings of the wafer and the wafer holder are in electrical contact with one another to communicate the signals comprising electrical signals between the at least one wafer and the wafer holder.60.
An electronic device workpiece processing apparatus comprising: an intermediate member comprising a first surface and a second surface, wherein the second surface comprises an electrical coupling; and an electronic device workpiece including a sensor and an electrical coupling configured to provide electrical connection of the sensor with the electrical coupling of the second surface of the intermediate member. |
This patent resulted from a divisional of and claims priority to U.S. patent application Ser. No. 09/137,629, filed on Aug. 21, 1998, now U.S. Pat. No. 6,229,322, issued May 8, 2001, entitled "Electronic Device Workpiece Processing Apparatus and Method of Communicating Signals Within an Electronic Device Workpiece Processing Apparatus," naming David R. Hembree as inventor, the disclosure of which is incorporated herein by reference.
TECHNICAL FIELD
The present invention relates to an electronic device workpiece processing apparatus and method of communicating signals within an electronic device workpiece processing apparatus.
BACKGROUND OF THE INVENTION
It is preferred in the semiconductor and related arts to utilize large wafers for fabrication of integrated circuits and other devices. Large wafers are preferred inasmuch as an increased number of chips can be fabricated from larger workpieces. As wafer sizes continue to increase with improvements in processing techniques, additional processing obstacles are presented.
For example, it is typically preferred to provide a substantially uniform temperature across the surface of wafers being processed because changes in temperature can influence device fabrication. Wafers of increased diameters and surface areas experience increased temperature fluctuations at various locations on the workpiece. In particular, a partial vacuum is typically used to pull small diameter wafers into direct thermal contact with a hot plate. Such processing methods facilitate substrate temperature control because the substrate temperature is closely associated with the temperature of the hot plate. Fabrication of small sub-micron devices upon larger diameter semiconductor wafers or workpieces requires minimal backside contamination. As such, contact of the workpiece with the hot plate is not typically possible. Large workpieces are processed in conventional operations upon spacers or pins that position the workpiece approximately 0.1 millimeters above the hot plate surface. Such spacing intermediate a chuck or hot plate and the workpiece can result in temperature fluctuations across the surface of the workpiece.
The utilization of specific materials for processing large workpieces in small geometry applications presents numerous obstacles. Absolute workpiece temperature and workpiece temperature uniformity are parameters which are closely monitored during wafer and workpiece fabrication to provide critical dimension (CD) control. Chemically amplified resists are often utilized in deep ultraviolet (DUV) lithography in small micron geometries (e.g., 0.25 microns and below). Chemically amplified resists are particularly temperature dependent, further increasing the importance of temperature control and monitoring. Some thermal resist processing steps require process windows ranging from 1-2 degrees centigrade down to a few tenths of a degree centigrade. Metrology that is four to ten times more precise than conventional process equipment is typically utilized to provide thermal performance measurements to 0.1 degrees centigrade.
One approach has disclosed the use of temperature sensors across a surface of the wafer to provide temperature mapping of the workpiece during processing. Platinum foil and copper leads are utilized to electrically connect the temperature sensors. With the use of numerous temperature sensors across an entire workpiece surface, numerous wires are required for coupling and monitoring.
Such numerous wired connections can break and/or adversely impact processing of the workpiece or the temperature measurements taken of the surface of the workpiece. Some temperature sensors require four leads per sensor further impacting the processing and temperature monitoring of the workpieces.
An improved method of providing temperature information is disclosed in U.S. patent application Ser. No. 09/032,184, entitled "Electronic Device Workpieces, Methods of Semiconductor Processing and Methods of Sensing Temperature of an Electronic Device Workpiece", filed Feb. 27, 1998, naming Dr. Salman Akram and David R. Hembree as inventors, assigned to the assignee hereof, and incorporated herein by reference.
There exists a need to provide additional improvements for monitoring of processing of workpieces.
SUMMARY OF THE INVENTION
The invention provides electronic device workpiece processing apparatuses, and methods of communicating signals within an electronic device workpiece processing apparatus. Exemplary electronic device workpieces include production workpieces (e.g., silicon wafers) and calibration wafers.
One aspect of the invention provides an electronic device workpiece processing apparatus including a chuck, intermediate member and an electronic device workpiece. The chuck includes an electrical interconnect configured to conduct signals within the chuck. The intermediate member is configured to conduct signals intermediate opposing surfaces of the intermediate member. The electronic device workpiece includes one or more sensors. An exemplary sensor comprises a resistance temperature device (RTD) configured to provide process signals containing process information regarding the electronic device workpiece processing apparatus. A data gathering device or recorder can be provided to record process information generated by the electronic device workpiece processing apparatus. The chuck and intermediate member are configured to communicate the process signals intermediate the sensor and the data gathering device.
According to another aspect of the invention, an electronic device workpiece processing apparatus includes a workpiece holder. Exemplary workpiece holders include a chuck and an intermediate member. The workpiece holder is adapted to receive an electronic device workpiece and includes an electrical coupling configured to electrically couple with an electrical coupling of a received electronic device workpiece. The workpiece holder is adapted for communication of signals between the electronic device workpiece and the workpiece holder.
The present invention also provides methods of communicating signals within an electronic device workpiece processing apparatus. According to one method, a workpiece holder is coupled with an electronic device workpiece and a signal can be communicated through the workpiece holder. The communicated signals preferably contain process information.
Another aspect of the invention provides a method comprising electrically coupling a sensor of an electronic device workpiece with a workpiece holder configured to receive the workpiece.
The workpiece holder is configured to communicate signals generated using the sensor.
Yet another aspect of the present invention provides a method comprising communicating signals intermediate circuitry of an electronic device workpiece and circuitry of a workpiece holder configured to receive the electronic device workpiece.
BRIEF DESCRIPTION OF THE DRAWINGS
Preferred embodiments of the invention are described below with reference to the following accompanying drawings.
FIG. 1 is an isometric view illustrating one embodiment of an electronic device workpiece processing apparatus.
FIG. 2 is a cross-sectional view taken along line 2-2 of the electronic device workpiece processing apparatus of FIG. 1.
FIG. 3 is a cross-sectional view of another embodiment of an electronic device workpiece processing apparatus.
FIG. 4 is an isometric view of a pogo plug of the chuck depicted in FIG. 3.
FIG. 5 is an isometric view of the chuck depicted in FIG. 3.
FIG. 6 is a cross-sectional view of another embodiment of an electronic device workpiece processing apparatus.
FIG. 7 is a cross-sectional view of a sensor configuration of an electronic device workpiece.
FIG. 8 is a cross-sectional view of another sensor configuration of an electronic device workpiece.
FIG. 9 is a cross-sectional view of one embodiment of an electrical interconnect within a chuck of an electronic device workpiece processing apparatus.
FIG. 10 is a cross-sectional view of the electrical interconnect of FIG. 9 coupled with a calibration workpiece.
FIG. 11 is a cross-sectional view of another embodiment of an electrical interconnect of a chuck.
FIG. 12 is a cross-sectional view of yet another embodiment of an electrical interconnect of a chuck.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
This disclosure of the invention is submitted in furtherance of the constitutional purposes of the U.S. Patent Laws "to promote the progress of science and useful arts" (Article 1, Section 8).
Referring to FIG. 1, an embodiment of an electronic device workpiece processing apparatus 10 is illustrated. The depicted apparatus 10 includes a workpiece holder 12 adapted to couple with or receive an electronic device workpiece 20. Exemplary workpiece holders 12 include a chuck 40 as shown in FIG. 1 and an intermediate member described below. Exemplary electronic device workpieces include calibration workpieces and production workpieces.
Workpiece holder 12 includes an electrical coupling (not illustrated in FIG. 1) configured to electrically connect with an electrical coupling of electronic device workpiece 20. Connection of circuitry including electrical couplings of electronic device workpiece 20 and workpiece holder 12 permits communication of signals between electronic device workpiece 20 and workpiece holder 12. Workpiece holder 12 is configured to receive and conduct or communicate signals.
Electronic device workpiece 20 comprises a calibration workpiece in the presently described embodiment. Production workpieces typically undergo processing from which subsequent devices are formed. Exemplary production electronic device workpieces include semiconductor wafers, glass or quartz substrates for flat panel or field emission display devices, etc. Typical production workpieces are processed and subsequently utilized to form products used in a variety of electronic devices. Calibration and production electronic device workpieces can comprise silicon, glass, quartz or other materials.
Workpiece holder 12 can be implemented in various configurations.
In the embodiment depicted in FIG. 1, workpiece holder 12 is implemented as a chuck 40. Chuck 40 is configured to receive electronic device workpiece 20 and is preferably compatible with processing of electronic device workpiece 20.
In the depicted embodiment, electronic device workpiece 20 comprises a calibration workpiece. Workpiece 20 includes opposing surfaces 21, 22 (only surface 21 is shown in FIG. 1). A plurality of sensors 23 are borne by or provided adjacent first surface 21 of workpiece 20. Sensors 23 are configured to sense a process condition within apparatus 10 and generate and output process signals corresponding to the sensing. Exemplary process signals contain information regarding processing of a workpiece.
The depicted sensors 23 comprise resistance temperature devices (RTDs). The information within the process signals can comprise temperature information corresponding to sensed temperatures at plural positions across surface 21 of workpiece 20.
In a preferred embodiment, sensors 23 comprising resistance temperature devices individually include plural electrical connections. Such resistance temperature devices include four electrical connections providing two connections for voltage monitoring and two connections for current monitoring. This configuration provides cancellation or minimization of wire resistances of connections to sensors 23.
In the embodiment depicted in FIG. 1, chuck 40 is coupled with a data gathering device or data recorder 14. Data gathering device 14 is configured to couple with an electrical interconnect of chuck 40 and receive process signals through chuck 40 outputted from plural sensors 23 provided upon workpiece 20. One embodiment of data gathering device 14 comprises a ClientPro MTR computer available from Micron Electronics, Inc. utilizing a Pentium(TM) processor.
Data gathering device 14 is configured to receive and process signals provided by sensors 23 and corresponding to processing conditions of workpiece 20. Processing conditions of apparatus 10 can be altered responsive to reception of process signals within device 14.
Electronic device workpiece 20 is held by chuck 40 with the use of a vacuum or mechanical coupling in exemplary embodiments. The depicted chuck 40 includes a lip 52 configured to receive and maintain electronic device workpiece 20 in a desired position relative to chuck 40.
Referring to FIG. 2, the depicted chuck 40 includes a surface 39 and an opposing surface 41. Chuck 40 also includes circuitry comprising a plurality of electrical interconnects 44 and plural electrical couplings 45 adjacent surface 41. Electrical interconnects 44 are configured to connect with or include respective electrical couplings 45 of chuck 40. In addition, electrical interconnects 44 are configured to conduct or communicate signals within and through chuck 40. In the depicted embodiment, electrical interconnects 44 are configured to conduct or communicate signals intermediate surfaces 39, 41 of chuck 40.
The depicted electrical interconnects 44 comprise pogo pins which are available from Rika Denshi America, Inc. and have product designation RM-500 Series. Electrical interconnects 44 of other configurations can be utilized.
Calibration workpiece 20 is shown received within chuck 40 in FIG. 2. Lip 52 is operable to define a compartment for reception of electronic device workpiece 20. Surfaces 21, 22 of electronic device workpiece 20 are illustrated in FIG. 2.
A plurality of sensors 23, such as resistance temperature devices, are shown provided or fabricated upon surface 21 of electronic device workpiece 20. In the depicted embodiment, an insulative protective layer 28 is shown formed over sensors 23. Layer 28 can comprise glass or other suitable material for protecting sensors 23.
One exemplary electronic device workpiece 20 is described in the patent application having Ser. No. 09/032,184, filed Feb. 27, 1998, and cited above. Such a workpiece 20 includes circuitry comprising electrical couplings 24, vias 25 and connections 27 corresponding to respective sensors 23.
Connections 27 comprise conductive traces in the described embodiment and are configured to couple sensors 23 with respective vias 25. Vias 25 extend intermediate surfaces 21, 22 of electronic device workpiece 20. Vias 25 preferably include a conductive material to electrically couple surfaces 21, 22 of workpiece 20. In a preferred embodiment, the conductive material in vias 25 is electrically isolated from electronic workpiece 20. For example, an insulator or dielectric layer around the via conductor can be utilized.
Electrical couplings 24 are adjacent or borne by surface 22 of electronic device workpiece 20. Electrical couplings 24 comprise bond or land pads of electronic device workpiece 20 and correspond to respective sensors 23 and vias 25. Further, electrical couplings 24 are preferably configured to provide electrical connection of sensors 23 with electrical couplings of chuck 40 and an intermediate member (if provided) as described below.
Electrical couplings 45 are spring loaded and configured to protrude slightly above surface 41 of chuck 40. Electrical couplings 45 of chuck 40 are configured or adapted to couple with electrical couplings 24 of electronic device workpiece 20. Positioning or reception of electronic device workpiece 20 upon chuck 40 slightly depresses electrical couplings 45 of pogo pins or electrical interconnects 44 in the described embodiment. Electrical connection is established intermediate electrical couplings 24 of device 20 and electrical couplings 45 of chuck 40.
Following connection of electrical couplings 24, 45, process signals from data gathering device 14 can be applied to sensors 23 via wire 13, electrical interconnect 44, electrical couplings 24, 45 and connections 25, 27. In addition, signals outputted from sensors 23 can be conducted via connections 25, 27, electrical couplings 24, 45, electrical interconnect 44, and wire 13 to data gathering device 14. The depicted pogo pins are configured to remain within chuck 40 during normal production use or processing of production electronic device workpieces in one embodiment of the invention.
Workpiece holder 12, as depicted in FIG. 2, includes a plurality of vacuum channels or chambers 49 extending intermediate surfaces 39, 41. Vacuum chambers 49 are coupled with a vacuum source 51 in a preferred embodiment. Vacuum chambers 49 are configured to receive a vacuum to couple a received electronic device workpiece 20 with workpiece holder 12. Mechanical devices such as clamps are utilized in other embodiments to attach or couple workpiece 20 with workpiece holder 12.
Following coupling of the circuitry of calibration workpiece 20 with the circuitry of workpiece holder 12, process signals can be communicated intermediate sensors 23 and data gathering device 14.
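By way of illustration, the four-wire connections described above permit the data gathering device to compute sensor resistance without error from the interconnect path: no excitation current flows in the voltage-sensing pair, so pogo-pin and trace resistances drop out of the measurement. The short Python sketch below is a minimal illustration under the common linear platinum-RTD approximation R(T) = R0(1 + alpha*T); the constants and the function name are assumptions made for exposition, not values from the described embodiments.

def rtd_temperature(v_sense, i_excite, r0=100.0, alpha=0.00385):
    # Convert a four-wire RTD reading to temperature in degrees centigrade.
    # v_sense: voltage across the sense leads (volts); i_excite: known
    # current forced through the force leads (amps); r0: assumed resistance
    # at 0 degrees centigrade (100 ohms, as for a Pt100 element); alpha:
    # assumed temperature coefficient per degree centigrade (linear model).
    resistance = v_sense / i_excite            # Ohm's law on the sense pair
    return (resistance / r0 - 1.0) / alpha     # invert R(T) = r0*(1 + alpha*T)

# Example: 1 mA excitation and 0.1103 V measured give roughly 26.8 degrees
# centigrade for an assumed Pt100 element.
print(rtd_temperature(v_sense=0.1103, i_excite=1e-3))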
Thereafter, the coupling of respective circuitry of workpiece 20 and workpiece holder 12 can be broken and another calibration workpiece or production workpiece can be coupled with workpiece holder 12.
Referring to FIG. 3, an alternative embodiment of electronic device workpiece processing apparatus 10 is illustrated. The depicted processing apparatus 10 includes a workpiece holder 12 comprising an insert or intermediate member 60. Intermediate member 60 is also referred to as an insert or interposer. The depicted intermediate member 60 is adapted to couple with chuck 40, and receive and couple with electronic device workpiece 20. Intermediate member 60 is preferably configured to communicate signals intermediate chuck 40 and electronic device workpiece 20.
Intermediate member 60 preferably comprises a nonconductive material which is compatible with a fabrication environment. Intermediate member 60 includes opposing surfaces 61, 62 and circuitry comprising at least one electrical interconnect 64 and plural electrical couplings 65, 66. Electrical interconnect 64 is configured to electrically couple opposing surfaces 61, 62 of intermediate member 60. In addition, electrical interconnect 64 is configured to couple circuitry of workpiece 20 and circuitry of chuck 40. Surface 61 of intermediate member 60 is configured to face a received electronic device workpiece 20. Surface 62 of intermediate member 60 is configured to face chuck 40 during processing of electronic device workpieces 20.
Intermediate member 60 is configured to receive electronic device workpiece 20 having electrical couplings 24. In addition, intermediate member 60 is configured to couple with chuck 40 having electrical couplings 45. Electrical interconnects 64 are configured to electrically connect electrical couplings 24 of electronic device workpiece 20 with electrical couplings 45 of chuck 40. The depicted electrical interconnects 64 comprise double-ended probes or pogo pins which are also available from Rika Denshi America, Inc. and have product designation B1052 Series Probes. Other suitable probes include B1080-C3 Low Profile Probes and the B1303-C3 or B1316-C3 Ball Grid Probes. Electrical interconnects 64 of other configurations can be utilized.
The depicted intermediate member includes a lip 63 configured to receive electronic device workpiece 20. Chuck 40 includes lip 52 configured to receive intermediate member 60.
In the depicted embodiment, mechanical devices such as clamps can be utilized to couple or maintain electronic device workpiece 20 with surface 61 of intermediate member 60. Further, a vacuum is utilized in the illustrated embodiment to couple intermediate member 60 with chuck 40. The depicted chuck 40 includes plural chuck vacuum channels or chambers 49. Vacuum channels 49 are in fluid communication with openings 53 at surface 41 of chuck 40. Vacuum channels or chambers 49 are configured to couple with a vacuum source 51 and receive a vacuum to couple intermediate member 60 relative to chuck 40. In other embodiments, intermediate member 60 is received and maintained within chuck 40 by mechanical fasteners such as clamps. In addition, a vacuum can be utilized in other arrangements to couple workpiece 20 with intermediate member 60.
An alternative configuration of intermediate member 60 includes utilization of a copper film/polyamide tape having conductive microbumps to provide electrical connection of sensors 23 and electrical couplings 45 of chuck 40.
An exemplary tape is available from Nitto Denko America, Inc.
Referring to FIGS. 3 and 4, the depicted chuck 40 includes a plurality of electrical couplings 45. Electrical couplings 45 are embodied as pogo plugs 47 in the presently described embodiment. The depicted pogo plugs 47 individually include an insulator 50 provided about conductive electrical coupling 45. Exemplary materials of insulator 50 include plastic, glass, ceramic, Teflon, and Torlon. Pogo plugs 47 can be provided within a plurality of vias 48 formed within chuck 40. Wires 13 are connected with electrical couplings 45 of pogo plugs 47 and data gathering device 14.
Referring to FIG. 5, details of chuck 40 are illustrated. Electrical couplings 45 are shown adjacent surface 41 of chuck 40. Insulators 50 of pogo plugs 47 are shown to isolate conductive electrical couplings 45 from chuck 40. In addition, openings 53 of vacuum channels or chambers 49 are visible within surface 41. Lip 52 surrounds the periphery of chuck 40 in the illustrated embodiment and is configured to receive intermediate member 60 as previously described.
Referring again to FIG. 3, reception of electronic device workpiece 20 upon surface 61 of intermediate member 60 slightly depresses electrical couplings 65 of pogo pins 64 establishing an electrical connection intermediate electrical couplings 24, 65. Similarly, placement of intermediate member 60 within chuck 40 slightly depresses electrical couplings 66 of pogo pins 64 establishing electrical conduction intermediate electrical couplings 45, 66.
In the described embodiment, intermediate member 60 is configured to temporarily receive electronic device workpiece 20. Following processing of electronic device workpiece 20, workpiece 20 can be removed from intermediate member 60. Also, chuck 40 is configured to temporarily receive intermediate member 60 in the described embodiment. Following production or processing of electronic device workpieces 20, intermediate member 60 can be removed from chuck 40.
One advantage of the embodiment described with reference to FIG. 3 is the provision of a clean production chuck 40 having no moving parts. In addition, chuck 40 is isolated to a greater extent from the processing environment utilized to fabricate or process electronic device workpieces 20. Utilization of intermediate member 60 provides processing of electronic device workpiece 20 apart from chuck 40. Such minimizes exposure of chuck 40 to processing materials utilized during fabrication processes.
According to one processing methodology, calibration workpiece 20 is received within intermediate member 60, and intermediate member 60 placed upon chuck 40. Following sensing of process conditions using sensors 23, calibration workpiece 20 is removed from intermediate member 60. Thereafter, production electronic device workpieces are individually placed within intermediate member 60 and processing of such workpieces occurs en masse.
Referring to FIG. 6, another embodiment of an electronic device workpiece processing apparatus 10 according to the present invention is illustrated. Workpiece holder 12 depicted in FIG. 6 comprises a chuck 40 configured to receive plural electronic device workpieces. In particular, chuck 40 is configured to receive a calibration workpiece 20 and a production workpiece 80. Lip 52 of chuck 40 has been vertically extended in the embodiment illustrated in FIG. 6 to accommodate reception of plural electronic device workpieces. Utilization of the configuration of apparatus 10 of FIG.
6 enables processing of production workpieces 80 while monitoring processing conditions using calibration workpiece 20.
Calibration workpiece 20 includes plural sensors 23 and corresponding connections 25, 27 and electrical coupling 24 although only one construction is labelled as such in FIG. 6. The calibration workpiece 20 illustrated in FIG. 6 additionally includes plural through holes or vacuum chambers 26 passing intermediate surfaces 21, 22. Plural through holes 26 are preferably provided within calibration workpiece 20 although only one such through hole is illustrated in FIG. 6.
The depicted chuck 40 comprises plural vacuum channels or chambers 49, 55 intermediate surfaces 39, 41 of chuck 40. Vacuum channels or chambers 49 allow application of a vacuum to calibration workpiece 20 which pulls calibration workpiece 20 toward chuck 40. Vacuum chambers 55 and through holes 26 permit application of a vacuum to production workpiece 80 which pulls production workpiece 80 toward calibration workpiece 20 and chuck 40.
In particular, vacuum channels or chambers 49, 55 are configured to couple with an external vacuum source 51 at positions adjacent surface 39 of chuck 40. Vacuum source 51 is configured to provide a calibration wafer hold-down vacuum to chamber 54 using a supply line 56. In addition, the illustrated vacuum source 51 is configured to provide a production wafer hold-down vacuum to vacuum channels or chambers 26, 55 and production wafer 80 via connection 57. As illustrated, through holes 26 of calibration wafer 20 are configured to align with vacuum chambers 55 of chuck 40. Application of hold-down vacuums to channels or chambers 26, 49, 55 operates to couple the respective calibration workpiece 20 and production workpiece 80 with chuck 40.
In an alternative embodiment, mechanical devices are utilized to couple calibration workpiece 20 and production workpiece 80 with chuck 40.
The depicted chuck 40 includes an electrical interconnect 44 and an electrical coupling 45 configured to meet or couple with electrical coupling 24 of calibration workpiece 20. In the depicted arrangement, electrical interconnect 44 comprises a pogo pin. Wire connection 13 operates to couple electrical interconnect 44 with data gathering device 14. In the depicted embodiment, electrical interconnect 44 comprises circuitry configured to conduct process signals within chuck 40 and intermediate surfaces 39, 41. Data gathering device 14 is configured to receive the process signals from sensors 23 through chuck 40 and intermediate member 60.
Referring to FIG. 7, an exemplary portion of a calibration workpiece 20 is illustrated. Sensor 23 comprising a resistance temperature device is shown provided upon surface 21 of calibration workpiece 20. Via 25 is formed within calibration workpiece 20 intermediate surfaces 21, 22. Via 25 is conductive to permit communication of process signals. Electrical connection 27 is illustrated connecting sensor 23 and via 25. In the depicted embodiment, electrical connection 27 comprises a conductive trace.
An insulative dielectric layer 30 is provided about via conductor 25 in some configurations. Provision of dielectric layer 30 is preferred if workpiece 20 is semiconductive or conductive. Layer 30 is typically not utilized if workpiece 20 comprises a non-conductive material, such as glass.
In the preferred embodiment, a conformal protection layer 28 is provided over surface 21, sensor 23 and connection 27.
Layer 28 operates to protect surface 21, sensor 23 and electrical connection 27 from the processing environment including gases, chemicals, plasmas, etc. utilized during processing of the electronic device workpieces. In the described embodiment, layer 28 comprises glass. The glass may be sputtered over calibration workpiece 20 including sensors 23, electrical connections 27 and surface 21.
Referring to FIG. 8, a thick protection layer 28 is shown provided over sensors 23 and electrical connection 27. Layer 28 is preferably chemically or mechanically polished providing a flat or smooth surface 29 of layer 28. A polished or flat smooth surface 29 of layer 28 facilitates vacuum sealing of a production workpiece 80 placed over calibration workpiece 20. In addition, flat smooth surface 29 provides enhanced wearing properties during processing of production workpieces 80 or exposure of calibration workpiece 20 to process conditions. A worn or damaged glass layer 28 may be reprocessed to add more glass or resurfaced to remove defects within the existing glass layer.
Referring to FIG. 9, a portion of another embodiment of chuck 40 configured to receive a calibration workpiece (not illustrated in FIG. 9) is depicted. Through hole 42 is shown passing intermediate surfaces 39, 41 of chuck 40. Plural through holes 42 are preferably provided in chuck 40 although only one such through hole is illustrated in FIG. 9. An insulative layer (not illustrated in FIG. 9) is preferably provided if chuck 40 comprises a conductive material. In particular, an insulative layer can be provided about interconnect 44 or along the surface of through hole 42 to electrically isolate interconnect 44 from chuck 40. Such an insulative layer is not typically utilized if chuck 40 is non-conductive.
Electrical interconnect 44 comprises a conductive column or wire in the embodiment depicted in FIG. 9. In particular, the depicted electrical interconnect 44 comprises a buckle beam or column wire contact. Electrical interconnect 44 is provided within through hole or via 42. Electrical interconnect 44 includes electrical couplings 45, 46 which are configured to extend outward from respective surfaces 39, 41 of chuck 40 as shown. Column electrical interconnect 44 is configured to provide electrical coupling with sensors 23.
A contact plate 90 is shown adjacent chuck 40 in FIG. 9. Contact plate 90 includes circuitry 95 configured to provide electrical connection with electrical couplings 46 of chuck 40. Contact plate 90 includes a land pad or electrical coupling 94 configured for electrical connection with electrical coupling 46 of column interconnect 44. Electrical contact plate 90 can comprise a printed circuit board (PCB) or a ceramic thick/thin film circuit board in exemplary embodiments. Circuitry 95 provides electrical connection intermediate surfaces 91, 96 of contact plate 90. Circuitry 95 is coupled with connection 13 and data gathering device 14.
Referring to FIG. 10, an electronic device workpiece comprising a calibration wafer 20 is shown contacting surface 41 of chuck 40. In addition, chuck 40 is shown contacting contact plate 90. As illustrated, placement of calibration workpiece 20 upon chuck 40 and chuck 40 upon plate 90 deflects conductive column 44. In particular, the original position P of conductive column 44 is represented by a dashed line in FIG. 10. Placement of calibration workpiece 20 upon chuck 40 and chuck 40 upon contact plate 90 results in deflection of conductive column 44 to the illustrated position P' in FIG.
In the illustration of FIG. 10, electrical couplings 45, 46 are provided in a conductive relationship with respective electrical couplings 24, 94 of calibration workpiece 20 and contact plate 90, respectively. Through hole 84 is preferably sized to provide electrical isolation of conductive column interconnect 44 from chuck 40 when conductive column 44 is deflected as shown in FIG. 10. In particular, chuck 40 can comprise a material 43 which is conductive in some embodiments. Spacing conductive column 44 from material 43 of chuck 40 provides electrical insulation or isolation of process signals passing through conductive column electrical interconnect 44 from chuck 40.

In another embodiment, conductive wire interconnect 44 is fixed via electrical coupling 46 to electrical coupling 94 of contact plate 90. Electrical coupling 45 of conductive column 44 can thereafter be free to couple with electrical coupling 24 of calibration workpiece 20.

Referring to FIG. 11, another configuration having conductive column 44 fixed to chuck 40 at an intermediate location of through hole 84 is illustrated. Both ends of conductive column 44 comprise respective electrical couplings 45, 46 configured to move or deflect responsive to coupling with external pads or electrical couplings. In the depicted embodiment, a securing device 88 is formed within through hole 84 to fix conductive column 44 at approximately the middle portion of through hole 84. In exemplary embodiments, securing device 88 comprises epoxy press fit as a disk or plug into through hole 84. In another embodiment, through hole 84 is filled with epoxy which is subsequently machined to form securing device 88. Securing device 88 is preferably non-conductive if chuck 40 comprises a conductive material.

Referring to FIG. 12, an alternative configuration is shown providing an encapsulated conductive column wire 44 within through hole 84. An electrically insulating encapsulating material 97, such as an elastomer, can be utilized to encapsulate conductive column 44. Such is preferred wherein chuck 40 comprises a conductive material 43. Encapsulation of conductive column interconnect 44 is utilized to hold conductive column wire 44 within through hole 84 and to isolate conductive column 44 from chuck 40. Utilization of an encapsulating material 97 encloses through hole 84 of chuck 40, thereby reducing exposure of chuck 40 to contaminating materials present during processing of electronic device workpieces by apparatus 10.

Other electrical connections can be utilized within chuck 40 and intermediate member 60 of electronic workpiece device processing apparatus 10 in other embodiments. Exemplary connections include Short Contact(TM) connections available from Johnstech International Corporation and conventional socket type contacts (e.g., spring fingers). Other usable contacts include coil spring, leaf spring and probe needle type contacts and contacts available from Interconnect Devices, Inc. Microspring(TM) contacts available from FormFactor, Inc. may also be utilized. Other exemplary contacts or pins are described in U.S. Pat. No. 5,495,667, incorporated herein by reference. Further, pins can be placed upon land pads of an electronic device workpiece and configured for mating receipt within sockets provided upon chuck 40 or intermediate member 60 of apparatus 10.

In compliance with the statute, the invention has been described in language more or less specific as to structural and methodical features.
It is to be understood, however, that the invention is not limited to the specific features shown and described, since the means herein disclosed comprise preferred forms of putting the invention into effect. The invention is, therefore, claimed in any of its forms or modifications within the proper scope of the appended claims appropriately interpreted in accordance with the doctrine of equivalents. |
In some embodiments a secure permit request to change a hardware configuration is created. The secure permit request is sent to a remote location, and a permit sent from the remote location in response to the permit request is received. The hardware configuration is changed in response to the received permit. Other embodiments are described and claimed. |
CLAIMS

What is claimed is:

1. A method comprising: creating a secure permit request to change a hardware configuration; sending the secure permit request to a remote location; receiving a permit sent from the remote location in response to the permit request; and changing the hardware configuration in response to the received permit.

2. The method of claim 1, wherein the secure permit request and the permit protect privacy of a user of the hardware.

3. The method of claim 1, wherein the secure permit request and the permit protect privacy of the hardware.

4. The method of claim 1, wherein the hardware configuration is a hardware configuration of a chipset or a chipset part.

5. The method of claim 1, wherein one or more cryptographic keys are used to ensure secure communication with the remote location.

6. The method of claim 1, wherein a unique key has been permanently included in the hardware during manufacturing of the hardware, and the unique key is used to ensure secure communication and permit authentication with the remote location.

7. The method of claim 6, wherein the unique key is uniquely programmed into the hardware during manufacturing by randomly blowing fuses in the hardware during manufacturing.

8. The method of claim 6, wherein the unique key is not accessible by software running outside of the hardware.

9. The method of claim 1, wherein the permit includes a unique signature from the remote location.

10. The method of claim 1, further comprising validating the received permit prior to changing the hardware configuration.

11. The method of claim 1, further comprising validating the received permit using a public key which corresponds to a private signing key located at the remote location.

12. The method of claim 1, wherein the remote location is a secure and trusted location.

13. The method of claim 1, wherein the changing of the hardware configuration is performed without any physical change to the hardware.

14. The method of claim 1, wherein the permit is a secure permit and/or a signed permit.

15. The method of claim 1, wherein transaction information is bound inside the permit such that future returns or exchanges can be enabled.

16. The method of claim 1, wherein once a permit is signed for a particular hardware part it cannot be used on another hardware part.

17. The method of claim 1, wherein software running outside of the hardware cannot emulate the functionality of software running inside the hardware.

18. The method of claim 1, further comprising performing an override during a boot and/or initialization process, and changing the hardware configuration in response to the override.

19. The method of claim 1, wherein the secure permit request is not uniquely identifiable.

20. The method of claim 1, wherein the identity of the hardware and/or the user of the hardware is not determinable from the secure permit request.

21. The method of claim 1, further comprising generating a random value and generating the secure permit request in response to the random value.

22. A method comprising: receiving from a remote location a secure permit request to change a hardware configuration at the remote location; sending a secure permit to the remote location in response to the permit request, wherein the permit is to allow the remote location to change the hardware configuration.

23. The method of claim 22, wherein the secure permit request and the permit protect privacy of a user of the hardware.

24.
The method of claim 22, wherein the hardware configuration is a hardware configuration of a chipset or a chipset part.

25. The method of claim 22, wherein one or more cryptographic keys are used to ensure secure communication and permit authentication with the remote location.

26. The method of claim 22, wherein the permit includes a unique signature.

27. The method of claim 22, wherein the permit is to allow the remote location to change the hardware configuration without any physical change to the hardware.

28. The method of claim 22, wherein the permit is a secure permit and/or a signed permit.

29. The method of claim 22, wherein the permit is to allow the remote location to change the hardware configuration in response to an override operation performed during a boot and/or initialization process.

30. The method of claim 22, wherein the secure permit request is not uniquely identifiable.

31. The method of claim 22, wherein the identity of the hardware and/or the user of the hardware is not determinable from the secure permit request.

32. The method of claim 22, wherein the secure permit request has been created at the remote location in response to the random value.

33. The method of claim 22, further comprising using a private signing key that corresponds to a public key located at the remote location to help in validation of the permit at the remote location.

34. The method of claim 22, wherein transaction information is bound inside the permit such that future returns or exchanges can be enabled.

35. The method of claim 22, wherein once a permit is signed for a particular hardware part it cannot be used on another hardware part.

36. The method of claim 22, wherein software running outside of the hardware cannot emulate the functionality of software running inside the hardware.

37. An apparatus comprising: a hardware device having a hardware configuration that may be remotely configured, the hardware device including a controller to create a secure permit request to change the hardware configuration, to send the secure permit request to a remote location, to receive a permit sent from the remote location in response to the permit request, and to change the hardware configuration in response to the received permit.

38. The apparatus of claim 37, wherein the hardware device is a chipset or a chipset part.

39. The apparatus of claim 37, further comprising one or more cryptographic keys to ensure secure communication and permit authentication with the remote location.

40. The apparatus of claim 37, further comprising a unique key permanently included in the hardware device, wherein the unique key is used to ensure secure communication and permit authentication with the remote location.

41. The apparatus of claim 40, wherein the unique key comprises randomly blown fuses in the hardware device.

42. The apparatus of claim 40, wherein the unique key is not accessible by software running outside of the hardware device.

43. The apparatus of claim 37, wherein the permit includes a unique signature from the remote location.

44. The apparatus of claim 37, the controller further to validate the received permit prior to changing the hardware configuration.

45. The apparatus of claim 37, the controller further to validate the received permit using a public key which corresponds to a private signing key located at the remote location.

46. The apparatus of claim 37, wherein the remote location is a secure and trusted location.

47.
The apparatus of claim 37, the controller to change the hardware configuration without any physical change to the hardware.

48. The apparatus of claim 37, wherein the permit is a secure permit and/or a signed permit.

49. The apparatus of claim 37, wherein transaction information is bound inside the permit such that future returns or exchanges can be enabled.

50. The apparatus of claim 37, wherein once a permit is signed for a particular hardware device it cannot be used on another hardware device.

51. The apparatus of claim 37, wherein software running outside of the hardware device cannot emulate the functionality of software running inside the hardware device.

52. The apparatus of claim 37, the controller to perform an override during a boot and/or initialization process, and to change the hardware configuration in response to the override.

53. The apparatus of claim 37, wherein the secure permit request is not uniquely identifiable.

54. The apparatus of claim 37, wherein the identity of the hardware and/or the user of the hardware is not determinable from the secure permit request.

55. An apparatus comprising: a server to receive from a remote location a secure permit request to change a hardware configuration at the remote location, to send a secure permit to the remote location in response to the permit request, wherein the permit is to allow the remote location to change the hardware configuration.

56. The apparatus of claim 55, wherein the hardware configuration is a hardware configuration of a chipset or a chipset part.

57. The apparatus of claim 55, wherein one or more cryptographic keys are used to ensure secure communication and permit authentication with the remote location.

58. The apparatus of claim 55, wherein the permit includes a unique signature.

59. The apparatus of claim 55, wherein the permit is to allow the remote location to change the hardware configuration without any physical change to the hardware.

60. The apparatus of claim 55, wherein the permit is a secure permit and/or a signed permit.

61. The apparatus of claim 55, wherein the permit is to allow the remote location to change the hardware configuration in response to an override operation performed during a boot and/or initialization process.

62. The apparatus of claim 55, wherein the secure permit request is not uniquely identifiable.

63. The apparatus of claim 55, wherein the identity of the hardware and/or the user of the hardware is not determinable from the secure permit request.

64. The apparatus of claim 55, wherein the secure permit request has been created at the remote location in response to the random value.

65. The apparatus of claim 55, the server to use a private signing key that corresponds to a public key located at the remote location to help in validation of the permit at the remote location.

66. The apparatus of claim 55, wherein transaction information is bound inside the permit such that future returns or exchanges can be enabled.

67. The apparatus of claim 55, wherein once a permit is signed for a particular hardware part it cannot be used on another hardware part.

68. The apparatus of claim 55, wherein software running outside of the hardware cannot emulate the functionality of software running inside the hardware.
PROVISIONING, UPGRADING AND/OR CHANGING OF HARDWARE

TECHNICAL FIELD

The inventions generally relate to provisioning, upgrading, and/or changing of hardware.

BACKGROUND

Currently, in order to change hardware (for example a SKU or Stock Keeping Unit) in a computer system, manufacturers use a testing station on the manufacturing floor. It would be beneficial to allow hardware (and/or SKU) provisioning or changing to be done directly by the consumer of the component (for example, by the OEM and/or end user or IT department of the end user) rather than using the current process of testing on the manufacturing floor.

Additionally, current techniques in the computer industry for upgrading a hardware configuration require replacement of the hardware. For example, some previously used techniques include changing a hardware configuration with physical changes to the hardware such as, for example, changing pins, jumpers, straps, fuses, etc. It would be beneficial to provide a change or upgrade of a hardware configuration without requiring replacement of the actual hardware or making such physical changes.

Purchases made over a network do not currently protect privacy at the end points of the transaction. Unique identifiers are currently used for transport of internet purchases, for example. It would be beneficial to allow a transaction to be anonymous to the seller of the services in order to ensure privacy for the buyer.

BRIEF DESCRIPTION OF THE DRAWINGS

The inventions will be understood more fully from the detailed description given below and from the accompanying drawings of some embodiments of the inventions which, however, should not be taken to limit the inventions to the specific embodiments described, but are for explanation and understanding only.

FIG 1 illustrates a system according to some embodiments of the inventions.
FIG 2 illustrates a system according to some embodiments of the inventions.
FIG 3 illustrates a system according to some embodiments of the inventions.
FIG 4 illustrates a system according to some embodiments of the inventions.
FIG 5 illustrates a system according to some embodiments of the inventions.
FIG 6 illustrates a permit according to some embodiments of the inventions.
FIG 7 illustrates a flow according to some embodiments of the inventions.
FIG 8 illustrates key locations according to some embodiments of the inventions.
FIG 9 illustrates a flow according to some embodiments of the inventions.
FIG 10 illustrates a system according to some embodiments of the inventions.
FIG 11 illustrates a flow and a system according to some embodiments of the inventions.

DETAILED DESCRIPTION

Some embodiments of the inventions relate to provisioning, upgrading, and/or changing of hardware.

FIG 1 illustrates a system 100 according to some embodiments. In some embodiments system 100 includes a processor 102 (and/or Central Processing Unit or CPU), a Graphics and Memory Controller Hub 104 (and/or GMCH and/or Memory Controller Hub or MCH), and an Input/Output Controller Hub 106 (and/or ICH). In some embodiments, GMCH 104 includes a Management Engine 112 (and/or Manageability Engine and/or ME), which is a microcontroller and/or hardware processing engine. In some embodiments, ME 112 is able to run firmware services and applications, and is in some embodiments the same as or similar to other ME devices described in more detail below. In some embodiments GMCH 104 contains a memory controller that provides access to the system memory.
A small portion of the system memory is used by the ME 112 for its runtime memory needs. This memory is separated from the memory accessed by the operating system (OS) using special hardware mechanisms. In some embodiments, the architecture that creates the separation is called Unified Memory Architecture (UMA).

In some embodiments, the ICH 106 contains an Ethernet network controller, network filters, and/or a nonvolatile flash memory controller, among other things. In some embodiments, a wireless Local Area Network (LAN) or WiFi network controller is connected to the ICH 106 via a PCI Express bus, for example. The network controller, wired and wireless LAN, and network filters provide in some embodiments out-of-band (OOB) communication access to the ME 112. OOB communication allows the ME 112 to communicate over a network without having any dependence on the OS or the drivers that reside therein. OOB communication is capable of working even when the computer is in some states where the OS is not working or is sleeping (for example, when the OS has crashed, or is in a standby state or a hibernate state).

The flash controller in the ICH 106 provides access to the flash memory (for example, also referred to as the nonvolatile memory or NVM) located on the motherboard of the computer. The NVM houses in some embodiments Basic Input/Output System (BIOS) code, ME code, and/or data, among other things.

The GMCH 104 and ICH 106 communicate in some embodiments with each other using a Direct Media Interface (DMI) bus and/or a Controller Link (CLink) bus. The DMI bus is a chip-to-chip interconnect between the ICH 106 and the GMCH (or MCH) 104. This high speed interface ensures that the Input/Output (I/O) subsystem (for example, PCI Express, Intel High Definition Audio, Serial ATA or SATA, Universal Serial Bus or USB, etc.) receives the necessary bandwidth for peak performance. In some embodiments, the CLink bus is a proprietary interface that can be used even when the computer is in a sleep state or a hibernate state, in addition to being used when the OS is operational.

In some embodiments, GMCH 104 (and/or MCH 104) is coupled directly and/or indirectly to a number of devices including but not limited to one or more displays and/or display ports (for example, CRT, HDMI, TV, LVDS), one or more graphics devices and/or graphics ports, and/or one or more memory devices (for example, Dual In-Line Memory Module or DIMM devices). In some embodiments, ICH 106 is coupled directly and/or indirectly to a number of devices including but not limited to Peripheral Component Interconnect (PCI) devices and/or PCI buses, Universal Serial Bus (USB) devices and/or USB buses, Serial ATA devices, SPI flash devices, discrete TPM devices, Super I/O devices, SMBus devices, High Definition Audio devices, PCI Express devices, Local Area Network (LAN) devices, Wide Area Network (WAN) devices, Wireless Local Area Network (WLAN) devices, Wireless Wide Area Network (WWAN) devices, WiMAX devices, flash memory devices, express card devices, etc.

FIG 2 illustrates a system 200 according to some embodiments. In some embodiments system 200 includes a processor 202 (and/or Central Processing Unit or CPU) and a Platforms Controller Hub 204. In some embodiments, processor 202 includes two or more cores (for example, as illustrated by core 222 and uncore 224 in FIG 2). In some embodiments, PCH 204 includes a Management Engine 212 (and/or Manageability Engine and/or ME), which is a microcontroller and/or hardware processing engine.
In some embodiments, ME 212 is able to run firmware services and applications. In some embodiments, ME 212 is the same as or similar to ME 112 and/or to other ME devices described in more detail below. In some embodiments, the processor 202 and the PCH 204 communicate with each other using a Direct Media Interface (DMI) bus. This high speed interface ensures that the Input/Output (I/O) subsystem (for example, PCI Express, Intel High Definition Audio, Serial ATA or SATA, Universal Serial Bus or USB, etc.) receives the necessary bandwidth for peak performance. In some embodiments, the PCH 204 performs many or all of the functions and features and/or has many or all of the connected devices as described above in reference to ICH 106. In some embodiments, some of the functions, features, and/or connections described above in reference to GMCH 104 are moved to the processor 202 and some are moved to the PCH 204.

In some embodiments, processor 202 is coupled directly and/or indirectly to a number of devices including but not limited to one or more displays and/or display ports (for example, CRT, HDMI, TV, LVDS), one or more graphics devices and/or graphics ports, and/or one or more memory devices (for example, Dual In-Line Memory Module or DIMM devices). In some embodiments, PCH 204 is coupled directly and/or indirectly to a number of devices including but not limited to Peripheral Component Interconnect (PCI) devices and/or PCI buses, Universal Serial Bus (USB) devices and/or USB buses, Serial ATA devices, SPI flash devices, discrete TPM devices, Super I/O devices, SMBus devices, High Definition Audio devices, PCI Express devices, Local Area Network (LAN) devices, Wide Area Network (WAN) devices, Wireless Local Area Network (WLAN) devices, Wireless Wide Area Network (WWAN) devices, WiMAX devices, flash memory devices, express card devices, etc.

FIG 3 illustrates a Manageability Engine, Management Engine and/or ME 300 according to some embodiments. In some embodiments, ME 300 is the same as or similar to other ME devices described herein (for example, in some embodiments ME 300 is the same as or similar to ME 112 and/or ME 212 described above). In some embodiments, ME 300 includes a processor (for example, an ARC processor) 302, a code cache 304, a data cache 306, a direct memory access (DMA) engine 308, a crypto engine 310, a read only memory (ROM) 312, a Controller Link (CLink) interface 314, a Management Engine interface 316, a memory controller interface 318, an interrupt controller 320, high precision and/or watchdog timers 322, internal random access memory (and/or SRAM) 324, and/or a connector 326 to a main memory controller, which are coupled together over an ME backbone bus 330.

The code cache 304 and data cache 306 help to accelerate ME functionality by reducing memory accesses to system memory. The DMA engine 308 helps the ME 300 to move data to and from the OS memory and ME UMA (Unified Memory Architecture) memory. The DMA engine 308 is only accessible by the ME 300, and is not accessible by the OS. Further, the ME 300 does not provide any generic interfaces to the OS to access the DMA engine 308. The crypto engine 310 provides hardware offloads to accelerate the cryptographic operations done inside the ME 300 for secure communication protocols such as, for example, wireless security, HTTP security via TLS, etc. The initial boot code for the ME 300 is located in and executed from the ROM 312.
In some embodiments, the CLink interface 314 is used for communication between the GMCH and ICH in low power states such as sleep or hibernate. Some ME specific devices in the ICH communicate with the ME 300 exclusively over CLink, while some devices can communicate over DMI as well as CLink (for example, the network controller).

A small portion of the main system memory is used by the ME 300 for its runtime memory needs. This separation is done using the UMA mechanism. In some embodiments, an integrated graphics controller in the GMCH also uses the same mechanism to use a portion of the main system memory for its needs. In some embodiments the size of this memory is 16 MB, which is less than 1% of the total system RAM in a computer having 2-3 GB of DRAM. From the perspective of the OS, the Graphics UMA memory portion will appear to be a little larger than that of computers that do not have an ME.

In some embodiments, ME 300 uses a NOR flash nonvolatile memory (NVM) that is present on the motherboard for persistent storage of the code, configuration, user data, etc. The NVM is also used to store the BIOS code and other OEM specific data. The NVM is divided into specific regions, including separate regions for the ME, the BIOS, and the network controller, for example. The NVM contains an access control descriptor (for example, at the very beginning and/or address 0 of the NVM) which specifies the permissions for accessing the various regions of the NVM. The ICH hardware ensures that these permissions are enforced. The controller that the ICH uses for accessing the flash is based on Serial Peripheral Interface (SPI). The ME region of the flash is further divided into regions for code, recovery code, internal configuration data and variable storage, event logs, and user/ISV relevant data.

In some embodiments, in desktop platforms only the Ethernet network adapter is connected to the ME 300. In some embodiments, in mobile platforms the ME 300 has access to both Ethernet and WiFi network controllers (for example, both when the OS is functional and when it is not functional, such as when the system has crashed, is sleeping, etc.). Network controllers such as Ethernet and Wi-Fi controllers communicate in some embodiments with the ME 300 using the CLink interface, and the ME accesses traffic differently from an Ethernet controller (for example, a Gigabit Ethernet controller) than from a Wi-Fi controller. The ME sends and receives traffic directly over the Ethernet controller without using the OS. In some embodiments of Wi-Fi, however, the network controller has a single master, and when the OS is operational the WiFi traffic is routed to the ME via the WiFi driver in the OS. However, when the OS crashes or goes to sleep, the ME assumes ownership of the WiFi network controller and performs the communication directly.

Remote communication with computers may be implemented from a management console (for example, using HTTP and other protocols) over these interfaces. The ME firmware can share a common LAN MAC, hostname, and IP address with the OS, helping to minimize IT infrastructure cost. In some embodiments, the out-of-band (OOB) communications architecture of the ME supports ARP, DHCP, and IP port filters, for example. The OOB communications architecture supports ARP by forwarding ARP packets containing a specific IP address to the host and/or ME. The OOB communications architecture supports DHCP by forwarding DHCP offer and ACK packets to the host and/or the ME. The OOB communications architecture supports IP port filters (for example, HTTP and redirection) by redirecting incoming IP packets on a specific port to the ME.
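The forwarding rules just described amount to a small dispatch table. The following is a minimal illustrative sketch, written in Python for readability; the packet representation, the watched IP address, and the redirected port numbers are assumptions for illustration only, not the actual filter hardware interface.

```python
# Hypothetical model of the OOB filter dispatch described above: ARP
# packets for a watched IP go to both host and ME, DHCP offer/ACK
# packets go to both host and ME, and packets on registered IP ports
# are redirected to the ME alone. Packet fields are illustrative.

ME_WATCHED_IP = "192.0.2.10"          # assumed shared LAN IP address
ME_REDIRECT_PORTS = {16992, 16993}    # assumed management ports

def route_packet(pkt: dict) -> set:
    targets = {"host"}
    if pkt.get("type") == "arp" and pkt.get("target_ip") == ME_WATCHED_IP:
        targets.add("me")
    elif pkt.get("type") == "dhcp" and pkt.get("op") in {"offer", "ack"}:
        targets.add("me")
    elif pkt.get("type") == "ip" and pkt.get("dst_port") in ME_REDIRECT_PORTS:
        targets = {"me"}  # redirected: the ME consumes the packet
    return targets

assert route_packet({"type": "ip", "dst_port": 16992}) == {"me"}
assert route_packet({"type": "arp", "target_ip": ME_WATCHED_IP}) == {"host", "me"}
```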
In some embodiments, the ME ROM (for example, ROM 312) is masked into the silicon of the GMCH chip. The ROM contains the reset vector (the very first set of instructions to execute after the ME is reset). The ROM is only accessible by the ME and not by the host or the OS. Since the ROM code is masked in the chip during manufacturing, it can never be changed and is therefore secure. The ROM is used, for example, to configure ME memory areas, initialize certain hardware pieces, check the integrity and signature on the firmware image on the flash, and transfer control to the firmware image. In some embodiments, the ROM is the root of trust of the ME firmware.

In some embodiments, an ME kernel module is composed of services and drivers that provide the base functionality of the ME environment. The kernel provides the basic set of services that are expected for any general purpose execution environment. For example, in some embodiments, these services include bootstrap and initialization, task and thread management, memory management, interrupt management, timers, messaging and events/event monitoring, security and cryptographic functions, drivers for local and network interfaces, storage, etc., power management, interface discovery, and/or firmware update.

Since the ME houses some very security-sensitive technologies (for example, the Trusted Platform Module or TPM), it is necessary to provide a high degree of separation and isolation, at the kernel level, between the highly security-sensitive applications and others. Therefore, the kernel is partitioned into privileged and non-privileged portions in some embodiments. The privileged portion includes the ROM, the initialization modules such as loader and bring-up modules, a portion of the kernel called the privileged kernel, and TPM firmware. The non-privileged portion includes the remaining portion of the kernel called the non-privileged kernel, support modules, common services modules, and other firmware applications. Firmware that executes in privileged mode has access to privileged hardware resources such as certain memory ranges and certain hardware registers. Non-privileged firmware that attempts to access privileged resources will cause an exception or interrupt to occur. A register in the ME contains the address of the code for entering and exiting out of the privileged mode.

In some embodiments, ME 300 has access to a special purpose clock on the chipset called a Protected Real Time Clock (PRTC) that is not accessible by the OS and is only accessible by the ME.
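The privileged/non-privileged partition described above can be modeled as a simple access check: a non-privileged access to a privileged resource raises a fault, mirroring the exception or interrupt behavior. A toy Python sketch follows; the address range and all names are assumptions.

```python
# Toy model of the privileged/non-privileged firmware partition: an
# access to a privileged resource from non-privileged code raises a
# fault, mirroring the exception/interrupt described above. The
# address range is an illustrative assumption.

PRIVILEGED_RANGES = [(0x0000, 0xFFFF)]  # e.g., TPM and loader regions

class PrivilegeFault(Exception):
    """Stands in for the hardware exception or interrupt."""

def access(addr: int, privileged_mode: bool) -> str:
    in_privileged = any(lo <= addr <= hi for lo, hi in PRIVILEGED_RANGES)
    if in_privileged and not privileged_mode:
        raise PrivilegeFault(f"non-privileged access to {addr:#x}")
    return "ok"

assert access(0x2_0000, privileged_mode=False) == "ok"
try:
    access(0x100, privileged_mode=False)
except PrivilegeFault:
    pass  # hardware would raise an exception or interrupt here
```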
FIG 4 illustrates a system 400 according to some embodiments. In some embodiments, system 400 includes a Manageability Engine (and/or Management Engine and/or ME) 402 coupled to a permit server 404 (for example, one or more business to business servers or B2B servers) via the internet 406. A permit 412 is transferred between the ME 402 and the permit server 404. ME 402 is coupled to the internet via host communication link 408 and/or out of band (OOB) network communication link 410.

In some embodiments, ME 402 includes a Host to Embedded Controller Interface (HECI) 422, a HECI driver 424, a Network Interface Card (NIC) 426, a network stack 428, Active Management Technology (AMT) 430, flexible SKU 432, Capabilities and Licensing Services (CLS) and/or Intel Capabilities and Licensing Services (iCLS) 434, fuse access 436, hardware identifier (HWID) 438, fuse read access 440, fuse override 442, secure file system 444, Serial Peripheral Interface (SPI) flash 446, cryptographic driver 448, Rivest Shamir Adleman (RSA) key based encryption 450, Secure Hashing Algorithm version 1 (SHA-1) 452, True Random Number Generator (TRNG) 454, secure time 456, and/or Protected Real Time Clock (PRTC) 458.

In some embodiments, HECI 422, NIC 426, HWID 438, fuse read access 440, fuse override 442, SPI flash 446, RSA 450, SHA-1 452, TRNG 454, and/or PRTC 458 are implemented in hardware. In some embodiments, HECI driver 424, network stack 428, AMT 430, flexible SKU 432, CLS 434, fuse access 436, secure file system 444, crypto driver 448, and/or secure time 456 are implemented in firmware.

In some embodiments, system 400 illustrates building blocks and data structures that may be used to perform secure operations between ME 402 and a signing server. According to some embodiments, permit 412 is a data structure binary that provides authentic feature information to the ME system. In some embodiments, permit server 404 is a back-end infrastructure server (or servers) capable of generating a permit such as permit 412.

In some embodiments, ME firmware runs on ME 402 and includes several components. For example, in some embodiments, CLS 434 validates and parses permits to provide information to CLS plug-ins such as the Flex SKU 432 and/or SaaS/SMT (not illustrated in FIG 4). In some embodiments CLS plug-ins such as Flex SKU (or other CLS plug-ins such as SaaS) accomplish specific features of the ME. In some embodiments, firmware services and drivers are used to provide essential services to CLS 434 and CLS plug-ins. In some embodiments, external interface firmware components allow the ME to interface with external entities via those external interface components (for example, AMT 430, HECI driver 424, network stack 428, etc.).

In some embodiments, HWID 438 is a unique identifier created in the chipset hardware during the manufacturing process of each chipset (for example, implemented as fuses in the chipset). In some embodiments, fuse read access 440 is hardware logic used to read the fuses in the chipset. In some embodiments, fuse override 442 is a mechanism by which the hardware overrides the actual fuses with a supplied bitmap during some point in the initialization of the chipset hardware. In some embodiments, fuse access 436 is firmware logic that exposes the fuse reading and overriding mechanisms to the CLS 434 firmware component.
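Because fuse override 442 simply substitutes a supplied bitmap for the actual fuse values during initialization, its effect reduces to a few lines of bit arithmetic. The sketch below is a hypothetical model assuming a small feature-fuse word and a mask/value pair; the real register layout is not specified here.

```python
# Hypothetical model of fuse override 442: hardware feature fuses are
# read, then selectively replaced by a firmware-supplied bitmap early
# in chipset initialization. Field widths and mask semantics are
# illustrative assumptions, not the actual register layout.

def apply_fuse_override(hw_fuses: int, mask: int, value: int) -> int:
    """Bits set in mask take their value from value; all other bits
    keep the hardware fuse setting."""
    return (hw_fuses & ~mask) | (value & mask)

hw_fuses = 0b0000_1010   # as blown at manufacturing
mask     = 0b0000_0101   # features a permit enables or disables
value    = 0b0000_0101   # new settings for those features

assert apply_fuse_override(hw_fuses, mask, value) == 0b0000_1111
```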
In some embodiments, SPI flash 446 is a non-volatile storage mechanism (for example, NOR based flash) that is accessible from the chipset using the SPI protocol (and therefore connected to an SPI controller in the chipset). In some embodiments, RSA 450 is a hardware unit in the chipset that helps in accelerating RSA computation via hardware based add and multiply circuits (and in some embodiments, the rest of the RSA logic is implemented in firmware). In some embodiments, SHA-1 452 is a hardware unit in the chipset that implements an SHA-1 algorithm. In some embodiments, TRNG 454 is a hardware unit in the chipset that generates unique random numbers in the chipset using a thermal noise concept, for example. In some embodiments, crypto driver 448 is a firmware driver that provides crypto operations as usable interfaces (for example, RSA-2048 sign, encrypt, verify, decrypt, SHA-1 hash generate, TRNG generate, etc.) to other firmware components such as, for example, CLS 434. In some embodiments, PRTC 458 is a protected clock that keeps time that is not modifiable by host OS software, thereby providing a more secure notion of time to ME firmware components such as the CLS 434.

In some embodiments, the ME (for example, ME 112, ME 212, ME 300, and/or ME 402) has access to a special purpose clock on the chipset (for example, PRTC 458). This clock is only accessible by the ME and is not accessible by the OS. The ME uses this clock for its time related verifications (such as certificate validations, Kerberos time stamp checks, etc.) rather than relying on the system Real Time Clock (RTC). The RTC can be changed (back-dated or moved forward) by the user or malware in the OS. Therefore, the ME does not rely on the RTC. In some embodiments, the PRTC is powered by a battery so that the PRTC maintains the time even when the computer is completely powered off. In some embodiments, when a system is provisioned in a small business mode, for example, the ME clock is synchronized with the BIOS clock at boot time, and both generally represent the local time. In some embodiments, for example, when a system is provisioned in an enterprise mode, the clocks are separate and the PRTC clock in the ME is set, for example, to GMT time.

In some embodiments, an ME includes a True Random Number Generator (for example, such as TRNG 454) which is based on thermal noise variants. The TRNG is very helpful in assisting in cryptographic transactions such as generating random session keys, tokens, nonces, etc. The TRNG outputs 32 bit random numbers at a time, for example.

Many cryptographic algorithms and mechanisms make use of random numbers. An important feature of a random number generator (RNG) is its entropy, which is the measurement of the inability of an external viewer to predict the next number that will be generated by the RNG, even if the viewer knows all of the previously-generated random numbers by that generator. A pseudo-RNG (PRNG) may be used, which is a deterministic algorithm that produces the next random number based on the current generator's state. Such an algorithm maintains a high level of entropy as long as the initial state (or "seed state") of the PRNG is not known. Some PRNG implementations seed themselves using a value of one of the platform clocks. This value can be somewhat unpredictable due to the high resolution of the clock, and therefore makes a reasonable seed for the PRNG that is suitable for applications requiring a moderate level of security. However, given that a large number of platforms power up at the same time (a time that may be known to within a few minutes or seconds), this could help a potential attacker to narrow down the possibilities and therefore crack the PRNG seed state, allowing prediction of the next numbers generated by the PRNG. An attacker could also learn from the generated numbers from one hacked platform to break other platforms in the enterprise (also known as a BORE attack: "Break Once, Run Everywhere").

In some embodiments, a true random number generator (TRNG) such as TRNG 454 may be a TRNG hardware device.
In some embodiments, such a TRNG may be based on two resistors that produce thermal noise. The noise is amplified and provided as input to a frequency-modulated low-frequency oscillator. Combined with a high-frequency oscillator, a nearly-random bitstream is produced. A voltage regulator controls the hardware components to avoid any bias based on voltage. Additionally, a logic block attempts to correct the bitstream of any bias that may have been inserted (for example, due to an imperfect duty cycle of the oscillator) by using a standard anti-bias correction algorithm.

In some embodiments, a PRNG may be implemented for the TRNG, where the state of the PRNG is occasionally reset to initialize to a state generated by the TRNG. This creates a powerful high-quality RNG that is able to keep up with the high usage of random numbers in the subsystem.
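A minimal sketch of the hybrid design described above follows: a deterministic hash-based PRNG whose state is periodically reset from a true entropy source. Here os.urandom stands in for the hardware TRNG, and the reseed interval and SHA-256 construction are illustrative assumptions.

```python
import hashlib
import os

class ReseededPRNG:
    """Toy sketch of the hybrid RNG described above: a deterministic
    hash-based PRNG whose seed state is periodically reset from a true
    entropy source. os.urandom stands in for the hardware TRNG; the
    reseed interval and SHA-256 construction are assumptions."""

    def __init__(self, reseed_interval: int = 1024):
        self.reseed_interval = reseed_interval
        self.counter = 0
        self.state = os.urandom(32)  # initial seed state from the "TRNG"

    def next32(self) -> int:
        if self.counter % self.reseed_interval == 0:
            self.state = os.urandom(32)  # occasional TRNG reseed
        self.counter += 1
        self.state = hashlib.sha256(
            self.state + self.counter.to_bytes(8, "big")).digest()
        return int.from_bytes(self.state[:4], "big")  # 32-bit output

rng = ReseededPRNG()
print([rng.next32() for _ in range(3)])
```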
In some embodiments, the chipset has a key (for example, a 128 bit key) for use by firmware in symmetric encryption and integrity protection operations. This key is generated during manufacturing of the chipset by randomly blowing fuses dedicated for this purpose. The ME is the only component that can access this keying material, and it provides the root of trust for several ME operations. No device outside of the ME knows the value of this key.

In some embodiments, a chipset fuse key is used that is unique to each system, and is known only by the firmware, for example. According to some embodiments, the chipset fuse key is a set of 128 fuses in the chipset. Each fuse is blown or un-blown, corresponding to a 0 or 1 value. The status of each of the fuses (0 or 1) is determined at manufacturing. A random subset of the fuses is blown on the chipset manufacturing line, while the rest remain un-blown. Thus, a random unique value is created for each chipset. The 128-fuse set thus creates a 128-bit key (or a key with any other number of bits depending on the number of fuses).

In some embodiments, encryption of secrets is achieved using standard encryption techniques, but the interesting feature is the key that is used for the encryption. The encryption key needs to be stored in some nonvolatile form, but the flash itself is not a good place to store it (otherwise the attacker would first read this key from the flash and then use it to decrypt the rest of the protected data in the flash). Therefore, in some embodiments, firmware derives the encryption key from the chipset fuse key, and uses this encryption key to encrypt the sensitive items being placed on the non-volatile flash. Since secure firmware (for example, ME firmware) is the only entity that has knowledge of the chipset fuse key (and therefore the encryption key and the integrity protection key), even if the attacker pulls out the flash portion from the system and tries to read it directly, all he sees is the encrypted and/or integrity protected data.

According to some embodiments, a permit server (for example, such as permit server 404) is located remotely on a network from an ME (for example, ME 112, ME 212, ME 300, and/or ME 402). In some embodiments, a permit server (for example, such as a permit server 404) is a trusted provisioning server that creates authenticated permits to be used by an ME (for example, ME 112, ME 212, ME 300, and/or ME 402). The permit server has secure access (for example, via one or more hardware signing modules and/or HSM) to a cryptographic RSA (Rivest Shamir Adleman) private key, whose public component is embedded in the signed ME firmware.

In some embodiments, the hardware signing module (HSM) is FIPS 140-2 Level 3 compliant. In some embodiments, the HSM is a PCI card installed in a signing server. In some embodiments, the HSM is tamper resistant, provides active monitoring, destroys keying material in the event of tampering, protects a permit signing key, executes permit signing code, and/or accounts for permit purchases. In some embodiments, the permit server can create permits that the ME can authenticate as having originated from a trusted entity. The permit server embeds hardware fuse override information in a signed permit (for example, such as permit 412), which the ME consumes to implement a soft SKU process. In some embodiments, one or more servers perform the operations described herein relating to a permit server, depending upon implementation of infrastructure and transfer capacity requirements, for example.

In some embodiments, a permit (for example, such as permit 412) is a signed data structure containing hardware fuse override information. In some embodiments, no one other than one entity (for example, the owner of the permit server) can create a permit that can be successfully validated by an ME (for example, ME 112, ME 212, ME 300, and/or ME 402). The permit also contains, for example, a class and subclass identifier to indicate the interpretation of the rest of the data structure. The permit also contains, for example, a time stamp indicating when the permit was created (for example, in a "Timestamp" field). The permit also contains, for example, some attributes (for example, in a "Flags" field) that indicate some permit characteristics such as, for example, whether or not the system will reboot immediately after permit installation.

In some embodiments, a Manageability Engine, Management Engine, and/or ME (for example, such as ME 112, ME 212, ME 300, and/or ME 402) is, among other things, a hardware engine that executes programming of a fuse override mechanism. The ME runs, for example, signed/authenticated and verified firmware code. This ME firmware is capable of interacting with user code executing in the operating system (OS). The user code can interact with the ME firmware code to program hardware fuse override registers. The ME firmware ensures that all conditions are met (for example, including permit validation) before it changes the values in the hardware fuse override registers. The ME also executes its end of the permit installation protocol.

In some embodiments an ME uses a permit signing key pair (for example, an asymmetric RSA key pair) for permit signing and verification. The private portion of this key pair is possessed by a company such as a manufacturer of the ME, and resides in a secure data center facility of that company (for example, in a computer using a hardware signing module or HSM). The public portion of the key is maintained in a signed ME firmware image that cannot be changed by anyone. Permits are signed by the private portion of the key pair, and the public portion of the key is used by an ME to verify the signature.
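The permit signing scheme described above can be sketched with an ordinary RSA-2048 sign/verify pair, for example using the Python cryptography package: the private half stays at the permit server (inside an HSM), while the public half ships inside the signed ME firmware image. The permit bytes and the PKCS#1 v1.5 padding choice are assumptions for illustration.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Illustrative permit sign/verify with an asymmetric RSA-2048 pair:
# the private half stays at the permit server (inside an HSM), the
# public half is embedded in the signed ME firmware image. The permit
# bytes and the padding choice are assumptions for illustration.

server_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
firmware_public = server_private.public_key()  # shipped in ME firmware

permit_message = b"class=1;subclass=2;fuse-override=0x05;ts=1234567890"
signature = server_private.sign(permit_message, padding.PKCS1v15(),
                                hashes.SHA256())

def me_validate_permit(message: bytes, sig: bytes) -> bool:
    """ME-side check that the permit originated from the permit server."""
    try:
        firmware_public.verify(sig, message, padding.PKCS1v15(),
                               hashes.SHA256())
        return True
    except InvalidSignature:
        return False

assert me_validate_permit(permit_message, signature)
assert not me_validate_permit(permit_message + b"tampered", signature)
```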
In some embodiments, a SafeID system of keys is implemented in which every ME has a unique SafeID private key burned into the chipset as fuses during a manufacturing process of the chipset. The ME never allows any external access to this key. The group portion of the key is possessed by a company such as the manufacturer of the chipset and resides in a permit server system located in a data center of that company. The SafeID key in the chipset is used to create a signature (on a timestamp and a nonce, for example) which is verified by the permit server using the group key. This assures the permit server that the signature was performed by a true and valid ME.

In some embodiments, the chipset fuse key is a symmetric key that is unique for every chipset (and/or for every chipset part). It is uniquely programmed into every chipset during a manufacturing process by randomly blowing fuses in a program-and-forget manner. No one (including the manufacturer of the chipset or chipset part) has knowledge of this key, and the ME never allows this key to be externally accessed from the ME.

FIG 5 illustrates a system 500 according to some embodiments. System 500 includes a customer location 502, a permit server 504 (for example, in some embodiments, similar to and/or the same as permit server 404), an enrollment server 506, and a network 508 (for example, the internet). In some embodiments, network 508 couples together customer location 502, permit server 504, and enrollment server 506. In some embodiments, customer location 502 includes a customer, a customer's computing system, an Enrollment Application (EA) 512, and an ME 514. In some embodiments, EA 512 and/or ME 514 are resident on the customer's computing system. In some embodiments permit server (PS) 504 includes one or more business to business (B2B) servers and/or one or more back end servers. In some embodiments, enrollment server 506 includes one or more business to customer (B2C) servers, one or more ISV servers, and/or one or more front-end servers.

In some embodiments, dotted line 522 represents permit order and/or fulfillment transactions (for example, ISV permit order and/or fulfillment transactions) between permit server 504 and enrollment server 506. In some embodiments, dotted line 524 represents customer permit order and/or fulfillment transactions between customer location 502 and enrollment server 506. In some embodiments, permit server 504 is similar to or the same as permit server 404. In some embodiments, ME 514 is the same as or similar to ME 112, ME 212, ME 300, and/or ME 402.

As mentioned above, in some embodiments ES 506 is a B2C server (and/or more than one B2C server). In some embodiments, ES 506 is more than one server for load balancing. In some embodiments, ES 506 interacts with customers to operate the front end of the purchasing process and to interact with the PS 504 to purchase/receive permits.

In some embodiments, the Enrollment Application (EA) 512 is a local host application or agent that interacts with the enrollment server (for example, ES 506) to perform a customer purchasing process and to request/receive permits. EA 512 also interacts with the ME 514 to install permits in an in-band manner. For purposes of the permit installation protocol, in some embodiments the EA 512 acts for the most part as a pass-through agent between the ME 514 and the back-end (for example, the Permit Server 504 and the Enrollment Server 506). The EA 512 does interact with the user (customer) to generate payment and transaction information, and to use this information in the permit installation protocol. In some embodiments, the customer at customer location 502 runs the EA agent 512 to purchase a new feature (for example, a Flex SKU feature) of the customer's computing system. In some embodiments, the customer may be an end user or may be a corporate IT purchasing agent, for example.
In some embodiments, Enrollment Server 506 is an ISV/MSP domain responsible for interfacing with the customer (for example, an end user) to accomplish a permit purchase and installation process. In some embodiments, the ISV/MSP is used to close the feature sale (for example, implementing transaction initiation, revenue collection, management and support, etc.). In some embodiments, a company with a permit server such as permit server 504 provides the electronic commerce support for ordering, delivering, and invoicing permits, and is the sole source of permits. In some embodiments, the permit server 504 is under direct physical control of that company (for example, the permit server 504 is on the campus of that company, and that company provides support and security for the permit server 504).

In some embodiments, protection is provided against hacking at the end customer and/or product manufacture sites, abuse of user privacy is prevented, unique data is not exposed, fault tolerant operation may be maintained during a field upgrade of CLS firmware and/or permit data, and support for refunds and exchanges is provided for feature upgrades, feature cancellations, and/or trial features, etc.

FIG 6 illustrates a permit 600 according to some embodiments. In some embodiments, permit 600 includes a permit message 602 and/or a permit signature 604. Permit message 602 includes in some embodiments a permit header 612, a capability descriptor 614, a permit authentication identifier 616, and/or a customer authentication identifier 618. In some embodiments, permit header 612 includes an ID string 622, a permit version 624, a permit length 626, a message length 628, a time stamp 630, a class 632, and/or flags 634. In some embodiments, capability descriptor 614 includes a subclass 642, a vendor ID 644, a device ID 646, and/or a feature identifier 648. In some embodiments, permit signature 604 includes a permit message digest 652. In some embodiments, permit signature 604 and/or permit message digest 652 are signed with an e-commerce private key (for example, RPSK-priv).
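To make the permit layout of FIG 6 concrete, the following sketch serializes a header and capability descriptor with the fields named above. All field widths, values, and the byte order are assumptions; the figure does not fix an on-wire encoding, and the two length fields are left at zero here.

```python
import struct
import time

# Hypothetical serialization of the permit layout of FIG 6: a header
# (ID string 622, version 624, lengths 626/628, time stamp 630,
# class 632, flags 634) followed by a capability descriptor
# (subclass 642, vendor ID 644, device ID 646, feature identifier 648).
# All field widths, values, and the byte order are assumptions.

def build_permit_message(feature_id: int) -> bytes:
    header = struct.pack(
        ">4sHHHIHH",
        b"PRMT",           # ID string (assumed magic)
        1,                 # permit version
        0,                 # permit length (patched after assembly)
        0,                 # message length (patched after assembly)
        int(time.time()),  # time stamp
        0x0001,            # class
        0x0000,            # flags
    )
    capability = struct.pack(
        ">HHHI",
        0x0002,            # subclass
        0x8086,            # vendor ID (illustrative)
        0x1234,            # device ID (illustrative)
        feature_id,        # feature identifier
    )
    return header + capability  # signed separately, e.g. with RPSK-priv

print(len(build_permit_message(0xCAFE)), "bytes of unsigned permit message")
```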
FIG 7 illustrates a permit installation protocol 700 according to some embodiments. FIG 7 illustrates a customer 702, an Enrollment Application (EA) 704, a Management Engine (ME) 706, an Enrollment Server (ES) 708, and a Permit Server (PS) 710, which each participate in the permit installation process according to some embodiments. In some embodiments, the customer 702 first starts an application with the EA 704. Then the EA 704 obtains platform information from the ME 706. The customer 702 then sends purchase capabilities such as monetary amounts to the EA 704. Then the EA 704 creates a permit request with the ME 706. The ME 706 then sends a message to the EA 704 for the PS 710 to create a permit request. The EA 704 then sends a message to the PS 710 relating to creating a permit request, PS customer authentication, and/or monetary amounts. The PS 710 then sends a message to ME 706 that it is creating a permit request. Then the ME 706 works with the PS 710 to install a permit. The PS 710 then sends a response to the ME 706 that the permit is installed.

According to some embodiments, a permit installation protocol is described as follows. As discussed previously, in some embodiments a SafeID system of keys is implemented in which every ME has a unique SafeID private key burned into the chipset as fuses during a manufacturing process of the chipset. The ME never allows any external access to this key. The group portion of the key is possessed by a company such as the manufacturer of the chipset and resides in a permit server system located in a data center of that company. The SafeID key in the chipset is used to create a signature (on a timestamp and a nonce, for example) which is verified by the permit server using the group key. This assures the permit server that the signature was performed by a true and valid ME.

The use of a SafeID key system assures the permit server that it is talking to a true ME and not an impersonator of an ME. This is because only the ME has knowledge of the SafeID private key and the ME signs a timestamp value in the M1 message, for example, and later signs (RAND2 | RAND3) in M4. When the PS verifies the SafeID signature on M1, it knows that it is talking to a true ME, unless the message was replayed in the last 48 hours. However, when the PS receives M4, it confirms that it is talking to a true ME on the basis of verifying the SafeID signature on (RAND2 | RAND3).

According to some embodiments, a process for installing a permit on a computer is described below. The flow of this process presumes according to some embodiments a customer initiated transaction (that is, for example, the customer is selecting a particular upgrade and is paying for it using a credit card), and the permit is then installed on the computer of that customer. A Management Service Provider (MSP) initiated transaction is also possible according to some embodiments, in which a management application communicates with a corresponding console application to directly initiate the permit installation process in a manner that does not involve the end user of the computer.

In some embodiments, a private portion of a permit signing key pair and a public portion of the permit signing key pair are together an RSA key pair. The private portion of the permit signing key pair is located at the permit server and the public portion of the permit signing key pair is located in the chipset ROM. In some embodiments, a group portion of a SafeID key and a private portion of the SafeID key are together a pair of ECC based SafeID keys. The group portion of the SafeID key is located at the permit server and the private portion of the SafeID key is located in chipset fuses. In some embodiments a chipset fuse key is also located in chipset fuses.

FIG 8 illustrates key infrastructure 800 according to some embodiments. As illustrated in FIG 8, keys 802 located at the permit server include a private portion of the permit signing key pair and a group portion of the SafeID key, keys 804 located in chipset fuses include a private portion of the SafeID keys and a chipset fuse key, and keys 806 located in the chipset ROM include a public portion of the permit signing key pair.

In some embodiments, a firmware signing key (FWSK) is an RSA 2048 bit key pair (for example, an asymmetric key pair including FWSK-private and FWSK-public). The FWSK is used to sign one or more firmware (FW) modules. The ROM validates the signature to ensure the integrity of the FW module (that is, that it came from the correct manufacturing company and that it has not been modified). The FWSK-priv key is stored in a key vault within code signing system facilities (for example, of the manufacturing company). The FWSK-pub key is located in the masked ROM of the ME. The FWSK-pub key is used to verify the signature of arriving firmware before installation and execution.
In some embodiments, a root permit signing key (RPSK) is an RSA 2048 bit key pair used to sign permits (for example, an asymmetric key pair including RPSK-private and RPSK-public). The RPSK-priv key is stored in a key vault within permit signing facilities of a company (for example, a manufacturing company). The RPSK-pub key is located as part of the ME firmware (and signed by FWSK-priv). RPSK-pub is used to verify the signature of the arriving permits signed by RPSK-priv.

In some embodiments, a chipset fuse key (CFK) is a random per unit symmetric key (for example, of 128 bits). The CFK is "burned" in fuses as part of the manufacturing process. The CFK is used as the root key for the ME to generate other symmetric keys during run time (including a chipset storage key or CSK). In some embodiments, a chipset storage key (CSK) is a random symmetric key (for example, of 128 bits). The CSK is generated in the ME using the CFK. The CSK is used to encrypt data to be stored in the flash memory.

FIG 9 illustrates a flow protocol 900 according to some embodiments. The flow 900 of FIG 9 includes an ME 902, an EA 904, and a PS 906. In flow 900 the ME 902 creates M1 to include an RSA encrypt (for example, RPSK-pub, a SafeID sign indicating "I am an ME", a current time, etc.). The EA then constructs an authentication message and sends it as M2 to the PS 906. The PS 906 verifies the SafeID signature and checks a time value to be within a particular time tolerance limit (for example, within 48 hours). The PS 906 creates M3, which includes RAND1 and RAND2, and sends them to the ME 902 (for example, via the EA 904). The ME 902 creates a response M4 to include a SafeID sign, RAND2 and RAND3, and sends it to the PS 906 (for example, via the EA 904 and/or an ES). The PS 906 then verifies the SafeID signature to definitively prove that the other side is an ME 902 and not an imposter. The PS sends a message M5 to the ME 902, and the ME 902 then validates M5 before it installs the permit, replaces a manufacturing test permit or trial permit, and/or activates the next feature in the next system reboot.
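The M1 through M5 exchange of FIG 9 can be skeletonized as follows. Because a SafeID group signature cannot be reproduced here, an HMAC over a shared demo key stands in for it; the message contents and the 48-hour freshness window follow the text above, and everything else is an assumption.

```python
import hashlib
import hmac
import os
import time

# Skeleton of the M1..M5 exchange of FIG 9. An HMAC over a shared demo
# key stands in for the SafeID group signature; message contents and
# the 48-hour freshness window follow the text, all else is assumed.

DEMO_KEY = os.urandom(32)  # stand-in: ME holds SafeID-priv, PS the group key

def safeid_sign(data: bytes) -> bytes:
    return hmac.new(DEMO_KEY, data, hashlib.sha256).digest()

# M1/M2: the ME signs a current time; the PS checks signature and freshness.
t = int(time.time()).to_bytes(8, "big")
m1 = (t, safeid_sign(b"I am an ME" + t))
assert hmac.compare_digest(m1[1], safeid_sign(b"I am an ME" + m1[0]))
assert time.time() - int.from_bytes(m1[0], "big") < 48 * 3600

# M3: the PS issues two fresh random values.
rand1, rand2 = os.urandom(16), os.urandom(16)  # rand1 carried, unused here

# M4: the ME signs RAND2 | RAND3, proving liveness rather than replay.
rand3 = os.urandom(16)
m4 = safeid_sign(rand2 + rand3)

# The PS verifies M4 before sending the signed permit as M5.
assert hmac.compare_digest(m4, safeid_sign(rand2 + rand3))
```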
In some embodiments, this is implemented using an embedded microcontroller, firmware, and hardware to allow software based hardware upgrade. FIG 10 illustrates a system 1000 according to some embodiments. In some embodiments, system 1000 includes a memory controller hub (MCH) 1002 and an Input/Output (I/O) Controller Hub (ICH) 1004. In some embodiments MCH 1002 includes one or more hardware fuse override registers 1012, a Management Engine (and/or Manageability Engine and/or ME) 1014, one or more flexible SKU (Flex-SKU) disable fuses 1016, one or more hardware fuse read-only registers 1018, and/or a firmware authentication module 1020. In some embodiments, ICH 1004 includes one or more hardware fuse override registers 1022, one or more flexible SKU (Flex-SKU) disable fuses 1026, and one or more hardware fuse read-only registers 1028. In some embodiments, hardware fuse override registers 1012 and/or 1022 enable the ME firmware to override the setting of the hardware feature fuses of each component participating in the Flexible SKU solution (for example, the MCH 1002 and/or the ICH 1004). These registers are not writeable by the host. This ensures that host software cannot program them once they are configured by the ME firmware, and/or that other firmware runtime bugs do not result in the registers being reprogrammed. The ME firmware programs and locks these registers very early in the boot/initialization cycle of the platform. In some embodiments, the ME 1014 is a hardware processing engine that executes the programming of the fuse override configuration illustrated in FIG 10. The ME 1014 runs signed/authenticated and verified firmware code. This ME firmware interacts with user code executing in the operating system (OS). The user code can interact with the ME firmware code to program the hardware fuse override registers 1012 and/or 1022. The ME firmware ensures that all conditions are met before it changes the values in the hardware fuse override registers. In this manner, only the firmware running on the ME 1014 is able to program the hardware fuse override registers 1012 and 1022. The Flex-SKU disable fuse 1016 and/or Flex-SKU disable fuse 1026 are hardware fuses that are used to disable the hardware fuse override configuration of system 1000 of FIG 10. The fuses 1016 and 1026 are readable by firmware, and the flexible SKU firmware will only operate when fuses 1016 and/or 1026 are set to an enabled state (that is, set to enable Flexible SKU, for example). Fuses 1016 and/or 1026 are important for the initial override mechanism so that the override feature can be disabled in the event of a serious flaw. This hardware disable may also be used in some embodiments to define SKUs of the chipset that ship with the override mechanism or those that ship without the mechanism. In some embodiments, hardware fuse read-only registers 1018 and/or 1028 allow the hardware to support firmware read access to the hardware feature fuses of each component participating in the override solution. In some embodiments, the firmware authentication module 1020 provides a way to ensure that any ME firmware module that can modify the hardware override registers is firmware that is authored by a particular company (for example, the company that manufactured the hardware). This ensures that an attacker cannot write their own firmware and use it to enable features on the hardware. In some embodiments the MCH 1002 illustrated in FIG 10 includes additional features that are not illustrated in FIG 10.
For example, in some embodiments, MCH 1002 includes a chipset identifier (chipset ID), an ME debug disable mechanism, a chipset fuse key, a random number generator (RNG), a monotonic counter, and/or additional features. In some embodiments, a chipset ID is included in MCH 1002 to provide a mechanism to allow firmware to identify that it is running on a given family of hardware. In some embodiments, MCH 1002 includes an ME debug disable mechanism that is supported to prevent system debug capabilities in production hardware from being used as a method of attacking a CLS system. In some embodiments, MCH 1002 includes a chipset fuse key that provides firmware read access to a set of fuses that uniquely identify CLS hardware. The chipset fuse key is used by firmware to generate a signal used to match platform hardware to permits. In some embodiments, MCH 1002 includes a random number generator (RNG) (for example, a true RNG) to provide a secure CLS and Flexible SKU system. In some embodiments where hardware RNG is not possible, a firmware implementation of a pseudo random number generator is used to meet security and privacy concerns. In some embodiments MCH 1002 includes a monotonic counter to support permit revocation and upgrade flows. According to some embodiments, a flow of control to program hardware override registers during a system initialization/boot time proceeds in the following manner. First the user starts the system and the ME 1014 then initializes. Then the ME 1014 reads the hardware fuse matrix and writes it into the hardware fuse read-only registers 1018 and/or 1028. The ME 1014 sets a bit in an internal register to let the override mechanism know that it can execute. Then the override firmware checks to determine whether the mechanism is turned on or off by reading the Flex-SKU disable fuse 1016 and/or 1026. If not enabled, the override firmware allows the platform boot process to continue without executing any further firmware override steps. If it is enabled, then the override firmware in the ME continues, in which case it reads the new override fuse map from a secure/trusted location. The ME override firmware then writes the new override fuse map to the hardware fuse override registers 1012 and/or 1022. According to some embodiments, a hardware upgrade service is implemented in which end users are able to change hardware configurations (for example, of their chipsets) to enable new features in return for a monetary payment. A secure transaction is performed between the end user and a company such as the company that manufactured the hardware. After receiving payment, the company issues a secure and signed permit to the user's computer (for example, to the chipset of the user's computer). The user's computer (and/or chipset) verifies the permit and uses the information in the permit to program the fuse override register at boot time to enable the new configuration. According to some embodiments, hardware configurations are changed using software programming without any physical alterations of the hardware. FIG 11 illustrates a protocol flow 1100 according to some embodiments. In some embodiments FIG 11 includes an ME 1102 (for example, in some embodiments ME 1102 is similar to or the same as all or some of the MEs described herein such as ME 112, ME 212, ME 300, ME 402, etc) and a permit server (PS) 1104 (for example, in some embodiments, PS 1104 is similar to or the same as all or some of the other permit servers described herein such as PS 404, 504, etc).
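Before detailing the FIG 11 protocol, the boot-time flow of control described above can be sketched in firmware-style C. The register addresses, bit mask, and function names below are hypothetical placeholders, not details from the text; the sketch only mirrors the ordering of the steps: mirror the fuse matrix into the read-only registers, check the Flex-SKU disable fuse, and only then fetch, apply, and lock the override fuse map.

#include <stdint.h>

/* Hypothetical MMIO locations and helper; a real part would define these
 * in its hardware interface. */
#define FUSE_MATRIX_REG      ((volatile uint32_t *)0xFED40000u)
#define FUSE_READONLY_REG    ((volatile uint32_t *)0xFED40004u)
#define FUSE_OVERRIDE_REG    ((volatile uint32_t *)0xFED40008u)
#define OVERRIDE_LOCK_REG    ((volatile uint32_t *)0xFED4000Cu)
#define FLEX_SKU_DISABLE_BIT 0x1u

uint32_t read_override_map_from_secure_flash(void);

void me_boot_fuse_override(void)
{
    /* Read the hardware fuse matrix and mirror it into the
     * firmware-readable read-only register. */
    uint32_t fuses = *FUSE_MATRIX_REG;
    *FUSE_READONLY_REG = fuses;

    /* If the Flex-SKU disable fuse is blown, boot continues without
     * executing any further override steps. */
    if (fuses & FLEX_SKU_DISABLE_BIT)
        return;

    /* Otherwise read the new override fuse map from a secure/trusted
     * location and program the override registers. */
    *FUSE_OVERRIDE_REG = read_override_map_from_secure_flash();

    /* Lock the override registers so that neither the host nor later
     * firmware can reprogram them for the remainder of this boot. */
    *OVERRIDE_LOCK_REG = 1u;
}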
In some embodiments, a permit signing key pair is an asymmetric (RSA) key pair used for permit signing and verification. The private key of the pair is located at a permit server such as PS 1104, for example, residing in a secure data center facility (for example, with HSM). The public portion of this key pair is located in a chipset ROM and/or a signed ME firmware image, for example, and cannot be changed by anyone. Permits are signed by the private portion of this key pair, and the public portion is used by an ME (for example, ME 1102) to verify the signature. In some embodiments, a chipset fuse key is located in chipset fuses. In some embodiments, the chipset fuse key is unique for every chipset part. The chipset fuse key is uniquely programmed into every chipset during a manufacturing process by randomly blowing fuses in a program and forget manner. No one (including the manufacturer) has knowledge of this key, and the ME (for example, ME 1102) never lets this key be accessed externally. In some embodiments as illustrated in FIG 11, the chipset of a computer generates a unique identifier such as a Permit Authentication Identifier (or PAID) in such a way that the identifier is bound to a particular chipset or chipset part. According to some embodiments, this is done in a way such that it is not possible to link the PAID back to the particular chipset instance. By looking at the identifier the chipset can determine whether or not the identifier is bound to that particular chipset instance. However, it is not possible by looking at the identifier to link it back to the particular chipset instance to which the identifier is bound. Such a Permit Authentication Identifier (PAID) is illustrated in FIG 11 as PAID 1112. In some embodiments, ME 1102 generates a new PAID 1112 which is bound to the particular chipset, chipset instance, and/or computer in which ME 1102 resides. Permit server (PS) 1104 receives PAID 1112 from ME 1102, and then generates a permit 1122 and embeds the PAID 1112 inside the permit 1122 as PAID 1124. Permit 1122 includes PAID 1124 and other permit data 1126. Permit 1122 also includes and/or is attached to a signature 1128. ME 1102 receives the permit 1122 and verifies the signature 1128 using a permit signing public key (that is, a public portion of a permit signing key pair, for example). Then the ME 1102 verifies the PAID 1124. If the PAID verification is successful, the ME 1102 determines that the permit 1122 is in response to the PAID 1112 that was sent only by the ME 1102 (and/or the computer in which the ME 1102 resides) and was not sent from any other ME (and/or computer). In some embodiments, PS 1104 is a trusted provisioning server that creates authenticated permits to be used by the ME 1102. PS 1104 has secure access (for example, via HSM) to a cryptographic RSA key, whose public component is embedded in the signed ME firmware. Therefore, PS 1104 is able to create permits such as permit 1122 that an ME such as ME 1102 can authenticate as having originated from a trusted entity. In some embodiments, the permit server 1104 embeds hardware fuse override information in the signed permit which the ME 1102 then uses (for example, for soft SKU and/or upgrading hardware within the computer in which ME 1102 resides). In some embodiments PS 1104 comprises one or more servers that perform these operations (for example, depending upon infrastructure implementation, transfer capacity requirements, etc).
In some embodiments, permit 1122 is a signed data structure that is the same as and/or similar to other permits described herein (such as, for example, in some embodiments, permit 412, permit 602, etc). Permit 1122 is a signed data structure containing the hardware fuse override information (for example, in a 'Feature ID' field). No one other than a particular company can create a permit 1122 in a way that can be successfully validated by the ME 1102. In some embodiments, that company is, for example, the owner of the permit server 1104 and/or the manufacturer of the ME 1102, of the chipset including the ME 1102, etc. In some embodiments, permit 1122 further includes a class and subclass identifier indicating an interpretation of the rest of the data structure. In some embodiments, permit 1122 contains a time stamp indicating when it was created (for example, in a 'Timestamp' field). Permit 1122 also includes in some embodiments attributes (for example, in a 'Flags' field) that indicate some permit characteristics such as whether or not the system will reboot immediately after permit installation. In some embodiments, ME 1102 is a hardware processing engine that executes programming of the fuse override mechanism. ME 1102 runs signed/authenticated and verified firmware code. This ME firmware can interact with user code executing in the operating system (OS). The user code can interact with the ME firmware code to program the hardware override registers. The ME firmware ensures that all conditions are met (including permit validation) before it changes the values in the hardware fuse override registers. The ME 1102 also executes its end of the permit installation protocol. In some embodiments (for example in some embodiments of FIG 11) PAID regeneration is performed according to the following steps:
- Generate a random value 'R'
- Store R in a secure flash
- Generate PAID = H( X | Y | Z ), where X, Y and Z are defined below, and H is a secure hash function as described below:
X = Permit Authorization Key = H( R, CFK )
Y = Permit Authorization Message = a well known string - "Permit Authorization Message"
Z = Feature ID requested by the customer for which the upgrade is required
In some embodiments, the secure flash is a component on the computer's motherboard that can persistently store data in a non-volatile flash media. In some embodiments, the data stored on this flash media is confidentiality protected, integrity protected, and/or anti-replay protected. In some embodiments, the secure hash function (H) is a cryptographically secure one way hash function. For example, in some embodiments, hash function (H) is one or more of SHA-1, SHA-256, MD5, etc. In some embodiments, a PAID value is used in a CLS permit installation protocol to bind the permit request and the permit to a specific computing machine, hardware device, hardware part, chipset, chipset part, etc. The PAID is included in the initial message that is sent to the PS (for example, PS 1104) from the ME (for example, ME 1102). The PS embeds the PAID value in the signed permit which is returned to the ME from the PS in the response message. The ME first verifies the signature on the permit. If successful, the ME then verifies the PAID value inside the permit. To verify the PAID, the ME recreates the PAID (for example, using the steps described above) and compares it with the PAID in the permit (for example, PAID 1124 in permit 1122).
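A minimal C sketch of the PAID computation and check just described follows, using OpenSSL's SHA-256 as the secure hash H (the text permits SHA-1, SHA-256, or MD5). The byte-level encoding is not specified above, so the concatenation order, the little-endian Feature ID encoding, and the helper names compute_paid and verify_paid are illustrative assumptions. Because R is freshly generated for each request, two PAID values computed by the same chipset share no visible structure.

#include <openssl/sha.h>
#include <stdint.h>
#include <string.h>

#define PAID_LEN SHA256_DIGEST_LENGTH

/* Y: the well known Permit Authorization Message string. */
static const char Y_MSG[] = "Permit Authorization Message";

/* PAID = H( X | Y | Z ), with X = Permit Authorization Key = H( R | CFK ).
 * R is the stored random value and CFK is the chipset fuse key; 128-bit
 * sizes are assumed for both. */
void compute_paid(const uint8_t r[16], const uint8_t cfk[16],
                  uint32_t feature_id, uint8_t paid[PAID_LEN])
{
    uint8_t rk[32], x[SHA256_DIGEST_LENGTH];
    uint8_t buf[SHA256_DIGEST_LENGTH + sizeof Y_MSG - 1 + sizeof feature_id];

    memcpy(rk, r, 16);                       /* X = H( R | CFK ) */
    memcpy(rk + 16, cfk, 16);
    SHA256(rk, sizeof rk, x);

    memcpy(buf, x, sizeof x);                /* X | Y | Z */
    memcpy(buf + sizeof x, Y_MSG, sizeof Y_MSG - 1);
    memcpy(buf + sizeof x + sizeof Y_MSG - 1, &feature_id, sizeof feature_id);
    SHA256(buf, sizeof buf, paid);
}

/* ME-side check: recreate the PAID from the stored R and compare it with
 * the PAID embedded in the signed permit (PAID 1124 in permit 1122). */
int verify_paid(const uint8_t r[16], const uint8_t cfk[16],
                uint32_t feature_id, const uint8_t permit_paid[PAID_LEN])
{
    uint8_t expected[PAID_LEN];
    compute_paid(r, cfk, feature_id, expected);
    return memcmp(expected, permit_paid, PAID_LEN) == 0;
}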
If the verification is successful, then the ME is assured that this permit is sent by the PS in response to the request sent only by this ME and by no other entity. The ME then accepts the permit. In some embodiments, the ME (for example, ME 1102) generates a new PAID every time it needs to create a permit request to send to the PS. Since one of the ingredients of the PAID is a random number (for example, 'R'), no two PAIDs will look alike as long as the random number is different. This ensures that any two PAID values generated by the same ME look completely different from each other. No one is able to differentiate between two PAID values coming from the same ME or from separate MEs. This ensures user privacy when purchasing and/or obtaining permits from the PS. The PS cannot correlate two PAID values on its end and use them to link the two PAID values to the same user. In some embodiments, purchases made over a network protect privacy at the ends of the transaction. According to some embodiments, the transaction is anonymous to the seller of the services. In some embodiments, a transaction (for example, for an internet purchase) is secured without the need for unique identifiers. The unique ID is removed as a requirement for a network purchase transaction. In some embodiments, a hardware upgrade service is implemented in which the transaction is anonymous to the seller. End users can change a hardware configuration (for example, of a chipset) to enable new features in return for a payment. A secure transaction is enabled between the end user and the provider of the hardware upgrade where, after receiving payment, the provider of the hardware upgrade issues a secure and signed permit to the user's computer (for example, to the chipset). The user's computer (and/or chipset of that computer) is able to ensure that PAIDs embedded in different permits for the same computer do not look alike. This protects the privacy of the end user because no entity (including the PS) can link the permit (or the PAID) to the same hardware instance (for example, chipset instance) that created the PAID. Although some embodiments have been described herein as being implemented in a chipset, an MCH, a GMCH, an ICH, a PCH, etc., according to some embodiments these particular implementations may not be required. For example, in some embodiments, implementations can occur in other devices (for example, some embodiments described herein as happening in the MCH or GMCH can also occur in the PCH according to some embodiments, or in some other device not described herein in some embodiments). Although some embodiments have been described in reference to particular implementations, other implementations are possible according to some embodiments. Additionally, the arrangement and/or order of circuit elements or other features illustrated in the drawings and/or described herein need not be arranged in the particular way illustrated and described. Many other arrangements are possible according to some embodiments. In each system shown in a figure, the elements in some cases may each have a same reference number or a different reference number to suggest that the elements represented could be different and/or similar. However, an element may be flexible enough to have different implementations and work with some or all of the systems shown or described herein. The various elements shown in the figures may be the same or different. Which one is referred to as a first element and which is called a second element is arbitrary.
In the description and claims, the terms "coupled" and "connected," along with their derivatives, may be used. It should be understood that these terms are not intended as synonyms for each other. Rather, in particular embodiments, "connected" may be used to indicate that two or more elements are in direct physical or electrical contact with each other. "Coupled" may mean that two or more elements are in direct physical or electrical contact. However, "coupled" may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other. An algorithm is here, and generally, considered to be a self-consistent sequence of acts or operations leading to a desired result. These include physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers or the like. It should be understood, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Some embodiments may be implemented in one or a combination of hardware, firmware, and software. Some embodiments may also be implemented as instructions stored on a machine-readable medium, which may be read and executed by a computing platform to perform the operations described herein. A machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer). For example, a machine-readable medium may include read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; electrical, optical, acoustical or other form of propagated signals (e.g., carrier waves, infrared signals, digital signals, the interfaces that transmit and/or receive signals, etc.), and others. An embodiment is an implementation or example of the inventions. Reference in the specification to "an embodiment," "one embodiment," "some embodiments," or "other embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least some embodiments, but not necessarily all embodiments, of the inventions. The various appearances of "an embodiment," "one embodiment," or "some embodiments" are not necessarily all referring to the same embodiments. Not all components, features, structures, characteristics, etc. described and illustrated herein need be included in a particular embodiment or embodiments. If the specification states a component, feature, structure, or characteristic "may", "might", "can" or "could" be included, for example, that particular component, feature, structure, or characteristic is not required to be included. If the specification or claim refers to "a" or "an" element, that does not mean there is only one of the element. If the specification or claims refer to "an additional" element, that does not preclude there being more than one of the additional element. Although flow diagrams and/or state diagrams may have been used herein to describe embodiments, the inventions are not limited to those diagrams or to corresponding descriptions herein.
For example, flow need not move through each illustrated box or state or in exactly the same order as illustrated and described herein. The inventions are not restricted to the particular details listed herein. Indeed, those skilled in the art having the benefit of this disclosure will appreciate that many other variations from the foregoing description and drawings may be made within the scope of the present inventions. Accordingly, it is the following claims including any amendments thereto that define the scope of the inventions. |
A process is provided for forming an integrated thin film resistor (TFR) in an integrated circuit (IC) device including IC elements and IC element contacts. A TFR film layer and a TFR dielectric layer are formed over the IC structure, and a wet etch is performed to define a dielectric cap with sloped lateral edges over the TFR film layer. Exposed portions of the TFR film layer are etched to define a TFR element. A TFR contact etch forms contact openings over the TFR element, and a metal layer is formed to provide metal layer connections to the IC element contacts and the TFR element. The sloped edges of the dielectric cap may improve the removal of metal adjacent the TFR element to prevent electrical shorts in the completed device. A TFR anneal to reduce a TCR of the TFR is performed at any suitable time before forming the metal layer.
CLAIMSWHAT IS CLAIMED IS:1. A method of forming an integrated thin film resistor (TFR) in a semiconductor integrated circuit device, the method comprising: forming an integrated circuit (IC) structure including a plurality of IC elements and a plurality of conductive IC element contacts connected to the plurality of IC elements; forming a TFR film layer over the formed IC structure; forming a TFR dielectric layer over the TFR film layer; performing a first etch to remove selected or exposed portions of the TFR dielectric layer to thereby define a TFR dielectric cap over the TFR film layer, wherein the first etch stops at the TFR film layer, and wherein the first etch defines sloped lateral edges of the TFR dielectric cap; performing a second etch to remove selected or exposed portions of the TFR film layer to thereby define a TFR element, wherein the sloped lateral edges of the TFR dielectric cap are aligned over respective lateral edges of the TFR element; performing a third etch to form TFR contact openings in the TFR dielectric cap over the TFR element; and forming a metal layer extending over the conductive IC element contacts and over the TFR dielectric cap, and extending into the TFR contact openings and in contact with the TFR element; and at some time after forming the TFR film layer and before forming the metal layer, annealing the TFR film layer or the TFR element.2. The method of claim 1, wherein the formed IC structure includes a memory cell or transistor structure including at least one conductive IC element contact connected to at least one of a source region, a drain region, and a gate region of the memory cell or transistor structure.3. The method of any of claims 1-2, wherein the TFR film layer comprises silicon carbide chromium (SiCCr), silicon chromium (SiCr), chromium silicon nitride (CrSiN), tantalum nitride (TaN), tantalum silicide (Ta2Si), or titanium nitride (TiN).4. The method of any of claims 1-3, wherein the metal layer comprises aluminum.5. The method of any of claims 1-4, wherein the TFR dielectric layer comprises an oxide layer.6. The method of any of claims 1-5, wherein the second etch comprises a dry etch.7. The method of any of claims 1-6, wherein annealing the TFR film layer or the TFR element comprises an anneal at a temperature of at least 500°C.8. The method of any of claims 1-6, wherein annealing the TFR film layer or the TFR element comprises an anneal at a temperature of 515°C ± 10 °C for a duration of 15-60 minutes.9. The method of any of claims 1-8, wherein: forming the metal layer comprises: depositing a conformal layer of metal over the TFR dielectric cap; and performing a metal etch to remove selected or exposed portions of the conformal layer of metal; and the deposited conformal layer of metal includes a sloped metal region extending over a respective sloped lateral edge of the TFR dielectric cap, the sloped metal region having a lower height at a first location adjacent a respective lateral edge of the TFR element than at a second location above a top upper surface of the TFR dielectric cap; and the metal etch removes the sloped metal region at the first location adjacent the respective lateral edge of the TFR element, wherein the lower height of the sloped metal region at the first location allows a reduced etching time or intensity to remove the full thickness of the sloped metal region at the first location.10. 
The method of any of claims 1-9, further comprising forming an etch stop layer over the IC structure, and forming the TFR film layer over the etch stop layer.11. The method of any of claims 1-9, wherein: the first etch is a wet etch; the second etch is a TFR etch; the third etch is a TFR contact etch; forming the TFR film layer includes forming the TFR layer over the formed IC structure; and the sloped lateral edges of the TFR dielectric cap are aligned over respective lateral edges of the TFR element.12. The method of any of Claims 1-11, wherein: the method further includes forming a first etch stop layer over the IC structure; the TFR film layer is formed further over the first etch stop layer; the method further includes forming and patterning a first photomask over a portion of the TFR dielectric layer; the TFR dielectric cap is further defined under the first photomask; the metal layer is a metal interconnect layer and overlies the TFR element; the method further includes forming and patterning a third photomask; and the method further includes performing a fourth etch process to remove selected portions of the metal interconnect layer to thereby define a plurality of metal interconnect elements.13. The method of Claim 12, wherein the sloped lateral edges of the TFR dielectric cap reduce the likelihood of stringers (shorts) at the metal interconnect elements.14. The method of any of Claims 12-13, wherein the third etch comprises a wet etch.15. The method of any of Claims 12-14, wherein the fourth etch process defines a contact element providing a conductive connection between the TFR element and at least one of the plurality of conductive IC element contacts.16. An integrated thin film resistor (TFR) formed by any of the methods of Claims 1-15.17. A semiconductor integrated circuit device, including an integrated thin film resistor of Claim 16.
THIN FILM RESISTOR (TFR) FORMED IN AN INTEGRATED CIRCUIT DEVICE USING WET ETCHING OF A DIELECTRIC CAP
RELATED APPLICATION
This application claims priority to commonly owned United States Provisional Patent Application No. 62/982,107 filed February 27, 2020, the entire contents of which are hereby incorporated by reference for all purposes.
TECHNICAL FIELD
The present disclosure relates to forming thin film resistors, e.g., systems and methods for forming a thin film resistor integrated in a semiconductor integrated circuit (IC) device.
BACKGROUND
Many integrated circuit (“IC”) devices incorporate thin film resistors (TFRs), which provide various advantages over other types of resistors. For example, TFRs may be highly accurate, and may be finely tuned to provide a very precise resistance value. As another example, TFRs typically have smaller parasitic components, which provides advantageous high frequency behavior. In addition, TFRs typically have a low temperature coefficient of resistance (TCR), e.g., after a suitable annealing process to “tune” the TCR to a near-zero value, which may provide stable operation over a wide range of operating temperatures. A TFR anneal may be performed at above 500°C, e.g., in the range of 500-525°C, to optimize the TCR value. A TFR may include any suitable resistive film formed on, or in, an insulating substrate. Some common IC-integrated TFR resistive film materials include SiCr, SiCCr, TaN, and TiN, although any other suitable materials may be used. Fabricating integrated TFRs typically requires the addition of numerous processing steps to the background IC integration flow, such as several expensive photomask processes. It would be advantageous to reduce the number of such steps, in particular the number of photomask processes, to reduce the cost of integrated TFR fabrication. Another problem relates to forming and annealing TFRs in IC devices that use aluminum interconnect layers (e.g., interconnect layers formed from aluminum, aluminum copper, or aluminum silicon copper), due to the relatively low melting point of aluminum. A common aluminum interconnect layer is formed as a layer stack, for example, a Ti layer, followed by a TiN layer, followed by an AlSiCu layer (or AlCu or Al layer), followed by a second Ti layer, and finally a second TiN layer. A typical TFR anneal, which may involve
temperatures at or above 500°C, may negatively affect such an aluminum interconnect, which has an accepted anneal temperature limit of about 450°C. For example, in an aluminum interconnect layer stack described above, when a TFR is formed and annealed (e.g., at a temperature at or above 500°C) after forming an aluminum interconnect, TiAl3 may form at grain boundaries within the interconnect layer stack, which increases sheet resistance of the interconnect (e.g., by a factor of 50 or more), which may cause electromigration problems in the IC structure.
SUMMARY
Embodiments of the present invention address various problems with conventional TFR integrations by forming a thin film resistor (TFR) after forming IC elements (e.g., memory devices) and contacts (e.g., tungsten vias), but before forming a first metal/interconnect layer, often referred to as a “Metal 1” layer. By forming the TFR prior to forming the Metal 1 layer, a TFR anneal may be performed at temperatures that would negatively affect the material of the Metal 1 layer, for example where aluminum (or another metal having a low melting temperature) is used for the Metal 1 layer. Thus, forming the TFR prior to forming the Metal 1 layer (e.g., an aluminum Metal 1 layer) allows a TFR anneal at an optimal temperature (e.g., to optimize a TCR value of the TFR film), for example an anneal at or above 500°C (e.g., in the range of 500-525°C). Thus, embodiments of the present invention allow formation and optimal annealing of a TFR in an IC production flow that utilizes aluminum interconnect. As used herein, “forming” any particular material layer (or other structure) may include depositing the respective material layer, growing the respective material layer (e.g., growing an oxide layer), or otherwise forming the respective material layer, and may include various process steps known in the art with respect to forming various types of layers in an IC structure. In addition, as used herein, an “etch process” may include a single etch, or multiple etches that may include different etch chemistries or other etch parameters. In some embodiments, the process of forming the TFR adds only two photomasks to the background IC production flow (i.e., the IC production flow without forming the TFR). In some embodiments the disclosed process of forming a TFR in an IC device includes forming a cap oxide layer over a TFR film (e.g., SiCCr film) and performing a wet etch to remove portions of the cap oxide layer, thereby forming an oxide cap over the TFR film. The wet etch (as compared to a dry etch) may form sloped (i.e., non-vertical) lateral edges of the
oxide cap over the TFR film. The sloped edges of the oxide cap may facilitate (e.g., make easier) the removal of metal (e.g., portions of the deposited Metal 1 layer) adjacent the TFR element to prevent electrical shorts (often referred to as “stringers”) in the completed device. In one aspect of the invention, a method is provided for forming an integrated thin film resistor (TFR) in a semiconductor integrated circuit device. An integrated circuit (IC) structure is formed, including a plurality of IC elements and a plurality of conductive IC element contacts connected to the plurality of IC elements. A TFR film layer is formed over the IC structure, and a TFR dielectric layer is formed over the TFR film layer. A wet etch is performed to remove selected portions of the TFR dielectric layer, thereby leaving a TFR dielectric cap over the TFR film layer, wherein the wet etch stops at the TFR film layer, and wherein the wet etch defines sloped lateral edges of the TFR dielectric cap. A TFR etch is performed to remove selected portions of the TFR film layer (e.g., those not under the TFR dielectric cap), to thereby define a TFR element, wherein the sloped lateral edges of the TFR dielectric cap are aligned over respective lateral edges of the TFR element. A TFR contact etch is then performed to form TFR contact openings in the TFR dielectric cap over the TFR element, and a metal layer (e.g., “Metal 1” layer) is deposited over the conductive IC element contacts and over the TFR dielectric cap, extending into the TFR contact openings and contacting the TFR element. A TFR anneal is performed at some time after forming the TFR film layer but before depositing the metal layer, e.g., to reduce a thermal coefficient of resistance (TCR) of the TFR film layer. For example, a TFR anneal may be performed after forming the TFR film layer and TFR dielectric layer but before the wet etch to define the TFR dielectric cap, or may be performed after the TFR etch that defines the TFR element, or at any other time after forming the TFR film layer but before depositing the metal layer. In some embodiments, the step of forming the metal layer includes depositing a conformal layer of metal over the TFR dielectric cap, and performing a metal etch to remove selected portions of the conformal layer of metal. The deposited conformal layer of metal includes a sloped metal region extending over a respective sloped lateral edge of the TFR dielectric cap, which sloped metal region has a lower height at a first location adjacent a respective lateral edge of the TFR element than at a second location above a top upper surface of the TFR dielectric cap. The metal etch to remove selected portions of the conformal layer of metal includes removing a portion of the sloped metal region at the first location adjacent
the respective lateral edge of the TFR element. The lower height of the sloped metal region at the first location may allow a reduced etching time or intensity to remove the full thickness of the sloped metal region at the first location, e.g., as compared with a similar structure in which the TFR dielectric cap has vertical lateral edges (i.e., squared-off edges) instead of sloped lateral edges created by the wet etch of the TFR dielectric layer. In one embodiment, the integrated circuit structure includes a memory cell or transistor structure including at least one conductive IC element contact connected to at least one of a source region, a drain region, and a gate region of the memory cell or transistor structure. In some embodiments, the TFR film layer comprises silicon carbide chromium (SiCCr), silicon chromium (SiCr), chromium silicon nitride (CrSiN), tantalum nitride (TaN), tantalum silicide (Ta2Si), or titanium nitride (TiN). In one embodiment, the metal interconnect layer comprises aluminum. In one embodiment, the TFR dielectric layer comprises an oxide layer. In one embodiment, an etch stop layer is formed over the IC structure prior to forming the TFR film layer, such that the TFR film layer is formed over the etch stop layer. In one embodiment, the TFR etch comprises a dry etch. In one embodiment, the TFR anneal comprises an anneal at a temperature of at least 500°C. For example, the TFR anneal may comprise an anneal at a temperature of 515°C ± 10°C for a duration of 15-60 minutes (e.g., 30 min). In another aspect of the invention, a method is provided for forming an integrated thin film resistor (TFR) in a semiconductor integrated circuit device. An integrated circuit (IC) structure is formed, including a plurality of IC elements and a plurality of conductive IC element contacts connected to the plurality of IC elements. A first etch stop layer is formed over the IC structure. A TFR film layer is formed over the first etch stop layer, and a TFR dielectric layer is formed over the TFR film layer. A first photomask is formed and patterned over a portion of the TFR dielectric layer. A first etch process is performed to remove exposed portions of the TFR dielectric layer, thereby leaving a TFR dielectric cap under the first photomask and over the TFR film layer. The first etch process may comprise a wet etch that stops at the TFR film, and the wet etch may define sloped lateral edges of the TFR dielectric cap, e.g., as discussed above. A second, dry etch is performed to remove exposed portions of the TFR film layer to thereby define a TFR element. A second photomask is formed and patterned with at least one second mask opening aligned over the TFR element. A third etch
process is performed to form at least one TFR contact opening in the TFR dielectric cap over the TFR element. A metal interconnect layer (e.g., “Metal 1” layer) is formed over the plurality of conductive IC element contacts and over the TFR dielectric cap and underlying TFR element, such that the formed metal interconnect layer extends into the at least one TFR contact opening to contact the underlying TFR element. A third photomask is formed and patterned. Finally, a fourth etch process is performed to remove selected portions of the metal interconnect layer to thereby define a plurality of metal interconnect elements. A TFR anneal is performed at some time after forming the TFR film layer but before forming the metal interconnect layer, e.g., to reduce a thermal coefficient of resistance (TCR) of the TFR film layer. For example, a TFR anneal may be performed before or after the first etch process, before or after the second etch process, before or after the third etch process, or at any other time after forming the TFR film layer but before forming the metal interconnect layer. In some embodiments, as discussed above, the sloped lateral edges of the TFR dielectric cap may facilitate (e.g., make easier) the removal of metal (e.g., portions of the deposited Metal 1 layer) adjacent the TFR element to prevent electrical shorts (often referred to as “stringers”) in the completed device. In one embodiment, the integrated circuit structure includes a memory cell or transistor structure including at least one conductive IC element contact connected to at least one of a source region, a drain region, and a gate region of the memory cell or transistor structure. In some embodiments, the TFR film layer comprises silicon carbide chromium (SiCCr), silicon chromium (SiCr), chromium silicon nitride (CrSiN), tantalum nitride (TaN), tantalum silicide (Ta2Si), or titanium nitride (TiN). In one embodiment, the metal interconnect layer comprises aluminum. In one embodiment, the TFR dielectric layer comprises an oxide layer. In one embodiment, the TFR anneal is performed prior to forming the metal interconnect layer. In some embodiments, the TFR anneal comprises an anneal at a temperature in the range of 500-525°C. For example, in some embodiments the TFR anneal comprises an anneal at a temperature of 515°C ± 10°C for a duration of 15-60 minutes (e.g., 30 min). In one embodiment, the third etch process comprises a wet etch. In another embodiment, the third etch process comprises a dry etch.
In one embodiment, the fourth etch process defines a TFR interconnect element providing a conductive connection between the TFR element and at least one of the plurality of conductive IC element contacts. In another aspect, a semiconductor device including a thin film resistor (TFR) produced according to the disclosed process is provided.
BRIEF DESCRIPTION OF THE DRAWINGS
Example aspects of the present disclosure are described below in conjunction with the figures, in which:
Figures 1-12 illustrate steps of an example method of integrating a thin film resistor (TFR) in a semiconductor integrated circuit (IC) device, according to one example embodiment of the invention; and
Figures 13A-13C and 14A-14C illustrate how sloped lateral edges of a TFR oxide cap can prevent or reduce the occurrence of electrical shorts (often referred to as “stringers”) in an integrated TFR. More particularly, Figures 13A-13C show a removal of selected portions of a metal layer deposited over a TFR oxide cap having sloped lateral edges, while Figures 14A-14C show a removal of selected portions of a metal layer deposited over a TFR oxide cap having vertical (“squared-off”) lateral edges.
It should be understood that the reference number for any illustrated element that appears in multiple different figures has the same meaning across the multiple figures, and the mention or discussion herein of any illustrated element in the context of any particular figure also applies to each other figure, if any, in which that same illustrated element is shown.
DETAILED DESCRIPTION OF THE INVENTION
Embodiments of the present invention provide an improved technique for integrating a thin film resistor (TFR) in a semiconductor integrated circuit (IC) device, which may provide a cost reduction as compared with conventional techniques, e.g., by allowing for TFR integration in combination with aluminum interconnect. In some embodiments, the TFR is formed after IC elements and IC element contacts (e.g., tungsten vias) are formed, but before the first metal/interconnect layer (“Metal 1” layer) is formed. This may allow a TFR anneal to be performed (e.g., to optimize the TCR value of the TFR film), for example at a temperature of 500°C or above (e.g., in the range of 500-525°C). Thus, an annealed TFR may be integrated into an IC device that uses aluminum interconnect, because the aluminum interconnect (which
is generally not tolerant of the high temperatures experienced during a typical TFR anneal) is not formed until after the TFR anneal. Further, in some embodiments, the TFR may include an oxide cap formed over a TFR element (e.g., SiCCr element), wherein the cap oxide includes sloped lateral edges aligned over lateral edges of the TFR element, which may prevent or reduce the occurrence of electrical shorts (often referred to as “stringers”) between the TFR element and adjacent metal structures (e.g., Metal 1 structures) during operation of the IC device. In some embodiments, the cap oxide with sloped lateral edges may be formed by forming a cap oxide layer over a TFR film and performing a wet etch to define an oxide cap having sloped lateral edges. Figures 1-12 illustrate an example method of integrating a thin film resistor (TFR) in a semiconductor integrated circuit (IC) device, according to an example embodiment. Figure 1 illustrates an example integrated circuit (IC) structure 10, e.g., during the manufacturing of an IC device. In this example, the IC structure 10 includes a transistor structure 12 formed over a substrate 13, with a plurality of conductive contacts 14, e.g., tungsten vias, extending through a bulk insulation region 20 formed over the transistor structure 12. However, the IC structure 10 may include any other IC device(s) or structure(s), e.g., one or more full, or partial, memory cells or memory cell structures, and conductive contacts associated with such structures. In this example embodiment, the bulk insulation region 20 includes (a) a high-density plasma (HDP) pre-metal dielectric (PMD) oxide layer 20A (e.g., formed after a CMP), (b) a PMD oxide film 20B, e.g., PMD P TEOS (phosphorous-doped tetraethyl orthosilicate film), and (c) a PMD cap layer 20C. Figure 1 may represent a state during an IC fabrication process after formation of tungsten vias 14 and a chemical mechanical polish (W CMP) process at the top of the structure 10. Next, as shown in Figure 2, a TFR layer stack 30 is formed over the bulk insulation region 20 and conductive contacts 14. First, a dielectric etch stop layer 32, e.g., an SiN layer, may be formed, e.g., to protect the tungsten vias 14 from a subsequent TFR etch shown below at Figure 5. A thin resistive film layer (TFR film layer) 34 may then be formed on the first dielectric etch stop layer 32. The TFR film layer 34 may comprise SiCCr, SiCr, TaN, TiN, or any other suitable TFR material. In some embodiments, a TFR anneal may be performed at this point, e.g., to tune or optimize a temperature coefficient of resistance (TCR) of the TFR film layer 34. For example,
an anneal may be performed at a temperature of > 500°C. In some embodiments, the TFR anneal may comprise an anneal at 515°C ± 10°C for a duration of 15-60 minutes, e.g., 30 min. In other embodiments, the TFR anneal may be performed at any other point in the process, prior to the deposition of the first metal layer/interconnect layer 60 (e.g., “Metal 1” layer) discussed below with reference to Figure 10. For example, in some embodiments, the TFR anneal may be performed after forming the TFR contact dielectric layer 36 discussed below with respect to Figure 2. In other embodiments, the TFR anneal may be performed after etching the TFR film layer 34 to define a TFR element 34A, as discussed below with respect to Figures 5 and 6. In other embodiments, the TFR anneal may be performed after performing a TFR contact etch, as described below with respect to Figure 9. After the TFR anneal, a TFR contact dielectric layer 36 may be formed on the TFR film layer 34. In this embodiment, TFR contact dielectric layer 36 comprises an oxide layer. As shown in Figure 3, a first photomask 40 may be formed and patterned (e.g., using known photolithographic techniques) for forming a TFR, in this example at a location laterally offset from the underlying transistor structure 12. As shown in Figure 4, a wet etch may then be performed to remove exposed portions of the TFR oxide layer 36 to define an oxide cap 36A under the photomask 40 and over the TFR film layer 34. As shown, the wet etch may be designed to stop at the TFR film layer 34, and may define sloped (i.e., non-horizontal and non-vertical) lateral edges 44 of the TFR oxide cap 36A. As shown in Figure 5, a dry etch may then be performed to remove exposed portions of the TFR film layer 34, to thereby define a TFR element 34A under the oxide cap 36A. The dry etch may be designed to stop on the SiN etch stop layer 32. As shown, the sloped lateral edges 44 of the TFR oxide cap 36A, formed by the wet etch discussed above, are aligned over corresponding lateral edges 48 of the TFR element 34A. As discussed below, e.g., with respect to Figures 13A-13C and 14A-14C, the sloped lateral edges 44 of the TFR oxide cap 36A may facilitate the removal of selected portions of a deposited metal layer 60 adjacent the TFR element 34A to prevent electrical shorts (often referred to as “stringers”) in the completed device. As shown in Figure 6, the remaining portions of the photoresist 40 may be stripped. In some embodiments, a chemical clean may be used because the underlying tungsten contacts 14 are protected by the SiN etch stop layer 32.
As shown in Figure 7, exposed portions of the SiN etch stop layer 32 may be removed, e.g., by performing a gentle SiN clear etch, preferably with high selectivity to oxide, to thereby protect the underlying tungsten contacts 14. A remaining portion of the SiN etch stop layer 32, below the TFR element 34A, is indicated at 32A. As shown in Figure 8, a second photomask 50 may then be formed and patterned to define a pair of mask openings 52 aligned over the TFR element 34A. A TFR contact etch may then be performed to define a pair of TFR contact openings 56 in the TFR oxide cap 36A, stopping on the TFR element 34A, such that the TFR contact etch exposes upper surfaces of the TFR element 34A within the TFR contact openings 56. The TFR contact etch may be a wet etch or a dry etch. A wet etch may improve the flow of metal during a subsequent metal deposition (see Figure 9), but may increase the size of the TFR contact openings 56. As shown in Figure 9, the remaining portion of the second photomask 50 may be removed, e.g., by performing a resist strip. As shown in Figure 10, the IC device processing may continue by forming a first metal layer/interconnect layer, referred to as a “Metal 1” layer 60. In the illustrated embodiment, Metal 1 layer 60 comprises aluminum. In other embodiments, Metal 1 layer 60 may comprise copper or another metal. As shown, Metal 1 layer 60 extends into the TFR contact openings 56 formed in the TFR oxide cap 36A, to thereby contact the TFR element 34A at opposing sides of the TFR element 34A. Metal 1 layer 60 also extends over and in contact with tungsten contacts 14. Next, as shown in Figure 11, a third photomask 70 may be formed, patterned, and etched to define a plurality of mask openings 72A, 72B, 72C to pattern the underlying Metal 1 layer. Finally, as shown in Figure 12, a metal etch may be performed through mask openings 72A, 72B, 72C to etch selected portions of the aluminum Metal 1 layer 60 to define a plurality of metal layer openings 61A, 61B, 61C and aluminum Metal 1 elements (e.g., interconnect elements) 62A-62D. After the metal etch, the remaining photoresist material 70 may then be removed. For example, as shown, the metal etch may define aluminum interconnect elements 62A and 62B in contact with tungsten vias 14, and aluminum interconnect elements 62C and 62D in contact with the opposing sides of the TFR element 34A. In this example illustration, a first aluminum interconnect element 62C conductively connects a first side of the TFR element 34A with a tungsten via 14 coupled to a source or drain region of the transistor 12, and
a second interconnect element 62D conductively connects a second side of the TFR element 34A with other IC element structure(s) (not shown). The TFR element 34A and the first and second interconnect elements 62C and 62D collectively define an integrated TFR, indicated at 80. As mentioned above, the sloped lateral edges 44 of the TFR oxide cap 36A may facilitate the removal of selected portions of metal layer 60 adjacent selected lateral edges of the TFR element 34A, e.g., to prevent an electrical short (“stringer”) between interconnect elements 62C and 62D caused by a remaining portion of metal layer 60 (after the metal etch) that physically connects interconnect elements 62C and 62D, i.e., the metal contacts on opposing sides of the TFR element 34A. Figures 13A-13C and 14A-14C provide an example illustration of how the sloped lateral edges 44 of the TFR oxide cap 36A can facilitate the removal of selected portions of metal layer 60 (to physically separate interconnect elements 62C and 62D from each other), as compared with a similar structure having a TFR oxide cap with vertical (“squared-off”) lateral edges. Figures 13A-13C are cross-section views of a selected portion of IC structure 10 defined by a cut line A-A shown in Figures 11 and 12, which extends into the page, such that the cross-sections shown in Figures 13A-13C are perpendicular to the cross-sections shown in Figures 1-12. In contrast, Figures 14A-14C are cross-section views of a selected portion of an IC structure 10’ similar to IC structure 10 but having a TFR oxide cap with vertical (“squared-off”) lateral edges, as opposed to the sloped lateral edges 44 of the TFR oxide cap 36A in Figures 1-12 and 13A-13C. Figures 13A and 14A show (a) the selected portion of IC structure 10 after deposition of metal layer 60 over the TFR oxide cap 36A having sloped lateral edges 44, referred to below as sloped oxide cap edges 44 (Figure 13A), and (b) the selected portion of IC structure 10’ after deposition of a metal layer 60’ over the TFR oxide cap 36A’ having vertical lateral edges 44’, referred to below as vertical oxide cap edges 44’ (Figure 14A). The same metal thickness, indicated as Tmetal, is deposited for metal layer 60 and metal layer 60’. Figure 13A thus corresponds with the state of IC structure 10 shown in Figure 11, after forming and patterning the photomask 70 above the metal layer 60 and prior to the metal etch to define metal elements 62A-62D. As indicated in Figure 13A, the illustrated cross-section is located within mask opening 72C shown in Figure 11. In this example, the metal 60 in the illustrated cross-section should be fully removed by the metal etch through mask opening 72C,
in order to remove any conductive connection provided by metal 60 between metal interconnect elements 62C and 62D (i.e., the metal contacts on opposing sides of the TFR element 34A), thereby preventing electrical shorts (“stringers”) across the TFR element 34A. As shown in Figures 13A and 14A, the thickest portions of metal layers 60 and 60’ are located adjacent the lateral edges 48 and 48’ of the TFR elements 34A and 34A’, indicated generally at locations 64 and 64’, and thus the metal etch should be sufficient to remove the full metal thickness in these locations. As explained below, the sloped oxide cap edges 44 reduce the metal thickness in these locations, thus reducing the required metal etch parameter(s), e.g., etching time or etching intensity. Metal layers 60 and 60’ shown in Figures 13A and 14A may each comprise an aluminum layer, e.g., Al, AlCu, or AlSiCu, applied as a sputtered film. As known in the art, physical sputtered films such as Al, AlCu, and AlSiCu are typically not fully conformal. “Bread-loafing” occurs above the upper corners of physical structures, e.g., as shown in Figure 13A at 66, and in Figure 14A at 66’. As shown, the sloped oxide cap edges 44 shown in Fig. 13A reduce the extent of “bread-loafing” at the upper corners, as compared with the vertical oxide cap edges 44’ shown in Fig. 14A. This reduced “bread-loafing” effect, along with the downwardly sloping contour of metal layer 60 over the sloped oxide cap edges 44, results in a vertical metal thickness Tmetal_sloped_cap adjacent the lateral edges 48 of the TFR element 34A (i.e., at locations 64 shown in Figure 13A) that is less than the vertical metal thickness Tmetal_squared_cap adjacent the lateral edges 48’ of the TFR element 34A’ of IC structure 10’ (i.e., at locations 64’ shown in Figure 14A). Thus, viewing Tmetal_sloped_cap in comparison to the greater Tmetal_squared_cap, it can be seen that the maximum vertical thickness of metal to be removed during the metal etch (to prevent electrical shorts across the TFR element 34A or 34A’) is reduced as a result of the sloped oxide cap edges 44, as compared with the vertical oxide cap edges 44’. Figures 13B and 14B show the selected portions of IC structure 10 and IC structure 10’ during the metal etch to remove each metal layer 60 and 60’, respectively, which represents a state in time between the states shown in Figures 11 and 12. In particular, Figures 13B and 14B show a state during the etch at which the horizontal regions of each metal layer 60 and 60’, each having thickness Tmetal, have been removed, while regions of metal layers 60 and 60’ at the lateral edges 48, 48’ of each TFR element 34A, 34A’ still remain. As shown, the maximum remaining metal thickness Tmetal_sloped_cap in the structure having sloped oxide cap edges 44 (Figure 13B) is smaller than the maximum remaining metal thickness Tmetal_squared_cap
in the structure having squared oxide cap edges 44’ (Figure 14B), and thus requires a shorter etch time (or lower etch intensity) to remove fully. Figures 13C and 14C show the selected portions of IC structure 10 and IC structure 10’ after additional etch time (over etch), in particular at a time at which the thickest regions of metal layer 60 (at Tmetal_sloped_cap) have been fully removed. Figure 13C thus corresponds with the state of IC structure 10 shown in Figure 12. As shown, in the structure having squared oxide cap edges 44’ (Figure 14C), a thickness of metal 60’ (indicated at Tmetal_squared_cap) still remains at the time when the metal layer 60 in IC structure 10 (Figure 13C) has been fully removed. Thus, the sloped oxide cap edges 44 formed in IC structure 10 may reduce the etch time (or etch intensity) required to fully remove the metal 60 to prevent electrical shorts across the TFR element 34A. The reduced etch time (or etch intensity) allows for a thinner photoresist 70 (Figure 11), which allows for tighter metal line spacing in IC structure 10, e.g., as compared with an IC structure using squared cap edges 44’ (Figures 14A-14C). This reduction in metal line spacing may allow for an overall reduction in the size of IC structure 10, which may allow for more IC devices per wafer, which may reduce the cost per device. Although the disclosed embodiments are described in detail in the present disclosure, it should be understood that various changes, substitutions, and alterations can be made to the embodiments without departing from their spirit and scope.
A method includes pushing a datum onto a stack by a first processor and popping the datum off the stack by a second processor. |
What is claimed is: 1. A method comprising: pushing a datum onto a stack by a first processing thread; and popping the datum off the stack by a second processing thread. 2. The method of claim 1 wherein the pushing comprises: executing a push command on the first processing thread, the push command having at least one argument, determining a pointer to a current stack datum, determining a location associated with an argument of the push command, storing the determined pointer at the determined location, and making a pointer associated with the determined location the pointer to the current stack datum. 3. The method of claim 2 wherein determining a location comprises: decoding the push command. 4. The method of claim 2 wherein determining a location comprises: storing an argument of the push command in a location associated with the argument of the push command. 5. The method of claim 2 wherein said push command is at least one of a processor instruction and an operating system call. 6. The method of claim 1 wherein popping comprises: executing a pop command by the second processing thread, determining a pointer to a current stack datum, returning the determined pointer to the second processing thread, retrieving a pointer to a previous stack datum from a location associated with the pointer to the current stack datum, and assigning the retrieved pointer as the pointer to the current stack datum. 7. The method of claim 6 wherein the location associated with the pointer to the current stack datum is the location that has an address equal to the value of the pointer to the current stack datum. 8. The method of claim 6 wherein the location associated with the pointer to the current stack datum is the location that has an address equal to the sum of an offset and the value of the pointer to the current stack datum. 9. The method of claim 6 wherein the pop command is at least one of a processor instruction or an operating system call. 10. The method of claim 1 further comprising: storing data in a memory buffer that is accessible using a buffer pointer comprising the datum that is pushed onto the stack. 11. The method of claim 1 further comprising: using the popped datum as a buffer pointer to access information stored in a memory buffer. 12. The method of claim 1 further comprising: a third processing thread pushing a second datum onto the stack. 13. The method of claim 1 further comprising: a third processing thread popping a second datum off the stack. 14. A system comprising: a stack module that stores data pushed onto a stack and from which processing threads can retrieve information by popping the information off the stack, a first processing thread having a first command set, including at least one command for pushing data onto the stack, and a second processing thread having a second command set, including at least one command for popping the data off the stack. 15. The system of claim 14 wherein the first and second processing threads are executed on a single processing engine. 16. The system of claim 14 wherein the first and second processing threads are executed on separate processing engines. 17. The system of claim 16 wherein the separate processing engines are implemented on the same integrated circuit. 18. The system of claim 14 wherein the stack module and the processing threads are on the same integrated circuit. 19. The system of claim 14 wherein the first and second command sets are at least one of a processor instruction set and an operating system instruction set. 20. 
The system of claim 14 further comprising a bus interface for communicating between at least one of the processing threads and the stack module. 21. A stack module comprising: control logic that responds to commands from at least two processing threads, the control logic storing a datum on a stack structure in response to a push command and retrieving a datum from the stack in response to a pop command. 22. The stack module of claim 21 further comprising a stack pointer associated with the most recently stored datum on the stack. 23. The stack module of claim 22 further comprising a memory location associated with a first datum on the stack, the memory location including: a pointer associated with a second datum which was stored on the stack prior to said first datum. 24. The stack module of claim 22 further comprising a second stack pointer associated with the most recently stored datum on a second stack. 25. The stack module of claim 22 wherein the stack pointer is a register on a processor. 26. The stack module of claim 23 wherein said memory location includes SRAM memory. 27. The stack module of claim 21 wherein the commands are processor instructions. 28. The stack module of claim 21 wherein the commands are operating system instructions. 29. An article comprising a computer-readable medium which stores computer logic, the computer logic comprising: a stack module configured to store data from a first processing thread by pushing the data onto a stack and to retrieve the data for a second processing thread by popping the data off the stack, the stack module being responsive to a first processing thread command to store data on the stack and a second processing thread command to retrieve data from the stack. 30. An article comprising a computer-readable medium which stores computer-executable instructions, the instructions causing a processor to: store data from a first processing thread by executing an instruction to push the data onto a stack; and retrieve the data for a second processing thread by executing an instruction to pop the data from the stack for use by the second thread. |
MEMORY SHARED BETWEEN PROCESSING THREADS BACKGROUND The invention relates to memory shared between processing threads. A computer thread is a sequence or stream of computer instructions that performs a task. A computer thread is associated with a set of resources or a context. SUMMARY In one general aspect of the invention, a method includes pushing a datum onto a stack by a first processor and popping the datum off the stack by a second processor. Advantages and other features of the invention will become apparent from the following description and from the claims. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 is a block diagram of a system employing a hardware-based multi-threaded processor. FIG. 2 is a block diagram of a MicroEngine employed in the hardware-based multi-threaded processor of FIG. 1. FIG. 3 is a block diagram showing instruction sets of two threads that are executed on the MicroEngines of FIGS. 1 and 2. FIG. 4 is a simplified block diagram of the system of FIG. 1 showing selected sub-systems of the processor, including a stack module. FIG. 5A is a block diagram showing the memory components of the stack module of FIG. 4. FIG. 5B is a block diagram showing the memory components of an alternate implementation of the stack module of FIG. 4. FIG. 6A is a flow chart of the process of popping a datum from the memory components of FIG. 5A. FIG. 6B is a block diagram showing the memory components of FIG. 5A after the popping process of FIG. 6A. FIG. 7A is a flow chart of the process of pushing a datum onto the memory components of FIG. 6B. FIG. 7B is a block diagram showing the memory components of FIG. 6B after the pushing process of FIG. 7A. FIG. 8 is a block diagram showing memory components used to implement two stacks in one stack module. DETAILED DESCRIPTION Referring to FIG. 1, a system 10 includes a parallel, hardware-based multithreaded processor 12. The hardware-based multithreaded processor 12 is coupled to a bus 14, a memory system 16, and a second bus 18. The bus 14 complies with the Peripheral Component Interconnect Interface, revision 2.1, issued June 1, 1995 (PCI). The system 10 is especially useful for tasks that can be broken into parallel subtasks or functions. Specifically, the hardware-based multithreaded processor 12 is useful for tasks that are bandwidth oriented rather than latency oriented. The hardware-based multithreaded processor 12 has multiple MicroEngines 22, each with multiple hardware-controlled threads that can be simultaneously active and independently work on a task. The hardware-based multithreaded processor 12 also includes a central controller 20 that assists in loading microcode control for other resources of the hardware-based multithreaded processor 12 and performs other general-purpose computer type functions such as handling protocols, exceptions, and extra support for packet processing where the MicroEngines pass the packets off for more detailed processing such as in boundary conditions. In one embodiment, the processor 20 is a StrongArm (TM) (StrongArm is a trademark of ARM Limited, United Kingdom) based architecture. The general-purpose microprocessor 20 has an operating system. Through the operating system, the processor 20 can call functions to operate on MicroEngines 22a-22f. The processor 20 can use any supported operating system, preferably a real-time operating system. 
For the core processor implemented as a StrongArm architecture, operating systems such as Microsoft NT real-time, VXWorks, and UC/OS, a freeware operating system available over the Internet at http://www.ucos-ii.com/, can be used. The hardware-based multithreaded processor 12 also includes a plurality of functional MicroEngines 22a-22f. Functional MicroEngines (MicroEngines) 22a-22f each maintain a plurality of program counters in hardware and states associated with the program counters. Effectively, a corresponding plurality of sets of threads can be simultaneously active on each of the MicroEngines 22a-22f while only one is actually operating at any one time. In one embodiment, there are six MicroEngines 22a-22f as shown. Each MicroEngine 22a-22f has capabilities for processing four hardware threads. The six MicroEngines 22a-22f operate with shared resources including memory system 16 and bus interfaces 24 and 28. The memory system 16 includes a Synchronous Dynamic Random Access Memory (SDRAM) controller 26a and a Static Random Access Memory (SRAM) controller 26b. SDRAM memory 16a and SDRAM controller 26a are typically used for processing large volumes of data, e.g., processing of network payloads from network packets. The SRAM controller 26b and SRAM memory 16b are used in a networking implementation for low latency, fast access tasks, e.g., accessing look-up tables, memory for the core processor 20, and so forth. The six MicroEngines 22a-22f access either the SDRAM 16a or the SRAM 16b based on characteristics of the data. Thus, low latency, low bandwidth data is stored in and fetched from SRAM, whereas higher bandwidth data for which latency is not as important is stored in and fetched from SDRAM. The MicroEngines 22a-22f can execute memory reference instructions to either the SDRAM controller 26a or the SRAM controller 26b. Advantages of hardware multithreading can be explained by SRAM or SDRAM memory accesses. As an example, an SRAM access requested by a Thread_0, from a MicroEngine, will cause the SRAM controller 26b to initiate an access to the SRAM memory 16b. The SRAM controller controls arbitration for the SRAM bus, accesses the SRAM 16b, fetches the data from the SRAM 16b, and returns data to a requesting MicroEngine 22a-22f. During an SRAM access, if the MicroEngine, e.g., 22a, had only a single thread that could operate, that MicroEngine would be dormant until data was returned from the SRAM. By employing hardware context swapping within each of the MicroEngines 22a-22f, the hardware context swapping enables other contexts with unique program counters to execute in that same MicroEngine. Thus, another thread, e.g., Thread_1, can function while the first thread, e.g., Thread_0, is awaiting the read data to return. During execution, Thread_1 may access the SDRAM memory 16a. While Thread_1 operates on the SDRAM unit and Thread_0 is operating on the SRAM unit, a new thread, e.g., Thread_2, can now operate in the MicroEngine 22a. Thread_2 can operate for a certain amount of time until it needs to access memory or perform some other long latency operation, such as making an access to a bus interface. Therefore, simultaneously, the processor 12 can have a bus operation, an SRAM operation, and an SDRAM operation all being completed or operated upon by one MicroEngine 22a and have one more thread available to process more work in the data path. The hardware context swapping also synchronizes completion of tasks. For example, two threads could hit the same shared resource, e.g., SRAM. 
Each one of these separate functional units, e.g., the FBUS interface 28, the SRAM controller 26b, and the SDRAM controller 26a, when they complete a requested task from one of the MicroEngine thread contexts, reports back a flag signaling completion of an operation. When the MicroEngine receives the flag, the MicroEngine can determine which thread to turn on. One example of an application for the hardware-based multithreaded processor 12 is as a network processor. As a network processor, the hardware-based multithreaded processor 12 interfaces to network devices such as a media access controller device, e.g., a 10/100BaseT Octal MAC 13a or a Gigabit Ethernet device 13b. The Gigabit Ethernet device 13b complies with the IEEE 802.3z standard, approved in June 1998. In general, as a network processor, the hardware-based multithreaded processor 12 can interface to any type of communication device or interface that receives/sends large amounts of data. Communication system 10 functioning in a networking application could receive a plurality of network packets from the devices 13a, 13b and process those packets in a parallel manner. With the hardware-based multithreaded processor 12, each network packet can be independently processed. Another example for use of processor 12 is a print engine for a postscript processor or as a processor for a storage subsystem, i.e., RAID disk storage. A further use is as a matching engine. In the securities industry, for example, the advent of electronic trading requires the use of electronic matching engines to match orders between buyers and sellers. These and other parallel types of tasks can be accomplished on the system 10. The processor 12 includes a bus interface 28 that couples the processor to the second bus 18. Bus interface 28 in one embodiment couples the processor 12 to the so-called FBUS 18 (FIFO bus). The FBUS interface 28 is responsible for controlling and interfacing the processor 12 to the FBUS 18. The FBUS 18 is a 64-bit wide FIFO bus, used to interface to Media Access Controller (MAC) devices. The processor 12 includes a second interface, e.g., a PCI bus interface 24, that couples other system components that reside on the PCI 14 bus to the processor 12. The PCI bus interface 24 provides a high-speed data path 24a to memory 16, e.g., the SDRAM memory 16a. Through that path data can be moved quickly from the SDRAM 16a through the PCI bus 14, via direct memory access (DMA) transfers. The hardware-based multithreaded processor 12 supports image transfers. The hardware-based multithreaded processor 12 can employ a plurality of DMA channels so if one target of a DMA transfer is busy, another one of the DMA channels can take over the PCI bus to deliver information to another target to maintain high processor 12 efficiency. Additionally, the PCI bus interface 24 supports target and master operations. Target operations are operations where slave devices on bus 14 access SDRAMs through reads and writes that are serviced as a slave to target operation. In master operations, the processor core 20 sends data directly to or receives data directly from the PCI interface 24. Each of the functional units is coupled to one or more internal buses. As described below, the internal buses are dual, 32-bit buses (i.e., one bus for read and one for write). The hardware-based multithreaded processor 12 also is constructed such that the sum of the bandwidths of the internal buses in the processor 12 exceeds the bandwidth of external buses coupled to the processor 12. 
The processor 12 includes an internal core processor bus 32, e.g., an ASB bus (Advanced System Bus), that couples the processor core 20 to the memory controllers 26a, 26b and to an ASB translator 30 described below. The ASB bus is a subset of the so-called AMBA bus that is used with the StrongArm processor core. The processor 12 also includes a private bus 34 that couples the MicroEngine units to SRAM controller 26b, ASB translator 30, and FBUS interface 28. A memory bus 38 couples the memory controllers 26a, 26b to the bus interfaces 24 and 28 and memory system 16, including flashrom 16c used for boot operations and so forth. Referring to FIG. 2, an exemplary one of the MicroEngines 22a-22f, e.g., MicroEngine 22f, is shown. The MicroEngine includes a control store 70, which, in one implementation, includes a RAM of 1,024 words of 32 bits. The RAM stores a microprogram. The microprogram is loadable by the core processor 20. The MicroEngine 22f also includes controller logic 72. The controller logic includes an instruction decoder 73 and program counter (PC) units 72a-72d. The four microprogram counters 72a-72d are maintained in hardware. The MicroEngine 22f also includes context event switching logic 74. Context event logic 74 receives messages (e.g., SEQ#_EVENT_RESPONSE; FBI_EVENT_RESPONSE; SRAM_EVENT_RESPONSE; SDRAM_EVENT_RESPONSE; and ASB_EVENT_RESPONSE) from each one of the shared resources, e.g., SRAM controller 26b, SDRAM controller 26a, or processor core 20, control and status registers, and so forth. These messages provide information on whether a requested function has completed. Based on whether or not a function requested by a thread has completed and signaled completion, the thread needs to wait for that completion signal, and if the thread is enabled to operate, then the thread is placed on an available thread list (not shown). The MicroEngine 22f can have a maximum of, e.g., 4 threads available. In addition to event signals that are local to an executing thread, the MicroEngines 22 employ signaling states that are global. With signaling states, an executing thread can broadcast a signal state to all MicroEngines 22, e.g., a Receive Request Available signal. Any and all threads in the MicroEngines can branch on these signaling states. These signaling states can be used to determine availability of a resource or whether a resource is due for servicing. The context event logic 74 has arbitration for the four (4) threads. In one embodiment, the arbitration is a round-robin mechanism. Other techniques could be used, including priority queuing or weighted fair queuing. The MicroEngine 22f also includes an execution box (EBOX) data path 76 that includes an arithmetic logic unit 76a and a general-purpose register set 76b. The arithmetic logic unit 76a performs arithmetic and logical functions as well as shift functions. The register set 76b has a relatively large number of general-purpose registers. As will be described, in this implementation there are 64 general-purpose registers in a first bank, Bank A, and 64 in a second bank, Bank B. The general-purpose registers are windowed, as will be described, so that they are relatively and absolutely addressable. The MicroEngine 22f also includes a write transfer register 78 and a read transfer register 80. These registers are also windowed so that they are relatively and absolutely addressable. Write transfer register 78 is where write data to a resource is located. Similarly, read transfer register 80 is for return data from a shared resource. 
Subsequent to or concurrent with data arrival, an event signal from the respective shared resource, e.g., the SRAM controller 26b, SDRAM controller 26a, or core processor 20, will be provided to context event arbiter 74, which will then alert the thread that the data is available or has been sent. Both transfer register banks 78 and 80 are connected to the execution box (EBOX) 76 through a data path. In one implementation, the read transfer register has 64 registers and the write transfer register has 64 registers. Referring to FIG. 3, processor 12 has processing threads 41 and 42 executing in MicroEngines 22a and 22b, respectively. In other instances, the threads 41 and 42 may be executed on the same MicroEngine. The processing threads may or may not share data between them. For example, in FIG. 3, processing thread 41 receives data 43 and processes it to produce data 44. Processing thread 42 receives and processes the data 44 to produce output data 45. Threads 41 and 42 are concurrently active. Because the MicroEngines 22a and 22b share SDRAM 16a and SRAM 16b (memory), one MicroEngine 22a may need to designate sections of memory for its exclusive use. To facilitate efficient allocation of memory sections, the SDRAM memory is divided into memory segments, referred to as buffers. The memory locations in a buffer share a common address prefix, or pointer. The pointer is used by the processor as an identifier for a buffer. Pointers to buffers that are not currently in use by a processing thread are managed by pushing the pointers onto a free memory stack. A thread can allocate a buffer for use by the thread by popping a pointer off the stack and using the pointer to access the corresponding buffer. When a processing thread no longer needs a buffer that is allocated to the processing thread, the thread pushes the pointer to the buffer onto the stack to make the buffer available to other threads. The threads 41 and 42 have processor instruction sets 46, 47 that respectively include a "PUSH" 46a and a "POP" 47a instruction. Upon executing either the "PUSH" or the "POP" instruction, the instruction is transmitted to a logical stack module 56 (FIG. 4). Referring to FIG. 4, a section of the processor 12 and SRAM 16b provide the logical stack module 56. The logical stack module is implemented as a linked list of SRAM addresses. Each SRAM address on the linked list contains the address of the next item on the list. As a result, given the address of the first item on the list, the contents of that address can be read to find the address of the next item on the list, and so on. Additionally, each address on the linked list is associated with a corresponding memory buffer. Thus the stack module 56 is used to implement a linked list of memory buffers. While in use, the linked list allows the stack to increase or decrease in size as needed. The stack module 56 includes control logic 51 on the SRAM unit 26b. The control logic 51 performs the necessary operations on the stack while SRAM 16b stores the contents of the stack. One of the SRAM registers 50 is used to store the address of the first SRAM location on the stack. The address is also a pointer to the first buffer on the stack. Although the different components of the stack module 56 and the threads will be explained using an example that uses hardware threads and stack modules, the stack can also be implemented in operating system software threads using software modules. 
Thread 41 and thread 42 may be implemented as two operating system threads which execute "PUSH" and "POP" operating system commands to allocate memory from a shared memory pool. The operating system commands may include calls to a library of functions written in the "C" programming language. In the operating system example, the equivalents of the control logic 51, the SRAM registers 50, and SRAM 16b are implemented using software within the operating system. The software may be stored in a hard disk, a floppy disk, computer memory, or other computer-readable medium. Referring to FIG. 5A, SRAM register Q1 stores an address (0xC5) of the first item on the stack 60. The SRAM location (0xC5) of the first item on the stack 60 is used to store the SRAM address (0xA1) of the second item on the stack 60. The SRAM location (0xA1) of the second item on the stack 60 is used to store the address of the third item on the stack 60, etc. The SRAM location (0xE9) of the last item on the stack stores a predetermined invalid address (0x00), which indicates the end of the linked list. Additionally, the addresses of the items (0xC5, 0xA1, and 0xE9) on the stack 60 are pointers to stack buffers 61a, 61b, 61c contained within SDRAM 16a. A pointer to a buffer is pushed onto the stack by thread 41, so that the buffer is available for use by other processing threads. A buffer is popped by thread 42 to allocate the buffer for use by thread 42. The pointers are used as an address base to access memory locations in the buffers. In addition to stack buffers 61a-c, SDRAM 16a also contains processing buffer 62, which is allocated to thread 41. The pointer to processing buffer 62 is not on the stack because it is not available for allocation by other threads. Thread 41 may later push a pointer to the processing buffer 62 onto the stack when it no longer needs the buffer 62. Although the stack will be discussed with reference to the buffer management scheme above, it can be used without buffers. Referring to FIG. 5B, the SRAM locations 0xC5, 0xA1, and 0xE9 may, respectively, contain data 70a, 70b, and 70c in addition to an address to the next item on the list. Such a scheme may be used to store smaller units of data 70a-c on the stack. In such a scheme, the control logic would assign a memory location within the SRAM for storing the unit of data (datum) that is to be pushed onto the stack. The datum pushed onto the stack may be text, numerical data, or even an address or pointer to another memory location. Referring to FIG. 6A, to pop a datum off the stack stored in SRAM register Q1, thread 42 executes 101 the instruction "POP #1". The pop instruction is part of the instruction set of the MicroEngines 22. The pop instruction is transmitted to control logic 51 over bus 55 for stack processing. Control logic 51 decodes 102 the pop instruction. The control logic also determines 103 the register that contains a pointer to the stack that is referred to in the instruction, based on the argument of the pop instruction. Since the argument to the pop instruction is "#1", the corresponding register is Q1. The control logic 51 returns 104 the contents of the Q1 register to the context of processing thread 42. The stack of FIG. 5A would return "0xC5". Processing thread 42 receives 107 the contents of the Q1 register, which is "0xC5", and uses 108 the received content to access data from the corresponding stack buffer 61b by appending a suffix to the content. 
Control logic 51 reads 105 the content (0xA1) of the address (0xC5) stored in the Q1 register. Control logic 51 stores 106 the read content (0xA1) in the Q1 register to indicate that 0xC5 has been removed from the stack and 0xA1 is now the item at the top of the stack. Referring to FIG. 6B, the state of the stack after the operations of FIG. 6A will be described. As shown, the register Q1 now contains the address 0xA1, which was previously the address of the second item on the stack. Additionally, the location that was previously stack buffer 61b (in FIG. 5A) is now processing buffer 65, which is used by thread 42. Thus, thread 42 has removed stack buffer 61b from the stack 60 and allocated the buffer 61b for its own use. Referring to FIG. 7A, the process of adding a buffer to the stack will be described. Thread 41 pushes processing buffer 62 (shown in FIG. 6B) onto the stack by executing 201 the instruction "PUSH #1 0x01". The argument 0x01 is a pointer to the buffer 62 because it is a prefix that is common to the address space of the locations in the buffer. The push instruction is transmitted to control logic 51 over the bus 55. Upon receiving the push instruction, the control logic 51 decodes 202 the instruction and determines 203 the SRAM register corresponding to the instruction, based on the second argument of the push instruction. Since the second argument is "#1", the corresponding register is Q1. The control logic 51 determines the address to be pushed from the third argument (0x01) of the push instruction. The control logic determines 205 the content of the Q1 register by reading the value of the register location. The value 0xA1 is the content of the Q1 register in the stack of FIG. 6B. The control logic stores 206 the content (0xA1) of the Q1 register in the SRAM location whose address is the push address (0x01). The control logic then stores 207 the push address (0x01) in the Q1 register. Referring to FIG. 7B, the contents of the stack after the operations of FIG. 7A will be described. As shown, the SRAM register Q1 contains the address of the first location on the stack, which is now 0x01. The address of the first location on the stack is also the address of stack buffer 61d, which was previously a processing buffer 62 used by thread 41. The location 0xA1, which was previously the first item on the stack, is now the second item on the stack. Thus, thread 41 adds stack buffer 61d onto the stack to make it available for allocation to other threads. Thread 42 can later allocate the stack buffer 61d for its own use by popping it off the stack, as previously described for FIG. 6A. Referring to FIG. 8, a second stack 60b (shown in phantom) may be implemented in the same stack module by using a second SRAM control register to store the address of the first element in the second stack 60b. The second stack may be used to manage a separate set of memory buffers, for example, within SRAM 16b or SDRAM 16a. A first stack 60a has the address of the first element on the stack 60a stored in SRAM register Q1. Additionally, a second stack 60b has the address of its first element stored in register Q6. The first stack 60a is identical to the stack 60 in FIG. 7B. The second stack 60b is similar to previously described stacks. Other embodiments are within the scope of the following claims. 
Although the stack 60 (shown in FIG. 5A) stores the pointer to the first element in a register Q1, the linked list in SRAM 16b, and the buffers in SDRAM 16a, any of the stack module elements could be stored in any memory location. For example, they could all be stored in SRAM 16b or SDRAM 16a. Other embodiments may implement the stack in a continuous address space, instead of using a linked list. The size of the buffers may be varied by using pointers (address prefixes) of varying length. For example, a short pointer is a prefix to more addresses and is, therefore, a pointer to a larger address buffer. Alternatively, the stack may be used to manage resources other than buffers. One possible application of the stack might be to store pointers to the contexts of active threads that are not currently operating. When MicroEngine 22a temporarily sets aside a first active thread to process a second active thread, it stores the context of the first active thread in a memory buffer and pushes a pointer to that buffer onto the stack. Any MicroEngine can resume the processing of the first active thread by popping the pointer to the memory buffer containing the context of the first thread and loading that context. Thus the stack can be used to manage the processing of multiple concurrent active threads by multiple processing engines. |
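The pop flow of FIG. 6A (steps 101-108) and the push flow of FIG. 7A (steps 201-207) can be modeled in a few lines of C. The following is a minimal illustrative sketch only, not the hardware implementation described above: the SRAM is modeled as a small array indexed by address, the register Q1 as an ordinary variable, and the control-logic steps as assignments; all identifiers (sram, q1, STACK_END) are hypothetical names chosen for this example.

#include <stdint.h>
#include <stdio.h>

#define SRAM_SIZE 256
#define STACK_END 0x00u  /* predetermined invalid address marking the list end */

static uint8_t sram[SRAM_SIZE]; /* models SRAM 16b: each stack location holds the next address */
static uint8_t q1 = STACK_END;  /* models register Q1: address of the first (top) stack item */

/* Push (FIG. 7A): link the pushed address to the previous top (step 206),
 * then make the pushed address the new top of the stack (step 207). */
void stack_push(uint8_t addr)
{
    sram[addr] = q1;
    q1 = addr;
}

/* Pop (FIG. 6A): return the current top pointer (step 104) and advance the
 * top to the next item on the linked list (steps 105-106). */
uint8_t stack_pop(void)
{
    uint8_t top = q1;
    if (top != STACK_END)
        q1 = sram[top];
    return top;
}

int main(void)
{
    /* Rebuild the stack of FIG. 5A: push 0xE9 first, then 0xA1, then 0xC5,
     * so that 0xC5 is the first item and 0xE9 links to the end marker. */
    stack_push(0xE9);
    stack_push(0xA1);
    stack_push(0xC5);
    printf("popped 0x%02X\n", stack_pop()); /* prints 0xC5 */
    printf("popped 0x%02X\n", stack_pop()); /* prints 0xA1 */
    return 0;
}

In the buffer management scheme described above, the popped value would then serve as an address prefix (pointer) for a buffer, with memory locations in the buffer accessed by appending a suffix to it; here it is simply printed.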
The invention relates to rate adjustment of a memory interface. The host system may communicate with the memory system via the interface according to a plurality of data transfer rates. For example, the host system may configure the interface to operate according to a first rate. The host system may switch the interface from the first rate to a second rate in response to one or more commands from the host system satisfying one or more parameters, where the one or more parameters are, for example, a threshold amount of data associated with a command, a threshold number of issued commands associated with at least the threshold amount of data, a threshold number of commands issued and not executed, or any combination thereof. Based on the switching, the host system may communicate with the memory system via the interface according to the second rate. |
1. An apparatus comprising: a controller configured to communicate with a memory system via an interface, wherein the controller is configured to cause the apparatus to: configure the interface to operate according to a first rate, wherein the first rate is one of a set of rates each corresponding to a respective data transfer rate between the controller and the memory system via the interface; switch the interface from the first rate to a second rate of the set of rates based at least in part on one or more commands from the controller to the memory system satisfying one or more parameters, the one or more parameters comprising a threshold amount of data associated with a command, a threshold number of issued commands associated with at least the threshold amount of data, a threshold number of issued and unexecuted commands, or any combination thereof; and communicate data with the memory system according to the second rate. 2. The apparatus of claim 1, wherein the controller is further configured to cause the apparatus to: determine whether a first command of the one or more commands is associated with at least the threshold amount of data, wherein switching the interface from the first rate to the second rate is based at least in part on determining that the first command is associated with at least the threshold amount of data. 3. The apparatus of claim 1, wherein the controller is further configured to cause the apparatus to: determine whether a quantity of issued and unexecuted commands included in the one or more commands satisfies the threshold number of issued and unexecuted commands, wherein switching the interface from the first rate to the second rate is based at least in part on determining that the quantity of issued and unexecuted commands satisfies the threshold number of issued and unexecuted commands. 4. The apparatus of claim 1, wherein the controller is further configured to cause the apparatus to: track a first quantity of commands issued by the controller to the memory system, the first quantity of commands including the one or more commands; and determine whether the first quantity of commands includes at least the threshold number of issued commands associated with at least the threshold amount of data, wherein switching the interface from the first rate to the second rate is based at least in part on determining that the first quantity of commands includes at least the threshold number of issued commands associated with at least the threshold amount of data. 5. The apparatus of claim 4, wherein the controller is further configured to cause the apparatus to: set a flag to a first value based at least in part on determining that the first quantity of commands includes at least the threshold number of issued commands associated with at least the threshold amount of data, the first value indicating that the interface is operated according to the second rate, wherein switching the interface from the first rate to the second rate is based at least in part on setting the flag to the first value. 6. The apparatus of claim 5, wherein the controller is further configured to cause the apparatus to: set the flag to a second value based at least in part on expiration of a timer associated with communication inactivity between the controller and the memory system after setting the flag to the first value, the second value indicating that the interface is operated according to the first rate. 7. 
The apparatus of claim 5, wherein the controller is further configured to cause the apparatus to: set the flag to a second value based at least in part on a second quantity of commands, issued by the controller to the memory system and tracked by the apparatus after setting the flag to the first value, failing to include at least the threshold number of issued commands associated with at least the threshold amount of data, the second value indicating that the interface is operated according to the first rate. 8. The apparatus of claim 1, wherein the first rate corresponds to a first data transfer rate and the second rate corresponds to a second data transfer rate, the second data transfer rate being higher than the first data transfer rate. 9. The apparatus of claim 1, wherein the first rate corresponds to a minimum rate of the set of rates and the second rate corresponds to a maximum rate of the set of rates. 10. The apparatus of claim 1, wherein the controller is further configured to cause the apparatus to: switch the interface from the second rate to the first rate based at least in part on a second set of commands from the controller failing to satisfy each of the one or more parameters; and communicate second data with the memory system according to the first rate. 11. The apparatus of claim 1, wherein: each of the one or more parameters is included in a set of parameters; and the controller is configured to cause the apparatus to switch the interface from the first rate to the second rate based at least in part on the one or more commands satisfying any parameter of the set of parameters. 12. The apparatus of claim 1, wherein the set of rates corresponds to a burst mode associated with the interface, the burst mode is different from a low-speed mode associated with the interface, and the burst mode is associated with a higher data transfer rate than the low-speed mode. 13. A non-transitory computer-readable medium storing code comprising instructions that, when executed by a processor of an electronic device, cause the electronic device to: configure an interface to operate according to a first rate, wherein the first rate is one of a set of rates each corresponding to a respective data transfer rate between a controller and a memory system via the interface; switch the interface from the first rate to a second rate of the set of rates based at least in part on one or more commands from the controller to the memory system satisfying one or more parameters, the one or more parameters comprising a threshold amount of data associated with a command, a threshold number of issued commands associated with at least the threshold amount of data, a threshold number of issued and unexecuted commands, or any combination thereof; and communicate data with the memory system according to the second rate. 14. The non-transitory computer-readable medium of claim 13, wherein the instructions, when executed by the processor of the electronic device, further cause the electronic device to: determine whether a first command of the one or more commands is associated with at least the threshold amount of data, wherein switching the interface from the first rate to the second rate is based at least in part on determining that the first command is associated with at least the threshold amount of data. 15. 
The non-transitory computer-readable medium of claim 13, wherein the instructions, when executed by the processor of the electronic device, further cause the electronic device to: determine whether a quantity of issued and unexecuted commands included in the one or more commands satisfies the threshold number of issued and unexecuted commands, wherein switching the interface from the first rate to the second rate is based at least in part on determining that the quantity of issued and unexecuted commands satisfies the threshold number of issued and unexecuted commands. 16. The non-transitory computer-readable medium of claim 13, wherein the instructions, when executed by the processor of the electronic device, further cause the electronic device to: track a first quantity of commands issued by the controller to the memory system, the first quantity of commands including the one or more commands; and determine whether the first quantity of commands includes at least the threshold number of issued commands associated with at least the threshold amount of data, wherein switching the interface from the first rate to the second rate is based at least in part on determining that the first quantity of commands includes at least the threshold number of issued commands associated with at least the threshold amount of data. 17. The non-transitory computer-readable medium of claim 16, wherein the instructions, when executed by the processor of the electronic device, further cause the electronic device to: set a flag to a first value based at least in part on determining that the first quantity of commands includes at least the threshold number of issued commands associated with at least the threshold amount of data, the first value indicating that the interface is operated according to the second rate, wherein switching the interface from the first rate to the second rate is based at least in part on setting the flag to the first value; and set the flag to a second value based at least in part on expiration of a timer associated with communication inactivity between the electronic device and the memory system after setting the flag to the first value, or on a second quantity of commands, issued by the controller to the memory system and tracked by the electronic device, failing to include at least the threshold number of issued commands associated with at least the threshold amount of data, or both, the second value indicating that the interface is operated according to the first rate. 18. A method comprising: configuring an interface to operate according to a first rate, wherein the first rate is one of a set of rates each corresponding to a respective data transfer rate between a controller and a memory system via the interface; switching the interface from the first rate to a second rate of the set of rates based at least in part on one or more commands from the controller to the memory system satisfying one or more parameters, the one or more parameters comprising a threshold amount of data associated with a command, a threshold number of issued commands associated with at least the threshold amount of data, a threshold number of issued and unexecuted commands, or any combination thereof; and communicating data with the memory system according to the second rate. 19. 
The method of claim 18, further comprising: determining whether a first command of the one or more commands is associated with at least the threshold amount of data, wherein switching the interface from the first rate to the second rate is based at least in part on determining that the first command is associated with at least the threshold amount of data. 20. The method of claim 18, further comprising: determining whether a quantity of issued and unexecuted commands included in the one or more commands satisfies the threshold number of issued and unexecuted commands, wherein switching the interface from the first rate to the second rate is based at least in part on determining that the quantity of issued and unexecuted commands satisfies the threshold number of issued and unexecuted commands. |
RATE ADJUSTMENTS FOR A MEMORY INTERFACE CROSS REFERENCE This patent application claims priority to U.S. Patent Application No. 17/889,660, filed August 17, 2022, by Chunchu et al., entitled "RATE ADJUSTMENTS FOR A MEMORY INTERFACE," which claims priority to U.S. Provisional Patent Application No. 63/237,306, filed August 26, 2021, entitled "RATE ADJUSTMENTS FOR A MEMORY INTERFACE," each of which is assigned to the assignee hereof and each of which is expressly incorporated herein by reference in its entirety. TECHNICAL FIELD The technical field relates to rate adjustment of a memory interface. BACKGROUND Memory devices are widely used to store information in various electronic devices such as computers, user devices, wireless communication devices, cameras, digital displays, and the like. Information is stored by programming memory cells within a memory device to various states. For example, a binary memory cell can be programmed to one of two supported states, typically corresponding to a logical one or a logical zero. In some examples, a single memory cell can support more than two possible states, any one of which can be stored by the memory cell. To access information stored by a memory device, a component may read or sense the state of one or more memory cells within the memory device. To store information, a component may write or program one or more memory cells within a memory device to a corresponding state. Various types of memory devices exist, including magnetic hard disks, random access memory (RAM), read-only memory (ROM), dynamic RAM (DRAM), synchronous dynamic RAM (SDRAM), static RAM (SRAM), ferroelectric RAM (FeRAM), magnetic RAM (MRAM), resistive RAM (RRAM), flash memory, phase change memory (PCM), three-dimensional cross-point memory (3D cross point), not-or (NOR) and not-and (NAND) memory devices, and others. Memory devices can be volatile or non-volatile. Volatile memory cells, such as DRAM cells, can lose their programmed state over time unless they are periodically refreshed by an external power source. Non-volatile memory cells, such as NAND memory cells, retain their programmed state for extended periods of time, even in the absence of an external power source. SUMMARY An apparatus is described. The apparatus may include a controller configured to communicate with a memory system via an interface. The controller may be configured to cause the apparatus to: configure the interface to operate according to a first rate, wherein the first rate is one of a set of rates each corresponding to a respective data transfer rate between the controller and the memory system via the interface; switch the interface from the first rate to a second rate of the set of rates based on one or more commands from the controller to the memory system satisfying one or more parameters, the one or more parameters comprising a threshold amount of data associated with a command, a threshold number of issued commands associated with at least the threshold amount of data, a threshold number of issued and unexecuted commands, or any combination thereof; and communicate data with the memory system according to the second rate. A non-transitory computer-readable medium is described. 
The non-transitory computer-readable medium may store code comprising instructions that, when executed by a processor of an electronic device, cause the electronic device to: configure an interface to operate according to a first rate, wherein the first rate is one of a set of rates each corresponding to a respective data transfer rate between a controller and a memory system via the interface; switch the interface from the first rate to a second rate of the set of rates based on one or more commands from the controller to the memory system satisfying one or more parameters, the one or more parameters comprising a threshold amount of data associated with a command, a threshold number of issued commands associated with at least the threshold amount of data, a threshold number of issued and unexecuted commands, or any combination thereof; and communicate data with the memory system according to the second rate. A method is described. The method may include: configuring an interface to operate according to a first rate, wherein the first rate is one of a set of rates each corresponding to a respective data transfer rate between a controller and a memory system via the interface; switching the interface from the first rate to a second rate of the set of rates based on one or more commands from the controller to the memory system satisfying one or more parameters, the one or more parameters comprising a threshold amount of data associated with a command, a threshold number of issued commands associated with at least the threshold amount of data, a threshold number of issued and unexecuted commands, or any combination thereof; and communicating data with the memory system according to the second rate. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 illustrates an example of a system that supports rate adjustments for a memory interface in accordance with examples as disclosed herein. FIG. 2 illustrates an example of a process flow that supports rate adjustments for a memory interface in accordance with examples as disclosed herein. FIGS. 3A and 3B illustrate examples of parameter schemes that support rate adjustments for a memory interface in accordance with examples as disclosed herein. FIG. 4 shows a block diagram of a memory device that supports rate adjustments for a memory interface in accordance with examples as disclosed herein. FIG. 5 shows a flowchart illustrating one or more methods that support rate adjustments for a memory interface in accordance with examples as disclosed herein. DETAILED DESCRIPTION A host system and a memory system can communicate via an interface (e.g., at the host system) according to various modes and data transfer rates. For example, the host system can configure the interface to operate according to a low-speed mode or a high-speed mode (e.g., a burst mode), among other possible modes. The host system can additionally configure the interface to operate within a mode according to different data transfer rates. For example, when operating in a given mode, the host system may set the interface to one of a set of gears (e.g., gear settings) associated with the mode, where each gear may correspond to a different data transfer rate. In some examples, the host system can select a gear for the interface based on how often the host system issues commands to the memory system (e.g., commands that cause data to be transferred over the interface, such as read commands or write commands, among other commands). 
Thus, in some cases, if the host system issues commands relatively infrequently, the host system may set the interface to a relatively low gear (e.g., corresponding to a relatively low data transfer rate), and if the host system issues commands relatively frequently, the host system may set the interface to a relatively high gear (e.g., corresponding to a relatively high data transfer rate). However, in some cases, gear selection based on command frequency may reduce data rates, increase power consumption, or have one or more other disadvantages associated with the performance of the host system and memory system. For example, during video playback, the host system may issue commands (e.g., read commands) relatively infrequently (e.g., approximately every 200 milliseconds) because each command is associated with the transfer of a relatively large amount of data (e.g., 512 kilobytes (KB) of data). Thus, in some cases, the host system may set the gear of the interface to a low gear based on the infrequently issued commands, which may cause data to be transferred more slowly than operating the interface at a high gear, thereby increasing latency and degrading system performance. Additionally, operating the interface at a low gear can in some cases increase power consumption of the host system (e.g., and the memory system). For example, a lower gear may correspond to lower instantaneous power consumption than a higher gear, but a higher gear may support faster transfer of data and thus earlier deactivation of one or more components of the host system and/or the memory system. Thus, depending on differences in the instantaneous power and the duration for which various components are activated, among other factors, using lower gears may in some cases actually result in increased overall power consumption and longer data transfer times than using higher gears. As a hypothetical illustration, a transfer that draws 100 milliwatts for 10 milliseconds in a low gear consumes 1 millijoule, whereas the same transfer drawing 200 milliwatts for 3 milliseconds in a higher gear consumes only 0.6 millijoules. Thus, in some cases, operating the interface at a higher gear but for shorter periods of time may reduce power consumption (e.g., even though higher gears correspond to increased instantaneous power consumption during data transfer cycles). Techniques, systems, and devices for gear management of a communication interface that enable improved gear selection schemes are described herein. For example, the host system can configure the interface to operate according to a first gear (e.g., a first data transfer rate) and can communicate data with the memory system via the interface according to the first gear. The host system may adjust the gear of the interface in response to one or more commands from the host system satisfying one or more parameters of a set of parameters. For example, the host system can determine whether the size of a command (e.g., based on the amount of data transferred by the command) satisfies a threshold size. Additionally or alternatively, the host system may determine whether at least a threshold number of commands within a set of tracked commands are of at least a threshold size. Additionally or alternatively, the host system may determine whether a queue depth (e.g., a number of issued and unexecuted commands) satisfies a threshold queue depth. 
If the host system determines that at least one of the parameters is met (e.g., the size of the command satisfies the threshold size, at least the threshold number of commands have at least the threshold size, or the queue depth satisfies the threshold queue depth), then the host system may switch the interface to a second gear (e.g., a second data transfer rate) and can communicate data with the memory system according to the second gear. Alternatively, the host system may maintain the interface in the first gear if the host system determines that each of the parameters has not been met. In some examples, the second gear may be higher than the first gear (e.g., corresponding to a higher data transfer rate than the first gear). These and other aspects described herein may result in reduced power consumption, reduced data transfer latency, or other types of improved system performance. For example, at least some data transfers can be performed at higher rates and with lower latencies than with other gear management techniques, which can further allow for earlier deactivation of various system components and thereby result in power savings. Features of the present disclosure are first described in the context of a system with reference to FIG. 1. Features of the present disclosure are then described in the context of process flows and parameter schemes with reference to FIGS. 2, 3A, and 3B. These and other features of the present disclosure are further illustrated by and described with reference to the apparatus diagrams and flowcharts of FIGS. 4 and 5, which relate to rate adjustments for a memory interface. FIG. 1 illustrates an example of a system 100 that supports rate adjustments for a memory interface in accordance with examples as disclosed herein. System 100 includes host system 105 coupled with memory system 110. Memory system 110 may be or include any device or collection of devices, where a device or collection of devices includes at least one memory array. For example, memory system 110 may be or include a Universal Flash Storage (UFS) device, an embedded MultiMediaCard (eMMC) device, a flash device, a Universal Serial Bus (USB) flash device, a Secure Digital (SD) card, a solid-state drive (SSD), a hard disk drive (HDD), a dual in-line memory module (DIMM), a small outline DIMM (SO-DIMM), or a non-volatile DIMM (NVDIMM), among other possibilities. System 100 may be included in a computing device such as a desktop computer, a laptop computer, a network server, a mobile device, a vehicle (e.g., an airplane, a drone, a train, an automobile, or other conveyance), an Internet of Things (IoT) enabled device, an embedded computer (e.g., one included in a vehicle, industrial equipment, or a networked commercial device), or any other computing device that includes memory and a processing device. System 100 may include a host system 105, which may be coupled with the memory system 110. In some examples, this coupling may include an interface with a host system controller 106, which may be an example of a controller or control component configured to cause the host system 105 to perform various operations in accordance with examples as described herein. Host system 105 may include one or more devices and, in some cases, may include a processor chipset and a software stack executed by the processor chipset. For example, host system 105 may include an application configured for communicating with memory system 110 or a device therein. 
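As a concrete illustration of the gear-selection scheme described in the preceding paragraphs, the following C sketch evaluates the three example parameters (threshold command size, threshold count of tracked large commands, and threshold queue depth) and selects the higher gear if any one of them is satisfied. It is a simplified model under assumed threshold values, not an implementation of host system 105; all identifiers and numeric thresholds are hypothetical.

#include <stddef.h>

enum gear { GEAR_LOW, GEAR_HIGH };

/* Hypothetical threshold values; the disclosure leaves the actual values
 * to the implementation. */
#define CMD_SIZE_THRESHOLD    (512u * 1024u) /* threshold amount of data per command (bytes) */
#define LARGE_CMD_THRESHOLD   4u             /* threshold number of tracked commands of at least threshold size */
#define QUEUE_DEPTH_THRESHOLD 8u             /* threshold number of issued and unexecuted commands */

struct cmd_stats {
    size_t   last_cmd_bytes;  /* amount of data associated with the most recent command */
    unsigned large_cmd_count; /* tracked commands having at least the threshold size */
    unsigned queue_depth;     /* commands issued and not yet executed */
};

/* Select the higher gear if any one parameter is satisfied; otherwise
 * remain in (or fall back to) the lower gear. */
enum gear select_gear(const struct cmd_stats *s)
{
    if (s->last_cmd_bytes  >= CMD_SIZE_THRESHOLD ||
        s->large_cmd_count >= LARGE_CMD_THRESHOLD ||
        s->queue_depth     >= QUEUE_DEPTH_THRESHOLD)
        return GEAR_HIGH;
    return GEAR_LOW;
}

The flag and inactivity timer recited in the claims (e.g., claims 5-7 and 17) could be layered on top of this decision, setting the flag when select_gear returns the higher gear and clearing it upon timer expiration or when the tracked command set no longer meets any threshold.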
The processor chipset may include one or more cores, one or more caches (e.g., memory local to or included in the host system 105), a memory controller (e.g., an NVDIMM controller), and a storage protocol controller (e.g., a Peripheral Component Interconnect Express (PCIe) controller, a Serial Advanced Technology Attachment (SATA) controller). For example, host system 105 may use memory system 110 to write data to memory system 110 and read data from memory system 110. Although one memory system 110 is shown in FIG. 1, host system 105 may be coupled with any quantity of memory systems 110. Host system 105 may be coupled with memory system 110 via at least one physical host interface. In some cases, host system 105 and memory system 110 may be configured to communicate via the physical host interface using an associated protocol (e.g., to exchange or otherwise transfer control, address, data, and other signals between memory system 110 and host system 105). Examples of a physical host interface may include, but are not limited to, a SATA interface, a UFS interface, an eMMC interface, a PCIe interface, a USB interface, a Fiber Channel interface, a Small Computer System Interface (SCSI), a Serial Attached SCSI (SAS), a Double Data Rate (DDR) interface, a DIMM interface (e.g., a DIMM socket interface that supports DDR), an Open NAND Flash Interface (ONFI), and a Low Power Double Data Rate (LPDDR) interface. In some examples, one or more such interfaces may be included in or otherwise supported between the host system controller 106 of the host system 105 and the memory system controller 115 of the memory system 110. In some examples, host system 105 may be coupled with memory system 110 (e.g., the host system controller 106 may be coupled with the memory system controller 115) via a respective physical host interface for each memory device 130 included in memory system 110, or via a respective physical host interface for each type of memory device 130 included in memory system 110. Memory system 110 may include a memory system controller 115 and one or more memory devices 130. A memory device 130 may include one or more memory arrays of any type of memory cells, such as non-volatile memory cells, volatile memory cells, or any combination thereof. Although two memory devices 130-a and 130-b are shown in the example of FIG. 1, memory system 110 may include any quantity of memory devices 130. Furthermore, if memory system 110 includes more than one memory device 130, different memory devices 130 within memory system 110 may include the same or different types of memory cells. Memory system controller 115 may be coupled with and communicate with host system 105 (e.g., via the physical host interface) and may be an example of a controller or control component configured to cause memory system 110 to perform various operations in accordance with examples as described herein. Memory system controller 115 may also be coupled with and communicate with memory devices 130 (e.g., via an interface) to perform operations at a memory device 130 such as reading data, writing data, erasing data, or refreshing data, among other such operations, which may generically be referred to as access operations. In some cases, memory system controller 115 may receive commands from host system 105 and communicate with one or more memory devices 130 to execute such commands (e.g., at a memory array within the one or more memory devices 130). 
For example, memory system controller 115 may receive commands or operations from host system 105 and may translate the commands or operations into instructions or appropriate commands to achieve the desired access of memory device 130. In some cases, memory system controller 115 may exchange data with host system 105 and with one or more memory devices 130 (e.g., in response to or otherwise in association with commands from host system 105). For example, memory system controller 115 may convert responses (e.g., data packets or other signals) associated with memory device 130 into corresponding signals for host system 105.

Memory system controller 115 may be configured for other operations associated with memory device 130. For example, memory system controller 115 may perform or manage operations such as wear-leveling operations, garbage collection operations, error control operations (such as error detection operations or error correction operations), encryption operations, caching operations, media management operations, background refresh, health monitoring, and address translation between logical addresses (e.g., logical block addresses (LBAs)) associated with commands from host system 105 and physical addresses (e.g., physical block addresses) associated with memory cells within memory device 130.

Memory system controller 115 may include hardware such as one or more integrated circuits or discrete components, cache memory, or a combination thereof. The hardware may include circuitry with dedicated (e.g., hard-coded) logic to perform the operations ascribed herein to memory system controller 115. Memory system controller 115 may be or include a microcontroller, special-purpose logic circuitry (such as a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), or a digital signal processor (DSP)), or any other suitable processor or processing circuitry.

The memory system controller 115 may also include a local memory 120. In some cases, local memory 120 may include read-only memory (ROM) or other memory that may store operational code (e.g., executable instructions). In some cases, local memory 120 may additionally or alternatively include static random access memory (SRAM) or other memory that may be used by memory system controller 115 for internal storage or calculations, such as in connection with the functions ascribed herein to memory system controller 115. Additionally or alternatively, local memory 120 may serve as a cache for memory system controller 115. For example, data may be stored in local memory 120 if read from or written to memory device 130, and the data may subsequently be retrieved from or manipulated (e.g., updated) within local memory 120 in accordance with a caching policy (e.g., with reduced latency relative to memory device 130).

Although the example of memory system 110 in FIG. 1 has been illustrated as including memory system controller 115, in some cases memory system 110 may not include a memory system controller 115. For example, memory system 110 may additionally or alternatively rely on an external controller (e.g., implemented by host system 105) or one or more local controllers 135, which may be internal to memory devices 130, respectively, to perform the functions ascribed herein to memory system controller 115.
In general, one or more functions ascribed herein to memory system controller 115 may in some cases instead be performed by host system 105, a local controller 135, or any combination thereof. In some cases, a memory device 130 that is managed at least in part by memory system controller 115 may be referred to as a managed memory device. An example of a managed memory device is a managed NAND (MNAND) device.

A memory device 130 may include one or more arrays of non-volatile memory cells. For example, a memory device 130 may include NAND (e.g., NAND flash) memory, ROM, phase change memory (PCM), self-selecting memory, other chalcogenide-based memories, ferroelectric random access memory (FeRAM), magneto RAM (MRAM), NOR (e.g., NOR flash) memory, spin transfer torque (STT)-MRAM, conductive bridging RAM (CBRAM), resistive random access memory (RRAM), oxide-based RRAM (OxRAM), electrically erasable programmable ROM (EEPROM), or any combination thereof. Additionally or alternatively, a memory device 130 may include one or more arrays of volatile memory cells. For example, a memory device 130 may include RAM memory cells, such as dynamic RAM (DRAM) memory cells and synchronous DRAM (SDRAM) memory cells.

In some examples, a memory device 130 may include (e.g., on a same die or within a same package) a local controller 135, which may execute operations on one or more memory cells of the respective memory device 130. A local controller 135 may operate in conjunction with memory system controller 115 or may perform one or more functions ascribed herein to memory system controller 115. For example, as illustrated in FIG. 1, a memory device 130-a may include a local controller 135-a, and a memory device 130-b may include a local controller 135-b.

In some cases, a memory device 130 may be or include a NAND device (e.g., a NAND flash device). A memory device 130 may be or include a memory die 160. For example, in some cases, a memory device 130 may be a package that includes one or more dies 160. A die 160 may, in some examples, be a piece of electronics-grade semiconductor cut from a wafer (e.g., a silicon die cut from a silicon wafer). Each die 160 may include one or more planes 165, and each plane 165 may include a respective set of blocks 170, where each block 170 may include a respective set of pages 175, and each page 175 may include a set of memory cells.

In some cases, a NAND memory device 130 may include memory cells configured to each store one bit of information, which may be referred to as single level cells (SLCs). Additionally or alternatively, a NAND memory device 130 may include memory cells configured to each store multiple bits of information, which may be referred to as multi-level cells (MLCs) if configured to each store two bits of information, as tri-level cells (TLCs) if configured to each store three bits of information, as quad-level cells (QLCs) if configured to each store four bits of information, or more generically as multiple-level memory cells. Multiple-level memory cells may provide greater density of storage relative to SLC memory cells but may, in some cases, involve narrower read or write margins or greater complexities for supporting circuitry.

In some cases, a plane 165 may refer to a group of blocks 170, and in some cases, concurrent operations may take place within different planes 165. For example, concurrent operations may be performed on memory cells within different blocks 170 so long as the different blocks 170 are in different planes 165.
In some cases, an individual block 170 may be referred to as a physical block, and a virtual block 180 may refer to a group of blocks 170 within which concurrent operations may occur. For example, concurrent operations may be performed on blocks 170-a, 170-b, 170-c, and 170-d that are within planes 165-a, 165-b, 165-c, and 165-d, respectively, and blocks 170-a, 170-b, 170-c, and 170-d may be collectively referred to as a virtual block 180. In some cases, a virtual block may include blocks 170 from different memory devices 130 (e.g., including blocks in one or more planes of memory device 130-a and memory device 130-b). In some cases, the blocks 170 within a virtual block may have the same block address within their respective planes 165 (e.g., block 170-a may be "block 0" of plane 165-a, block 170-b may be "block 0" of plane 165-b, and so on). In some cases, performing concurrent operations in different planes 165 may be subject to one or more restrictions, such as concurrent operations being performed on memory cells within different pages 175 that have the same page address within their respective planes 165 (e.g., related to command decoding circuitry, page address decoding circuitry, or other circuitry shared across planes 165).

In some cases, a block 170 may include memory cells organized into rows (pages 175) and columns (e.g., strings, not shown). For example, memory cells in a same page 175 may share (e.g., be coupled with) a common word line, and memory cells in a same string may share (e.g., be coupled with) a common digit line (which may alternatively be referred to as a bit line).

For some NAND architectures, memory cells may be read and programmed (e.g., written) at a first level of granularity (e.g., at a page granularity) but erased at a second level of granularity (e.g., at a block granularity). That is, a page 175 may be the smallest unit of memory (e.g., set of memory cells) that may be independently programmed or read (e.g., programmed or read concurrently as part of a single program or read operation), and a block 170 may be the smallest unit of memory (e.g., set of memory cells) that may be independently erased (e.g., erased concurrently as part of a single erase operation). Furthermore, in some cases, NAND memory cells may be erased before they can be re-written with new data. Thus, for example, a used page 175 may, in some cases, not be updated until the entire block 170 that includes the page 175 has been erased.
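The hierarchy and granularity rules described above can be summarized in the following illustrative sketch; the class names and counts below are placeholder assumptions for illustration only and are not part of this disclosure.

```python
# Illustrative sketch (not part of this disclosure) of the NAND organization
# described above: a die contains planes, a plane contains blocks, and a block
# contains pages. Programming and reading occur at page granularity, while
# erasing occurs at block granularity. All names and counts are placeholders.

class Page:
    def __init__(self):
        self.data = None  # smallest independently programmable/readable unit

    def program(self, data: bytes) -> None:
        if self.data is not None:
            raise RuntimeError("page must be erased (with its block) before rewrite")
        self.data = data

class Block:
    def __init__(self, pages_per_block: int = 16):
        self.pages = [Page() for _ in range(pages_per_block)]

    def erase(self) -> None:
        # Smallest independently erasable unit: all pages clear together.
        for page in self.pages:
            page.data = None

class Plane:
    def __init__(self, blocks_per_plane: int = 8):
        self.blocks = [Block() for _ in range(blocks_per_plane)]

class Die:
    def __init__(self, planes_per_die: int = 4):
        self.planes = [Plane() for _ in range(planes_per_die)]
```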
In some examples, host system 105 and memory system 110 may communicate information (e.g., data, commands) via an interface, such as the physical host interface. In some cases, memory system controller 115 may communicate information with one or more of the memory devices 130 via an interface. The communication architecture of an interface may include multiple layers (e.g., layers of a protocol stack) through which information is communicated. For example, a communication architecture may include an application layer, a UFS Transport Protocol (UTP) layer, a Unipro protocol stack (e.g., which includes the UTP layer), and a physical layer (e.g., a UFS Interconnect (UIC) layer), or combinations thereof, among other layers.

In some examples, the application layer manages Small Computer System Interface (SCSI) commands, task management functions such as command queue control, device power management operations, commands for communicating with the physical layer, and queue requests for modifying and/or retrieving configuration information associated with the host system 105 (or memory system 110), among other commands and operations. In some cases, the Unipro protocol stack may be managed by a device management entity (DME). The DME may manage the communication of commands, operations, requests, and the like to the various layers of the communication architecture. For example, the DME may route commands received from higher layers (such as the application layer) to the physical layer and may route commands and data received at the physical layer to the higher layers. In some examples, commands and data may be communicated between devices (e.g., between host system 105 and memory system 110, between memory system controller 115 and a memory device 130) via the physical layer. In some cases, the physical layer may operate in accordance with an MPHY protocol.

In some examples, the interface may operate according to various operating modes. For example, the interface may operate according to a low-speed mode and a high-speed mode. In some cases, the high-speed mode may correspond to an operating mode in which information is communicated in bursts of data (e.g., and may accordingly be referred to as a burst mode).

Within each operating mode, the interface may operate according to different data transfer rates (which may be referred to as gears). For example, if operating in the low-speed mode, the interface may operate according to one of a first set of gears associated with the low-speed mode. Additionally or alternatively, if operating in the burst mode, the interface may operate according to one of a second set of gears associated with the burst mode. In some examples, a controller (e.g., host system controller 106, memory system controller 115) may change the gear of the interface. For example, different commands may be associated with different speed requirements. For instance, commands associated with transferring video data (e.g., read commands, write commands) may be associated with different speed requirements than commands associated with transferring image data, among other examples. Accordingly, the controller may change the gear of the interface to satisfy speed requirements associated with different commands. In some cases, the DME may support the controller in changing (e.g., switching) the gear of the interface. For example, the Unipro stack may include one or more registers that may be read or written via the DME, and a register of the one or more registers may store the current gear of the interface. Thus, the controller (e.g., via the DME) may determine the current gear of the interface by reading the register and may change the gear of the interface by writing a new gear to the register.
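As a non-limiting illustration of the register-based gear change just described, the following sketch models a DME exposing read/write access to a register that stores the current gear; the attribute identifier, class name, and function names are assumptions made for illustration and are not defined by this disclosure or by any particular protocol specification.

```python
# Illustrative sketch of a DME-mediated gear change. The attribute identifier
# and method names are hypothetical placeholders, not names defined by this
# disclosure or by any protocol specification.

CURRENT_GEAR_ATTR = 0x0001  # assumed identifier of the register storing the current gear

class DeviceManagementEntity:
    """Toy DME exposing read/write access to protocol-stack registers."""

    def __init__(self):
        self._registers = {CURRENT_GEAR_ATTR: 1}  # interface initially in gear 1

    def get(self, attribute: int) -> int:
        return self._registers[attribute]

    def set(self, attribute: int, value: int) -> None:
        self._registers[attribute] = value

def read_current_gear(dme: DeviceManagementEntity) -> int:
    # The controller determines the current gear by reading the register via the DME.
    return dme.get(CURRENT_GEAR_ATTR)

def write_new_gear(dme: DeviceManagementEntity, new_gear: int) -> None:
    # The controller changes the gear by writing a new gear to the register via the DME.
    dme.set(CURRENT_GEAR_ATTR, new_gear)
```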
In some examples, the controller may set the gear of the interface based on how frequently the controller issues commands (e.g., based on a command density). In some cases, however, setting the gear based on the command frequency may reduce the data rates and performance of the host system 105 and the memory system 110. For example, during video playback, the controller may issue commands (e.g., read commands) for relatively large amounts of data (e.g., 512 KB of data) relatively infrequently (e.g., about every 200 milliseconds). Accordingly, the controller may set the gear of the interface to a low gear based on the low frequency of the commands, thus slowing the data transfers, increasing latency, and degrading system performance. Additionally, in some cases, operating the interface at a low gear may increase power consumption at the host system 105 (e.g., and the memory system 110). For example, a lower gear may correspond to a lower instantaneous power consumption than a higher gear, but the higher gear may support transferring the data faster and thus deactivating one or more components of the host system 105 and/or the memory system 110 earlier. Therefore, using the lower gear may result in increased overall power consumption and longer data transfer durations compared with using the higher gear.

To reduce power consumption and improve system performance, a controller (e.g., host system controller 106, memory system controller 115) may support an improved gear selection scheme. For example, the controller may configure the interface to operate according to a first gear and may communicate data with, for example, memory system 110 via the interface according to the first gear. The controller may switch the gear of the interface in response to one or more commands from the controller satisfying one or more parameters of a set of parameters. For example, the controller may determine whether a size of a command satisfies a threshold size, whether a queue depth satisfies a threshold queue depth, whether at least a threshold number of commands within a set of tracked commands have at least a threshold size, or a combination thereof. If the controller determines that at least one of the parameters is met (e.g., the size of the command satisfies the threshold size, at least the threshold number of commands have at least the threshold size, the queue depth satisfies the threshold queue depth), the controller may switch the interface to a second gear (e.g., a second data transfer rate) and may communicate data with the memory system 110 according to the second gear. Alternatively, the controller may maintain the interface in the first gear if the controller determines that each of the parameters has not been met. In some examples, the second gear may be higher than the first gear (e.g., corresponding to a higher data transfer rate than the first gear). In this way, power savings may be realized because at least some data transfers may be performed at higher rates and with lower latencies, thereby allowing various system components to be deactivated earlier.
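The gear selection scheme described above may be summarized, under stated assumptions, by the following sketch: the interface is switched to the second (higher) gear if any one of the parameters is satisfied and is otherwise maintained in the first gear. The threshold values, gear values, and function name are placeholder assumptions for illustration only.

```python
# Illustrative sketch of the gear selection scheme described above. Threshold
# and gear values are arbitrary placeholders, not values specified herein.

THRESHOLD_SIZE = 4 * 1024     # threshold command size (e.g., 4 KB)
THRESHOLD_QUEUE_DEPTH = 2     # threshold number of issued and unexecuted commands
FIRST_GEAR = 1                # relatively low data transfer rate
SECOND_GEAR = 4               # relatively high data transfer rate

def select_gear(command_size: int, queue_depth: int, history_flag: bool) -> int:
    """Return the gear for the next transfer.

    history_flag is set when at least a threshold number of recently tracked
    commands had at least the threshold size (see the discussion of FIG. 3A).
    """
    if (command_size >= THRESHOLD_SIZE               # command size satisfies threshold
            or queue_depth >= THRESHOLD_QUEUE_DEPTH  # queue depth satisfies threshold
            or history_flag):                        # recent history of large commands
        return SECOND_GEAR
    return FIRST_GEAR                                # no parameter met: keep first gear
```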
System 100 may include any number of non-transitory computer-readable media that support rate scaling of a memory interface. For example, host system 105, memory system controller 115, or a memory device 130 may include or otherwise have access to one or more non-transitory computer-readable media storing instructions (e.g., firmware) for performing the functions ascribed herein to host system 105, memory system controller 115, or memory device 130. For example, such instructions, if executed by host system 105 (e.g., by host system controller 106), by memory system controller 115, or by a memory device 130 (e.g., by a local controller 135), may cause host system 105, memory system controller 115, or memory device 130 to perform one or more associated functions as described herein.

FIG. 2 illustrates an example of a process flow 200 that supports rate scaling of a memory interface according to examples disclosed herein. Process flow 200 may be performed by components of a host system, such as host system 105 described with reference to FIG. 1. For example, process flow 200 may be performed by a controller of the host system, such as host system controller 106 described with reference to FIG. 1. Process flow 200 may depict a process for selecting a data transfer rate, which may be implemented to reduce latency, increase data rates, increase system performance, and decrease power consumption, among other benefits. Aspects of process flow 200 may be implemented by a controller, among other components. Additionally or alternatively, aspects of process flow 200 may be implemented as instructions stored in memory (e.g., as firmware stored in a memory coupled with host system controller 106). For example, the instructions, if executed by a controller (e.g., host system controller 106), may cause the controller to perform the operations of process flow 200.

In the following description of process flow 200, the operations may be performed in a different order or at different times. Some operations may also be omitted from process flow 200, and other operations may be added to process flow 200.

At 205, an interface (e.g., a physical host interface) used to communicate information between the controller and a memory system (e.g., the memory system 110 described with reference to FIG. 1) may be configured to operate according to a first rate (e.g., in a first gear). For example, a controller (e.g., host system controller 106) may configure the interface to operate according to the first rate. The first rate may correspond to a first data transfer rate between the controller and the memory system. In some examples, the controller may configure the interface to operate according to the first rate by issuing a DME command that writes the first rate (e.g., a value or index corresponding to the first rate) to a register that indicates the current rate of the interface.

In some examples, the first rate may be one of a set of rates that corresponds to a burst mode (e.g., a high-speed mode) associated with the interface. For example, the controller may operate the interface according to various operating modes, such as a low-speed mode and the burst mode, and each operating mode may be associated with multiple rates (e.g., gears). In some cases, the controller may initially configure the interface to operate according to the low-speed mode (e.g., after a power-on or reset of the host system). The controller may switch the interface to operate according to the burst mode (e.g., during normal operation of the host system) and may (e.g., initially) select and configure the first rate for the interface. In some examples, the first rate may correspond to a relatively low data transfer rate (e.g., a relatively low gear) associated with the burst mode. For example, the first rate may correspond to a lowest rate (e.g., a lowest gear) of the set of rates.

At 210, whether a command from the controller to the memory system is associated with at least a threshold amount of data may be evaluated.
For example, the controller may determine whether a size of the command satisfies (e.g., is greater than, or is greater than or equal to) a threshold size. The size of the command may correspond to an amount of data that is communicated (e.g., transferred) based on the command (e.g., an amount of data read from the memory system, an amount of data written to the memory system). Accordingly, the controller may determine whether the amount of data associated with (e.g., communicated in response to) the command satisfies a threshold amount of data (e.g., 4 KB of data, among other amounts of data). In some examples, the command may be a command that has not yet been issued. In some examples, the command may correspond to a command that the controller may issue next.

If, at 210, the controller determines that the size of the command fails to satisfy the threshold size, the controller may proceed to 215. At 215, whether a number of issued and unexecuted commands from the controller to the memory system satisfies a threshold number of issued and unexecuted commands may be evaluated. For example, the controller may determine a queue depth of a queue of commands that have been issued from the controller and that remain unexecuted (e.g., by the memory system), where the queue depth corresponds to the number of commands included in the queue. The controller may determine whether the queue depth satisfies (e.g., is greater than, or is greater than or equal to) a threshold queue depth. That is, the controller may determine whether the number of issued and unexecuted commands from the controller to the memory system satisfies the threshold number of issued and unexecuted commands. Additional details related to determining whether the threshold queue depth is satisfied are described below with reference to FIG. 3B.

If, at 215, the controller determines that the queue depth fails to satisfy the threshold queue depth, the controller may proceed to 220. At 220, a value of a history flag (e.g., stored at the controller) may be evaluated. For example, the controller may determine whether the history flag is set to a first value (e.g., a bit value '1', a bit value '0'). For example, the controller may be configured to track a history associated with commands issued from the controller to (e.g., and executed by) the memory system. To track the history, the controller may track a first number of issued (e.g., and executed) commands and may determine how many commands within the first number of issued commands have at least a threshold size. For example, the controller may determine (e.g., track) whether at least a threshold number of commands of the first number of issued commands are associated with at least the threshold amount of data. That is, the controller may determine whether at least a threshold number of commands of the first number of issued commands have sizes that satisfy (e.g., are greater than, or are greater than or equal to) the threshold size.

The controller may set the history flag to the first value if the first number of issued commands includes at least the threshold number of commands having at least the threshold size. Alternatively, the controller may set the history flag to a second value (e.g., a bit value '0', a bit value '1') if the first number of issued commands fails to include at least the threshold number of commands having at least the threshold size. In some examples, the controller may set the history flag to the second value in response to an expiration of a timer associated with communication inactivity between the controller and the memory system.
Accordingly, at 220, the controller may determine the value of the history flag and whether the history flag is set (e.g., was previously set by the controller) to the first value or the second value. Additional details related to setting the history flag are described below with reference to FIG. 3A.

If, at 220, the controller determines that the history flag is set to the second value, the controller may proceed to 225. At 225, data may be communicated between the controller and the memory system via the interface according to the first rate. For example, the controller may refrain from switching the interface to a second rate in response to determining that the size of the command fails to satisfy the threshold size, that the queue depth fails to satisfy the threshold queue depth, and that the history flag is set to the second value (e.g., that the first number of issued commands failed to include at least the threshold number of commands having at least the threshold size). For example, the second rate may correspond to a relatively high data transfer rate (e.g., a relatively high gear) associated with the burst mode, such as a maximum rate of the set of rates (e.g., a highest gear of a set of gears). The size of the command failing to satisfy the threshold size, the queue depth failing to satisfy the threshold queue depth, and the history flag being set to the second value may indicate to the controller that a relatively small amount of data (if any) will be transferred between the controller and the memory system (e.g., in the near future, due to currently issued and unexecuted commands, or due to commands issued next), or that a level of communication activity between the controller and the memory system may be relatively low. Accordingly, a high data transfer rate may not be needed to transfer any such data. Therefore, to conserve power, the controller may maintain the configuration of the interface to operate according to the first rate and may communicate the data (if any) according to the first rate.

If, at 210, the controller determines that the size of the command satisfies the threshold size, the controller may proceed to 230 through 245 as follows. Additionally or alternatively, if, at 215, the controller determines that the queue depth satisfies the threshold queue depth, the controller may proceed to 230 through 245 as follows. Additionally or alternatively, if, at 220, the controller determines that the history flag is set to the first value, the controller may proceed to 230 through 245 as follows. That is, if the controller determines that the size of the command satisfies the threshold size, that the queue depth satisfies the threshold queue depth, that the history flag is set to the first value, or any combination thereof, the controller may proceed to 230 through 245 as follows (a sketch of this ordered evaluation appears below).
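Under the same placeholder assumptions as the earlier sketch, the ordered evaluation of 210, 215, and 220, together with the switching and communication of 225 through 245, might be sketched as follows; the interface object and its methods are hypothetical stand-ins for the DME-mediated operations described herein, not an implementation defined by this disclosure.

```python
# Illustrative sketch of the ordered evaluation in process flow 200. The
# thresholds are redefined here so the sketch is self-contained; the interface
# object and its set_rate/transfer methods are hypothetical stand-ins.

THRESHOLD_SIZE = 4 * 1024       # placeholder threshold command size
THRESHOLD_QUEUE_DEPTH = 2       # placeholder threshold queue depth
FIRST_RATE, SECOND_RATE = 1, 4  # placeholder low and high rates

def run_process_flow_200(interface, command, queue_depth, history_flag):
    # 210, 215, 220: evaluate the parameters in order with early exit
    # (Python's `or` stops at the first satisfied condition).
    switch_up = (command.size >= THRESHOLD_SIZE
                 or queue_depth >= THRESHOLD_QUEUE_DEPTH
                 or history_flag)
    if switch_up:
        interface.set_rate(SECOND_RATE)   # 230: switch to the second rate
        interface.transfer(command)       # 235: communicate data at the second rate
    else:
        interface.transfer(command)       # 225: communicate data at the first rate

def maybe_switch_back(interface, next_commands, queue_depth, history_flag):
    # 240, 245: if a second set of commands fails every parameter, return to
    # the first rate before communicating the second data.
    if (all(c.size < THRESHOLD_SIZE for c in next_commands)
            and queue_depth < THRESHOLD_QUEUE_DEPTH and not history_flag):
        interface.set_rate(FIRST_RATE)
    for c in next_commands:
        interface.transfer(c)
```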
At 230, the interface may be switched from the first rate to the second rate. For example, the controller may switch the interface from the first rate to the second rate in response to determining that the size of the command satisfies the threshold size, that the queue depth satisfies the threshold queue depth, that the history flag is set to the first value, or any combination thereof. For example, the size of the command satisfying the threshold size, the queue depth satisfying the threshold queue depth, and/or the history flag being set to the first value may indicate to the controller that a relatively large amount of data will be transferred between the controller and the memory system (e.g., in the near future, due to currently issued and unexecuted commands, or due to commands issued next), or that a level of communication activity between the controller and the memory system may be relatively high. Thus, a high data transfer rate may enable the controller and the memory system to transfer any such data faster, thereby allowing various system components to be deactivated earlier and reducing power consumption. Accordingly, the controller may switch the interface from the first rate to the second rate corresponding to a higher data transfer rate than the first rate (e.g., the maximum rate of the set of rates). In some examples, the controller may switch the interface from the first rate to the second rate by issuing a DME command that writes the second rate (e.g., a value or index corresponding to the second rate) to the register that indicates the current rate of the interface.

At 235, data may be communicated between the controller and the memory system according to the second rate. For example, the controller may communicate data with the memory system via the interface according to the second rate based on switching the interface to the second rate.

At 240, the interface may be switched from the second rate to the first rate. For example, the controller may determine that a second set of one or more commands from the controller indicates that the controller is to switch the interface from the second rate to the first rate. For example, the controller may determine that a command of the second set (e.g., a command to be issued next) fails to satisfy the threshold size, that a queue depth associated with the second set fails to satisfy the threshold queue depth, and that the history flag is set to the second value (e.g., based on the second set). Accordingly, the controller may switch the interface from the second rate to the first rate. In some examples, the controller may switch the interface from the second rate to the first rate by issuing a DME command that writes the first rate (e.g., a value or index corresponding to the first rate) to the register that indicates the current rate of the interface.

At 245, second data (e.g., associated with the second set of one or more commands) may be communicated between the controller and the memory system according to the first rate. For example, the controller may communicate the second data with the memory system according to the first rate based on switching the interface from the second rate to the first rate.

FIG. 3A illustrates an example of a parameter scheme 300 that supports rate scaling of a memory interface according to examples disclosed herein. Parameter scheme 300 may be implemented by components of host system 105 described with reference to FIG. 1. For example, parameter scheme 300 may be implemented by a controller of the host system, such as host system controller 106 described with reference to FIG. 1.
Parameter scheme 300 may be implemented by the controller to support a data transfer rate selection scheme, which may be implemented to reduce latency, increase data rates, increase system performance, and decrease power consumption, among other benefits.

Parameter scheme 300 depicts a history 305 that may indicate whether the value of a flag 320 is set to a first value or a second value. For example, history 305 may correspond to a set of commands, tracked by the controller, that were issued (e.g., executed, not executed, or both) from the controller to the memory system. In some examples, the number of commands included in history 305 (e.g., tracked by the controller) may be indicated by a maxCommand parameter stored at the controller (e.g., in a register). In some cases, the controller may set the value of the maxCommand parameter (e.g., and may change the value of the maxCommand parameter at any time during operation). In some other cases, the value of the maxCommand parameter may be a defined value (e.g., a value programmed during manufacturing of the controller).

History 305 may include commands 310 and/or commands 315. For example, the history may include a command 310-a, a command 310-b, a command 310-c, a command 310-d, a command 315-a, a command 315-b, and a command 315-c. A command 310 may correspond to a command having a respective size that satisfies (e.g., is greater than, or is greater than or equal to) a threshold size. That is, a command 310 may correspond to a command that causes at least a threshold amount of data to be transferred between the controller and the memory system. A command 315 may correspond to a command having a respective size that fails to satisfy the threshold size. It is noted that, for illustrative purposes, FIG. 3A depicts history 305 as including both commands 310 and 315; however, the principles disclosed herein may be adapted and applied to a history 305 that includes any number of commands 310 and 315.

The controller may set the value of the flag based on the number of commands 310 included in history 305. For example, the controller may be configured to track whether the number of commands 310 satisfies (e.g., is greater than, or is greater than or equal to) a threshold number. If the controller determines that the number of commands 310 satisfies the threshold number, the controller may set the flag 320 to a first value (e.g., a bit value '1', a bit value '0'), where the first value indicates that an interface of the controller (e.g., a physical host interface) is to operate according to a first rate (e.g., of a burst mode). Alternatively, if the controller determines that the number of commands 310 fails to satisfy the threshold number, the controller may set the flag 320 to a second value (e.g., a bit value '0', a bit value '1'), where the second value indicates that the interface is to operate according to a second rate (e.g., of the burst mode). In some cases, the controller may configure the interface accordingly. In some examples, the first rate may correspond to a higher data transfer rate than the second rate. In some examples, the first rate may correspond to a maximum rate for the burst mode, and the second rate may correspond to a minimum rate for the burst mode.

In some examples, the value of the threshold number may be indicated by a perfCommand parameter stored at the controller (e.g., in a register). That is, the number of commands 310 included in history 305 that causes the controller to set the flag 320 to the first value may be indicated by the perfCommand parameter.
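A toy model of this tracking, assuming the maxCommand and perfCommand parameters just described (and the inactivity timer 325 described below, modeled here as an explicit callback), might look like the following; the class and method names are illustrative assumptions.

```python
# Toy model of parameter scheme 300. The max_command and perf_command names
# follow the maxCommand and perfCommand parameters described above; the
# inactivity timer (timer 325, described below) is modeled as an explicit
# callback. All other names are illustrative assumptions.
from collections import deque

class CommandHistory:
    def __init__(self, max_command: int, perf_command: int, threshold_size: int):
        self.window = deque(maxlen=max_command)  # maxCommand: commands tracked
        self.perf_command = perf_command         # perfCommand: count that sets the flag
        self.threshold_size = threshold_size
        self.flag_first_value = False            # flag 320 (False ~ second value)

    def record(self, command_size: int) -> None:
        # Track the command; entries beyond maxCommand fall out automatically.
        self.window.append(command_size)
        large = sum(1 for size in self.window if size >= self.threshold_size)
        self.flag_first_value = large >= self.perf_command

    def on_inactivity_timer_expired(self) -> None:
        # Expiration of the inactivity timer clears the history and the flag.
        self.window.clear()
        self.flag_first_value = False
```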
In some cases, the controller may set the value of the perfCommand parameter (e.g., and may change the value of the perfCommand parameter at any time during operation). In some other cases, the value of the perfCommand parameter may be a defined value (e.g., a value programmed during manufacturing of the controller).

In some examples, the controller may set the value of the flag 320 based on a timer 325 associated with communication inactivity between the controller and the memory system. For example, timer 325 may run while no data is being transferred between the controller and the memory system. Thus, an expiration of timer 325 may indicate that no data has been transferred between the controller and the memory system for at least a duration of timer 325. In some examples, the interface may be in an idle state if no data is actively being communicated via the interface. In response to the expiration of timer 325, the controller may set the flag 320 to the second value, which indicates that the interface is to operate according to the second (e.g., lower) rate. In some cases, the controller may clear the commands 310 and 315 included in history 305 in response to the expiration of timer 325 and may begin tracking new commands issued after the expiration of timer 325.

By tracking the history 305, the controller may reduce a frequency of rate switching for the interface. For example, history 305 may indicate how recently commands 310 having at least the threshold size have been issued. A high frequency of commands 310 may indicate that another command 310 is relatively likely to be issued. Thus, even if the size of a next issued command fails to satisfy the threshold size and the queue depth of the queue of issued and unexecuted commands fails to satisfy the threshold queue depth, the controller may maintain the interface at the first (e.g., higher) rate if at least the threshold number of commands 310 have been issued recently. Because the frequency of switching the rate of the interface is reduced, latencies associated with switching the rate of the interface may be reduced, thereby improving system performance.

FIG. 3B illustrates an example of a parameter scheme 330 that supports rate scaling of a memory interface according to examples disclosed herein. Parameter scheme 330 may be implemented by components of host system 105 described with reference to FIG. 1. For example, parameter scheme 330 may be implemented by a controller of the host system, such as host system controller 106 described with reference to FIG. 1. Parameter scheme 330 may be implemented by the controller to support a data transfer rate selection scheme, which may be implemented to reduce latency, increase data rates, increase system performance, and decrease power consumption, among other benefits.

Parameter scheme 330 depicts a queue 335. Queue 335 may indicate whether an interface of the controller (e.g., a physical host interface) is to operate according to a first rate (e.g., of a burst mode) corresponding to a high (e.g., maximum) data transfer rate or according to a second rate (e.g., of the burst mode) corresponding to a low (e.g., minimum) data transfer rate. For example, the controller may issue commands to the memory system before previously issued commands are completed (e.g., executed). Such issued commands may be included in queue 335 and may be executed as the memory system becomes available to execute the commands.
For example, queue 335 may include commands 340 corresponding to issued and unexecuted commands from the controller to the memory system. For example, queue 335 may include a command 340-a, a command 340-b, and a command 340-c that each correspond to an issued and unexecuted command (e.g., although any number of commands 340 included in queue 335 is possible). In some examples, the number of commands 340 included in queue 335 may be referred to as a queue depth of queue 335.

The controller may determine whether the queue depth of queue 335 satisfies (e.g., is greater than, or is greater than or equal to) a threshold 345 and may determine the data transfer rate for the interface based on determining whether the threshold 345 is satisfied. For example, if the queue depth satisfies threshold 345, the controller may set (e.g., switch) the interface to the first rate or may maintain the interface at the first rate (e.g., if the interface is currently set to the first rate). Alternatively, if the queue depth fails to satisfy threshold 345, the controller may set (e.g., switch) the interface to the second rate or may maintain the interface at the second rate (e.g., if the interface is currently set to the second rate). In the example of FIG. 3B, queue 335 may have a queue depth of three commands 340, and threshold 345 may correspond to a queue depth of two commands 340. Accordingly, the controller may determine that the queue depth satisfies threshold 345 and may set the interface to (e.g., or maintain the interface at) the first rate corresponding to the high (e.g., maximum) data transfer rate.
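The queue-depth rule of parameter scheme 330 can be illustrated with the example just described (a queue depth of three commands 340 against a threshold 345 of two); the names and rate values below are placeholder assumptions.

```python
# Illustration of parameter scheme 330 using the example of FIG. 3B: a queue
# depth of three commands 340 against a threshold 345 of two. In this scheme,
# the first rate is the high (e.g., maximum) rate. Names are placeholders.

HIGH_RATE = 4    # first rate of scheme 330 (high, e.g., maximum)
LOW_RATE = 1     # second rate of scheme 330 (low, e.g., minimum)
THRESHOLD_345 = 2

def rate_for_queue_depth(queue_depth: int) -> int:
    # A queue depth satisfying the threshold selects (or maintains) the high
    # rate; otherwise the low rate is selected (or maintained).
    return HIGH_RATE if queue_depth >= THRESHOLD_345 else LOW_RATE

queue_335 = ["command 340-a", "command 340-b", "command 340-c"]
assert rate_for_queue_depth(len(queue_335)) == HIGH_RATE  # depth 3 satisfies threshold 2
```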
FIG. 4 shows a block diagram 400 of a host system 420 that supports rate scaling of a memory interface according to examples disclosed herein. Host system 420 may be an example of aspects of a host system as described with reference to FIGS. 1 through 3B. Host system 420, or various components thereof, may be an example of means for performing various aspects of rate scaling of a memory interface as described herein. For example, host system 420 may include a configuration component 425, a switch component 430, a communication component 435, a parameter component 440, a command component 445, a flag component 450, or any combination thereof. Each of these components may communicate, directly or indirectly, with one another (e.g., via one or more buses).

Configuration component 425 may be configured as or otherwise support a means for configuring an interface to operate according to a first rate, where the first rate is one of a set of rates that each correspond to a respective data transfer rate between a controller and a memory system via the interface. Switch component 430 may be configured as or otherwise support a means for switching the interface from the first rate to a second rate of the set of rates based at least in part on one or more commands from the controller to the memory system satisfying one or more parameters, the one or more parameters including a threshold amount of data associated with a command, a threshold number of issued commands associated with at least the threshold amount of data, a threshold number of issued and unexecuted commands, or any combination thereof. Communication component 435 may be configured as or otherwise support a means for communicating data with the memory system according to the second rate.

In some examples, parameter component 440 may be configured as or otherwise support a means for determining whether a first command of the one or more commands is associated with at least the threshold amount of data, where switching the interface from the first rate to the second rate is based at least in part on determining that the first command is associated with at least the threshold amount of data.

In some examples, parameter component 440 may be configured as or otherwise support a means for determining whether a number of issued and unexecuted commands included in the one or more commands satisfies the threshold number of issued and unexecuted commands, where switching the interface from the first rate to the second rate is based at least in part on determining that the number of issued and unexecuted commands satisfies the threshold number of issued and unexecuted commands.

In some examples, command component 445 may be configured as or otherwise support a means for tracking a first number of commands issued by the controller to the memory system, the first number of commands including the one or more commands. In some examples, parameter component 440 may be configured as or otherwise support a means for determining whether the first number of commands includes at least the threshold number of issued commands associated with at least the threshold amount of data, where switching the interface from the first rate to the second rate is based at least in part on determining that the first number of commands includes at least the threshold number of issued commands associated with at least the threshold amount of data.

In some examples, flag component 450 may be configured as or otherwise support a means for setting a flag to a first value based at least in part on determining that the first number of commands includes at least the threshold number of issued commands associated with at least the threshold amount of data, the first value indicating to operate the interface according to the second rate, where switching the interface from the first rate to the second rate is based at least in part on setting the flag to the first value.

In some examples, flag component 450 may be configured as or otherwise support a means for setting the flag to a second value, after setting the flag to the first value, based at least in part on an expiration of a timer associated with communication inactivity between the controller and the memory system, the second value indicating to operate the interface according to the first rate.

In some examples, flag component 450 may be configured as or otherwise support a means for setting the flag to a second value, after setting the flag to the first value, based at least in part on a second number of commands issued by the controller to the memory system and tracked by the host system failing to include at least the threshold number of issued commands associated with at least the threshold amount of data, the second value indicating to operate the interface according to the first rate.
In some examples, the first rate corresponds to a first data transfer rate, and the second rate corresponds to a second data transfer rate that is higher than the first data transfer rate.

In some examples, the first rate corresponds to a minimum rate of the set of rates, and the second rate corresponds to a maximum rate of the set of rates.

In some examples, switch component 430 may be configured as or otherwise support a means for switching the interface from the second rate to the first rate based at least in part on a second set of commands from the controller failing to satisfy the one or more parameters. In some examples, communication component 435 may be configured as or otherwise support a means for communicating second data with the memory system according to the first rate.

In some examples, the one or more parameters are each included in a set of parameters. In some examples, switch component 430 may be configured as or otherwise support a means for switching the interface from the first rate to the second rate based at least in part on the one or more commands satisfying any parameter of the set of parameters.

In some examples, the set of rates corresponds to a burst mode associated with the interface, the burst mode is different from a low-speed mode associated with the interface, and the burst mode is associated with a higher data transfer rate than the low-speed mode.

FIG. 5 shows a flowchart illustrating a method 500 that supports rate scaling of a memory interface according to examples disclosed herein. The operations of method 500 may be implemented by a host system or components thereof as described herein. For example, the operations of method 500 may be performed by a host system as described with reference to FIGS. 1 through 4. In some examples, the host system may execute a set of instructions to control the functional elements of the device to perform the described functions. Additionally or alternatively, the host system may perform aspects of the described functions using special-purpose hardware.

At 505, the method may include configuring an interface to operate according to a first rate, where the first rate is one of a set of rates that each correspond to a respective data transfer rate between a controller and a memory system via the interface. The operations of 505 may be performed according to examples disclosed herein. In some examples, aspects of the operations of 505 may be performed by a configuration component 425 as described with reference to FIG. 4.

At 510, the method may include switching the interface from the first rate to a second rate of the set of rates based at least in part on one or more commands from the controller to the memory system satisfying one or more parameters, the one or more parameters including a threshold amount of data associated with a command, a threshold number of issued commands associated with at least the threshold amount of data, a threshold number of issued and unexecuted commands, or any combination thereof. The operations of 510 may be performed according to examples disclosed herein. In some examples, aspects of the operations of 510 may be performed by a switch component 430 as described with reference to FIG. 4.

At 515, the method may include communicating data with the memory system according to the second rate. The operations of 515 may be performed according to examples disclosed herein.
In some examples, aspects of the operations of 515 may be performed by a communication component 435 as described with reference to FIG. 4.

In some examples, an apparatus as described herein may perform a method or methods, such as method 500. The apparatus may include features, circuitry, logic, means, or instructions (e.g., a non-transitory computer-readable medium storing instructions executable by a processor) for: configuring an interface to operate according to a first rate, where the first rate is one of a set of rates that each correspond to a respective data transfer rate between a controller and a memory system via the interface; switching the interface from the first rate to a second rate of the set of rates based at least in part on one or more commands from the controller to the memory system satisfying one or more parameters, the one or more parameters including a threshold amount of data associated with a command, a threshold number of issued commands associated with at least the threshold amount of data, a threshold number of issued and unexecuted commands, or any combination thereof; and communicating data with the memory system according to the second rate.

Some examples of the method 500 and the apparatus described herein may further include operations, features, circuitry, logic, means, or instructions for determining whether a first command of the one or more commands is associated with at least the threshold amount of data, where switching the interface from the first rate to the second rate may be based at least in part on determining that the first command is associated with at least the threshold amount of data.

Some examples of the method 500 and the apparatus described herein may further include operations, features, circuitry, logic, means, or instructions for determining whether a number of issued and unexecuted commands included in the one or more commands satisfies the threshold number of issued and unexecuted commands, where switching the interface from the first rate to the second rate may be based at least in part on determining that the number of issued and unexecuted commands satisfies the threshold number of issued and unexecuted commands.

Some examples of the method 500 and the apparatus described herein may further include operations, features, circuitry, logic, means, or instructions for tracking a first number of commands issued by the controller to the memory system, the first number of commands including the one or more commands, and determining whether the first number of commands includes at least the threshold number of issued commands associated with at least the threshold amount of data, where switching the interface from the first rate to the second rate may be based at least in part on determining that the first number of commands includes at least the threshold number of issued commands associated with at least the threshold amount of data.

Some examples of the method 500 and the apparatus described herein may further include operations, features, circuitry, logic, means, or instructions for setting a flag to a first value based at least in part on determining that the first number of commands includes at least the threshold number of issued commands associated with at least the threshold amount of data, the first value indicating to operate the interface according to the second rate, where switching the interface from the first rate to the second rate may be based at least in part on setting the flag to the first value.
Some examples of the method 500 and the apparatus described herein may further include operations, features, circuitry, logic, means, or instructions for setting the flag to a second value, after setting the flag to the first value, based at least in part on an expiration of a timer associated with communication inactivity between the controller and the memory system, the second value indicating to operate the interface according to the first rate.

Some examples of the method 500 and the apparatus described herein may further include operations, features, circuitry, logic, means, or instructions for setting the flag to a second value, after setting the flag to the first value, based at least in part on a second number of commands issued by the controller to the memory system and tracked by the apparatus failing to include at least the threshold number of issued commands associated with at least the threshold amount of data, the second value indicating to operate the interface according to the first rate.

In some examples of the method 500 and the apparatus described herein, the first rate corresponds to a first data transfer rate, and the second rate corresponds to a second data transfer rate that is higher than the first data transfer rate.

In some examples of the method 500 and the apparatus described herein, the first rate corresponds to a minimum rate of the set of rates, and the second rate corresponds to a maximum rate of the set of rates.

Some examples of the method 500 and the apparatus described herein may further include operations, features, circuitry, logic, means, or instructions for switching the interface from the second rate to the first rate based at least in part on a second set of commands from the controller failing to satisfy the one or more parameters, and communicating second data with the memory system according to the first rate.

In some examples of the method 500 and the apparatus described herein, the one or more parameters may each be included in a set of parameters. Some examples of the method 500 and the apparatus described herein may further include operations, features, circuitry, logic, means, or instructions for switching the interface from the first rate to the second rate based at least in part on the one or more commands satisfying any parameter of the set of parameters.

In some examples of the method 500 and the apparatus described herein, the set of rates corresponds to a burst mode associated with the interface, the burst mode is different from a low-speed mode associated with the interface, and the burst mode is associated with a higher data transfer rate than the low-speed mode.

It should be noted that the methods described herein describe possible implementations, and that the operations and the steps may be rearranged or otherwise modified, and that other implementations are possible. Furthermore, portions from two or more of the methods may be combined.

An apparatus is described.
The apparatus may include a controller configured to communicate with a memory system via an interface, where the controller is configured to cause the apparatus to: configure the interface to operate according to a first rate, where the first rate is one of a set of rates that each correspond to a respective data transfer rate between the controller and the memory system via the interface; switch the interface from the first rate to a second rate of the set of rates based at least in part on one or more commands from the controller to the memory system satisfying one or more parameters, the one or more parameters including a threshold amount of data associated with a command, a threshold number of issued commands associated with at least the threshold amount of data, a threshold number of issued and unexecuted commands, or any combination thereof; and communicate data with the memory system according to the second rate.

In some examples of the apparatus, the controller may be further configured to cause the apparatus to determine whether a first command of the one or more commands is associated with at least the threshold amount of data, where the switching of the interface from the first rate to the second rate may be based at least in part on determining that the first command is associated with at least the threshold amount of data.

In some examples of the apparatus, the controller may be further configured to cause the apparatus to determine whether a number of issued and unexecuted commands included in the one or more commands satisfies the threshold number of issued and unexecuted commands, where switching the interface from the first rate to the second rate may be based at least in part on determining that the number of issued and unexecuted commands satisfies the threshold number of issued and unexecuted commands.

In some examples of the apparatus, the controller may be further configured to cause the apparatus to: track a first number of commands issued by the controller to the memory system, the first number of commands including the one or more commands; and determine whether the first number of commands includes at least the threshold number of issued commands associated with at least the threshold amount of data, where switching the interface from the first rate to the second rate may be based at least in part on determining that the first number of commands includes at least the threshold number of issued commands associated with at least the threshold amount of data.

In some examples of the apparatus, the controller may be further configured to cause the apparatus to set a flag to a first value based at least in part on determining that the first number of commands includes at least the threshold number of issued commands associated with at least the threshold amount of data, the first value indicating to operate the interface according to the second rate, where switching the interface from the first rate to the second rate may be based at least in part on setting the flag to the first value.

In some examples of the apparatus, the controller may be further configured to cause the apparatus to set the flag to a second value, after setting the flag to the first value, based at least in part on an expiration of a timer associated with communication inactivity between the controller and the memory system, the second value indicating to operate the interface according to the first rate.
In some examples of the apparatus, the controller may be further configured to cause the apparatus to set the flag to a second value, after setting the flag to the first value, based at least in part on a second number of commands issued by the controller to the memory system and tracked by the apparatus failing to include at least the threshold number of issued commands associated with at least the threshold amount of data, the second value indicating to operate the interface according to the first rate.

In some examples of the apparatus, the first rate corresponds to a first data transfer rate, and the second rate corresponds to a second data transfer rate, the second data transfer rate being higher than the first data transfer rate.

In some examples of the apparatus, the first rate corresponds to a minimum rate of the set of rates, and the second rate corresponds to a maximum rate of the set of rates.

In some examples of the apparatus, the controller may be further configured to cause the apparatus to switch the interface from the second rate to the first rate based at least in part on a second set of commands from the controller failing to satisfy each of the one or more parameters, and to communicate second data with the memory system according to the first rate.

In some examples of the apparatus, the one or more parameters may each be included in a set of parameters, and the controller may be configured to cause the apparatus to switch the interface from the first rate to the second rate based at least in part on the one or more commands satisfying any parameter of the set of parameters.

In some examples of the apparatus, the set of rates corresponds to a burst mode associated with the interface, the burst mode is different from a low-speed mode associated with the interface, and the burst mode is associated with a higher data transfer rate than the low-speed mode.

The information and signals described herein may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the description herein may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. Some drawings may illustrate signals as a single signal; however, a signal may represent a bus of signals, where the bus may have a variety of bit widths.

The terms "electronic communication," "conductive contact," "connected," and "coupled" may refer to a relationship between components that supports the flow of signals between the components. Components are considered in electronic communication with (or in conductive contact with, or connected with, or coupled with) one another if there is any conductive path between the components that can, at any time, support the flow of signals between the components. At any given time, the conductive path between components that are in electronic communication with each other (or in conductive contact with, or connected with, or coupled with) may be an open circuit or a closed circuit based on the operation of the device that includes the connected components. The conductive path between connected components may be a direct conductive path between the components, or the conductive path between connected components may be an indirect conductive path that may include intermediate components, such as switches, transistors, or other components.
The information and signals described herein may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. Some figures may illustrate a signal as a single signal; however, a signal may represent a bus of signals, where the bus may have various bit widths.

The terms "electronically communicate," "conductive contact," "connect," and "couple" may refer to a relationship between components that enables the flow of signals between the components. Components may be considered to be in electronic communication (or in conductive contact, connection, or coupling) with each other if there is any conductive path between them that can at any time support the flow of signals between the components. At any given time, a conductive path between components that are in electronic communication with each other (or are in conductive contact, connection, or coupling) can be an open circuit or a closed circuit, based on the operation of the device that comprises the connected components. The conductive path between connected components may be a direct conductive path, or it may be an indirect conductive path that includes intermediate components such as switches, transistors, or other components. In some examples, signal flow between connected components may be interrupted for a period of time, e.g., using one or more intermediate components such as switches or transistors.

The term "coupled" refers to the condition of transitioning from an open-circuit relationship between components, in which signals cannot currently travel between the components via a conductive path, to a closed-circuit relationship between components, in which signals can travel between the components via a conductive path. If a component, such as a controller, couples other components together, the component induces a change that allows signals to flow between the other components through a conductive path that previously did not permit signal flow.

The term "isolation" refers to a relationship between components in which signals cannot currently flow between the components. Components are isolated from each other if there is an open circuit between them. For example, if a switch positioned between two components is open, the components separated by the switch are isolated from each other. If a controller isolates two components, the controller causes a change that prevents signals from flowing between the components using a conductive path that previously permitted the signal flow.

The terms "if," "when," "based on," and "based at least in part on" may be used interchangeably. In some instances, where these terms describe a conditional action, a conditional process, or a connection between portions of a process, the terms may be interchangeable.

The term "responsive to" may refer to a condition or action occurring at least in part, if not entirely, as a result of a prior condition or action. For example, a first condition or action may be performed and a second condition or action may occur at least in part as a result of the prior condition or action occurring, whether directly after the first condition or action or after one or more other intermediate conditions or actions that occur after the first condition or action.

Additionally, the term "directly in response to" or "in direct response to" may refer to a condition or action occurring directly as a result of a prior condition or action. In some examples, a first condition or action may be performed and a second condition or action may occur directly as a result of the prior condition or action occurring, independent of whether other conditions or actions occur. In some examples, a first condition or action may be performed and a second condition or action may occur directly as a result of the prior condition or action occurring, such that no other intermediate conditions or actions, or only a limited quantity of one or more intermediate steps or actions, occur between the earlier condition or action and the second condition or action. Unless otherwise specified, any condition or action described herein as being performed "based on," "based at least in part on," or "in response to" some other step, action, event, or condition may additionally or alternatively (e.g., in an alternative example) be performed "directly in response to" or "in direct response to" such other condition or action.

The devices discussed herein, including memory arrays, may be formed on a semiconductor substrate such as silicon, germanium, silicon-germanium alloy, gallium arsenide, gallium nitride, and the like. In some cases, the substrate is a semiconductor wafer.
In some other examples, the substrate may be a silicon-on-insulator (SOI) substrate, such as silicon-on-glass (SOG) or silicon-on-sapphire (SOS), or an epitaxial layer of semiconductor material on another substrate. The conductivity of the substrate, or of subregions of the substrate, can be controlled through doping with various chemical species including, but not limited to, phosphorus, boron, or arsenic. Doping can be performed during the initial formation or growth of the substrate, by ion implantation, or by any other doping method.

A switching component or transistor discussed herein may represent a field-effect transistor (FET) and comprise a three-terminal device including a source, a drain, and a gate. The terminals may be connected to other electronic components through conductive materials such as metals. The source and drain can be conductive and can comprise heavily doped (e.g., degenerate) semiconductor regions. The source and drain can be separated by a lightly doped semiconductor region or channel. If the channel is n-type (i.e., the majority carriers are electrons), then the FET may be referred to as an n-type FET. If the channel is p-type (i.e., the majority carriers are holes), then the FET may be referred to as a p-type FET. The channel may be covered by an insulating gate oxide. Channel conductivity can be controlled by applying a voltage to the gate. For example, applying a positive or negative voltage to an n-type FET or a p-type FET, respectively, can cause the channel to become conductive. A transistor may be "on" or "activated" if a voltage greater than or equal to the transistor's threshold voltage is applied to the transistor gate. A transistor may be "off" or "deactivated" if a voltage less than the transistor's threshold voltage is applied to the transistor gate.
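The threshold-voltage rule above can be captured in a few lines. The following Python sketch is a simplified, hypothetical model of FET activation; it ignores real device physics such as subthreshold conduction and the body effect, and its function name and voltage values are illustrative.

def fet_is_on(channel_type: str, v_gate: float, v_threshold: float) -> bool:
    """Return True if the transistor is 'on' (channel conductive).

    For an n-type FET, a gate voltage at or above the (positive) threshold
    activates the channel; for a p-type FET, a gate voltage at or below the
    (negative) threshold does.
    """
    if channel_type == "n":
        return v_gate >= v_threshold
    if channel_type == "p":
        return v_gate <= v_threshold
    raise ValueError("channel_type must be 'n' or 'p'")

# Example: an n-type FET with an assumed 0.7 V threshold.
assert fet_is_on("n", 1.0, 0.7)        # activated
assert not fet_is_on("n", 0.3, 0.7)    # deactivated
# Example: a p-type FET with an assumed -0.5 V threshold.
assert fet_is_on("p", -1.0, -0.5)      # activated by a negative gate voltage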
The description set forth herein, in connection with the drawings, describes example configurations and does not represent all the examples that may be implemented or that are within the scope of the claims. The term "exemplary" is used herein to mean "serving as an example, instance, or illustration" rather than "preferred" or "advantageous over other examples." The detailed description includes specific details to provide an understanding of the described techniques. These techniques, however, may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form in order to avoid obscuring the concepts of the described examples.

In the drawings, similar components or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label with a hyphen and a second label that distinguishes among the similar components. If just the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label, irrespective of the second reference label.

The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions may be stored on or transmitted over, as one or more instructions or code, a computer-readable medium. Other examples and implementations are within the scope of the disclosure and the appended claims. For example, due to the nature of software, the functions described above can be implemented using software executed by a processor, hardware, firmware, hardwiring, or combinations of any of these. Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations.

The various illustrative blocks and components described in connection with this disclosure may be implemented or performed with a general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration).

As used herein, including in the claims, "or" as used in a list of items (for example, a list of items prefaced by a phrase such as "at least one of" or "one or more of") indicates an inclusive list such that, for example, a list of at least one of A, B, or C means A or B or C or AB or AC or BC or ABC (i.e., A and B and C). Also, as used herein, the phrase "based on" shall not be construed as a reference to a closed set of conditions. For example, an exemplary step described as "based on condition A" may be based on both condition A and condition B without departing from the scope of the present disclosure. In other words, as used herein, the phrase "based on" shall be construed in the same manner as the phrase "based at least in part on."

Computer-readable media include both non-transitory computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. A non-transitory storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer. By way of example, and not limitation, non-transitory computer-readable media can comprise RAM, ROM, electrically erasable programmable read-only memory (EEPROM), compact disk (CD) ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other non-transitory medium that can be used to carry or store desired program code means in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include CD, laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of computer-readable media.

The description herein is provided to enable any person skilled in the art to make or use the present disclosure.
Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the scope of the disclosure. Thus, the disclosure is not limited to the examples and designs described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.